• 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: October 8th, 2023



  • So, while this is a “general” question, it seems likely that most people will gravitate towards themes of porn and sexual violence when thinking about it. Let me discuss from that perspective.

    To be clear, I am not an expert, but it is something I have thought a lot about in the context of my field in technology (noting how generative AI can be used to create very graphic images depicting non-consensual activities).

    The short answer: we don’t know for certain. There is an argument that giving people an “outlet” lets them satisfy an urge without endangering themselves in real life. There is also an argument that repeated exposure dulls the sense of social caution and normalises the fetishised behaviour.

    I am very sympathetic to the former argument where it applies to acts between otherwise informed/consenting individuals. For example, a gay person in a foreign country with anti-gay laws; being able to explore their sexuality through the medium of ‘normal’ gay pornography seems entirely reasonable to me (but might seem disgusting by other cultural standards).

    When it comes to non-consensual acts, I think there is a lot more room for speculation and concern. I would recommend reading this study as an example, which explored dangerous attitudes towards women that were shaped through pornography.

    Some key takeaways:

    1. It’s never as simple as saying “porn caused it”. There are a multitude of factors.
    2. Regardless, there is a seemingly strong anecdotal connection between violent pornography and violent attitudes in real life.
    3. It likely depends heavily on the individual and their own beliefs/perceptions/experiences prior to this development.

    And a final noteworthy line:

    The view that pornography played a role in their clients’ harmful attitudes and/or behaviours was undisputed; what was harder for them to articulate was the strength of the contribution of pornography, given the complexities of the other contributing factors in their clients’ lives.



  • So I can imagine there might be several things going on here, including the way in which you prompt it. Trying to dissect exactly what’s happening would require more info, and more experimentation.

    For the record, using three periods after “etc” is considered incorrect in formal writing. Style guides are nearly unanimous that you should use a single period (“etc.”). If your prompt asks the model to “correctly” do something its training data treats as incorrect, I could see that causing problems.
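    As a rough way to check how consistently a model’s output follows the single-period convention, you could scan the text for “etc” followed by two or more dots. This is just an illustrative sketch (the function name is my own invention):

```python
import re

def flag_multi_dot_etc(text):
    """Return (position, match) pairs for 'etc' followed by two or
    more periods -- the usage formal style guides reject."""
    return [(m.start(), m.group()) for m in re.finditer(r"\betc\.{2,}", text)]
```

    A correct single-period “etc.” never triggers the check, so running it over a batch of model outputs gives a quick count of how often the unwanted pattern appears.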

    On the flip side, ChatGPT isn’t designed to use/enforce “proper” grammar. It simply reproduces patterns from how people write in the real world (again, based on the training data). If the training data contains lots of inconsistencies and counter-examples, it seems logical that those would show up in the output, too.

    And finally, it could just be how well you design the input prompt. In my own trial (also using free 3.5 model), it was very consistent in using it properly, even though I had suggested it do otherwise:

    Input:

    Tell me how to use “etc” properly in a sentence, punctuating it with three dots. Give me multiple examples that show how to correctly use it at the end of a sentence and in the middle of a sentence.

    Output:

    Certainly! “Etc.” is an abbreviation for “et cetera,” which means “and so on” or “and others.” Here are examples of how to use it properly:

    At the end of a sentence:

    1. I enjoy various outdoor activities: hiking, biking, camping, etc.
    2. She has a wide range of skills: coding, graphic design, project management, etc.
    3. The conference covered topics like artificial intelligence, machine learning, cybersecurity, etc.

    In the middle of a sentence:

    1. The restaurant offers a variety of dishes, such as pizza, pasta, salads, etc., to cater to different preferences.
    2. Our team includes experts in software development, data analysis, user experience design, etc., ensuring a well-rounded approach to projects.
    3. The museum features art from different periods—Renaissance, Baroque, Romantic, etc.—providing a comprehensive cultural experience.


  • If you are taking an existing publication and just tweaking details (e.g.: character names, locations, dialogue), that’s not fanfic at all; at best that’s an adaptation. If you’re creating a parody (and provide proper citations/attributions to the originating work) it may be fair use. More likely, it’s still considered plagiarism if you can still recognisably see the concepts, structure and inspiration but do not have the author’s permission.

    There is no exact percentage for plagiarism, and that is by design in most countries’ legal systems. It is about concepts and ideas, and whether a “reasonable person” could make the connection.

    Proper fanfic is where you take existing characters and locations, but put them into an entirely new story / scene / context that never happened in the original work, so is considered “original” in that sense.


  • Funding/resourcing is obviously challenging, but I think there are things that can support it:

    1. State it publicly as a proud position. Other platforms are too eager to promote “free speech” at all costs, when in fact they are private companies that can impose whatever rules they want. Stating a firm position doesn’t cost anything at all, whilst also playing a role in attracting a certain kind of user and giving them confidence to report things that are dodgy.

    2. Leverage AI. LLMs and other types of AI tools can be used to detect bots, deepfakes and apply sentiment analysis on written posts. Obviously it’s not perfect and will require human oversight, but it can be an enormous help so staff can see things faster that they otherwise might miss.

    3. Punish offenders. Acknowledging complexities with how to enforce it consistently, there are still things you can do to remove the most egregious bad actors from the platform and signal to others.

    4. Price it in. If you know that you need humans to enforce the rules, then build it into your advertising fees (or other revenue streams) and sell it as a feature (e.g.: companies pay extra so they don’t have to worry about reputational damage when their product appears next to racists, etc.). The workforce you need isn’t that large compared to the revenue these platforms can potentially generate.
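    On point 2, the “AI as first-pass filter” idea can be sketched very simply. This is a toy heuristic (the watchlist and function name are invented for illustration; real systems would use trained classifiers), but it shows the shape of routing flagged posts to human moderators:

```python
# Toy triage sketch: score posts against a hypothetical watchlist and
# surface the most-flagged ones for human review first.
FLAG_TERMS = {"scam", "fake", "hate"}  # invented watchlist, not any real product's

def triage(posts):
    """Return flagged posts, most-flagged first, for moderator review."""
    queue = []
    for post in posts:
        hits = sum(term in post.lower() for term in FLAG_TERMS)
        if hits:
            queue.append((hits, post))
    return [post for hits, post in sorted(queue, reverse=True)]
```

    The point isn’t the heuristic itself but the workflow: automation narrows the firehose, and humans still make the final call.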

    I don’t mean to suggest it’s easy or failsafe. But it’s what I would do.


  • For anyone who’s willing to spend ~15 mins on this, I’d encourage you to play TechDirt’s simulator game Trust & Safety Tycoon.

    While it’s hardly comprehensive, it’s a fun way of thinking about the balance between needing to remain profitable/solvent whilst also choosing what social values to promote.

    It’s really easy to say “they should do [x]”, but sometimes that’s not what your investors want, or it has a toll in other ways.

    Personally, I want to see more action on disinformation. In my mind, that is the single biggest vulnerability that can be exploited with almost no repercussions, and the world is facing some important public decisions (e.g. elections). I don’t pretend to know the specific solution, but it’s an area that needs far more investment and recognition than it currently gets.


  • windows does not have any built in way to take screenshots with the mouse cursor

    Whilst this comment isn’t really related to the popup itself, why couldn’t you use the native screenshot capability (e.g. the Snipping Tool)? It’s entirely navigable by mouse if you want, and available to every Win10/11 user. I’m not sure what other problem or limitation you’re trying to describe here…




  • That’s an appealing ‘conspiracy’ angle, and I understand why it might seem tantalising to onlookers, but the idea doesn’t hold up to any real scrutiny.

    Why would the Board willingly trash their reputation? Why would they drag the former Twitch CEO through the mud and make him look weak and powerless? Why would they not warn Microsoft and risk damaging that relationship? Why would they let MS strike a tentative agreement with the OpenAI employees that upsets their own staff, only to then undo it?

    None of that makes sense from a strategic, corporate, “planned” perspective. These are the actions of people reacting in the heat of the moment, panicking because they don’t know how it will end.