• LWD@lemm.ee · ↑83 ↓2 · 11 months ago

    Based on that one Senate hearing, it looks like big companies like Facebook, Discord, and Twitter are aiming for the maximum rate of both false positives and false negatives when it comes to CSAM.

    The only thing I know about that screenshot is that it used to say “show results anyway”, which is probably worse in most cases.

    • Uranium3006@kbin.social · ↑12 ↓1 · 11 months ago

      With any luck this will destroy them and funnel disgruntled users our way, where the servers are too numerous to ever fully take down and many aren’t even US-based anyway.

      • LWD@lemm.ee · ↑13 · 11 months ago

        Unfortunately, I don’t think so. Most of the politicians were virtue signaling, asking impossible questions and demanding timetables they weren’t going to get anyway. One woman actually had some half-decent data prepared, but I don’t think anybody else was really taking it seriously.

        Now if there was some legislation passed, specifically stuff that wasn’t KOSA, that would be something else. KOSA seems prepped to simply destroy free speech on the internet, and it would mostly harm smaller social media networks that don’t have lawyers and around-the-clock moderators to police every single comment and post.

      • bionicjoey@lemmy.ca · ↑21 ↓2 · 11 months ago

        I really hate it when my phone switches into battery saver at 15% and try to avoid it, so in my mind 16% is like 1%.

        • VindictiveJudge@lemmy.world · ↑5 · 11 months ago

          Don’t phone battery indicators lie to you now so that 0% displayed is actually about 20% specifically because of this?

          • Album@lemmy.ca · ↑2 · 11 months ago

            Yes, and 100% isn’t 100%.

            People and their batteries, though… it’s a futile obsession for some. It doesn’t matter how much science or logic you throw at them, there’s always something.

            Like how fast charging hasn’t, for some time now, run at the full maximum rate for the entire charge, precisely to keep heat within tolerances; yet some people still think doing the work themselves gives better thermal management than modern battery controllers, to the point that they think it will make a material difference.

          • Trainguyrom@reddthat.com · ↑1 ↓1 · 11 months ago

            For a phone you’re probably going to keep for less than 5 years, babying the battery really isn’t worthwhile; the battery will most likely outlast your time with the phone even if you just charge it overnight every night or fully charge it daily.

            • pingveno@lemmy.ml · ↑2 · 11 months ago

              Though some of the phone makers are finally getting the message that some of us want to hold on to our expensive phones for a long while. My new Pixel 8 has 7 years of security updates, which should work fine for my purposes. I’ll probably replace the battery somewhere in there, though.

          • Plopp@lemmy.world · ↑2 · 11 months ago

            Very important. Keeping it between 20% and 80% is a good idea. It differs between battery chemistries, though.

          • where_am_i@sh.itjust.works · ↑2 · 11 months ago

            Just as important. And most phones these days have a setting to prevent it from charging to 100%. E.g. I set mine to stop at 90%.

            • HumanPerson@sh.itjust.works · ↑1 · 11 months ago

              I run GrapheneOS, which doesn’t have that. I think if I get a smart plug I could use an automation in Home Assistant to turn the charger off.
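              A rough sketch of what that could look like (purely illustrative, not anything GrapheneOS or Home Assistant ships): assuming the phone reports its charge through the Home Assistant companion app as sensor.phone_battery_level and the smart plug shows up as switch.phone_charger, a small poller against Home Assistant's REST API could cut power at a chosen level. Normally you'd express this as a native HA automation; the standalone script below and its entity IDs, URL and token are placeholders.

              ```python
              # Hypothetical sketch: poll the phone's battery sensor (exposed by the
              # Home Assistant companion app) and cut power to a smart plug at ~90%.
              # Entity IDs, URL and token are placeholders -- adjust for your setup.
              import time
              import requests

              HA_URL = "http://homeassistant.local:8123"      # assumed Home Assistant address
              TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"          # created under your HA user profile
              HEADERS = {"Authorization": f"Bearer {TOKEN}"}

              BATTERY_SENSOR = "sensor.phone_battery_level"   # assumed companion-app entity
              CHARGER_PLUG = "switch.phone_charger"           # assumed smart plug entity

              def battery_level() -> float | None:
                  r = requests.get(f"{HA_URL}/api/states/{BATTERY_SENSOR}", headers=HEADERS, timeout=10)
                  r.raise_for_status()
                  try:
                      return float(r.json()["state"])
                  except ValueError:                          # e.g. "unavailable" while the phone is offline
                      return None

              def set_charger(on: bool) -> None:
                  service = "turn_on" if on else "turn_off"
                  requests.post(f"{HA_URL}/api/services/switch/{service}",
                                headers=HEADERS, json={"entity_id": CHARGER_PLUG}, timeout=10)

              if __name__ == "__main__":
                  while True:
                      level = battery_level()
                      if level is not None:
                          if level >= 90:
                              set_charger(False)              # stop charging at ~90%
                          elif level <= 75:
                              set_charger(True)               # resume once it drifts back down
                      time.sleep(300)                         # poll every five minutes
              ```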

          • fkn@lemmy.world · ↑0 ↓1 · 11 months ago

            For lithium batteries (phone batteries) it’s actually more important than not draining to 0%. Many studies indicate that the average phone battery should last several thousand cycles while losing only 5-10% of total capacity, provided it is never charged above 80%. The minimum charge (even down to 0%) doesn’t matter much, and below about 70% the charge rate is also essentially unrestricted.

            The tl;dr is that every time you charge to 100% it wears the battery about as much as 50-100 charges to 80%. Draining a lithium-chemistry battery to 0 isn’t an issue as long as you don’t leave it sitting in a discharged state (i.e. you charge it again right away).

    • Hildegarde@lemmy.world · ↑33 ↓7 · 11 months ago

      Here’s a hot tip. If you’re on Android, open the developer settings and turn on “demo mode” before taking screenshots. It makes the battery and signal display as 100% so you don’t get judged by internet commenters who don’t go outside.
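      If you’d rather script it than tap through the settings, the same thing can be driven over adb using SystemUI’s demo-mode broadcasts (documented in AOSP’s demo_mode.md). A hedged sketch as a thin Python wrapper around adb; the clock time and signal level below are arbitrary values:

      ```python
      # Sketch: put Android's SystemUI into demo mode over adb so screenshots
      # show a full battery, full signal and a fixed clock. Requires adb access
      # to the device; the broadcast commands follow AOSP's demo_mode.md.
      import subprocess

      def adb(*args: str) -> None:
          subprocess.run(["adb", "shell", *args], check=True)

      def demo(*extras: str) -> None:
          adb("am", "broadcast", "-a", "com.android.systemui.demo", *extras)

      def enter_demo_mode() -> None:
          adb("settings", "put", "global", "sysui_demo_allowed", "1")  # allow demo mode
          demo("-e", "command", "enter")
          demo("-e", "command", "battery", "-e", "level", "100", "-e", "plugged", "false")
          demo("-e", "command", "network", "-e", "wifi", "show", "-e", "level", "4")
          demo("-e", "command", "clock", "-e", "hhmm", "1200")

      def exit_demo_mode() -> None:
          demo("-e", "command", "exit")

      if __name__ == "__main__":
          enter_demo_mode()
          # ...take the screenshots, then call exit_demo_mode()
      ```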

  • Squizzy@lemmy.world · ↑47 · 11 months ago

    I reported loads of content on Instagram, genuinely creepy accounts of “athletic teens”, and the reports all got rejected.

    I got caught in a horrible recommendations loop because I’d liked family photos of running and gymnastics of my nieces and cousins.

  • forgotmylastusername@lemmy.ml · ↑47 ↓6 · 11 months ago

    One of the biggest problems with the internet today is that bad actors know how to manipulate or dodge content moderation to avoid punitive consequences. The big social platforms are moderated by the most naive people in the world. It’s either that or willful negligence. Has to be. There’s just no way these tech bros who spent their lives deep in internet culture are so clueless about how to moderate content.

    • blazeknave@lemmy.world · ↑32 ↓4 · 11 months ago

      I know them. I worked in this industry. They’re not naive. What basis do you have for these comments?

      I think you’re conflating them with the business executives running said social and gaming companies. Stop calling them techbros. Meta is not a tech startup. They’re a transnational corporation. They have capitalist execs running the companies.

    • Fudoshin 🏳️‍🌈@feddit.uk · ↑11 · 11 months ago

      bad actors know how to manipulate or dodge the content moderation to avoid punitive consequences.

      People have been doing that since the dawn of the internet. People on my old forum in the 90s tried to circumvent profanity filters on phpBB.

      Even now you can get round Lemmy.World filters against “fag-got” by adding a hyphen in it.

      Nothing new under the sun.
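      As a toy illustration of why that trick works (nothing to do with Lemmy.World’s actual filter; the blocked term is a placeholder): a plain substring check misses a hyphenated variant, while folding punctuation out of the text before matching catches it.

      ```python
      # Toy example, not any real platform's filter: naive substring matching
      # misses "bad-word"-style evasions; stripping punctuation first does not.
      import re

      BLOCKLIST = {"badword"}  # placeholder term

      def naive_filter(text: str) -> bool:
          lowered = text.lower()
          return any(term in lowered for term in BLOCKLIST)

      def normalizing_filter(text: str) -> bool:
          # fold out punctuation, underscores and whitespace before matching
          folded = re.sub(r"[\W_]+", "", text.lower())
          return any(term in folded for term in BLOCKLIST)

      print(naive_filter("bad-word"))        # False -- the hyphen dodges the check
      print(normalizing_filter("bad-word"))  # True  -- normalization catches it
      ```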

    • Jknaraa@lemmy.ml · ↑6 · 11 months ago

      The thing is that words can have a very broad range of meaning depending on who uses them and how (among many other factors), but you can’t accurately code all of that into a form that computers can understand. Even ignoring bad actors it makes certain things very difficult, like if you ever want to search for something that just happens to share words with something completely different which is very popular.

    • d-RLY?@lemmy.ml · ↑1 · 11 months ago

      Auto-moderation is both lazy and only going to get worse. Not saying there isn’t some value in things being hard-banned (like very specific spam-like shit that just keeps responding to everything with the same thing non-stop). But these mega outlets/sites want to use full automation to ban shit without any human interaction. At least unless you or another corp has connections on the inside to get a person or people to fix it. Just like how they make it so fucking hard to ever reach a person when calling (or even trying to find) a support line.

      This automated shit just blacklists more and more and can completely fuck over people who use those sites for income (and they can’t even reach a person when their income is cut off for false reasons, and don’t get back-pay for the period of a strike/ban). The bad guys will always just keep moving to a new word or phrase as the old ones get banned. So we as users are actually losing words and phrases while the actual shit just moves on to the next one without issue.

  • Lath@kbin.social · ↑30 ↓1 · 11 months ago

    That’s what you get for all the teabagging you’ve been doing…

  • nicetriangle@kbin.social · ↑25 · edited · 11 months ago

    I had a post of mine flagged for multiple days on there because it had an illustration of a woman in a full-length wool coat completely covering her, not in any way sexual. Shit is so stupid.

  • rawrthundercats@lemmy.ml · ↑28 ↓3 · 11 months ago

    How do we know they didn’t type something more explicit to get the result and just change what’s in the search bar? Has anyone verified this?

    • 7heo@lemmy.ml (OP) · ↑38 · 11 months ago

      I actually don’t know. I’m not sure it’s possible (I’ve never used Instagram; the search might be auto-submitting for all I know), but intentionally flagging yourself as a potential child abuser, for clout, is a bit extreme…

  • baatliwala@lemmy.world · ↑24 ↓2 · 11 months ago

    Barely 2 years ago I noticed that people were posting porn on Insta, and it was publicly visible just because they tagged #cum as #cüm. I don’t think this is possible now, but basically corporations are dumb and people posting disallowed content can be creative as hell.
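    A minimal sketch of that gap from the moderation side (purely illustrative, not how Instagram actually works): an exact-match tag block misses the diacritic swap, but Unicode-folding the tag to plain ASCII before comparing closes that particular hole.

    ```python
    # Toy example: exact matching misses "cüm" for a blocked "cum";
    # NFKD-decomposing and dropping combining marks folds it back.
    import unicodedata

    BLOCKED_TAGS = {"cum"}  # the tag from the comment above

    def ascii_fold(tag: str) -> str:
        # decompose accented characters and drop the combining marks (ü -> u)
        decomposed = unicodedata.normalize("NFKD", tag.lower())
        return "".join(c for c in decomposed if not unicodedata.combining(c))

    def is_blocked(tag: str) -> bool:
        return ascii_fold(tag) in BLOCKED_TAGS

    print("cüm" in BLOCKED_TAGS)   # False -- byte-for-byte it's a different tag
    print(is_blocked("cüm"))       # True  -- folded to "cum" first
    ```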

    • Trainguyrom@reddthat.com · ↑2 · 11 months ago

      basically corporations are dumb and people posting disallowed content can be creative as hell.

      I generally get the feeling with this kind of thing that it’s not incompetence but either an unwillingness to act quickly or an inability to do so.

  • phx@lemmy.ca · ↑17 · 11 months ago

    It’s dumb, but it’s also possible that a combination of those terms had been adopted by some group distributing CSAM.

    At one point, “cheese pizza” was a term they apparently used on YouTube videos etc due to it having the same abbreviation as CP (Child Pornography).

    Sick fucks ruining everything for everyone

    • d-RLY?@lemmy.ml · ↑9 · 11 months ago

      I agree with you is the TL;DR, and the rest is just my mad ranting opinions about companies being allowed to just auto-censor us. So feel free to completely ignore the rest. lol.

      It is like banning words and phrases just because bad people use them has become the norm. I really, really can’t stand the way channels on YT constantly have to self-censor basically everything (even if the video is just reporting on or trying to explain bad shit that is or has happened). And it never seems to actually stop the issues from happening. It just means the bad people move on to a new word or phrase that is then itself banned. It isn’t about actually stopping fucked-up shit from happening. It is just about making sure advertisers and other sources of money don’t throw a fit.

      We always hear about how places like China are bad in part for censoring words and speech. But in the US and other western nations we pretend we are allowed to speak freely, uncensored. We have always had censoring of speech, it is just that the real rulers of the country are allowed to do it instead. It keeps the government’s hands free from legally being the enforcers doing it to us. Shit like CP is fucked, and it should be handled for what it is, but allowing for-profit companies, and especially their algorithms/AI, to decide what we can and can’t say or search for without any level of human interaction, which very much leads to false bans, is also fucked.

      It is waaaay too easy for all the mega corps to take down channels and block creators from the revenue of their own work completely automatically. But the accused channel can’t ever get a real person to explain clearly what and who is attacking them, or why their strikes/bans aren’t valid. I have heard that even channels that have gotten written/legal permission from a big studio to use a clip of music or a segment of video (music being the worst) will STILL catch automated copyright strikes.

      We don’t need actual government censors, because the mega corps with all the money are allowed to do it for them. We have rights, but they don’t really matter if the government can simply say a private company, or an org made up of people from various mega corps, is allowed to do it for them.

    • Schadrach@lemmy.sdf.org · ↑1 · 11 months ago

      At one point, “cheese pizza” was a term they apparently used on YouTube videos etc due to it having the same abbreviation as CP (Child Pornography).

      This in turn was why the Podesta emails led to the whole Pizzagate thing - there were a bunch of emails with weird phrasings like “going to do cheese pizza for a couple of hours” that just aren’t how people talk or write, so internet weirdos decided it was pedo code, and then it kinda went insane from there.

  • IzzyScissor@lemmy.world · ↑16 · 11 months ago

    Remember, searching for “halo” is banned because it could potentially be linked to pedophilia, but editing a video of the president to look like a pedophile is fine because “it wasn’t done with AI.”

      • DAMunzy@lemmy.dbzer0.com · ↑1 · 11 months ago

        Biden was edited to look like he was groping his granddaughter for an extended amount of time instead of quickly putting a pin above her breast. It was posted to Facebook/Instagram/Meta. AI wasn’t used.

      • Astro@sh.itjust.works · ↑0 ↓1 · 11 months ago

        It’s the “Kids Online Safety Act”. Basically it’s using the old “think of the children!” move, but in reality conservatives are trying to push anything queer back into the dark.