• AutoTL;DR@lemmings.world [bot] · 8 months ago

    This is the best summary I could come up with:


    Gaming companies are coordinating with the FBI and Department of Homeland Security to root out so-called domestic violent extremist content, according to a new government report.

    “The Federal Bureau of Investigation (FBI) and the Department of Homeland Security (DHS) have mechanisms to share and receive domestic violent extremism threat-related information with social media and gaming companies,” the GAO says.

    “All I can think of is the awful track record of the FBI when it comes to identifying extremism,” Hasan Piker, a popular Twitch streamer who often streams while playing video games under the handle HasanAbi, says of the mechanisms.

    The GAO’s investigation, which covers September 2022 to January 2024, was undertaken at the request of the House Homeland Security Committee, which asked the government auditor to examine domestic violent extremists’ use of gaming platforms and social media.

    A 2019 internal intelligence assessment jointly produced by the FBI, DHS, the Joint Special Operations Command, and the National Counterterrorism Center and obtained by The Intercept warns that “violent extremists could exploit functionality of popular online gaming platforms and applications.” The assessment lists half a dozen U.S.-owned gaming platforms that it identifies as popular, including Blizzard Entertainment’s Battle.net, Fortnite, PlayStation, Xbox Live, Steam, and Roblox.

    In 2019, ADL’s then-senior vice president of international affairs, Sharon Nazarian, was asked by Rep. Ted Deutch, D-Fla., if gaming platforms “are monitored” and if there’s “a way AI can be employed to identify those sorts of conversations.”


    The original article contains 1,138 words, the summary contains 239 words. Saved 79%. I’m a bot and I’m open source!

    • umbrella@lemmy.ml · 8 months ago

      a way AI can be employed to identify those sorts of conversations.

      I’m sure that’d be done fairly.

      • SoupBrick@yiffit.net · 8 months ago

        My bet: they’ll initially go after some actual problem accounts, then, once they’ve gotten the PR out of it, immediately start using it for bad-faith surveillance.

        • maynarkh@feddit.nl · 8 months ago

          Let me raise that: they’ve already been doing this for years, if not decades, and what’s happening now is just a convenient PR move so they can do it more openly.