Police in England installed an AI camera system along a major road. It caught almost 300 drivers in its first three days: 180 seat belt offenses and 117 mobile phone offenses.

  • EndlessApollo@lemmy.world · ↑45 ↓12 · 1 year ago

    ITT: a bunch of people who have never read an ounce of sci-fi (or got entirely the wrong message and think law being enforced by robots is a good thing)

    • Echo Dot@feddit.uk · ↑27 ↓10 · edited · 1 year ago

      But the law isn’t enforced by robots; the law is enforced by humans. All that’s happening here is that the process of capturing transgressions has been automated. I don’t see how that’s a problem.

      As long as humans are still part of the sentencing process, and they are, then functionally there’s no difference; if a mistake is made, it will be rectified at that stage. From a process point of view there isn’t really any difference between being caught by an automated AI camera and being caught by a traffic cop.

      • davidalso@lemmy.world · ↑12 ↓2 · 1 year ago

        Although completely reasonable, I fear your conclusion is inaccessible to most folks.

        And as a pedestrian, I’m all for a system that’s capable of reducing distracted driving.

        • lateraltwo@lemmy.world · ↑2 ↓4 · 1 year ago

          The way to disincentivize a motoring public is to make driving a stressful affair. Currently, the stress comes from other people. Soon, it’ll be catalogs of minor infractions, caught at the millisecond they occur, forever, with the bill showing up every single week for the rest of your driving life. Odds are it’s going to be scrapped, made a boogeyman for a while, and then brought back every time people get testy about gas prices.

    • CrayonRosary@lemmy.world · ↑20 ↓6 · 1 year ago

      Calling an image recognition system a robot enforcing the law is such a stretch you’re going to pull a muscle.

      • EndlessApollo@lemmy.world · ↑3 ↓5 · 1 year ago

        It’s going to disproportionately target minorities. ML* isn’t some wonderful impartial observer; it’s subject to all the same biases as the people who made it. Whether the people at the end of the process are impartial barely matters either, imo: they’re going to get the biased results of the ML looking for criminals, so it’s still going to be a flawed system even if the human element is OK. Ffs, please don’t support this kind of dystopian shit. Idk how it’s not completely obvious how horrifying this stuff is.

        *What people call “AI” is not intelligent at all. It uses machine learning, the same process as chatbots and autocorrect. “AI” is a buzzword used by tech bros desperate to “invest in the future.”

          • EndlessApollo@lemmy.world · ↑2 ↓1 · 1 year ago

            Face recognition datasets and the like tend to be pretty heavily skewed; they usually include far more white people than people of color. You can see this when ML image filters turn Black people into white people, or literal gorillas. Unless the dataset properly represents a truly diverse set of people (and tbh probably even if it does), there are going to be a lot of race-based false positives/negatives.
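            [Editor's sketch: the mechanism the comment describes can be shown with a toy calculation. All numbers below are hypothetical, invented for illustration, and come from no real system or dataset: a detector that is accurate on a well-represented group can still have a much higher false-positive rate on an under-represented one.]

            ```python
            # Hypothetical per-group outcomes for a toy detector trained on a
            # skewed dataset. Counts are illustrative only, not real data.
            results = {
                "group_a": {"false_positives": 5, "true_negatives": 995},   # well represented in training
                "group_b": {"false_positives": 40, "true_negatives": 960},  # under-represented in training
            }

            def false_positive_rate(counts):
                # FPR = FP / (FP + TN): the share of innocent cases wrongly flagged.
                fp, tn = counts["false_positives"], counts["true_negatives"]
                return fp / (fp + tn)

            for group, counts in results.items():
                print(f"{group}: FPR = {false_positive_rate(counts):.1%}")
            # Same system, same threshold: group_a is wrongly flagged 0.5% of the
            # time, group_b 4.0% of the time, an 8x disparity.
            ```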

              • EndlessApollo@lemmy.world · ↑1 · edited · 1 year ago

                That might be the case, tbh, but either way it would be bad and discriminatory. I might just be overthinking it; it might not actually be that bad. But I know discrimination like that is super common in how recognition-based ML is trained.

                • ParsnipWitch@feddit.de · ↑1 · 1 year ago

                  But how is that different from, or worse than, a human sitting at the side of the road writing down number plates, for example?

                  • EndlessApollo@lemmy.world · ↑1 · 1 year ago

                    Tbh part of my response is just a knee-jerk reaction; this specific application might not be a bad idea. But I’m terrified of the surveillance state this type of stuff is warming us up for. There’s already talk of cops in the US, China, and probably other places planning to use ML like this to pore over security footage and find criminals or track people in general. To me this sounds like England’s first dip into that authoritarian pool: a proof of concept to see how viable it is to keep the entire country under 24/7 surveillance.

        • CrayonRosary@lemmy.world · ↑2 ↓1 · 1 year ago

          The image recognition system detects a cell phone being used and snaps a photo, records the plate number, etc. How exactly does that lead to racism?

          You’re making what amounts to a slippery slope argument, and that’s often a very flawed way of thinking.

    • atzanteol@sh.itjust.works · ↑12 ↓7 · 1 year ago

      According to sci-fi, organ transplants will lead to the creation of monsters who will kill us all for “tampering in God’s domain.”

      Maybe fiction isn’t the best way to determine policy…

    • ShittyRedditWasBetter@lemmy.world · ↑3 ↓7 · edited · 1 year ago

      I for one base ALL my global policy on sci-fi novels 🤦‍♂️

      Since the writers are on strike, we can just have them write the entire legal code, so the writers of Black Mirror are actually taken seriously beyond nerds for once.