• Uriel238 [all pronouns]@lemmy.blahaj.zone
    1 year ago

    When a service is willing to take responsibility for collisions and driving violations, then we know it works. If the guy asleep at the wheel (which he allegedly can do in an autonomous car) is still the one held responsible, then we're not there yet.

    That said, end-to-end AI totally sounds like equivocal marketing buzz.

    • danhab99@programming.dev
      1 year ago

      When a service is willing to take responsibility for collisions and driving violations

      Devil’s advocate: it’s kinda hard to pin the responsibility on Tesla when at the end of the day there was a person driving and the driver’s always responsible.

      I’m not disagreeing with you; I’m on team ban-human-drivers.

      • Uriel238 [all pronouns]@lemmy.blahaj.zone
        1 year ago

        Ideally, we’d get to the point where the driver merely directs the vehicle to where they want to go, and the computer system works out all the pathfinding and maneuvering, so that yes, any instance where a vehicle avoidably collides with another thing can be regarded as a malfunction.

    • Corkyskog@sh.itjust.works
      1 year ago

      I wonder what happens when the car is on a collision course with a golden retriever and the only way not to hit it would be to damage the car. Or the same scenario, but the only way not to hit it is to hit an ’07 Corolla parked on the side of the road. Not saying humans have superior judgement… just wondering whether it will be programmed according to actuarial or philosophical theory.

      • ramblinguy@sh.itjust.works
        1 year ago

        That makes me think: will the AI see a kid that’s about to run out from behind a parked car? As a human, if I see a kid run from a house into a row of parked cars, I know he’s still there and will slow down before I get there. But would a self-driving car make that same leap of logic? I’m not sure what the range and capabilities of self-driving sensors are right now, but hopefully it would be smart enough to take preventative measures.
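        A rough sketch of what that “leap of logic” could look like in software, sometimes called object permanence: the planner keeps a short-lived memory of pedestrians that disappear behind an occluder instead of forgetting them the moment they leave the sensor’s view. Everything here (the class name, fields, and the time-to-live default) is a hypothetical illustration, not any vendor’s actual stack.

```python
# Hypothetical sketch of "object permanence" for a driving planner:
# remember pedestrians that vanish behind an occluder for a while,
# so the planner stays cautious near where they were last seen.
class OcclusionMemory:
    def __init__(self, ttl=5.0):
        self.ttl = ttl      # seconds to keep remembering an unseen pedestrian
        self.hidden = {}    # pedestrian id -> (last_position, seconds_since_seen)

    def lost(self, pid, last_pos):
        # Called when a tracked pedestrian disappears behind an occluder.
        self.hidden[pid] = (last_pos, 0.0)

    def update(self, visible_ids, dt):
        # Age existing memories and drop the stale ones.
        self.hidden = {pid: (pos, t + dt)
                       for pid, (pos, t) in self.hidden.items()
                       if t + dt <= self.ttl}
        # Anyone who reappears no longer needs to be remembered.
        for pid in visible_ids:
            self.hidden.pop(pid, None)

    def caution_zone(self):
        # Positions the planner should treat as possibly hiding a pedestrian.
        return [pos for pos, _ in self.hidden.values()]
```

        The planner would then slow down whenever its path passes near a position returned by `caution_zone()`, exactly the behaviour the human driver above describes.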

        • Uriel238 [all pronouns]@lemmy.blahaj.zone
          1 year ago

          Right now, car AI has trouble with both kids and non-white persons. That said, for the things it is good at detecting, the cars respond much more quickly. This came up when an official asked how it detects brake lights, and the project advisor (from Google, I think) explained that the car doesn’t worry about brake lights at all: it instantly detects when a car ahead of it rapidly decelerates, and responds immediately.
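          That deceleration-based detection can be sketched in a few lines. This is a hypothetical illustration only, assuming the car receives a stream of range measurements to the lead vehicle; the function name and the −4 m/s² threshold are made up.

```python
# Hypothetical sketch: flag hard braking of a lead vehicle from
# successive range measurements, without ever looking at brake lights.
def is_hard_braking(ranges, dt, threshold=-4.0):
    """ranges: distances (m) to the lead car at successive timesteps, dt apart.
    Returns True if the estimated relative deceleration is harder than
    threshold (m/s^2; more negative = closing speed growing faster)."""
    if len(ranges) < 3:
        return False  # need at least three samples to estimate acceleration
    # Relative speed between consecutive samples (finite difference).
    v = [(b - a) / dt for a, b in zip(ranges, ranges[1:])]
    # Relative acceleration between consecutive speed estimates.
    acc = [(b - a) / dt for a, b in zip(v, v[1:])]
    return min(acc) < threshold
```

          With LIDAR or radar updating many times per second, such a check fires well before a human would consciously register a brake light.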

          I’m pretty sure we can get cars smart enough and sharp enough to drive better than humans. But the recent incident in San Francisco where Cruise driverless taxis blocked an ambulance with a patient in critical condition (resulting in their death), suggests to me we underestimated the layers of logistics necessary to make cars truly autonomous.

          Randall Munroe listed a few more incidents we can expect (obligatory XKCD).

        • Corkyskog@sh.itjust.works
          1 year ago

          According to other commenters, the need will never arise because the AI cars will be programmed so well it’s impossible to have accidents 🙄… now I see why FSD will never become a reality.

        • ramjambamalam@lemmy.ca
          1 year ago

          Good question. Neural networks are modelled after how brains learn and process information, so it’s certainly theoretically possible for a neural network (or other machine learning algorithm) to make inferences like that, just like how you’ve learned them with years of experience.

          The biggest challenge in any machine learning is finding enough labelled training data. In fact, a friend of mine contributed to a paper in which (no joke) GTA V was used to generate labelled training data for an autonomous vehicle. Because it’s a game engine, every object in the game is already digitized, and the 3D modelling is at least accurate enough to be useful. This vehicle used LIDAR, so the actual shaders and such mattered less than the 3D point cloud.
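          The reason a game engine makes labelling nearly free can be sketched like this: the engine already knows which object every surface belongs to, so each simulated LIDAR return comes out pre-labelled. The scene, class names, and sampling below are invented for illustration and are not the paper’s actual pipeline.

```python
# Hypothetical sketch: synthetic, pre-labelled LIDAR-style training data.
# In a real engine the geometry comes from the game world; here we fake
# it with two axis-aligned regions standing in for object surfaces.
import random

SCENE = {  # made-up scene: class label -> ((x min, x max), (y min, y max))
    "car":        ((0.0, 10.0), (0.0, 2.0)),
    "pedestrian": ((12.0, 13.0), (0.0, 1.0)),
}

def simulated_lidar_point():
    label = random.choice(sorted(SCENE))
    (x0, x1), (y0, y1) = SCENE[label]
    # A "return" from this object's surface, labelled for free by the engine.
    return (random.uniform(x0, x1), random.uniform(y0, y1)), label

# A labelled training set with zero human annotation effort.
dataset = [simulated_lidar_point() for _ in range(1000)]
```

          A human annotator labelling real point clouds might manage a few scenes per hour; the simulated pipeline produces ground truth at whatever rate the engine renders.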

      • Uriel238 [all pronouns]@lemmy.blahaj.zone
        1 year ago

        I doubt it. Germany is already considering implementing regulations regarding the ethics of autonomous vehicles. As it is, cars simply try not to collide with anything, and since their reflexes and perception are far faster and more accurate than a human’s, they have a better chance of saving both the dog and the other car.

        That said, one of the problems we’re seeing with smart devices (that is, devices run by software rather than by simple mechanics) is that companies are keen to abuse the power that gives them, hence the whole John Deere tractor debacle and the development of right-to-repair laws. Also, some BMWs require renting some of their features (such as seat warmers), which seems to me less than ethical.

        So I hope we’ll get to a point where not only is it anyone’s right to jailbreak their devices (including a self-driving car) but there will be several FOSS options we can choose from. And that means someone who programmed them may actually find a process-layer in which hazard prioritization or victim prioritization is considered.

        It is certainly an entertaining idea for speculative fiction that an aggressive-driver package gets developed, becomes popular, and then causes a rise in traffic collisions. More likely would be software packages that let the vehicle operate despite self-test failures, again leading to a higher collision rate.

      • iminahurry@discuss.tchncs.de
        1 year ago

        There are many such hypothetical scenarios based on the trolley problem, but the real answer is that a good self driving system will never end up in that situation in the first place.

        So as a dev, you just program the car never to let that situation arise; then you don’t need to program a solution for it.
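        One concrete form of “don’t let the situation arise” is capping speed so that reaction distance plus braking distance always fits inside the clear sight distance, e.g. when passing a row of parked cars a child could step out from. A minimal sketch, with made-up numbers for deceleration and reaction time:

```python
# Hypothetical sketch: the fastest speed at which the car can still
# guarantee a stop within its unobstructed sight distance.
import math

def max_safe_speed(sight_distance, decel=6.0, reaction_time=0.2):
    """Largest speed v (m/s) satisfying
        v * reaction_time + v**2 / (2 * decel) <= sight_distance (m).
    Above this speed, something emerging at the edge of the sight
    distance could not be avoided by braking alone."""
    a = 1.0 / (2.0 * decel)   # braking-distance coefficient
    b = reaction_time         # reaction-distance coefficient
    c = -sight_distance
    # Positive root of a*v**2 + b*v + c = 0 (quadratic formula).
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
```

        Drive at or below `max_safe_speed` past every occlusion and the trolley-problem scenario never materializes, which is exactly the claim in the comment above.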