AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather::The real risk of AI isn’t that it’ll kill you. It’s that a small group of billionaires will control the tech forever.

  • clearleaf@lemmy.world · 1 year ago

    At first the fear-mongering was about how AI is so good that you’ll be able to replace your entire workforce with it for a fraction of the cost, which would be sooo horrible. Pwease investors pwease oh pwease stop investing in my company uwu

    Now they’re straight up saying that the people who invest the most in AI will dominate the world. If tech companies were really all that scared of AI, they would be calling for more regulation, yet none of these people ever seem to be interested in that at all.

    • Sharklaser@sh.itjust.works · 1 year ago

      I think you’ve spotted the grift here. AI investment has faltered quickly, so this is a final pump before the dump: get the suckers thinking it’s a no-brainer and dump the shitty stock. Business Insider caring for humanity, lol

      • Pohl@lemmy.world · 1 year ago

        Either ML is going to scale in an unpredictable way, or it is a complete dead end when it comes to artificial intelligence. The “godfathers” of AI know it’s a dead end.

        Probabilistic computing based on statistical models has value and will be useful. Pretending it is a world-changing AI tech was a grift from day 1. The fact that art, which cannot be evaluated objectively, was the first place it appeared commercially should have been the clue.

        • frezik@midwest.social · 1 year ago

          ML isn’t a dead end. I mean, if your target is strong AI at human-like intelligence, then maybe, maybe not. If your goal is useful tools for getting shit done, then ML is already a success. Almost every push for AI in the last 60 years has borne fruit, even if it didn’t meet its final end goal.

          • Pohl@lemmy.world · 1 year ago

            That’s pretty much what I meant. ML has a lot of value; promising that it will deliver artificial intelligence is probably hogwash.

            Useful tools? Yes. AI? No. But never let the truth get in the way of an investor bonanza.

        • Richard@lemmy.world · 1 year ago

          Probabilistic computing based on statistical models has value and will be useful. Pretending it is a world-changing AI tech was a grift from day 1.

          That is literally modelling how your and all our brains work, so no, neuromorphic computing / approximate computing is still the way to go. It’s just that neuromorphic computing does not necessarily equal LLMs. Paired with powerful mixed analogue and digital signal chips based on photonics, we will hopefully at some point be able to make neural networks that can scale the simulation of neurons and synapses to a level that is on par with or even superior to the human brain.
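
          For what it’s worth, the “simulation of neurons and synapses” being described is often illustrated with a spiking model such as leaky integrate-and-fire. A minimal Python sketch, with arbitrary illustrative constants rather than anything biologically tuned:

          ```python
          # Minimal leaky integrate-and-fire neuron: the membrane potential leaks toward
          # a resting value, integrates incoming current, and emits a spike when it
          # crosses a threshold. All constants here are arbitrary, for illustration only.
          def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                           v_reset=-70.0, v_threshold=-50.0):
              v = v_rest
              spike_times = []
              for t, i_in in enumerate(input_current):
                  dv = (-(v - v_rest) + i_in) / tau   # leak plus injected current
                  v += dv * dt
                  if v >= v_threshold:                # threshold crossed: spike, then reset
                      spike_times.append(t)
                      v = v_reset
              return spike_times

          # A constant drive strong enough to make the neuron fire periodically.
          print(simulate_lif([20.0] * 200))
          ```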

          • Pohl@lemmy.world · 1 year ago

            A claim that we have a computing model that shares a design with the operation of a biological brain is philosophical and conjecture.

            If we had a theory of mind that was complete, it would simply be a matter of counting up the number of transistors required to approximate varying degrees of intelligence. We do not. We have no idea how the computational meat we all possess translates sensory input into a continuous sense of self.

            It is totally valid to believe that ML computing is a match to the biological model and that it will cross a barrier at some point. But it is a belief that is not supported by empirical evidence. At least not yet.

            • Restaldt@lemm.ee · 1 year ago

              A claim that we have a computing model that shares a design with the operation of a biological brain is philosophical and conjecture

              Mathematical, actually. See the 1943 McCulloch and Pitts paper for why neural networks are called such.

              We use logic and math to approximate neurons.
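
              As a concrete illustration of that approximation (a sketch, not code from the paper itself): a McCulloch–Pitts-style unit is just a weighted sum of binary inputs compared against a threshold, and hand-picked weights turn it into different logic gates.

              ```python
              # McCulloch–Pitts-style neuron: fires (1) when the weighted sum of its
              # binary inputs reaches the threshold, otherwise stays silent (0).
              def mp_neuron(inputs, weights, threshold):
                  return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

              # Hand-picked weights/thresholds turn the same unit into different logic gates.
              AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
              OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
              NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

              for a in (0, 1):
                  for b in (0, 1):
                      print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
              ```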

              • SmoothIsFast@citizensgaming.com · 1 year ago

                We have also recently trained a model against a small fly or worm (I can’t remember which it was), and it behaved identically to the original organism. It’s the complicated networks, which essentially contain multiple sub-networks, that are our current weak spot.

            • SmoothIsFast@citizensgaming.com · 1 year ago

              We have literally recreated small organism brains with neural networks that behaved nearly identically, using ML and neural nets. Idk, but I would call that pretty damn good empirical evidence. We do not know the specific mechanism by which a brain chemically generates its weights, so to speak, and computes, but we understand that at its simplest form it is a neuron with a weight, and depending on that weight/sensitivity (whatever you want to call it) it produces an output pretty damn consistently.

              The brain is multiple networks working simultaneously with the ability to self-learn. That architecture is what is missing in our ML models now if you want general artificial intelligence, and we are missing foundational algorithms for choosing weights, instead of randomly assigning them and hoping for the best, to facilitate memory and cleaner network integration. You need specialized networks for each critical function (motor control, emotional regulation, etc.), then you need a system that can interpret or create weights in a way that lets you imprint an “image”, for lack of a better term, to create memories. Consciousness would then just be the network that interprets each network’s output and decides which systems need to be engaged next, or whether an end state was reached. Which imo is clearly demonstrated by split-brain individuals.
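
              To make the “randomly assigning weights and hoping for the best” point concrete, here is a minimal sketch (assuming Python and NumPy; layer sizes and the input are made up) of a tiny feed-forward network whose weights start out as random noise, which training would then have to shape into something useful:

              ```python
              import numpy as np

              rng = np.random.default_rng(0)

              # One layer: each artificial "neuron" is a weighted sum of its inputs passed
              # through a nonlinearity, a crude stand-in for the weight/sensitivity idea above.
              def layer(x, weights, bias):
                  return np.tanh(weights @ x + bias)

              # Weights are initialized at random; training (not shown) is what turns
              # this noise into consistent, useful outputs.
              w1 = rng.normal(scale=0.5, size=(8, 4))   # 4 inputs -> 8 hidden units
              b1 = np.zeros(8)
              w2 = rng.normal(scale=0.5, size=(2, 8))   # 8 hidden units -> 2 outputs
              b2 = np.zeros(2)

              x = np.array([0.2, -1.0, 0.5, 0.0])       # made-up input vector
              print(layer(layer(x, w1, b1), w2, b2))
              ```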

              If we had a theory of mind that was complete, it would simply be a matter of counting up the number of transistors required to approximate varying degrees of intelligence.

              I think this comes down to our fundamental inability to interpret how our brain calculates and uses chemical weights, so to speak, to vary its output. If we can’t judge that efficiency, we can’t just count all the transistors and say “it’s this smart”, because the model could literally be trained to just output the letter s for everything even if it’s the size of ChatGPT. I think we very well could state the capacity and limits of our brains by counting the number of neurons, but whether a brain reaches its potential depends on how efficiently it was trained, and that is where approximating intelligence becomes insanely difficult.

          • Sharklaser@sh.itjust.works · 1 year ago

            Neural networks have been phenomenal in the results they have achieved, outdoing support vector machines, random trees, Markov models, etc. But I do wonder if there is a bias towards assuming they can mimic what the brain does, like the other post said, and where the limits are.

            For example, in medicine we want to spot unknown correlations to improve things like drug discovery, stratified medicine, and strange patterns in disease within a population that suggest unknown factors at play. There might be a mathematical model better than convolutional neural networks that doesn’t mimic the brain, but maybe we need an AI to develop that, like Deep Thought in HGTTG!
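
            For reference, a convolutional network of the kind mentioned above just slides learned filters over the data to pick up local patterns. A minimal sketch, assuming PyTorch and entirely made-up layer sizes and input shapes:

            ```python
            import torch
            from torch import nn

            # Tiny 1D convolutional classifier: learned filters slide over a signal
            # (e.g. a lab-test time series) to detect local patterns; a final linear
            # layer maps them to two made-up classes. Sizes are arbitrary.
            model = nn.Sequential(
                nn.Conv1d(in_channels=1, out_channels=8, kernel_size=5),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),   # collapse the time axis to one value per filter
                nn.Flatten(),
                nn.Linear(8, 2),
            )

            x = torch.randn(4, 1, 100)     # batch of 4 synthetic signals, 100 time steps each
            print(model(x).shape)          # -> torch.Size([4, 2])
            ```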

    • Peanut@sopuli.xyz · 1 year ago

      You’re conflating the polarized opinions of very different people and groups.

      That being said, your antagonism towards investors and wealthy companies is very sound as a foundation.

      Hinton only voiced his excessive worry after he left his job. There is no reason to suspect his motives.

      LeCun is on the opposite side and believes the danger is in companies hoarding the technology. He is why the open community has gained so much traction.

      OpenAI is simultaneously being criticized for putting AI out for public use, as well as for not being open enough about the architecture or allowing the public to actually have control over the state of AI development. That being said, they are leaning towards more authoritarian control from united governments and groups.

      I’m mostly geared towards Yann LeCun’s side and being more open despite the risks, because there is more risk and harm from hindering the development of, or privatizing the growth of, AI technology.

      The reality is that every single direction they try is heavily criticized, because the general public has jumped onto a weird AI hate train.

      See artists still complaining about Adobe’s AI regardless of the training data, and hating on the open-model community despite it giving power to the people who don’t want to join the Adobe rent system.