• Rentlar@lemmy.ca · 8 months ago

    “Replacing Talent” is not what AI is meant for, yet it seems to be every penny-pinching, bean-counting studio’s long-term goal for it.

      • 9488fcea02a9@sh.itjust.works · 8 months ago

        I’m not a developer, but I use AI tools at work (mostly LLMs).

        You need to treat AI like a junior intern… You give it a task, but you still need to check the output and use critical thinking. You can’t just take some work from an intern, blindly incorporate it into your presentation, and then blame the intern if the work is shoddy…

        AI should be a time saver for certain tasks. It cannot (currently) replace a good worker.

        • Lmaydev@programming.dev · 8 months ago

          As a developer I use it mainly for learning.

          What used to be a Google search followed by skimming a few articles or docs pages is now a single question.

          It pulls the specific info I need, cites its sources, and allows follow-up questions.

          I’ve noticed the new juniors can get up to speed on new tech very quickly nowadays.

          As for code I don’t trust it beyond snippets I can use as a base.

          • FiniteBanjo@lemmy.today · 8 months ago

            JFC they’ve certainly got the unethical shills out in full force today. Language Models do not and will never amount to proper human work. It’s almost always a net negative everywhere it is used, final products considered.

              • FiniteBanjo@lemmy.today · 8 months ago

                Its intended use is to replace human work in exchange for lower accuracy. There is no ethical use case.

                • Lmaydev@programming.dev · 8 months ago

                  It’s intended to showcase its ability to generate text. How people use it is up to them.

                  As I said, it’s great for learning, as it’s very accurate when summarising articles and docs. It even cites its sources so you can read up further if needed.

        • Rickety Thudds@lemmy.ca · 8 months ago

          It’s clutch for boring emails and for summarising several tedious documents. Sometimes I get a day’s work done in 4 hours.

          Automation can be great, when it comes from the bottom-up.

          • isles@lemmy.world · 8 months ago

            Honestly, that’s been my favorite - bringing in automation tech to help me in low-tech industries (almost all corporate-type office jobs). When I started my current role, I was working consistently 50 hours a week. I slowly automated almost all the processes and now usually work about 2-3 hours a day with the same outputs. The trick is to not increase outputs or that becomes the new baseline expectation.

        • fidodo@lemmy.world · 8 months ago

          I am a developer and that’s exactly how I see it too. I think AI will be able to write PRs for simple stories, but it will need a human to review them and give approval or feedback for it to fix issues, or to manually intervene and tweak the output.

      • Rentlar@lemmy.ca · 8 months ago

        I do think that, given time, AI can improve to the level where it can do nearly all of the same things junior-level people in many different sectors can.

        The problem I foresee, and an unfortunate thing for companies, is that AI can’t turn juniors into seniors if it “replaces” the juniors. That means a company will run out of seniors as they retire, or will have to pay piles and piles of cash to hire the few non-AI people left with industry knowledge to babysit the AIs.

      • assassinatedbyCIA@lemmy.world · 8 months ago

        The problem is that the crazy valuations of AI companies are based on it replacing talent, and soon. Supplementing talent is far less exciting and far less profitable.

      • Altima NEO@lemmy.zip · 8 months ago

        Not even that; it’s a tool, the same way Photoshop or 3ds Max are tools. You still need the talent to use the tools.

        • time_fo_that@lemmy.world · 8 months ago

          I saw this the other day and I’m like, well, fuck, might as well go to trade school before it gets saturated, like what happened with tech in the last couple of years.

          • Defaced@lemmy.world · 8 months ago

            Yeah, the sad thing about Devin AI is that they’re clearly doing it for the money. They have absolutely no intention of bettering humanity; they just want to build this up and sell it off for that fat entrepreneur paycheck. If they really cared about bettering humanity they would open it up to everyone, but they’re only accepting inquiries from businesses.

      • Thorny_Insight@lemm.ee · 8 months ago

        Current AI*

        I don’t see any reason to expect this to be the case indefinitely. It has been getting better all the time, and lately at quite a rapid pace. In my view it’s just a matter of time until it surpasses human capabilities; it can already do so in specific narrow fields. Once we reach AGI, all bets are off.

        • thundermoose@lemmy.world · 8 months ago

          Maybe this comment will age poorly, but I think AGI is a long way off. LLMs are a dead-end, IMO. They are easy to improve with the tech we have today and they can be very useful, so there’s a ton of hype around them. They’re also easy to build tools around, so everyone in tech is trying to get their piece of AI now.

          However, LLMs are chat interfaces to searching a large dataset, and that’s about it. Even the image generators are doing this, the dataset just happens to be visual. All of the results you get from a prompt are just queries into that data, even when you get a result that makes it seem intelligent. The model is finding a best-fit response based on billions of parameters, like a hyperdimensional regression analysis. In other words, it’s pattern-matching.

          A lot of people will say that’s intelligence, but it’s different; the LLM isn’t capable of understanding anything new, it can only generate a response from something in its training set. More parameters, better training, and larger context windows just refine the search results; they don’t make the LLM smarter.
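
          To make the pattern-matching point concrete, here’s a toy sketch in Python (purely illustrative; a real transformer is vastly more sophisticated, but the “best fit from the training data” flavour is the same):

              import random
              from collections import defaultdict

              # "Training set": the model can only ever reproduce pairs seen here
              corpus = "the cat sat on the mat and the dog sat on the rug".split()

              model = defaultdict(list)
              for prev, nxt in zip(corpus, corpus[1:]):
                  model[prev].append(nxt)  # record every observed continuation

              def generate(start, length=6):
                  out = [start]
                  for _ in range(length):
                      options = model.get(out[-1])
                      if not options:  # nothing in the training data matches
                          break
                      out.append(random.choice(options))  # best fit = sample what was seen
                  return " ".join(out)

              print(generate("the"))  # only ever recombines word pairs it has already seen

          More parameters and more data give you much richer patterns to match against, but the mechanism is the same.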

          AGI needs something new, we aren’t going to get there with any of the approaches used today. RemindMe! 5 years to see if this aged like wine or milk.

          • KeenFlame@feddit.nu · 8 months ago

            How does this amazing prediction-engine discovery, which works a bit like our brain does, not fit into a larger solution?

            The emergent world simulation found in the larger models definitely points to this being a cornerstone, as it provides functional value in both image and text recall.

            Never mind that tools like MemGPT don’t yet properly satisfy long-term memory, and context windows don’t properly satisfy attention functions; I’d need a much harder sell on LLM technology not proving an important piece of AGI.

          • Thorny_Insight@lemm.ee · 8 months ago

            Yeah, LLMs might very well be a dead end when it comes to AGI, but just like ChatGPT seemingly came out of nowhere and took the world by surprise, the same might just as well happen with an actual AGI. My comment doesn’t really make any claims about the timescale; it just tries to point out the inevitability of it.

    • gravitas_deficiency@sh.itjust.works · 8 months ago
      sed "s/studio’s/tech industry c-suite’s/"

      As an engineer, I find the number of non-engineering idiots in tech corporate leadership trying to apply inappropriate technical solutions to things, just because they became buzzwords, absurdly high.

        • FiniteBanjo@lemmy.today · 8 months ago

          Not really, no. All of the current models built out to their intended scale are sold as products, especially by OpenAI, Microsoft, and Google. It was built with a purpose, and that purpose was to replace expensive human assets.

      • SpaceCowboy@lemmy.ca · 8 months ago

        Yeah, there’s many times I type “class for:” followed by a dump of SQL, JSON, XML or whatever, and it’ll make a class with properties named correctly and with the right types. I still have to figure out tricky data relationships and that sort of thing, but the boring task of creating interfaces to databases and objects for serializing stuff goes a lot faster now.
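
        For example (the JSON and the generated class here are hypothetical, just to show the shape of the workflow; I’m sketching the output in Python):

            # Prompt: class for:
            # {"user_id": 17, "email": "a@b.example",
            #  "created_at": "2024-03-01T12:00:00Z", "is_active": true}
            # A typical generated result:
            from dataclasses import dataclass
            from datetime import datetime

            @dataclass
            class User:
                user_id: int
                email: str
                created_at: datetime  # datetime rather than str; parsing is left to the loader
                is_active: bool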

        So a much larger percentage of my time is devoted to solving problems rather than doing all the boring grunt work usually involved with getting data in and out of the app.

        • pythonoob@programming.dev · 8 months ago

          God, I must be an idiot. I was trying to design my own DB for simple flash cards with multiple tags, where tags can be stacked with each other to auto-build flash card decks. Even with ChatGPT helping me build out the interfaces and some functions, I was not getting the functionality I wanted at all.
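
          For what it’s worth, the usual shape for this is a pair of many-to-many tables, with “stacked” tags meaning cards that carry all of the chosen tags. A minimal sketch in Python/sqlite3 (table and column names are my own guesses at what you’re after, not ChatGPT output):

              import sqlite3

              conn = sqlite3.connect("flashcards.db")
              conn.executescript("""
                  CREATE TABLE IF NOT EXISTS cards (
                      id INTEGER PRIMARY KEY,
                      front TEXT NOT NULL,
                      back TEXT NOT NULL
                  );
                  CREATE TABLE IF NOT EXISTS tags (
                      id INTEGER PRIMARY KEY,
                      name TEXT UNIQUE NOT NULL
                  );
                  -- many-to-many: a card can carry any number of tags
                  CREATE TABLE IF NOT EXISTS card_tags (
                      card_id INTEGER REFERENCES cards(id),
                      tag_id  INTEGER REFERENCES tags(id),
                      PRIMARY KEY (card_id, tag_id)
                  );
              """)

              def build_deck(tag_names):
                  # "Stacked" tags: return only cards that have ALL of the given tags
                  placeholders = ",".join("?" * len(tag_names))
                  return conn.execute(f"""
                      SELECT c.id, c.front, c.back
                      FROM cards c
                      JOIN card_tags ct ON ct.card_id = c.id
                      JOIN tags t ON t.id = ct.tag_id
                      WHERE t.name IN ({placeholders})
                      GROUP BY c.id
                      HAVING COUNT(DISTINCT t.id) = ?
                  """, (*tag_names, len(tag_names))).fetchall()

              # build_deck(["spanish", "verbs"]) -> cards tagged with BOTH tags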

      • Pilferjinx@lemmy.world · 8 months ago

        Yeah, current-gen AI is still very much a human tool - an assistant - maybe a companion if you stretch it to its edge. I for one welcome a personal AI buddy.

    • guacupado@lemmy.world · 8 months ago

      Too many people see AI doing work as an either-or thing. AI won’t replace people outright; it’ll just reduce the number of people you need.

      • Trollception@lemmy.world · 8 months ago

        Which in turn replaces people. What happens when a person is 50 percent more productive with AI? Is the company going to let them simply carry a lighter workload than before, or will it lay off the now-unneeded employees?

  • IsThisAnAI@lemmy.world · 8 months ago

    Folks really didn’t understand how AI will work. It’s not going to be some big “we’re dropping 1,000 people” moment.

    It’s going to reduce demand over time.

    • dariusj18@lemmy.world · 8 months ago

      I’ve heard it as “No one is losing their job to AI, but they will lose their jobs to someone who is using AI.”

      • smackjack@lemmy.world · 8 months ago

        Think of AI like computers and spreadsheet software in the early 80s. I bet a lot of accountants were pretty freaked out about what this new technology was going to mean for their jobs.

        Did technology replace those accountants? No, but companies probably didn’t need as many accountants as they did before. AI will likely reduce the number of programmers that a company needs, but it won’t eliminate them.

        • dariusj18@lemmy.world · 8 months ago

          Really, I think it’s kind of the opposite. There are plenty of jobs awaiting higher-skilled labor. Just as Excel didn’t hurt accounting, it gave many people who weren’t trained in accounting the ability to take on more tasks than they otherwise would have.

      • Semi-Hemi-Lemmygod@lemmy.world · 8 months ago

        Case in point: I’m using ChatGPT to help me write cover letters. I make sure to proofread them, since it sometimes hallucinates expertise I don’t have, but it makes the process a lot faster.

        • ObsidianZed@lemmy.world · 8 months ago

          I mean that’s already happening at some big companies now.

          Will it last? My guess is no, but they’ll enjoy saving the money that they would pay human beings in the meantime.

          My hope is just that they’ll suffer losses due to a drop in product quality and start struggling, but let’s face it, the big tech companies are almost never the ones that are actually hurt by their decisions.

    • kameecoding@lemmy.world · 8 months ago

      And in that regard it’s no different from any other productivity tool or automation. I have seen software purchases that immediately eliminated 80-odd jobs.

    • Pyr_Pressure@lemmy.ca · 8 months ago

      It will start with going from 5 writers to 3, or going from 10 animators to 6.

      Then, 10 years from now, as it gets more advanced, we will be down to maybe 1 writer and 2 animators.

      • QuaternionsRock@lemmy.world · 8 months ago

        going from 10 animators to 6

        It’s still crazy to me that like half of Across the Spider-Verse was AI generated

    • kromem@lemmy.world · 8 months ago

      It’s going to reduce demand over time.

      At least in video games it’s probably going to be more that scope increases while headcount stays the same.

      If most of your budget is labor, and the cost of the good is fixed, with the number of units sold staying around the same, there’s already an equilibrium.

      So companies can either (a) reduce headcount to spend a few years making a game comparable to games today when it releases, or (b) keep the same headcount and release a game that reviews well and is what the market will expect in a few years.

      For example, you don’t want to cut writers or voice actors and keep shipping a game with a handful of main NPCs and a bunch of filler NPCs. Instead, you keep the same number of writers and actors and extend their efforts: entire cities where every NPC has branching, voiced dialogue generated from the writing and performances of that core team.

      But you still need massive amounts of human generated content to align the generative AI to the world lore, character tone, style of writing, etc.

      Pipelines will change, scope will increase, but the headcount used for a AAA title will largely stay the same and may even grow slightly.

    • deur@feddit.nl · 8 months ago

      Folks really don’t understand how AI will work. It’s not going to be some big “we’re dropping 1,000 people” moment.

  • Kissaki@feddit.de · 8 months ago

    The article doesn’t say much, so I checked the source for more information. It doesn’t say much more, but IMO it says it in a much better way. From the intro of the sourced financial report, two concise paragraphs:

    An example R&D initiative, sponsored by the Innovation team was Project Ava, where a team, initially from Electric Square Malta, attempted to create a 2D game solely using Gen AI. Over the six-month process, the team shared their findings across the Group, highlighting where Gen AI has the potential to augment the game development process, and where it lags behind. Whilst the project team started small, it identified over 400 tools, evaluating and utilising those with the best potential. Despite this, we ultimately utilised bench resource from seven different game development studios as part of the project, as the tooling was unable to replace talent.

    One of the key learnings was that whilst Gen AI may simplify or accelerate certain processes, the best results and quality needed can only be achieved by experts in their field utilising Gen AI as a new, powerful tool in their creative process. As a research project, the game will not be released to the public, but has been an excellent initiative to rapidly spread tangible learnings across the Group, provide insights to clients and it demonstrates the power and level of cross-studio collaboration that currently exists. Alongside Project Ava, the team is undertaking a range of Gen AI R&D projects, including around 3D assets, to ensure that we are able to provide current insights in an ever-evolving part of the market.


    The central quote and conclusion being:

    One of the key learnings was that whilst Gen AI may simplify or accelerate certain processes, the best results and quality needed can only be achieved by experts in their field utilising Gen AI as a new, powerful tool in their creative process.

    Which is obvious and expected for anyone familiar with the technology. Of course, experiments and confirming expectations have value too. And I’m certain actually using the tools and finding out which ones they can use where is very useful to them specifically.

    • 0xD@infosec.pub · 8 months ago

      The overall point may be relatively obvious, but the details are not.

      Which steps of which processes is it good at, and which not? What can be easily integrated into existing tooling? Where is it best skipped completely?

    • FiniteBanjo@lemmy.today · 8 months ago

      Honestly, it sounds extremely generous to say the best results can be achieved by experts with GenAI. In my opinion the best results can be achieved without it entirely.

    • FaceDeer@fedia.io · 8 months ago

      Unless you specify that you want a talented output. A lot of people don’t realize that you need to tell AIs what kind of output you want them to give you, if you don’t then they’ll default to something average. That’s the cause of a lot of disappointment with tools like ChatGPT.

      • Spuddlesv2@lemmy.ca · 8 months ago

        Ahhh so the secret to using ChatGPT successfully is to tell it to give you good output?

        Like “make sure the code actually works” and “don’t repeat yourself like a fucking idiot” and “don’t hallucinate false information”!

        • Natanael@slrpnk.net · 8 months ago

          Unironically yes, sometimes. A lot of the best works its training samples are based on cite the original poster’s qualifications, and this filters into the model: asking for the right qualifications directly can influence it to rely more on high-quality input samples when generating its response.

          But it’s still not perfect, obviously. It doesn’t make it stop hallucinating.

          • FaceDeer@fedia.io · 8 months ago

            Yeah, you still need to give an AI’s output an editing and review pass, especially if factual accuracy is important. But though some may mock the term “prompt engineering” there really are a bunch of tactics you can use when talking to an AI to get it to do a much better job. The most amusing one I’ve come across is that some AIs will produce better results if you offer to tip them $100 for a good output, even though there’s no way to physically fulfill such a promise. The theory is that the AI’s training data tended to have better stuff associated with situations where people paid for it, so when you tell the AI you’re willing to pay it’ll effectively go “ah, the user is expecting good quality.”

            You shouldn’t have to worry about the really quirky stuff like that unless you’re an AI power-user, but a simple request for high-quality output can go a long way. Assuming you want high quality output. You could also ask an AI for a “cheesy low-quality high-school essay riddled with malapropisms” on a subject, for example, and that would be a different sort of deviation from “average.”
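
            As a made-up illustration of the difference:

                Default-ish: “Write a product description for a water bottle.”
                Steered: “You are a senior copywriter. Write a punchy, high-quality product
                description for a water bottle, in two sentences, with no clichés.”

            The second prompt isn’t magic; it just tells the model which region of its training distribution to imitate.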

        • KeenFlame@feddit.nu · 8 months ago

          Absolutely, it’s one of the first curious things you discover when using them - think Stable Diffusion’s “masterpiece” tag, or the famous system prompt leaks from proprietary LLMs.

          It makes sense once you see how they work, but in proprietary use it’s mostly handled for you.

          Finding the right words, and the right amount of them, is a hilarious exercise that provides pretty good insight into the attention mechanics.

          Consider “let’s work step by step”:

          This proved a revolutionary way to steer the models, as they then structure the output better; a fair bit of research has since gone into why it’s so effective at making the model proof-check itself.

          Predictions are obviously closely related to the action part of our brains as well, so it makes sense that it would help when you think about it.

        • kromem@lemmy.world · 8 months ago

          Literally yes.

          For example, about a year ago one of the multi-step prompting papers that improved results a bit had the model first guess which expert would be best equipped to answer the question, then answer the question as that expert in a second pass, and it did a better job than when answering directly.

          The pretraining is a regression toward the mean, so you need to bias it back toward excellence with either fine-tuning or in-context learning.
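
          The two-pass pattern looks roughly like this (my paraphrase of the idea, not the paper’s exact wording):

              Pass 1: “Which expert would be best equipped to answer the following
              question? <question>”
              Pass 2: “You are <expert from pass 1>. Answer the following question:
              <question>”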

        • kromem@lemmy.world · 8 months ago

          Literally yes. You’ll see that OpenAI’s system prompts say ‘please’ and Anthropic’s mentions that helping users makes the AI happy.

          Which makes complete sense if you understand what’s going on with how the models actually work and not the common “Markov chain” garbage armchair experts spout off (the self-attention mechanism violates the Markov property characterizing Markov chains in the first place, so if you see people refer to transformers as Markov chains, either they don’t know what they’re talking about or they think you need an oversimplified explanation).

      • kromem@lemmy.world · 8 months ago

        I always love watching you comment something that’s literally true regarding LLMs but against the groupthink and get downvoted to hell.

        Clearly people aren’t aware that the pretraining pass is necessarily a regression to the mean, and that it requires biasing toward excellence in outputs using either prompt context or a fine-tuning pass.

        There’s a bit of irony to humans shitting on ChatGPT for spouting nonsense when so many people online happily spout BS that they think they know but don’t actually know.

        Of course a language model trained on the Internet ends up being confidently incorrect. It’s just a mirror of the human tendencies.

        • FaceDeer@fedia.io · 8 months ago

          Yeah, these AIs are literally trying to give us what they “think” we expect them to respond with.

          Which does make me a little worried given how frequently our fictional AIs end up in “kill all humans!” mode. :)

          • kromem@lemmy.world · 8 months ago

            Which does make me a little worried given how frequently our fictional AIs end up in “kill all humans!” mode. :)

            This is completely understandable given the majority of discussion of AI in the training data. But it’s inversely correlated to the strength of the ‘persona’ for the models given the propensity for the competing correlation of “I’m not the bad guy” present in the training data. So the stronger the ‘I’ the less ‘Skynet.’

            Also, the industry is currently trying to do it all at once. If I sat most humans in front of a red button labeled ‘Nuke’ every one would have the thought of “maybe I should push that button” but then their prefrontal cortex would kick in and inhibit the intrusive thought.

            We’ll likely see layered specialized models performing much better over the next year or two than a single all in one attempt at alignment.

  • yarr@feddit.nl · 8 months ago

    This is a quote that should end in “yet”. I am very confident in saying there will be an AAA game released that is designed and implemented 95%+ by a machine. I am less confident in providing a timeline. If you consider that the history of machine learning is ~70 years old (in one sense; one can argue other dates) and you plot the advances from tic-tac-toe to what machines can do today (chess being a prime example), it doesn’t take much vision to see that it’s only a matter of time before this is a real thing.

    • Trollception@lemmy.world · 8 months ago

      Sure, it may produce a game, but much of what makes a game good is making it fun and memorable. If we can eventually create a general AI, then absolutely, I think such a thing is possible. Otherwise it will be a copy-paste mishmash, and a cohesive, fluent design is a huge if.

  • coffinwood@feddit.de · 8 months ago

    Add “, yet” to the headline and come back in a year or two.

    Currently AI may fail to produce a video game, but the same was true for images, videos, and text only a few years ago.

    Failure is a good thing, because it’s preceded by an attempt.

    • JackGreenEarth@lemm.ee · 8 months ago

      Yeah. Just because it can’t do it now doesn’t mean it won’t ever. Also, refer to my other comment for why this is a bad study: they didn’t even provide any details on the game itself, let alone release the game. But anyone can do a similar study for themselves at home, since AI is free to use!

  • JackGreenEarth@lemm.ee · 8 months ago

    The game will not be released to the public as it was just a research project, and Keywords didn’t provide any additional information about what type of 2D game it created.

    So we just have to trust them on this? Yeah, no.

  • Damage@feddit.it · 8 months ago

    “House made entirely of cement is a failure because you still need doors and windows and stuff.”

    • erwan@lemmy.ml · 8 months ago

      Just like self driving! In 2010 it was almost there, just needed a few more years…

      • KeenFlame@feddit.nu · 8 months ago

        I really don’t think there are more examples of optimistic predictions than there are pessimistic ones.

        The discoveries made in recent years definitely point to an emergent, incredibly useful set of tools, and it would be remiss to pretend they won’t eventually replace junior developers in different disciplines. It’s just that without juniors there will never be any seniors. And someone needs to babysit those juniors. So what we get is not something that can replace an entire workforce for a long, long while, even if top brass would love that.

      • realharo@lemm.ee · 8 months ago

        Yes actually (except more than a few years).

        Waymo is already operating a robotaxi service in 3 cities, now they just need to expand and find a way to make it not lose money.

      • Thorny_Insight@lemm.ee · 8 months ago

        Go watch videos of how well FSD V12 performs and you’re in for a surprise. Full self-driving sucks until it doesn’t. AIDRIVR puts up good content if you want recommendations.

  • systemglitch@lemmy.world · 8 months ago

    I look forward to the day it can make a fully functioning game. The best games will mostly be AI-created eventually.

        • stratoscaster@lemmy.world · 8 months ago

          The reason your favorite games are your favorites is that they aren’t soulless cash grabs. They’re made by people with imagination, passion, and ingenuity. AI simply can’t create something brand new from existing parts; it can only give them a fresh coat of paint.

          Furthermore, AI will always work like this, because that’s how the models are trained. I don’t think we’ll have a model that learns to create on its own within any of our lifetimes, if ever.

        • Andy@slrpnk.net · 8 months ago

          I don’t doubt that AI tools can be used to make great games, but I think part of the reason so many people disagree with you is because:

          1. You claim “The best games will mostly be AI created eventually”, and I think most people question on what basis you think AI will produce overall better quality. If you said that it’s faster, or that it can allow indie studios to compete with AAA, that makes sense. Attributing quality to it – at this stage – seems odd.
          2. It’s unlikely, imo, that the best games will be created by AI as opposed to with AI.

          I think using AI throughout the process so that one person can achieve the productivity of a whole team is a credible vision. But to say that games will be created “by AI” implies that a generative AI engine will generate the code de novo for a complete game. Which I think is already possible, but it will be very, very hard for such a system to innovate newer games, because these tools currently rely on replicating features from their training data, so their ability to create quests that match a new genre, or to generate dialogue that is funny in the context of the story, is going to be very impaired.

          By and large, I think current evidence shows that Human-AI cooperation almost always improves upon AI performance alone, and this is particularly the case when creating things for humans to enjoy.

        • kromem@lemmy.world · 8 months ago

          When it comes to AI there’s a lot of people that are confidently incorrect, particularly on Lemmy.

          But as for your original thesis - I’d counter that it’s hybrid development and efforts that will be the biggest hits and most enjoyable to play.

          At least until we have good enough classifiers for what gameplay is fun, what writing is engaging, what art direction is interesting and appealing, etc.

          That said - it would be a very good time to be in the games telemetry business, as they’re sitting on gold whether they are aware of it or not.