• DarkMessiah@lemmy.world · 11 months ago

    Honestly, the best use for AI in coding thus far is to point you in the right direction as to what to look up, not how to actually do it.

    • DJDarren@thelemmy.club · 11 months ago

      That’s how I use ChatGPT. Not for coding, but for help getting Excel to do things. I guess some of what I want to do is fairly esoteric, so just searching for help doesn’t really turn up anything useful. If I explain to GPT what I’m trying to do, it’ll give me avenues to explore.

        • gornius@lemmy.world · 11 months ago

          I have a spreadsheet of items with their price and quantity bought. I want to include a discount with multiple tiers, based on how many items have been bought, and have a small table where I can define quantity and the discount that applies to that quantity. Which Excel functions should I use?

          Response:

          You can achieve this in Excel using the VLOOKUP or INDEX-MATCH functions along with the IF function.

          1. Create a table with quantities and their corresponding discounts.
          2. Use VLOOKUP or INDEX-MATCH to find the discount for the quantity in your main table.
          3. Use IF to apply different discounts based on quantity tiers.
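
          A concrete sketch of steps 2–3, assuming (purely as an illustration) unit price in B2, quantity in C2, and the tier table in E2:F5 (tier quantity, discount rate) sorted ascending:

          ```
          =B2 * C2 * (1 - VLOOKUP(C2, $E$2:$F$5, 2, TRUE))
          ```

          The TRUE (approximate match) makes VLOOKUP pick the largest tier quantity that is less than or equal to C2, which is why the tier table must be sorted; for a handful of fixed tiers, nested IFs work too.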

      • FrostyTrichs@lemmy.world · 11 months ago

        Using AI in this way is what finally pushed me to learn databases instead of trying to make Excel do tricks it’s not optimal for anyway.

        I tried a bunch of iterations of various AI resources and even stuff like the Google Sheets integration and most of them just annoyed me into finding better ways to search for what I was trying to do. Eventually I had to stop ignoring the real problem and pivot to software better optimized for the work I was trying to do with it.

      • Baizey@feddit.dk · 11 months ago

        That’s exactly how I use it (but for more things than Excel); it works pretty well as a documentation ‘searcher’ + template/example maker.

    • 0x4E4F@infosec.pub · 11 months ago

      Yeah, that’s about it. I’ve thrown buggy code at it and told it to check it; it says it’ll work just fine… scripts as well. You really can’t trust anything that thing outputs if it’s more than 1 or 2 lines long (hello world examples excluded; those work just fine in most cases).

      • Ignotum@lemmy.world · 11 months ago

        Have you looked at the project that spins up multiple LLM “identities”? They’re “told” the issue to solve: one is asked to generate code for it, the others “critique” it, it generates new code based on the feedback, and then it can automatically run it. If the run fails, it gets the error message so it can fix the issues, and only once it has generated code that works and is “accepted” by the other identities is it given back to you.

        It sounds a bit silly, but it apparently works quite well: critiquing code is easier than generating it, and iterating on code based on critiques and runtime feedback is much easier than producing correct code in one go.
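
        The core loop is roughly this (a toy sketch; `llm()` is a stand-in for whatever model API the project actually calls, and the “LGTM” acceptance convention is made up here):

        ```python
        import subprocess
        import sys
        import tempfile

        def llm(prompt: str) -> str:
            """Stand-in for a real model call (hypothetical)."""
            raise NotImplementedError

        def solve(task: str, max_rounds: int = 5) -> str:
            code = llm(f"Write a Python program that solves: {task}")
            for _ in range(max_rounds):
                # A second "identity" reviews the first one's code
                critique = llm(f"Review this code for bugs. Reply LGTM if correct:\n{code}")
                # Run the candidate and capture any runtime error
                with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                    f.write(code)
                run = subprocess.run([sys.executable, f.name], capture_output=True, text=True)
                if run.returncode == 0 and "LGTM" in critique:
                    return code  # works and is accepted by the critic
                # Otherwise feed the critique + error message back for another attempt
                code = llm(f"Fix this code:\n{code}\nFeedback:\n{critique}\n{run.stderr}")
            return code
        ```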

      • ninjan@lemmy.mildgrim.com · 11 months ago

        There is a (non-meme) reason why Prompt Engineer is a real title these days. It takes a measure of skill to get the model to focus on, and attempt to solve, the right question. This becomes even more apparent if you try to generate a product description: a newb will get something filled with superlative lies, while a pro will get something better than most human writers in the field can muster, at a much lower cost per text (compared to professional writers, that is; it’s often on par with or more expensive than content farms). AI is a great tool, but it’s neither the only tool (don’t hammer in screws) nor is it perfect. The best approach is to let the AI do the easy boilerplate 80%, then add that human touch to the hard 20%, having the AI at most prepare the structure/stubs.

        • the_sisko@startrek.website · 11 months ago

          I’m totally willing to accept “the world is changing and new skills are necessary”, but at the same time, are a prompt engineer’s skills transferable across subject domains?

          It feels to me like “prompt engineering” skills are just skills that complement the expertise you already have. Like the skill of Google searching. Or learning to use a word processor. These are skills necessary in the world today, but almost nobody’s job is exclusively to Google, or to use a word processor. In reality, you need to get something done with your tool, and you need to know shit about the domain you’re applying that tool to. You can be an excellent prompt engineer, and I guess an LLM will let you BS really well, but subject matter experts will see through the BS.

          I know I’m not really strongly disagreeing, but I’m just pushing back on the idea of prompt engineer as a job (without any other expertise).

          • ninjan@lemmy.mildgrim.com · 11 months ago

            We’re not talking small organizations here, nor small projects. In those cases it’s true that you can’t do “only” prompt engineering, but where I see it is in larger orgs, where you bring into the team the know-how about how to prompt efficiently, how to do refinement, where and how to do variable substitution, etc. The closest analogy is specific tech skills, like DBs: for a small firm it’s just something one backend dude knows decently; at a large firm there are several DBAs who help teams tackle complex DB questions. Same with search: first Solr, nowadays Elastic. Or, for that matter, networks: in many cases there might be absolutely no one at the whole firm who knows anything more than the basics, because you have another company doing it for you.

            • the_sisko@startrek.website · 11 months ago

              The closest analogy is specific tech skills, like DBs: for a small firm it’s just something one backend dude knows decently; at a large firm there are several DBAs who help teams tackle complex DB questions. Same with search: first Solr, nowadays Elastic.

              Yeah I mean I guess we’re saying the same thing then :)

              I don’t think prompt engineering could be somebody’s only job, just a skill they bring to the job, like the examples you give. In those cases, they’d still need to be a good DBA, or whatever the specific role is. They’re a DBA who knows prompt engineering, etc.

          • cannache@slrpnk.net · 11 months ago

            To be fair, in my mind most AI is kind of half-baked, potential Terminator-style nightmare fuel for the average person.

        • 0x4E4F@infosec.pub · 11 months ago

          To be honest, I just gave up on it regarding code. Now I use it mostly for getting info into one place when I know it’s scattered all over the web.

    • guy@lemmy.world · 11 months ago

      I’ve found its best use to me is as a glorified auto-complete. It knows pretty well what I want to type before I get a chance to type it. I still don’t trust stuff it comes up with on its own, though; then I need to Google it.

    • infamousta@sh.itjust.works · 11 months ago

      Yeah, I find it works really well for brainstorming and “rubber-ducking” when I’m thinking about approaches to something. Things I’d normally do in a conversation with a coworker when I really am looking more for a listener than for actual feedback.

      I can also usually get useful code out of it that would otherwise be tedious or fiddly to write myself. Things like “take this big enum and write a function that converts the members to human-friendly strings.”
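
      For example, the kind of boilerplate I mean (enum names made up):

      ```python
      from enum import Enum

      class JobState(Enum):
          QUEUED = 1
          IN_PROGRESS = 2
          FAILED_RETRYABLE = 3

      def to_label(state: JobState) -> str:
          """Human-friendly names for JobState members."""
          return {
              JobState.QUEUED: "Queued",
              JobState.IN_PROGRESS: "In progress",
              JobState.FAILED_RETRYABLE: "Failed (will retry)",
          }[state]
      ```

      Tedious to type out for a 40-member enum, trivial for the model to fill in.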

    • Karyoplasma@discuss.tchncs.de · 11 months ago

      Tell ChatGPT you want to do the project as an exercise and that it should not write any pseudocode. It will then give you a high-level breakdown, which is usually a decent guideline.

    • PrMinisterGR@lemmy.ml · 11 months ago

      All the hype is grifters and Google trying to convince people this isn’t just a search engine assistant.

      • criticalimpact@lemm.ee · 11 months ago

        Well, we need something now that Google is absolute dogshit at providing useful results XD Maybe not AI though.

  • Norgur@kbin.social · 11 months ago

    It’s almost as if the LLMs that got hyped to the moon and back are just word calculators doing stochastic calculations one word at a time… Oh wait…

    No, seriously: all they are good for is making things sound fancy.

    • dependencyinjection@discuss.tchncs.de · 11 months ago

      A little reductive.

      We use CoPilot at work and whilst it isn’t doing my job for me, it’s saving me a lot of time. Think of it like Intellisense, but better.

      If my senior engineer, next to whom I seem like a toddler, can find it useful and will foot the bill for it, then it certainly has value.

      • Norgur@kbin.social · 11 months ago

        It’s not reductive. It’s absolutely how those LLMs work. The fact that it’s good at guessing as long as your inputs follow a pattern only underlines that.

        • QuaternionsRock@lemmy.world · 11 months ago

          No, seriously: all they are good for is making things sound fancy.

          This is the part of your comment that is reductive. The first part just explains how LLMs work, albeit sarcastically.

        • Honytawk@lemmy.zip · 11 months ago

          If it were only guessing, it would never be able to create a single functioning program. Which it has, numerous times.

          This isn’t some infinite monkeys on typewriter stuff.

          It writes code and can check whether it is correct.

          I’ve seen ChatGPT write an entire website in WordPress, including setting up a MySQL database for users, with the user stating their wishes vocally into a microphone and then not touching the computer once.

          How is that guessing?

          • Norgur@kbin.social · 11 months ago

            No, it does not “check itself”. You mixed up “completely random guesses” with stochastically calculated guesses. ChatGPT has an obscenely large corpus of training data that was further refined by a blatant disregard for copyright and tons and tons of exploited workers in low-wage countries, right?

            So imagine the topic “set up WordPress”. ChatGPT has just about every article on the internet about this indexed. Word for word. So it’s able to assign a number to each word and calculate the probability of each word following every other word it scanned. Since WordPress follows a very clear pattern as to how it’s set up, those probabilities will be very clear cut.
            The details the user entered can be stitched in because ChatGPT can very easily detect variables, given the huge amount of data. Imagine a CREATE USER MySQL command. ChatGPT’s sources will be almost identical up until the username, where there is suddenly a drop in certainty about the next word. So there’s your variable. Now stitch in the word the user typed after the word “USER” and Bob’s your uncle.

            ChatGPT can “write programs” because programming (just as human language) follows clear patterns that become pretty distinct if the amount of data you analyze becomes large enough.

            ChatGPT does not check anything it spurts out. It just generates a word and calculates which word is most likely to follow that one.
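
            You can see the principle in a toy bigram model (nothing like the real architecture or scale, but the same “most likely next word” idea):

            ```python
            import random
            from collections import Counter, defaultdict

            corpus = "create user alice create user bob create database shop".split()

            # Count which word follows which in the "training data"
            follows = defaultdict(Counter)
            for a, b in zip(corpus, corpus[1:]):
                follows[a][b] += 1

            def next_word(word: str) -> str:
                words, weights = zip(*follows[word].items())
                return random.choices(words, weights=weights)[0]

            print(next_word("create"))  # "user" is twice as likely as "database"
            ```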

            It only knows which sources in its training data it should use because those were sorted and categorized by humans slaving away in Africa and Asia, doing all the categorizing by hand.

    • SkyNTP@lemmy.ml · 11 months ago

      No, seriously: all they are good for is making things sound fancy.

      This is the danger though.

      If “boomers” are making the mistake of thinking that AI is capable of great things, “zoomers” are making the mistake of thinking society is built on anything more than the very simple beliefs of a lot of stupid people; all it takes to make society collapse is to convince a few of these stupid people that their ideas are any good.

      • cannache@slrpnk.net · 11 months ago

        I disagree with the last part. Society and its many systems are pretty hard to collapse, even if you managed to drug a huge portion of the population, give them guns, and feed them propaganda. Now imagine throwing heaps of people into jail, like in Czechia: still standing, just more useless. Just look at portions of the Middle East. The majority of people just get good at being lazy and defer accountability to the most violent assholes, who then become dictators; the rest of society then selectively breeds the “dictators”.

        If this sounds dystopic, consider that what you said led to this comment, which is dumb. Now which is dumber?

    • regbin_@lemmy.world · 11 months ago

      This is what AI actually is. Not the super-intelligent “AI” you see in movies; those are fiction.

      The NPCs you see in video games, with a few branches of if-else statements? Yeah, that’s AI too.

      • Willer@lemmy.world · 11 months ago

        No, companies are only just now realizing how powerful it is, and they’re throttling the shit out of its capabilities to sell it to you later :)

        • Marzepansion@programming.dev · 11 months ago

          “we purposefully make it terrible, because we know it’s actually better” is near to conspiracy theory level thinking.

          The internal models they are working on might be better, but they are definitely not making the actual product that’s publicly available right now shittier. It’s exactly the thing they released, and these are its current limitations.

          This has always been the type of output it would give you; we even gave it a term really early on: hallucinations. The only thing that has changed is that the novelty has worn off, so you are now paying a bit more attention to it. It’s not a shittier product, you’re just not enthralled by it anymore.

          • UndercoverUlrikHD@programming.dev · 11 months ago

            Researchers have shown that the performance of the public GPT models has decreased, likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.

            I don’t really care about why, so I won’t speculate, but let’s not pretend the publicly available models aren’t purposefully getting restricted either.

            • Marzepansion@programming.dev · 11 months ago

              likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.

              Which is different than

              No, companies are only just now realizing how powerful it is, and they’re throttling the shit out of its capabilities to sell it to you later :)

              One is a natural thing that can happen in software engineering; the other is malicious intent asserted without facts. That’s why I said it’s near conspiracy-level thinking. That paper does not attribute this to some deeper cabal of AI companies colluding to make a shittier product, but just shitty enough that they all get equally worse (so none unfairly outcompetes the others), all so they can sell the better version later (apparently this doesn’t hurt their brand or credibility somehow?).

              but let’s not pretend the publicly available models aren’t purposefully getting restricted either.

              Sure, not all optimizations are without costs. Additionally, you have to keep in mind that a lot of these companies are currently being kept afloat with VC funding. OpenAI isn’t profitable right now (they lost 540 million last year), and if investments take a downturn (like they did a little while ago in the tech industry), then they need to cut costs like any normal company. But it’s magical thinking to assume this is malicious by default.

    • ArxCyberwolf@lemmy.ca · 11 months ago

      Exactly. It’s a language-learning and text-output machine. It doesn’t know anything; its only ability is to output realistic-sounding sentences based on input, and it will happily and confidently spout misinformation as if it were fact, because it can’t know what is or isn’t correct.

    • Siegfried@lemmy.world · 11 months ago

      Mass Effect’s lore distinguishes between virtual intelligence and artificial intelligence: the first is programmed to do shit and say things nicely, the second understands enough to be a menace to civilization… I always wondered if this distinction was actually accepted outside the game.

      *Terms could be mixed up because I played in German (VI and KI)

    • might_steal_your_cat@lemm.ee · 11 months ago

      There are many definitions of AI (e.g. that some mathematical model is used), but machine learning (which is used in the large language models) is considered part of the scientific field called AI. If someone says that something is AI, it usually means that some technique from the field of AI has been applied there. Even though the term AI doesn’t have much to do with the term intelligence as most people perceive it, I think the usage here is correct. (And yes, the whole scientific field should have been named differently.)

      • BluesF@feddit.uk · 11 months ago

        But it isn’t artificial intelligence. It isn’t even an attempt to make artificial “intelligence”. It is artificial talking. Or artificial writing.

    • marzhall@lemmy.world · 11 months ago

      Lol, the AI effect in practice: the minute a computer can do it, it’s no longer intelligence.

      A year ago if you had told me you had a computer program that could write greentexts compellingly, I would have told you that required “true” AI. But now, eh.

      In any case, LLMs are clearly short of the “SuPeR BeInG” that the term “AI” seems to make some people think of, the kind you get all these boomer stories about, and what we’ve got now definitely isn’t that.

      • EatYouWell@lemmy.world · 11 months ago

        The AI effect can’t be a real thing since true AI hasn’t been done yet. We’re getting closer, but we’re definitely not in the positronic brain stage yet.

        • Ignotum@lemmy.world · 11 months ago

          “true AI”

          AI is just “artificial intelligence”; there are no strict criteria defining what is “true” AI and what isn’t.

          Do the LLM models show an ability to reason and problem-solve? Yes

          Are they perfect? No

          So what?

          Ironically your comment sounds like yet another example of the AI effect

  • Ultraviolet@lemmy.world · 11 months ago

    The danger of AI isn’t that it’s “too smart”. It’s that it’s able to be stupid faster. If you offload real decisions to a machine without any human oversight, it can make more mistakes in a second than even the most efficient human idiot can make in a week.

    • Aceticon@lemmy.world · 11 months ago

      TL;DR: LLMs are like the perfect politician when it comes to outputting language that makes them “sound” knowledgeable without being so.

      The problem is that it can be stupid whilst sounding smart.

      When we have little or no expertise on a subject matter, we humans use lots of language cues to try to determine the trustworthiness of a source telling us something in an area we do not know enough to judge: because we don’t know enough about the actual subject being discussed, we try to figure out, from the way others present things in general, whether the person on the other side knows what they’re talking about.

      When one goes to live in a different country, it often becomes noticeable that we ourselves are doing this, because the language and cultural cues marking a knowledgeable person in a certain area are often different in different cultural environments. IMHO, our guesswork “trick” is just reading the manners commonly associated with certain educational tracks or professional occupations, and sometimes, in some domains, those change from country to country.

      We also use more generic kinds of cues to determine trustworthiness on a subject, such as how assured and confident somebody sounds when talking about it.

      Anyway, this kind of thing is often abused by politicians to project an image of being knowledgeable about something when they’re not, so as to get people to trust them and believe they’re well-informed decision makers.

      As it so happens, LLMs, being at their core complex language-imitation systems, are often better than politicians at outputting just the right language to get us to misjudge their output as coming from a knowledgeable source, which is why so many people think they’re Artificial General Intelligence (those people confuse what their internal shortcuts for evaluating a source’s know-how tell them with a proper measurement of cognitive intelligence).

  • nucleative@lemmy.world · 11 months ago

    Response:

    Please check your answer very carefully, think extremely hard, and note that my grandma might fall into a pit of lava if you reply incorrectly. Now try again.

  • Random_Character_A@lemmy.world · 11 months ago

    I find it positive that 70+ folks are interested in AI. Normally they just yammer away about how culture and cars were better and “more real” in the 60’s and 70’s.

    • Swedneck@discuss.tchncs.de · 11 months ago

      i mean they are right, it’s just… they’re the ones responsible for ruining it…

      around the 60’s is when most of the world nuked its public transport infrastructure and bulldozed an absurd amount of area to build massive roads, and older cars were actually reasonably repairable and didn’t have computers and antennas to send data about you to their parent company…

      but they merrily switched to cars so they could enjoy the freedom of being stuck in traffic and having to ferry kids around everywhere, and merrily kept buying new cars that were progressively less repairable and ever increasing in size, until we’re at the point where parents are backing over their own children because their cars are so grossly oversized that they can’t see shit without cameras.

      • dudinax@programming.dev · 11 months ago

        Boomers were kids in the ’60s. The folks backing over their kids are millennials and Gen Xers.

        • Random_Character_A@lemmy.world · 11 months ago

          You’re off by one generation. Gen X is the MTV generation that started driving after the mid-90’s. Cars had already started to get economical by then, at least in Europe. It was those 60’s children in the 70’s and the yuppies of the 80’s who favored big fuel guzzlers.

  • Octopus1348@lemy.lol · 11 months ago

    Anyone else get the feeling that GPT-3.5 is becoming dumber?

    I made an app for myself that can be used to chat with GPT, and it also had some extra features that ChatGPT didn’t have (but now has). I didn’t use it for some time (only Bing AI occasionally), and now I wanted to use it again. I had to fix some API stuff because the OpenAI module jumped to 1.0.0, but that didn’t affect any prompt (this is important: it’s my app, not ChatGPT, so a prompt can’t possibly be the cause if I changed nothing), and I didn’t change which model it used.
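
    (For reference, the breaking change was roughly this; the model name is just an example:)

    ```python
    from openai import OpenAI

    # pre-1.0.0 style, removed in the new module:
    # openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msgs)

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)
    ```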

    When everything was fixed, I started using it and it was obviously dumber than it was before. It made things up, misspelled the name of a place and other things.

    This might be intentional, so people buy ChatGPT Premium and use GPT-4. At least GPT-4 is cheaper via the API, and there it’s not a subscription.

    • Smacks@lemmy.world · 11 months ago

      I’ve noticed that too. I recall seeing an article about it detailing how to create a nuclear reactor, David Hahn style. I don’t doubt that they’re making it dumber to get people to buy premium now.

  • voxel@sopuli.xyz · 11 months ago

    It legit suggested that I should “fix” my lab work by writing that ports are signed (-32k to 32k).

  • Matriks404@lemmy.world · 11 months ago

    Something like this happened to me a few times. I posted code and asked whether ChatGPT could optimize it, and explain how. It first explained, in bullet points, things that could be improved (mostly irrelevant or wrong), and then posted the exact same code I had sent.