Anyone surprised by this wasn’t paying attention. This is the “AI” apocalypse everyone has been wringing their hands over and dumbass executives have been salivating over. This is exactly the problem with LLMs: they produce very convincing-looking content, but it’s not actually factual content. You need teams of fact checkers and editors to review all their output if you care at all about accuracy.
As with software development, actually writing the stuff down is the easiest part of the work. If you already have someone fact checking and editing… why do you need AI to shit out crap just for the writing? It would be easier to gather the facts first, fact check them, then wrangle them through the AI if you don’t want to hire a writer (+ another pass for editing).
LLMs look like magic at a glance, but people thinking they are going to produce high quality content (or code for god’s sake) are delusional.
Yeah. I’m a programmer. Everyone has been telling me that I’m about to be out of a job any day now because the “AI” is coming for me. I’m really not worried. It’s way harder to correct bad code than it is to just throw it all away and start fresh, and I can’t even imagine how difficult it’s going to be to try to debug whatever garbage some “AI” has spewed out. If you employ a dozen programmers now and start using AI to generate your code, you’re going to need two dozen programmers to debug and fix it’s output.
The promise with “AI” (more accurately machine learning, as this is not AI) as far as code is concerned is as a sort of smart copy and paste, where you can take a chunk of code and say “duplicate this but with these changes”, and then verify and tweak its output. As a smart refactoring tool it shows a lot of promise, but it’s not like you’re going to sit down and go “write me an app” and suddenly it’s done. Well, unless you want Hello World, and even then I’m sure it would find a way to introduce a bug or two.
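To make the “duplicate this but with these changes” idea concrete, here’s the kind of request I mean, sketched in Python (the function names and record shape are made up purely for illustration):

```python
import csv
import json

# Existing, hand-written function:
def export_users_csv(users, path):
    """Write user records to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "email"])
        for user in users:
            writer.writerow([user["id"], user["name"], user["email"]])

# The "duplicate this but with these changes" request -- same shape, JSON instead
# of CSV -- which you then verify and tweak rather than typing out from scratch:
def export_users_json(users, path):
    """Write user records to a JSON file."""
    records = [{"id": u["id"], "name": u["name"], "email": u["email"]} for u in users]
    with open(path, "w") as f:
        json.dump(records, f, indent=2)
```

The second function is mostly a mechanical transformation of the first, which is exactly the sort of thing such a tool is least likely to get wrong, and you still read and test the result yourself.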
“Greetings planet”
D’oh!
Yep, I’ve had plenty of discussions about this on here before. Which was a total waste of time, as idiots don’t listen to facts. They also just keep moving the goalposts.
One developer was like they use AI to do their job all the time, so I asked them how that works. Yeah, they “just” have to throw away the 20% of the output that’s obviously garbage when writing small scripts, then it’s great!
Or another one who said AI is the only way for them to write code, because their main issue is getting the syntax right (they’re dyslexic). When I told them that the syntax and actually typing the code are the easiest part of my job, they shot back that they don’t care, they are going to continue “looking like a miracle worker” by having AI spit out their scripts…
And yet another one that argued at length how you obviously can’t magically expect AI to put the right things out. So we got onto the topic of code reviews and I tried to tell them: give a real developer the 1000+ line pull requests an AI might spit out and there’s a snowball’s chance in hell you’ll get bug-free code despite reviews. So now they argued: duh, you give the AI small bite-sized Jira tickets to work on, so you can review it! And if the pull request is too long, you tell the AI to make a shorter, more human-readable one! And then we’re back to square one: the senior developer reviewing the mess of code could just write it faster and more correctly themselves.
It’s exhausting how little understanding there is of LLMs and their limitations. They produce a ton of seemingly high-quality stuff, but it’s never 100% correct.
It seems to mostly be replacing work that is both repetitive and pointless. I have it writing my contract letters, ‘executive white papers’, and proposals.
The contract letters I can use without editing. The white papers I usually need to redirect, but the second or third output is good. For the proposals, it functionally does the job I’d have a co-op do… put stuff on paper so I can realize why it isn’t right, and then write to that. (For the ‘fluffy’ parts of engineering proposals, like the cover letters, I can also use it.)
Arguably this is comparing apples and oranges here. I agree with you that code reviews aren’t going to be useful for evaluating a big code dump with no context. But I’d also say that a significant amount of software in the world is either written with no code review process or a process that just has a human spitting out the big code dump with no context.
The AI hype is definitely hype, but there’s enough truth there to justify some of the hand-wringing. The guy who told you he only has to throw away the 20% of the code that’s useless is still getting 100% of his work done with maybe 40% of the effort (i.e., very little effort to generate the first AI cut, 20% to figure out the stupid stuff, 20% to fix it). That’s a big enough impact to have significant ripples.
Might not matter. It seems like the way it’s going to go in the short term is that paranoia and economic populism are going to kill the whole thing anyway. We’re just going to effectively make it illegal to train on data. I think that’s both a mistake and a gross misrepresentation of things like copyright, but it seems like the way we’re headed.
That’s not totally true. Even if a developer throws a massive pull request dump at you, there is a high chance the dev at least ran the program locally and tried it out (at least the happy path).
With AI the code might not even compile. Or it looks good at first glance, but has a disastrous bug in the logic (that is extremely easy to overlook).
As with most code: Writing it takes me maybe 10% of the time, if even that. The main problem is finding the right solution, covering edge cases and so on. And then you spend 190% of the time trying to fix a sneaky bug that got into the code, just because someone didn’t think of a certain case or didn’t pay attention. If AI throws 99% correct code at me it would probably take me longer to properly fix it than to just write it myself from scratch.
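To illustrate the kind of sneaky bug I mean, here is a made-up Python snippet of the “looks fine at a glance” variety (the function and the spec behind it are invented for the example):

```python
# Made-up example: it runs, the happy path works, and a quick review says it's fine.
def apply_bulk_discount(prices, threshold=100.0, rate=0.1):
    """Discount every item when the order total meets or exceeds the threshold."""
    total = sum(prices)
    if total > threshold:  # bug: "meets or exceeds" means this should be >=
        return [price * (1 - rate) for price in prices]
    return prices

# An order totalling exactly 100.0 silently gets no discount; the mistake only
# shows up on the boundary case nobody happened to test.
print(apply_bulk_discount([60.0, 40.0]))  # prints [60.0, 40.0] instead of [54.0, 36.0]
```

Nothing about that looks wrong in a review, which is the whole problem: the 1% that’s broken costs more to find than the 99% that’s fine ever saved you.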
People have been saying programming would become redundant since the first 4GLs came out in the 1980s.
Maybe it’ll actually happen some day… but I see no sign of it so far.
Yep, had this argument a bunch. Conversation basically goes:
Devil’s advocate though. With things like 4GLs, it was still all on the human to come up with the detailed spec. Best case scenario was that you work very hard, write a lot of things down, generate the code, see that it didn’t work, and then ???. That “???” at the end was you as the programmer sitting alone in a room trying to figure out what a non-responsive black box might have wanted you to say instead.
It’s qualitatively different if you can just talk to the black box as though it were a programmer. It’s less of a black box at that point. It understands your language, and it understands the code. So you can start with the spec, but when something inevitably doesn’t work, the “???” step doesn’t just come back to you figuring out with no help what you did wrong. You can ask it questions and make suggestions. You can run experiments. Today’s LLMs hit the wall pretty quick there, and maybe they always will. There’s certainly the viewpoint that “all they do is model text and they can’t really learn anything”.
I think that’s fundamentally wrong. I’m a pretty solid programmer. I have a PhD in Computer Science, and I’ve worked as a software engineer and an architect throughout a pretty long career. And everything I’ve ever learned has basically been through language. Through reading, writing, speaking, and listening to English and a few other languages. I think that to say that I can learn what I’ve learned, but it’s fundamentally impossible for a robot to learn it is to resort to mysticism. At some point, we will have AIs that can do what I do today. I think that’s inevitable.
Well, that particular conversation typically happens in relation to something like a business rules engine, or sometimes one of those drag-and-drop visual programming languages that everyone always touts as letting you get rid of programmers (but in reality just limits you to a really hard-to-work-with programming language). Still, there is a lot of overlap with the current LLM-based hype.
If we ever do get an actual AI, then yes, AI will probably end up writing most of the programs, although it’s possible programmers will still exist in some capacity, maybe creating flow charts or something to hand to the AIs. But we’re a long way off from a true AI, so everyone acting like it’s going to happen any day now is as laughable as everyone promising cold fusion was going to happen any day now back in the 80s. Ironically, I think we are more likely to see workable fusion before we see true AI; some of the hot fusion experiments happening lately are very promising.
Fix its* output.
“Making AI” these days isn’t so much programming as having access to millions of dollars’ worth of hardware.
I don’t think this one is even an LLM, it looks like the output of a basic article spinning script that takes an existing article and replaces random words with synonyms.
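Purely as illustration, the kind of script I mean is something along these lines (the synonym table is made up; real spinners just use a much bigger one):

```python
import random

# Toy "article spinner": swap random words for crude synonyms with zero grammatical
# awareness. This is how you end up with text like "a extremely regarded highschool
# basketball participant".
SYNONYMS = {
    "highly": ["extremely", "very"],
    "player": ["participant", "contestant"],
    "forward": ["ahead", "front"],      # blindly swaps the noun for an adverb
    "significant": ["vital", "notable"],
}

def spin(text, swap_probability=0.7):
    words = []
    for word in text.split():
        options = SYNONYMS.get(word.lower())
        if options and random.random() < swap_probability:
            words.append(random.choice(options))
        else:
            words.append(word)
    return " ".join(words)

print(spin("Hunter was a highly regarded high school basketball player"))
```

There is no language model anywhere in that, just find-and-replace against a dictionary, which fits the kind of mangling in this article much better than an LLM does.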
This seems like the case. One of the first stanzas:
Hunter, initially a extremely regarded highschool basketball participant in Cincinnati, achieved vital success as a ahead for the Bobcats.
Language models are text prediction machines. None of this text is predictable and it contains basic grammatical errors that even small models will almost never make.
AI doesn’t exist, but it will ruin everything anyway.
https://youtu.be/EUrOxh_0leE?si=voNBJjvvuyzb8oZk
Hah, great video. There was a reason why I put quotes around AI in my response: yes, what’s being called AI by everyone is not in fact AI, but most people have never even heard of machine learning, let alone understand the difference between it and AI. I’ve seen a trend of people starting to use the term AGI to differentiate between “AI” and actual AI, but I’m not really a fan of that because I think it’s just watering down the term AI.
In the industry, ML is considered a subset of AI, as are genetic algorithms and other approaches to developing “intelligence”. That’s why people tend to use AGI now to differentiate, because the field’s been evolving (not that I agree with that approach either). Honestly, show someone even 10 or 15 years ago what we can do with RL, computer vision, and LLMs, and they’d certainly call it AI. I think the real problem is a failure to convey what these things actually are; they’re sold to the public under the term AI only to hype up the brand/business.
Some people trying ELIZA back in the 60s attributed intelligence and even feelings to it. So yeah, turns out humans are rather easy to trick with good presentation.
“AI is whatever hasn’t been done yet”
The danger with current AI is people giving it important tasks that it isn’t up to. To put it in War Games terms, the problem is not Joshua, not even Professor Falken, but the McKittricks of the world.
There’s the problem right there. The MSN homepage ain’t exactly a pinnacle of superlative journalism.
This article wasn’t even remotely convincing, though.