I’ve only had issues with FitGirl repacks. I think there’s an optimisation they use for low-RAM machines that doesn’t play well with Proton
Doing the Lord’s work in the Devil’s basement
That’s a room temp take at best
But then how am I supposed to use your “research” to make imaginary claims about generational attention spans?
It’s like in poker, show one, show all
I don’t remember people being offended by the word fuck in 2000. Sure, on TV it could be considered dicey, but on the internet it was pretty fair game
“i have collected some soil samples from the mesolithic age near the Amazon basin which have high sulfur and phosphorus content compared to my other samples. What factors could contribute to this distribution?”
Haha yeah the top execs were tripping balls if they thought some off-the-shelf product would be able to answer this kind of expert question. That’s like trying to replace an expert craftsman with a 3D printer.
What kind of use cases were they, where you didn’t find suitable local models to work with? I’ve found that general “chatbot” things are hit and miss, but more domain-constrained tasks (such as extracting structured entities from unstructured text) are pretty reliable even on smaller models. I’m not counting my chickens yet as my dataset is still somewhat small, but preliminary testing has been very promising in that regard.
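For a flavour of what I mean by domain-constrained extraction, here’s a rough sketch in Python. The local endpoint, model name and entity types are placeholders for whatever you’re actually running - I’m just pointing the OpenAI client at an Ollama-style local server:

```python
# Toy extraction sketch. Assumes a local OpenAI-compatible server
# (e.g. Ollama on localhost:11434) with some small instruct model
# pulled as "llama3.1:8b" -- swap in whatever you actually run.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def extract_entities(text: str) -> dict:
    prompt = (
        "Extract every company name and monetary amount from the text. "
        'Reply with JSON only, shaped like {"companies": [], "amounts": []}.\n\n'
        "Text: " + text
    )
    resp = client.chat.completions.create(
        model="llama3.1:8b",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the output as deterministic as possible
    )
    # small models occasionally wrap the JSON in prose, so this can still throw
    return json.loads(resp.choices[0].message.content)

print(extract_entities("Acme paid Globex $12,500 for the Q3 shipment."))
```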
Most projects I’ve been in contact with are very aware of that fact. That’s why telemetry is so big right now. Everybody is building datasets in the hopes of fine tuning smaller, cheaper models once they have enough good quality data.
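The mechanics aren’t complicated either - in the projects I’ve seen it boils down to “append every prompt/response pair to a JSONL file in the chat fine-tuning format and sort out quality later”. Hypothetical sketch, the file path and helper name are made up:

```python
# Log each production interaction as one JSONL line in the
# {"messages": [...]} chat fine-tuning format, so the traffic you
# already handled can later train a smaller model.
import json
from pathlib import Path

LOG_PATH = Path("telemetry/finetune_candidates.jsonl")

def log_interaction(system: str, user: str, assistant: str) -> None:
    LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
    record = {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```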
I doubt these tools will ever get to a level of quality that can confuse a court. They’ll get better, sure, but they’ll never really get there.
Did you listen to that hardcore history episode? It was crazy
Interestingly, the pendulum is now swinging the other way. If you look at Next.js for example, server-generated multi-page applications are back on the menu!
I’d place it right around when Angular started gaining traction. That’s when it became common to serve just one page and have all the navigation happen in JavaScript.
Good point, i was thinking more about your regular old independent artist trying to make it with their art. Obviously someone who’s an online celebrity depends on generating outrage for clicks, so they are bound to display more divisive, over-the-top opinions.
The only reason people are throwing bitch fits over AI/LLMs is because it’s the first time the “art” industry is experiencing its own futility.
I would even go further and argue that the art industry doesn’t really care about AI. The people white knighting on the topic are evidently not artists and probably don’t know anybody legitimately living from their art.
The intellectual property angle makes it the most obvious. Typically independent artists don’t care about IP because they don’t have the means to enforce it. They make zero money from their IP and their business is absolutely not geared towards that - they are artists selling art, not patent trolls selling lawsuits. Copying their “style” or “general vibes” is not harming them, just like recording a piano cover of a musician’s song doesn’t make them lose any ticket sales, or sell fewer vinyls (which are the bulk of their revenue).
AI is not coming for the job of your independent illustrator pouring their heart and soul into their projects. It is coming for the job of corporate artists illustrating corporate blogs, and those who work in content farms. Basically swapping shitty human-made slop for shitty computer-made slop. Same for music - if you know any musician who’s losing business because of Suno, then it’s on them cause Suno is really mediocre.
I have yet to meet any artist with this kind of deep anti-AI sentiment. They’re either vaguely anxious about the idea of the thing but don’t touch it because they’re busy practicing their craft, or they use the hallucination engines as a tool for inspiration. At any rate there’s no indication that their business has seen much of a slowdown linked to AI.
Truly makes you think. Maybe we do, actually, live in a society 🤔
That’s the problem with imaginary enemies. They have to be both ridiculously incompetent, and on the verge of controlling the whole world. Sounds familiar doesn’t it?
Honestly the use case I’m working on is pretty mind-blowing. The user records an unstructured voice note like “i am out of item 12, also prices of items 13 & 15 is down to 4 dollars 99, also shipping for all items above 1kg is now 3 dollars 99” and the LLM will search the database for items >1kg (using tool calling), then generate a JSON document representing the changes to be made. We use that JSON to build a simple UI where the user can review the changes - then voilà, it’s sent to the backend, which persists the changes in the database. In the ideal case the user never even pulls up the virtual keyboard on their phone, it’s just “talk, check, click, done”.
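If anyone’s curious, the skeleton looks roughly like this - the model name, the tool name and the change-set shape are made up for the example, not our actual schema:

```python
# Transcribed voice note in, reviewable change-set JSON out.
import json
from openai import OpenAI

client = OpenAI()

TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_items_heavier_than",
        "description": "Return ids of catalog items whose weight exceeds the given kilograms.",
        "parameters": {
            "type": "object",
            "properties": {"min_weight_kg": {"type": "number"}},
            "required": ["min_weight_kg"],
        },
    },
}]

SYSTEM = (
    "Turn the user's voice note into a JSON change-set: "
    '{"changes": [{"item_id": int, "field": "stock|price|shipping", "value": number}]}. '
    "Use the search tool when the note refers to items by weight."
)

def fake_db_search(min_weight_kg: float) -> list[int]:
    # stand-in for the real database query
    return [3, 7, 21]

def changes_from_note(note: str) -> dict:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": note}]
    resp = client.chat.completions.create(model="gpt-4o-mini",
                                          messages=messages, tools=TOOLS)
    msg = resp.choices[0].message
    if msg.tool_calls:  # the model asked for the weight lookup
        messages.append(msg)
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            ids = fake_db_search(**args)
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": json.dumps(ids)})
        resp = client.chat.completions.create(model="gpt-4o-mini",
                                              messages=messages)
        msg = resp.choices[0].message
    # the result gets reviewed in the UI before anything is persisted
    return json.loads(msg.content)
```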
That’s fucking badass, thanks for the pointer, this might prove useful. In the structured output department I’m hearing great things about dottxt’s Outlines, which lets you constrain output according to a regex, but I haven’t tested it yet.
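For reference, their docs example is roughly this (Outlines 0.x, so the API may have moved since - again, I haven’t run it myself yet):

```python
# Load a HF model, compile a regex-constrained generator, and the
# sampled text is guaranteed to match the pattern. Model choice is arbitrary.
import outlines

model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")

# constrain the answer to a price like "4.99" or "12.50"
price = outlines.generate.regex(model, r"\d{1,3}\.\d{2}")
print(price("A decent espresso in Paris costs about "))
```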
I’m currently a guy working on something like this! It’s even simpler now that you can have structured output on the ChatGPT API: basically you give it a JSON schema and it’s guaranteed to respond with JSON that validates against that schema. Spent a couple of weeks hacking at it and I’m positively impressed - I’ve had clean JSON 100% of the time, and the data extraction is pretty reliable too.
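Stripped down, the call looks like this (toy schema and model name, the real one is obviously bigger):

```python
# Hand the chat completions API a JSON schema with strict mode on and
# the response is constrained to validate against it.
import json
from openai import OpenAI

client = OpenAI()

schema = {
    "name": "price_update",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "item_id": {"type": "integer"},
            "new_price": {"type": "number"},
        },
        "required": ["item_id", "new_price"],
        "additionalProperties": False,
    },
}

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Item 13 drops to 4.99"}],
    response_format={"type": "json_schema", "json_schema": schema},
)
print(json.loads(resp.choices[0].message.content))
```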
The tooling is actually reaching a sweet spot right now where it makes sense to integrate LLMs in production code (if the use case makes sense and you haven’t just shoe-horned it in for the hype).
Fair point but GM1 is definitely outside of the “big spectacle movie” industry, with a very moderate budget and a big emphasis on story telling. It’s really an amazing movie for sure