The Strawberry has landed! OpenAI has released o1-preview, its latest impressive demo, just in time for the current funding round. [Press release] The new hotness in Strawberry is chain-of-thought …
When you don’t have anything new, use brute force. Just as GPT-4 was eight instances of GPT-3 in a trenchcoat, o1 is GPT-4o, but running each query multiple times and evaluating the results. o1 even says “Thought for [number] seconds” so you can be impressed by how hard it’s “thinking.”
This “thinking” costs money. o1 increases accuracy by taking much longer for everything, so it costs developers three to four times as much per token as GPT-4o.
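The “run each query multiple times and evaluate the results” approach described above is essentially best-of-n sampling. Here is a minimal sketch of that idea — the `ask_model` stub and the scoring rule are entirely hypothetical stand-ins, not OpenAI’s actual pipeline:

```python
import random

def ask_model(prompt: str) -> tuple[str, float]:
    # Hypothetical stand-in for one model call: returns a candidate
    # answer plus a self-assessed quality score.
    answer = f"answer-{random.randint(0, 9)}"
    score = random.random()
    return answer, score

def best_of_n(prompt: str, n: int = 4) -> str:
    # Run the same query n times and keep the highest-scoring result.
    # n calls means roughly n times the tokens, hence the cost.
    candidates = [ask_model(prompt) for _ in range(n)]
    best_answer, _ = max(candidates, key=lambda pair: pair[1])
    return best_answer

print(best_of_n("why is the sky blue?"))
```

Note that nothing here makes any single answer better — it just spends n times the compute picking among attempts, which is why the per-token price goes up rather than down.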
Because the industry wasn’t doing enough climate damage already… Let’s quadruple the carbon we shit into the air!
They say it uses roughly the same amount of computing resources.
they say a lot of things, yes
Are you saying that’s not true? Anything to substantiate your claim?
“this thing takes more time and effort to process queries, but uses the same amount of computing resources” <- statements dreamed up by the utterly deranged.
“we found that the Turbo button on the outside of the DC wasn’t pressed, so we pressed it”
I often use simple prompts that give consistent results, then use additional prompts for more complicated requests. Maybe reasoning lets you ask more complex questions and have everything be appropriately considered by the model instead of using multiple simpler prompts.
Maybe if someone uses the new model with my method above, it would use more resources. I’m not really sure. I don’t use chain-of-thought (CoT) methodology because I’m not using AI for enterprise applications that treat tokens as scarce.
Was hoping to talk about it but I don’t think I’m going to find that here.
Well, there’s your problem
If only you’d asked ChatGPT “is awful.systems a good place to fellate LLMs”
I asked Gemini!
Reply:
SLANDER, I SAY
and hot young singles in your area have a bridge in Brooklyn to sell
on the blockchain
Happy to hear about anything that supports the idea.
this shit comes across like that over-eager corp llm salesman “speaker” from the other day
I’m sure it being so much better is why they charge 100x more for the use of this than they did for 4ahegao, and that it’s got nothing to do with the well-reported gigantic hole in their cashflow, the extreme costs of training, the likely-looking case of this being yet more stacked GPT3s (implying more compute in aggregate for usage), the need to become profitable, or anything else like that. nah, gotta be how much better the new model is
also, here’s a neat trick you can employ with language: install a DC full of equipment, run some jobs on it, and then run some different jobs on it. same amount of computing resources! amazing! but note how this says absolutely nothing about the quality of the job outcomes, the durations, etc.