"And I haven’t read that specific memo in detail, but I disagree with the conclusions from that.”
Gotta love that statement.
“I haven’t listened to this album, but I don’t like the instrumentation on song #3, and the way they play the drums.”
Don’t Google and the like have access to much more training data than open source projects? I would think that would give them an advantage, unless it’s the raw algorithms that matter more.
The memo addresses these points fairly well.
Yeah, the big premise is that smaller models, along with more devs, mean open source iterates faster and produces better results more efficiently.
Anyone able to tell me which open source AI models the memo might be referring to? I’m not up to date on the landscape here and would love to know more.
Alpaca and family are the major examples I’ve seen mentioned.