☆ Yσɠƚԋσʂ ☆@lemmy.ml to Machine Learning@lemmy.ml · English

By using the same techniques Google used to solve Go (MCTS and backprop), Llama 8B gets 96.7% on the math benchmark GSM8K. That's better than GPT-4, Claude, and Gemini, with 200x fewer parameters!

Link: arxiv.org
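For anyone unfamiliar with the technique named in the title, below is a minimal sketch of what MCTS-guided self-refinement over LLM answers can look like. This is not the linked paper's exact method; the helpers `generate_answer` and `score_answer` are hypothetical stand-ins for model calls, and the loop just illustrates the standard select / expand / evaluate / backpropagate cycle applied to candidate solutions.

```python
# Sketch of MCTS-style self-refinement over LLM answers (illustrative only).
# `generate_answer` and `score_answer` are placeholder functions, not real APIs:
# swap in calls to your own model.
import math
import random


def generate_answer(question, feedback=None):
    # Placeholder: ask the LLM for a (revised) solution, optionally conditioned
    # on a previous attempt used as critique/feedback.
    return f"draft answer to '{question}' (refining: {feedback})"


def score_answer(question, answer):
    # Placeholder: ask the LLM to grade the answer on [0, 1] (self-reward).
    return random.random()


class Node:
    def __init__(self, answer, parent=None):
        self.answer = answer
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

    def ucb(self, c=1.4):
        # Upper confidence bound used during selection.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits
        )


def mcts_refine(question, iterations=16):
    root = Node(generate_answer(question))
    for _ in range(iterations):
        # Selection: descend by UCB until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=lambda n: n.ucb())
        # Expansion: ask the model to refine the selected answer.
        child = Node(generate_answer(question, feedback=node.answer), parent=node)
        node.children.append(child)
        # Evaluation: self-score the refined answer.
        reward = score_answer(question, child.answer)
        # Backpropagation: push the reward up to the root.
        while child is not None:
            child.visits += 1
            child.value += reward
            child = child.parent
    # Return the most-visited refinement as the final answer.
    best = max(root.children, key=lambda n: n.visits, default=root)
    return best.answer


if __name__ == "__main__":
    print(mcts_refine("What is 17 * 24?"))
```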
Comments:

- Fewer
- Fine