- cross-posted to:
- technology@lemmy.world
By June, “for reasons that are not clear,” ChatGPT stopped showing its step-by-step reasoning.
The big pre-training run is pretty much fixed. The fine-tuning is continuously being tweaked and, as shown, can have dramatic effects on the results.
The model itself just does what it does. It is, in effect, an ‘internet completer’. But if you don’t want it to just happily complete what it found on the internet (homophobia, racism, and all), you have to put extra layers in to avoid that. And those layers are somewhat hand-crafted, sometimes conflicting, and therefore unlikely to give everyone what they consider to be excellent results.
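A toy sketch of the ‘extra layers’ idea. The real layers are fine-tuning (e.g. RLHF) and moderation models, not a word list; every name below is hypothetical:

```python
# Toy illustration only: a hand-crafted filter wrapped around a raw
# completion model. Real systems use learned classifiers, not word lists.

BLOCKED = {"slur_a", "slur_b"}  # stand-in for a learned safety layer

def base_model(prompt: str) -> str:
    """Pretend base model: just completes what it saw on the internet."""
    return f"...internet-flavoured completion of: {prompt}"

def assistant(prompt: str) -> str:
    completion = base_model(prompt)
    # Hand-crafted layer: refuse when the raw output trips the filter.
    if any(word in completion.lower() for word in BLOCKED):
        return "I can't help with that."
    return completion

print(assistant("finish this sentence..."))
```

Every such layer is another knob that can be tweaked, which is exactly why behaviour shifts over time without the base model changing.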
OK, but regardless, they can just turn back the clock to when it performed better, right? Use the parameters that were set two months ago? Or is it impossible to roll that back?
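(For what it’s worth, the API, as opposed to the ChatGPT app, did expose dated snapshots you could pin, though OpenAI retires them on its own schedule. A sketch, assuming the 2023-era openai Python client:)

```python
# Sketch: pinning a dated snapshot rather than the moving alias.
# Uses the 2023-era openai Python client (v0.x interface).
import openai

openai.api_key = "sk-..."  # your key here

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",  # dated snapshot, not "gpt-3.5-turbo"
    messages=[{"role": "user", "content": "Is 17077 a prime number?"}],
)
print(resp["choices"][0]["message"]["content"])
```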
Better for one obscure use case? Or just ‘better’? That’s the real issue here. OpenAI have an agenda (publicly, a helpful assistant; privately, who knows…). They’re not really interested in a system that can identify prime numbers.
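Which is fair enough: primality testing is a solved problem in a few lines of ordinary code, so it was always an odd yardstick for a chat assistant. A trivial deterministic check (17077 being, if I remember right, the number from the eval that made the rounds):

```python
def is_prime(n: int) -> bool:
    """Deterministic trial division; plenty for numbers this size."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

print(is_prime(17077))  # True
```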