Are you seriously trying to push your ChatGPT “tool” in response to an article about language models like this one having substantial issues? “Not guaranteed” - yes, obviously, that’s the point of the article - and from a quick look at your code, I don’t see how this nonsense addresses any of that.
It's not ChatGPT; that's just the default config. You can point the API endpoint at any ChatGPT-API-compatible LLM. It can use DuckDuckGo to search for web results and then give you an answer based on those. Most importantly, it shows you the full log and you get to read it as it happens, like Bing AI but transparent, so checking its answer is right there. I'll add some screenshots to the readme.
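Roughly the pattern it follows, as a sketch (the base_url, model name, and the answer_with_sources helper here are placeholders for illustration, not the actual repo code):

    # Sketch of the described flow: an OpenAI-compatible client pointed at any
    # endpoint, DuckDuckGo results fed in as context, and the full log printed
    # so the user can check what the model actually saw.
    from duckduckgo_search import DDGS
    from openai import OpenAI

    # Assumed config: any ChatGPT-API-compatible endpoint works here,
    # e.g. a local llama.cpp or vLLM server instead of OpenAI.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

    def answer_with_sources(question: str) -> str:
        # Fetch web results so the model grounds its answer in them.
        results = DDGS().text(question, max_results=5)
        context = "\n".join(f"- {r['title']}: {r['body']}" for r in results)

        messages = [
            {"role": "system", "content": "Answer using only the search results provided."},
            {"role": "user", "content": f"Search results:\n{context}\n\nQuestion: {question}"},
        ]
        # Print the full prompt log as it happens, so it can be verified.
        for m in messages:
            print(f"[{m['role']}] {m['content']}")

        reply = client.chat.completions.create(model="local-model", messages=messages)
        answer = reply.choices[0].message.content
        print(f"[assistant] {answer}")
        return answer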
Since the issue with hallucinations is shared by all LLMs, not just ChatGPT, this doesn’t change anything.
It's transparent in its operation: it lets you see what it's "thinking" so you can catch errors, and you can use your own fine-tuned model that isn't censored.