Engine models
"Shall I use GPT-3.5 or GPT4?"
This is one of the most frequently asked questions we receive, and there is no single answer: it depends on your use case!
GPT-3.5 is cheaper, but it hallucinates more often and sometimes takes liberties with following the prompt. The quality of its output is generally good.
GPT-4 produces higher-quality output, especially for complex questions. It is a bit more expensive (6x in our pricing, which is the most accessible out there), but it follows the prompt much better and is very unlikely to hallucinate.
So, in a nutshell:
- Use GPT-3.5 if your documentation is simple and straightforward, and if the occasional hallucination won't harm you (e.g., internal knowledge of policies, documentation, etc.).
- Use GPT-4 if your documentation is more complex and giving a great answer really matters to you (customer support, lead generation, complex internal documents).
Test both models with your prompts and sources, and see the difference!
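If you'd like to run that comparison on your own material before committing, here is a minimal sketch of such a test using the OpenAI Python library directly, outside our platform. The source excerpt and question are placeholders to replace with your own; the model names are the standard OpenAI identifiers.

```python
# Minimal sketch: ask the same question, grounded in the same sources,
# to both GPT-3.5 and GPT-4 and compare the answers side by side.
# Assumes the OpenAI Python library (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

SOURCES = "...paste a few relevant excerpts from your documentation here..."
QUESTION = "How do I reset my password?"  # placeholder question

def ask(model: str) -> str:
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic output for a fairer comparison
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer the user's question using only the sources below. "
                    "If the answer is not in the sources, say you don't know.\n\n"
                    "Sources:\n" + SOURCES
                ),
            },
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

for model in ("gpt-3.5-turbo", "gpt-4"):
    print(f"--- {model} ---")
    print(ask(model))
```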
------
'Extended context' lets the chatbot retrieve more sources from your documentation for each answer. If your documentation is in good shape (i.e., exhaustive, with each piece of information in one place), in most cases you don't need it. So how do you know whether you need extended context?
- You have a lot of documentation. I can't give exact numbers because it also depends on quality, but let's say tons of pages.
- Your documentation is not structured to be exclusively informative, meaning the same information appears in several places. The chatbot will still retrieve the most relevant sources, but in this case it helps to pass more of them (hence 'extended context') so OpenAI has more information to work with and can give a better answer (see the sketch below).
Since extended context uses more tokens, the price is a bit higher than for the standard context: GPT-3.5 with extended context uses 2 credits per message; GPT-4 with extended context uses 10 credits per message.
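To make the idea concrete, here is a simplified sketch of what extended context changes: how many retrieved source chunks get packed into the prompt before the question is sent to OpenAI. This is an illustration, not our actual retrieval pipeline, and the chunk counts (4 vs. 10) are assumptions chosen for the example.

```python
# Simplified illustration of 'extended context': the only thing that changes
# is how many of the most relevant source chunks are included in the prompt.
# The chunk counts below are illustrative assumptions, not our real settings.
from openai import OpenAI

client = OpenAI()

def answer(question: str, ranked_chunks: list[str], extended: bool = False) -> str:
    top_k = 10 if extended else 4          # extended context = more chunks
    context = "\n\n".join(ranked_chunks[:top_k])
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",             # or "gpt-4"
        messages=[
            {
                "role": "system",
                "content": "Answer using only the sources below.\n\nSources:\n" + context,
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

More chunks mean more input tokens per message, which is exactly why the extended variants consume more credits.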
As always, try it out to find the best solution for your situation.