What advanced AI models does Perplexity Pro unlock?

Perplexity Pro gives you access to the latest models from OpenAI and Anthropic. Here’s a breakdown of each model (the list changes often). All of our models share the same context window of around 32k tokens.

Default: Our default model is optimised for speed and web browsing, with dedicated fine-tuning to ensure it performs best for quick searches.

GPT-4 Turbo: OpenAI’s famous model that powers ChatGPT, renowned for its reasoning and natural language processing capabilities, displaying human-level performance on various professional and academic benchmarks.

Claude 3: You can select either the Sonnet or the Opus model. Opus is the more capable of the two and is considered among the most advanced LLMs available. Please be aware that there is a limit on how many queries you can run with Opus (don’t worry, your quota replenishes quickly if it runs out).

Sonar 32k: Based on the open-source Llama 3 model and trained in-house, ensuring it’s compatible with all of our answer engine’s cutting-edge search technology.

GPT-4o: The newest update to the GPT family of models, this multimodal model is impressively fast.

The best way to decide which model suits you is hands-on testing: run the same queries through different models and compare the results. In the end, the right choice depends on your specific needs and goals.

What’s a token?

A token is the smallest unit into which text data is broken down for an AI model to process. Tokens serve as the bridge between raw human language and a format that AI models can understand and generate. You can use this tokeniser to see roughly how many characters correspond to a token.
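If you’d like to see tokenisation in practice, here is a minimal sketch using OpenAI’s open-source tiktoken library. This is an illustration only; it is not necessarily the same tokeniser used by every model listed above.

    # pip install tiktoken
    import tiktoken

    # cl100k_base is the encoding used by GPT-4-era OpenAI models
    enc = tiktoken.get_encoding("cl100k_base")

    text = "Perplexity Pro lets you switch between several AI models."
    tokens = enc.encode(text)

    print("Characters:", len(text))    # character count of the input
    print("Tokens:", len(tokens))      # usually far fewer tokens than characters
    print(enc.decode(tokens))          # round-trips back to the original text

As a rough rule of thumb, one token corresponds to a few characters of English text, so a 32k-token context window holds considerably more than 32,000 characters.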

Are there specific guidelines for using third-party models like Claude or GPT?

No, you can use the models the same way you would use them in their native chatbot environments.