Update 14-08-25:#
After user outrage and cancellations, OpenAI restored the model switcher in ChatGPT, as well as the legacy 4o model.
Understanding the Model Variants#
I have been using ChatGPT-5 for the last few days and felt I should write down a few thoughts on what works. ChatGPT-5 broke existing workflows and demands more processing time when given vague instructions. I wrote down the problems with the switch and the removal of older legacy models in: The Problem With Proprietary LLM Providers Removing Model Access without recourse
Token usage is aggressive, likely pushing users toward costlier plans. Model switching is built in rather than user-controlled: only three models are available, and the automatic switcher kicks in regardless of which model is selected, which is another bug. Here are summaries of the models:
The GPT-5 Pro model is slow and heavy. Without strong guidance it will get confused, despite being marketed as “research grade.” The lighter GPT-5 is the fast conversational one that replaced GPT-4o. The GPT-5 Thinking version is essentially Pro with an additional reasoning step, which means even slower responses but more layered analysis if prompted correctly.
The “show legacy models” toggle in settings appears broken, preventing access to older, more reliable models. Refreshing the page or signing out and back in does not fix it.
Turn Off Memory#
I turned off memory and received an immediate performance boost. With memory enabled, ChatGPT seems to burn through tokens and slow processing significantly. I prefer to start with a clean slate for better performance and cost control. Maybe this improves in the future, but for now I handle context manually rather than letting ChatGPT pull random information from previous conversations. I also purge or archive old chats regularly to prevent context pollution.
Provide Instructions Upfront and have a clear strategy#
GPT-5 requires explicit strategy and instruction-setting as an extra step. Never assume it understands your intent from brief prompts. Vague prompts will waste both tokens and time. GPT-5, especially the Pro variant, tends to drift into irrelevant territory if I am not specific. The more ambiguity you give it, the more it will overcomplicate the response. This is even more critical with GPT-5 Pro because it will run long reasoning chains before giving you an answer. Manual model switching and thinking steps are mandatory for complex tasks. Always ask it to flag when particular actions fail.
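The front-loading described above can be mechanized. Below is a minimal sketch of a reusable prompt builder; the `build_prompt` helper and the example task, strategy, and constraints are all hypothetical, not part of any API.

```python
# Hypothetical prompt preamble applying the rules above: explicit strategy,
# no assumed intent, and a request to flag failures.
def build_prompt(task, strategy, constraints):
    """Assemble an explicit, front-loaded prompt for GPT-5."""
    lines = [
        f"Task: {task}",
        f"Strategy: {strategy}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "If any step fails or information is missing, say so explicitly.",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarize the attached changelog",
    strategy="Scan for breaking changes first, then group by component",
    constraints=["Plain language", "One idea per paragraph", "No speculation"],
)
```

The point is not the helper itself but the habit: every request carries a task, a strategy, explicit constraints, and a failure-flagging instruction, so the model never has to guess intent.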
Optimized Prompts for Different Use Cases#
- For writing tasks with summaries:
Write plainly with one idea per paragraph. Avoid filler, and end sections with clear takeaways and specific next steps.
- For quick answers:
Give me a precise answer and prioritize speed. Don't use reasoning if I didn't ask for it. Use simpler words over exhaustive analysis.
- For complex analysis:
Apply thinking mode and get to the root cause of things. Give me references, but only after you have verified them.
- For answers requiring verification, demand Fact-Checking Tags inside responses. This forces the model to categorize its certainty level and helps me identify what needs verification.
Tag your answers with Fact, Opinions, Speculation, Don't Know
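Once the model tags its answers, the tags can be sorted mechanically. A minimal sketch, assuming the model prefixes each line with a bracketed tag (the response text below is invented for illustration):

```python
import re
from collections import defaultdict

# Hypothetical tagged response in the "Fact / Opinion / Speculation /
# Don't Know" format requested by the prompt above.
response = """\
[Fact] GPT-4o was released in May 2024.
[Opinion] The new router makes responses feel slower.
[Speculation] Token pricing may change next quarter.
[Don't Know] Exact context window of the Pro tier.
"""

def bucket_by_tag(text):
    """Group tagged lines by certainty level for later verification."""
    buckets = defaultdict(list)
    for line in text.splitlines():
        m = re.match(r"\[(Fact|Opinion|Speculation|Don't Know)\]\s*(.*)", line)
        if m:
            buckets[m.group(1)].append(m.group(2))
    return dict(buckets)

buckets = bucket_by_tag(response)
# Everything outside the Fact bucket is a candidate for manual checking.
to_verify = [s for tag, items in buckets.items() if tag != "Fact" for s in items]
```

This makes the verification queue explicit: anything tagged Opinion, Speculation, or Don't Know goes on the list to check by hand.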
In my testing, JSON responses work better than plain text. Ensure your JSON prompts include formatting, schema, and error handling specifications.
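The error-handling half of that advice cuts both ways: if you ask for JSON against a schema, check the reply against that schema before using it. A minimal sketch with the standard library, where the required fields and the sample reply are invented for illustration:

```python
import json

# Illustrative required fields for a structured reply; not an actual API schema.
REQUIRED = {"answer": str, "confidence": str, "sources": list}

raw = '{"answer": "42", "confidence": "Fact", "sources": ["internal docs"]}'

def parse_reply(raw):
    """Parse and type-check a JSON reply; return (data, errors)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return None, [f"invalid JSON: {e}"]
    errors = [f"missing or mistyped field: {k}"
              for k, t in REQUIRED.items()
              if not isinstance(data.get(k), t)]
    return data, errors

data, errors = parse_reply(raw)
```

If `errors` is non-empty, re-prompt with the specific failures rather than retrying blind; that matches the "include error handling specifications" advice above.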
CSV work and research performance declined significantly. I needed to prompt multiple times with strong, specific instructions.
Create new columns for factual responses when working with data. When uncertain, explicitly write that in the designated column.
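One way to sketch that column convention with the standard library; the column names and rows here are hypothetical examples, not from any real dataset:

```python
import csv
import io

# Illustrative sketch: a dedicated "verified_fact" column, with "uncertain"
# written explicitly when the model could not confirm a value.
rows = [
    {"company": "Acme", "founded": "1999", "verified_fact": "yes"},
    {"company": "Globex", "founded": "", "verified_fact": "uncertain"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["company", "founded", "verified_fact"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```

Keeping uncertainty in its own column means a later pass can filter for `verified_fact == "uncertain"` instead of hunting for hedged language buried in the data cells.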
Comparison with Claude#
I regularly switch between Claude and GPT-5 based on task requirements. Claude performs better for analytical work and structured responses, but with the usage limits enforced in the last couple of weeks, I run out of quota very fast. I prefer GPT-5 Pro because I rarely hit limits, and it is there when I need multi-step reasoning and have time to wait. I default to GPT-5 for most conversational tasks. Finally, I enable thinking mode only when I need to see the reasoning process, as it consumes more tokens.
Takeaways#
- Disable memory to save tokens and improve speed
- Provide explicit strategy and instructions for every request
- Use task-specific prompt templates
- Tag responses by certainty level for fact-checking
- Switch models based on task complexity and time constraints
- Handle context manually rather than relying on system memory