A short reading recommendation: a strong post by Mike Caulfield on how to get more out of LLMs in real-world research. The key idea is simple and applies to ChatGPT, Gemini, and any other tool: first “ground” your query in Wikipedia and its cited sources, then expand the search to primary and domain-specific publications. This two-step route gives the model context, terminology, and facts, and noticeably improves answer quality on questions that don't hinge on very recent developments.
Why it’s worth adding to your workflow:
- Fewer hallucinations: establishing a shared scaffold of concepts and proper names up front stabilizes the model's reasoning.
- A fast prompt “skeleton”: use Wikipedia's table of contents and footnotes to build a list of entities, dates, and primary sources, and instruct the LLM to check against them as you expand (a minimal sketch follows this list).
- Verifiability: the approach fits Caulfield's Deep Background “superprompt” line of work, with its emphasis on sources and a traceable verification path, which is useful for analysis and fact-checking.
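To make the two-step route concrete, here is a minimal Python sketch (my illustration, not code from Caulfield's post) that pulls a topic summary from Wikipedia's public REST API and wraps it into a grounded prompt. The LLM call itself is deliberately omitted, since the approach is tool-agnostic; the topic and question are placeholder examples.

```python
import requests

# Wikipedia's public REST API summary endpoint (returns title, extract, URLs).
WIKI_SUMMARY = "https://en.wikipedia.org/api/rest_v1/page/summary/{title}"

def build_grounded_prompt(topic: str, question: str) -> str:
    """Step 1: ground the query in Wikipedia; step 2: ask the model
    to expand toward primary and domain-specific sources."""
    resp = requests.get(
        WIKI_SUMMARY.format(title=topic.replace(" ", "_")),
        headers={"User-Agent": "grounding-sketch/0.1"},  # Wikipedia asks for a UA
        timeout=10,
    )
    resp.raise_for_status()
    page = resp.json()

    # Grounding scaffold: article title, short extract, and canonical URL.
    # A fuller version would also pull section headings and footnoted sources,
    # as the "prompt skeleton" tip above suggests.
    return (
        f"Background from Wikipedia ('{page['title']}'):\n"
        f"{page['extract']}\n"
        f"Source: {page['content_urls']['desktop']['page']}\n\n"
        f"Question: {question}\n"
        "First answer using the background above, citing the entities and dates "
        "it mentions; then list the primary and domain-specific sources you "
        "would consult to verify or extend the answer."
    )

if __name__ == "__main__":
    print(build_grounded_prompt("Photosynthesis",
                                "How do C4 plants differ from C3 plants?"))
```

Pass the returned string to whatever model you use; the point is only that the grounding material travels with the question instead of being left for the model to recall on its own.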
Link to the post: Mike Caulfield’s article on Substack. Recommended for anyone who uses LLMs daily for search and analysis: a low-effort upgrade to your queries that pays off in answer quality.