This is an experiment in AI-based fact-checking and contextualization. I have used a specialized AI-based tool I built -- SIFT Toolbox -- to annotate over 50 claims in the MAHA report's introduction (and only the introduction so far; pages 7-19), using a human-in-the-loop process that takes about 20 minutes per claim on average. This allowed me to check the report fairly deeply, even though this work is (at this point) merely a weekend hobby of mine (it has been a very long weekend, tbh).
While the annotations sometimes function as fact-checks, they more generally aim to bring broader context to the report: not everything checked is "wrong" -- part of the experiment was to see what happens when tools allow you to check nearly everything. To see the annotations, you can access the "context reports" by clicking the links in the document below. Output has been checked only briefly (generally less than 15 minutes per claim), so please treat these context files as a first pass at the issues described, and check all facts if the stakes of error are high.

Go to report/annotation start
If you’re coming here from somewhere other than my Substack and don’t know who I am, I’ve spent more than a decade working on how to use search to contextualize artifacts, events, and claims. With Sam Wineburg I wrote the definitive book on that subject (University of Chicago Press, endorsed by Nobel Prize winner Maria Ressa). There are features delivered on every Google search result that are inspired by my work. My SIFT method is used in hundreds of universities and over the past decade has become the primary way information literacy is taught in U.S. universities and in much of the world. The Google Super Searchers curriculum I co-developed with Google has been translated into a dozen languages and is one of the most successful information literacy initiatives in history.
I’ve recently begun exploring how AI can assist with providing quick context to claims, quotes, media, and events. This is my latest experiment in that area. I am currently doing this as personal exploration and have no institutional support or funding.
-- Mike Caulfield, June 8, 2025
Go to report/annotation start
The prompting layer used for this project is here. It is designed to run on a paid version of Claude Sonnet 3.7 or 4 (a paid account is necessary so it can execute the required searches). It also runs fairly well on ChatGPT o3. I would not run it on other platforms or models due to link hallucination issues.
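For readers who would rather script this kind of check than paste the prompt into a chat window, here is a minimal sketch of how a prompting layer like this could be driven through the Anthropic API with its web search tool enabled. This is not how the project itself is run; the model identifier, prompt file name, and tool settings below are illustrative assumptions, not part of the SIFT Toolbox setup.

```python
# Minimal sketch (not the author's actual setup): send a claim to Claude with a
# SIFT-style system prompt and the Anthropic server-side web search tool enabled.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical file holding the prompting layer text.
with open("sift_prompt.txt") as f:
    sift_system_prompt = f.read()

claim = "Example claim text to contextualize."

response = client.messages.create(
    model="claude-sonnet-4-20250514",        # assumed model identifier
    max_tokens=4096,
    system=sift_system_prompt,               # the prompting layer as the system prompt
    tools=[{
        "type": "web_search_20250305",       # Anthropic's web search server tool
        "name": "web_search",
        "max_uses": 5,                       # cap on searches per request (assumed value)
    }],
    messages=[{"role": "user", "content": f"Check and contextualize: {claim}"}],
)

# The reply interleaves search-result blocks with text blocks; print only the text.
for block in response.content:
    if block.type == "text":
        print(block.text)
```

The human-in-the-loop step described above would still sit on top of anything like this: the model's output is a starting point for a person to verify, not a finished context report.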