Experiments in grounded exploration
Others have called this a fact-checking prompt; I see it as a "contextualization prompt." The model searches for conflicting sources, categorizes evidence through a Toulmin framework, and produces structured reports with verified facts, corrections, and source assessments. The approach models a research assistant suitable for students, one that aids investigation while leaving conclusions to the researcher.
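To make the shape of the output concrete, here is a minimal sketch of what a Toulmin-organized report like this might look like as data. Every name and field here is an illustrative assumption, not the prompt's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    # Toulmin categories: the claim, the data offered for it, and the
    # warrant linking data to claim (field names are hypothetical)
    claim: str
    data: str
    warrant: str
    source: str
    assessment: str  # e.g. "verified", "contested", "corrected"

@dataclass
class ContextReport:
    topic: str
    verified: list = field(default_factory=list)
    corrections: list = field(default_factory=list)

    def add(self, item: EvidenceItem):
        # Route each assessed item into the report section it belongs to
        bucket = self.verified if item.assessment == "verified" else self.corrections
        bucket.append(item)

report = ContextReport(topic="example claim")
report.add(EvidenceItem(
    claim="X happened in 1997",
    data="contemporary news coverage",
    warrant="primary reporting close to the event is reliable",
    source="archived article",
    assessment="verified",
))
print(len(report.verified))  # 1
```

The point of the structure is the separation it enforces: data, warrant, and assessment travel together, so a reader can weigh the reasoning rather than just the verdict.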
A prompt and Claude Code skill that "de-LLMifies" prose by applying Francis Christensen's generative rhetoric, building sentences through coordinate and subordinate layers. It offers tunable presets from TIGHT to BAROQUE, controlling clause depth and branching.
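Christensen's "cumulative sentence" adds free modifiers after a base clause, and the presets cap how far that layering goes. A toy sketch of the idea, where the intermediate preset names and all numeric limits are assumptions (only TIGHT and BAROQUE come from the description):

```python
# Hypothetical preset table: depth/branching numbers are illustrative
PRESETS = {
    "TIGHT":    {"max_depth": 1, "max_branches": 1},
    "MODERATE": {"max_depth": 2, "max_branches": 2},  # assumed name
    "LOOSE":    {"max_depth": 3, "max_branches": 3},  # assumed name
    "BAROQUE":  {"max_depth": 4, "max_branches": 4},
}

def layer_sentence(base, modifiers, preset="TIGHT"):
    """Append at most max_branches trailing free modifiers to a base
    clause -- a toy version of Christensen's cumulative sentence."""
    limit = PRESETS[preset]["max_branches"]
    return ", ".join([base] + modifiers[:limit]) + "."

print(layer_sentence(
    "The river turned",
    ["bending south", "carrying the light", "slowing at the shallows"],
    preset="MODERATE",
))  # The river turned, bending south, carrying the light.
```

The real skill works through prompting rather than string manipulation, but the dial is the same: TIGHT trims back to the base clause, BAROQUE lets the modifiers stack.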
Get scene-level character data, episode identification, and cast lookups for whatever scene you're watching or trying to remember. Enter a movie name and a scene description, or hit the die for a random pick, then click Locate Scene. It's a capability LLMs make possible that traditional search can't match.
A custom GPT (initially called "Toulminator") that performs Toulmin argument analysis on claims and social media posts. Upload a screenshot or paste text, and it surfaces the implied warrant, evaluates the backing for evidence, and identifies weak spots in the reasoning.
The original Check, Please! fact-checking walkthroughs, step-by-step demonstrations of applying the SIFT method to real claims circulating online, built with a custom walkthrough tool developed in AutoHotkey.
Rita Allen Misinformation Solutions Prize. An automated fact-checking demonstration tool that won a $25,000 prize at the Rita Allen Foundation's Misinformation Solutions Forum, funding the development of Check, Please!
A WordPress theme that functioned as a federated personal wiki, described as "social bookmarks, wikified." Users created cards representing discrete ideas, linked them into cardboxes, and forked content across individually owned sites.
A research project on getting better information about film from LLMs, and a proof of concept for auto-raters of LLM content that can generate their own test sets recursively. The project processed AI-generated statements across nine models and 900+ films.
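The recursive test-set idea can be sketched in a few lines: rate a statement, and when it survives, spawn follow-up statements to probe further. Everything below is a stand-in (a real rater would call an LLM judge, and the expansion rule here is a placeholder):

```python
def rate(statement):
    """Stand-in auto-rater: flags statements carrying a known-false
    marker. A real rater would be model-backed."""
    return "fail" if "FALSE:" in statement else "pass"

def expand(statement):
    """Generate follow-up probes from a rated statement -- the
    recursive step that grows the test set (illustrative only)."""
    return [f"{statement} / follow-up {i}" for i in range(2)]

def build_test_set(seeds, rounds=2):
    test_set = list(seeds)
    frontier = list(seeds)
    for _ in range(rounds):
        next_frontier = []
        for s in frontier:
            if rate(s) == "pass":
                # surviving statements seed the next round of probes
                next_frontier.extend(expand(s))
        test_set.extend(next_frontier)
        frontier = next_frontier
    return test_set

tests = build_test_set(["Film X was released in 1984"])
print(len(tests))  # 1 seed + 2 + 4 = 7
```

The useful property is that the test set grows where the rater finds signal, so coverage concentrates on the statements worth stress-testing.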
Early prototypes of annotating fact-checks applied to film claims, testing how different AI models handle specific scenes and quotes. The scratch space that developed the overlay format before it was systematized in Rewinder.
An annotating fact-check of the "Make America Healthy Again" report, 50 claims processed over a long weekend using Deep Background, producing an interactive document where color-coded annotations let readers examine each claim in context.
Rebuilding the SIFT method for the AI era. The new approach, developed as "three moves" for AI (get it in, track it down, follow up), treats AI as "Excel for critical thinking" and "a portal, not a portrait."
The information evaluation framework that replaced checklist approaches like CRAAP across hundreds of universities: Stop, Investigate the source, Find better coverage, Trace claims. Covered by The New York Times, NPR, and the Wall Street Journal.
A book with Sam Wineburg (University of Chicago Press), bringing the SIFT method and lateral reading techniques to a mainstream audience. Endorsed by Maria Ressa (Nobel Peace Prize), Guy Kawasaki, and Francis Fukuyama.
An information literacy training program developed with Google and the Public Library Association, built on SIFT. Translated into a dozen languages, it has trained 11,000+ educators in India alone.
An open, modular course teaching SIFT through interactive lessons, released under CC-BY licensing, adopted across hundreds of institutions, funded by the Rita Allen Foundation prize.
One of the first open textbooks on practical, web-native fact-checking for college students. Winner of the 2018 MERLOT Classics Award, it was the direct precursor to SIFT.
A multi-institutional project with AASCU's American Democracy Project that taught students to fact-check claims online, leading directly to the development of the SIFT method.
An experiment in human-in-the-loop LLM wiki production for personal sensemaking, combining LLM drafting with human editorial review. Currently 300+ topic pages covering films by De Palma, Kaufman, Hyams, Sargent, and others.
An LLM-produced self-correcting wiki that improves through multiple passes and reintegrations, producing more accurate and expansive content over time. Across 900+ films, 2,800+ reports, and thousands of questions, the system used answers from each pass to automatically generate new paths for investigation, feeding corrections and discoveries back into subsequent rounds.
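The pass-and-reintegrate loop described above can be sketched as a simple fixed-point process: answer the open questions, fold the findings back into the knowledge base, and let each answer open new paths. All function names and the follow-up rule are hypothetical:

```python
def run_pass(questions, answer):
    """One pass: answer each open question and derive new questions
    from what was learned (sketch; `answer` stands in for an LLM)."""
    findings, new_questions = {}, []
    for q in questions:
        a = answer(q)
        findings[q] = a
        # each answer seeds a follow-up path of investigation
        new_questions.append(f"verify: {a}")
    return findings, new_questions

def self_correcting_wiki(seed_questions, answer, passes=3):
    knowledge = {}
    queue = list(seed_questions)
    for _ in range(passes):
        findings, queue = run_pass(queue, answer)
        knowledge.update(findings)  # reintegrate into the wiki
    return knowledge

kb = self_correcting_wiki(
    ["Who directed Blow Out?"],
    answer=lambda q: f"answer({q})",
    passes=2,
)
print(len(kb))  # pass 1 answers the seed, pass 2 its follow-up -> 2
```

In the real system the reintegration step is where corrections land: a later pass can overwrite an earlier finding, which is what makes the wiki self-correcting rather than merely accretive.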
The Stanford dLRN keynote that articulated two models of the web, the topological garden set against the chronological stream, launching the modern digital gardens movement. Recognized by MIT Technology Review as a foundational text.
Extensive work with Ward Cunningham's reimagination of the wiki, where every page has a fork button creating a "chorus of voices" rather than a consensus engine. This work led directly to The Garden and the Stream.