The Interview Method: Using LLMs to Extract Human Expertise


Large language models (LLMs) have become powerful tools for generating text, but their effectiveness often hinges on the quality and completeness of the context we provide. For complex tasks—like designing a new software feature or drafting a technical specification—the required context can span several pages. Traditionally, a human expert must painstakingly write that context. But an emerging alternative turns the process upside down: instead of having a human write for the LLM, the LLM interviews the human. This approach, sometimes called the interrogatory LLM, promises to make knowledge capture more efficient, accurate, and accessible.

How the Interrogatory LLM Works

At its core, the interrogatory LLM method involves prompting the model to act as an interviewer. The LLM asks the human a series of questions, learning about the domain, the requirements, and the intended outcomes. Once the interview is complete, the LLM synthesizes that information into a well-structured context document—ready to be fed into another LLM session (or the same one) to execute the actual task.
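The two-phase flow can be sketched in a few lines of Python. This is a minimal illustration, not code from the essay: `ask_model` stands in for whatever chat API you use, `ask_human` for however you collect the expert's answers, and the prompt wording is an assumption.

```python
# Sketch of the interview-then-synthesize flow. `ask_model(prompt, transcript)`
# is a stand-in for a real chat-API call; swap in your provider of choice.

INTERVIEW_PROMPT = (
    "You are an interviewer gathering context for a task. "
    "Ask the expert questions until you have enough detail, "
    "then reply with exactly DONE."
)

def run_interview(ask_model, ask_human, max_turns=20):
    """Phase 1: the model interviews the human, one exchange per turn."""
    transcript = []
    for _ in range(max_turns):
        question = ask_model(INTERVIEW_PROMPT, transcript)
        if question.strip() == "DONE":
            break
        transcript.append((question, ask_human(question)))
    return transcript

def synthesize(ask_model, transcript):
    """Phase 2: turn the raw Q&A into a structured context document."""
    notes = "\n".join(f"Q: {q}\nA: {a}" for q, a in transcript)
    return ask_model(
        "Rewrite this interview as a structured context document "
        "for another LLM session:\n" + notes, [])
```

The transcript then feeds a fresh session (or the same one) that executes the actual task.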

Source: martinfowler.com

A key insight comes from Harper Reed, who first popularized this technique. He insists that the LLM should ask only one question at a time. This prevents overwhelming the human and ensures each answer is focused. In practice, many users find they need to repeatedly remind the LLM of this rule, as models tend to default to multi-question prompts.
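Because models drift back into multi-question prompts, the reminder can be automated. The sketch below is an illustrative assumption, not part of the original technique: it counts question marks as a crude heuristic and re-prompts the model with the rule when it asks more than one question.

```python
# Lightweight guard for the one-question-at-a-time rule. `ask_model(transcript)`
# is a stand-in for a real chat call; the reminder wording is illustrative.

ONE_QUESTION_REMINDER = (
    "Remember: ask exactly ONE question at a time. "
    "Re-ask only your single most important question."
)

def next_question(ask_model, transcript, retries=2):
    question = ask_model(transcript)
    for _ in range(retries):
        if question.count("?") <= 1:  # crude heuristic for "one question"
            break
        # Re-prompt with the rule injected, mimicking a human's reminder.
        question = ask_model(transcript + [("system", ONE_QUESTION_REMINDER)])
    return question
```

Counting question marks will misfire on rhetorical questions, but it captures the spirit of the manual reminders users report having to give.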

Applications in Software Development

Creating Context for Complex Tasks

When building a new feature, the LLM requires a wealth of information: user-facing descriptions, implementation guidelines, data sources, and integration points. Instead of a human drafting all that in markdown, the LLM can conduct an interview. The human provides high-level goals and answers clarifying questions, while the LLM fills in gaps and structures the output. This collaborative process can save hours and often yields more thorough context than a rushed manual write-up.

Reviewing Documents Through Dialogue

The same technique can be applied to document validation. Give the LLM an existing specification, then ask it to interview a subject-matter expert to verify accuracy. People often find reading and critiquing a dense document tedious, but a conversation with an LLM feels more natural. The model can ask pointed questions and cross-reference answers with the document, flagging inconsistencies or omissions. This is especially valuable when the original document is poorly written—the interview process can compensate for documentation weaknesses.

It's even possible to chain multiple interrogatory sessions: one LLM builds a document, then additional sessions interview different experts to review and refine it.
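A review session differs from a context-building session only in how it is seeded. As a hypothetical sketch (the prompt wording is my assumption, not the essay's), the existing document is embedded in the interviewer's instructions:

```python
# Hypothetical seed prompt for a document-review interview session.

def review_interview_prompt(document: str) -> str:
    return (
        "Below is a specification. Interview a subject-matter expert to "
        "verify it: ask one question at a time, cross-check each answer "
        "against the text, and flag any inconsistencies or omissions.\n\n"
        "--- DOCUMENT ---\n" + document
    )
```

Chaining is then just feeding one session's synthesized document into the next session's seed prompt.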

Beyond LLM Context: Capturing Any Knowledge

While the approach is valuable for feeding LLMs, its utility extends further. Many people—even brilliant domain experts—struggle with writing. Putting thoughts into coherent prose is a skill not everyone possesses. The interrogatory LLM offers a way out: instead of forcing someone to write, let the LLM interview them. The result may carry the stylistic fingerprints of AI-generated text, which some find off-putting. However, as the original essay notes, "that's better than not having the information itself, either due to rushed writing or no writing at all." For organizations needing to capture institutional knowledge quickly, the trade-off is often worth it.

Best Practices for Effective Interviews

A few habits, drawn from the points above, make these interviews work well. Instruct the model explicitly to ask one question at a time, and be prepared to repeat that instruction mid-session when it drifts into multi-question prompts. Give the model the high-level goal up front so its questions stay on target, then let it probe for the details. And review the synthesized document before relying on it, whether by reading it yourself or by running a follow-up interview session against it: the interview captures what the expert said, but a check catches anything the model misheard or invented.

A Human-AI Partnership for Knowledge Work

The interrogatory LLM transforms the human–model relationship from one-way prompting into a collaborative dialogue. It leverages the model's ability to ask probing questions while relying on the human's unique knowledge and judgment. As LLMs become more conversational, this method could become a standard part of the workflow for software design, documentation, and any domain where expertise needs to be extracted and structured. By turning writing into a conversation, we lower the barrier for contribution and ensure richer, more accurate context—whether for a machine or for other humans.
