*Figure: Graph representation of notes in the LLM Wiki. Green nodes are labels, red nodes correspond to the character Zinjaro, and blue nodes correspond to the character Soren.*
Over the past few weeks, the AI community has been abuzz about a post from Andrej Karpathy on LLM wikis. I've seen a handful of videos about it and decided to give it a try with my DnD session notes. This is an alternative to a traditional RAG (Retrieval-Augmented Generation) setup, such as the one I built in a prior blog post, *Data Science: Querying DnD Session Notes with Vector Databases and AI*. While a RAG uses a search engine (a vector database) to retrieve information, the LLM wiki approach uses an AI agent to browse through files the way a human would.
Here are the guidelines Karpathy presented: https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f
The gist is that you offload the organization and querying of a wiki to an AI agent. An agent equipped with a good system prompt and skills can identify relevant documents and synthesize answers without the full setup of a traditional RAG. This is far easier to stand up, though it may fail or become prohibitively expensive at the scale of millions of documents (at that point, the RAG approach is superior).
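To make the contrast with a vector database concrete, here is a minimal sketch of the agent-style loop: the model first sees only the wiki's filenames, picks the files that look relevant, and then answers from their contents. Everything here is illustrative, not Karpathy's or my actual implementation; `llm` stands in for whatever chat-completion callable you use, and the two-step prompt wording is invented for the example.

```python
from pathlib import Path

def wiki_answer(question, wiki_dir, llm):
    """Agent-style wiki lookup: browse files like a human instead of
    querying a vector database. `llm(prompt)` is any callable that
    returns the model's text reply (hypothetical interface)."""
    wiki_dir = Path(wiki_dir)

    # Step 1: show the model a "table of contents" (filenames only)
    # and ask it to choose which notes are worth opening.
    files = sorted(wiki_dir.glob("**/*.md"))
    toc = "\n".join(str(p.relative_to(wiki_dir)) for p in files)
    picks = llm(
        f"Wiki files:\n{toc}\n\n"
        f"Question: {question}\n"
        "Reply with the filenames (one per line) most likely to answer it."
    )

    # Step 2: open the chosen files and synthesize an answer from them.
    chosen = {line.strip() for line in picks.splitlines() if line.strip()}
    context = "\n\n".join(
        (wiki_dir / name).read_text()
        for name in chosen
        if (wiki_dir / name).exists()
    )
    return llm(f"Using these notes:\n{context}\n\nAnswer: {question}")
```

A real agent would loop (open a file, decide whether to keep reading, follow links between notes), but even this two-call version shows why there is no indexing step to maintain: the "retrieval" is just the model reading directory listings and files.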
A lot of people have been praising this approach, so I wanted to give it a try. And since I had prior examples of queries against a RAG, I figured I would run the exact same queries and compare the results against this local LLM wiki setup and against a Gemini NotebookLM approach with the full documents.







