What llms.txt is
An llms.txt file is a plain-text file in markdown format, served at /llms.txt on the root domain. It contains the business name, a short summary, and a curated list of important URLs with brief descriptions of what each page covers.
There is a companion file called llms-full.txt that contains the full body content of the most important pages, concatenated into a single markdown document. The split mirrors the difference between a table of contents and the full book — the short file gives an AI assistant a quick overview, the long file gives it the actual content if it wants to read in detail.
Both files are published publicly. AI crawlers can request them like any other URL. There is no special authentication, no submission process, and no "Google Search Console" equivalent — the file just exists, and assistants that look for it find it.
What llms.txt is not
It's worth being clear about what the file does and doesn't do, because the marketing around llms.txt has gotten ahead of the substance.
- Not a ranking system — there is no algorithm that ranks llms.txt files against each other
- Not a guarantee of AI citation — citation depends on many factors and llms.txt is only one signal
- Not required — ChatGPT, Claude, and Perplexity can read and cite sites that don't have one, and routinely do
- Not a standard yet — it's a proposed convention with growing but not universal adoption
- Not a replacement for clean HTML, schema markup, or a well-structured site — it complements those, doesn't replace them
Why local businesses should care
Even with the caveats above, there's a reasonable case for adding an llms.txt to a small business site. AI assistants are increasingly used by buyers to find local services. "Find me a roofer in Hays that handles storm damage and has good reviews" is now a real query running through ChatGPT and Perplexity, not just Google.
When an AI assistant decides which business to mention in its answer, it consults the sites it can read. A site that hands the assistant a clean, structured summary of who it is, what it does, and where it operates is easier for the assistant to summarize correctly. A site that requires the assistant to infer all of that from a JavaScript-heavy page is more likely to be skipped or misrepresented.
The upside is asymmetric. Adding an llms.txt takes an hour of work and costs nothing to maintain. If even one buyer per quarter ends up calling because an AI assistant cited the business correctly, the file paid for itself. The downside risk is essentially zero.
What to include
A useful llms.txt for a small business has a consistent shape:
- Business name as an H1, followed by a one-line tagline
- A short summary paragraph: what the business does, who it serves, where it operates, and the principal's name
- An organized list of the most important pages: home, about, services (each major service), case studies, contact
- For each link: a clean URL and a one-sentence description of what the page covers
- Sections grouped logically — services together, case studies together, trust pages together
- Optional: hours, phone, address, and primary categories at the bottom for a quick fact lookup
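Put together, a file following that shape might look like the sketch below. The business name, URLs, and details are invented for illustration; the exact section names are a convention, not a requirement:

```markdown
# Hays Roofing Co.

> Storm-damage roof repair and replacement for homeowners in Hays, Kansas.

Hays Roofing Co. is a family-owned roofing contractor serving Ellis County,
led by owner Jane Example. Specializes in hail and wind damage claims.

## Services
- [Storm Damage Repair](https://example.com/services/storm-damage): Inspection and repair after hail and wind events, including insurance documentation.
- [Roof Replacement](https://example.com/services/replacement): Full tear-off and replacement for asphalt and metal roofs.

## Case Studies
- [2023 Hailstorm Recovery](https://example.com/case-studies/hailstorm): Forty roofs repaired in six weeks after a major storm.

## Trust
- [About](https://example.com/about): Owner bio, licensing, and insurance details.
- [Contact](https://example.com/contact): Phone, email, and service-area map.

## Facts
- Phone: +1-785-555-0100
- Address: 100 Main St, Hays, KS 67601
- Hours: Mon-Fri 8am-5pm
- Categories: roofing contractor, storm damage repair
```

The short summary and the Facts block do most of the work for quick lookups; the sectioned links are what an assistant follows when it wants detail.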
Real example
Preisser Solutions publishes both files publicly. You can see them at:
preissersolutions.com/llms.txt — the short guided map
preissersolutions.com/llms-full.txt — the long-form bundle of the most important page content
Looking at a working pair tends to make the structure clearer than reading specifications. Both files are kept in sync with the site automatically as part of the build, so there's no drift between what the public pages say and what the AI-readable files claim.
How it works with schema and clean HTML
An llms.txt file is one signal in a larger pattern of AI-readable site design. It does its job best when the rest of the site is also clean and structured.
Schema.org markup — particularly LocalBusiness, Person, FAQPage, and Article types — gives AI assistants typed, machine-readable facts about the business. JSON-LD schema blocks are read by both Google and AI assistants and significantly improve how well a site can be understood.
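For reference, a minimal LocalBusiness JSON-LD block looks like the following. All values here are placeholders, not a real business:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Hays Roofing Co.",
  "url": "https://example.com",
  "telephone": "+1-785-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "100 Main St",
    "addressLocality": "Hays",
    "addressRegion": "KS",
    "postalCode": "67601",
    "addressCountry": "US"
  },
  "openingHours": "Mo-Fr 08:00-17:00"
}
</script>
```

This block sits in the page's head or body; crawlers parse it independently of the visible HTML, which is why it survives even when the page layout changes.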
Clean HTML — content that renders in the initial server response without requiring JavaScript to execute — is increasingly important. Some AI crawlers run JavaScript; many do not. A site that hides its content behind a client-side render is invisible to a meaningful share of AI traffic.
FAQ blocks, named entities (the business name, the principal's name, the city, suppliers, partners), and direct answer paragraphs near the top of each page all increase the odds of accurate AI citation. The llms.txt is the index that points the assistant to all of this — but the underlying content has to be worth indexing.
How Preisser Solutions implements it
On every site Preisser Solutions builds, llms.txt and llms-full.txt are generated from the same source data that drives the live pages. There is no separate file to maintain — the AI-readable layer updates automatically when the site updates.
The default structure: a short summary block, a sectioned list of the most important pages (home, about, services, case studies, blog, contact), and a clean fact block with hours, phone, address, and core categories. The long file bundles the body content of the most important pages in markdown, formatted for easy parsing by an LLM.
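Generating the short file from structured site data is straightforward. The sketch below is illustrative only, with invented field names and page data, and is not Preisser Solutions' actual build code:

```python
# Illustrative sketch: build llms.txt from the same structured data
# that could drive a site's live pages. All field names, URLs, and
# business details are hypothetical assumptions.

SITE = {
    "name": "Hays Roofing Co.",
    "tagline": "Storm-damage roofing for Hays, Kansas",
    "summary": "Family-owned roofing contractor serving Ellis County.",
    "sections": {
        "Services": [
            ("Storm Damage Repair",
             "https://example.com/services/storm-damage",
             "Inspection and repair after hail and wind damage."),
        ],
        "Trust": [
            ("About",
             "https://example.com/about",
             "Owner bio, licensing, and insurance details."),
        ],
    },
    "facts": {"Phone": "+1-785-555-0100", "Hours": "Mon-Fri 8am-5pm"},
}

def build_llms_txt(site: dict) -> str:
    """Render the short llms.txt file as a markdown string."""
    lines = [f"# {site['name']}", "", f"> {site['tagline']}", "",
             site["summary"], ""]
    for section, links in site["sections"].items():
        lines.append(f"## {section}")
        for title, url, desc in links:
            lines.append(f"- [{title}]({url}): {desc}")
        lines.append("")
    lines.append("## Facts")
    for key, value in site["facts"].items():
        lines.append(f"- {key}: {value}")
    return "\n".join(lines) + "\n"

print(build_llms_txt(SITE))
```

Because the file is rendered from the page data on every build, there is nothing to keep in sync by hand; the same approach extends to llms-full.txt by concatenating each page's body content instead of a one-line description.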
The implementation is part of the broader AI Search Optimization service — schema markup, named entities, FAQ blocks, llms files, and clean static HTML rendered server-side. The work pays off only when the pieces compose together. Adding an llms.txt to an otherwise unreadable site produces little. Adding one to a well-structured site closes a loop that's otherwise left open.
