LantanaLABS

How to Get Your Brand Cited by ChatGPT, Claude and Perplexity in 2026

A practical playbook for AI-visibility optimization — llms.txt, structured data, bot allowlisting and citation-shaped content. What we shipped on lantana-labs.com, and how to measure whether AI answer engines are actually quoting you.

5 min read · ai-visibility, seo, digital-marketing

Somewhere in the last twelve months, the way your customers find you changed.

They still Google. They still scroll LinkedIn. But an increasing share of their "how do I…" and "who should I hire for…" questions now go to ChatGPT, Claude, Perplexity, Gemini or Google's AI Overviews — and the answer is a paragraph that names three companies, not a list of ten blue links. If you are not one of those three companies, you do not get a second chance to be clicked.

AI-visibility optimization — sometimes called LLM SEO or answer-engine optimization — is the discipline of being the company that gets named. Here is the playbook we run, and what we shipped on lantana-labs.com over the last fortnight to practice what we preach.

1. Publish `/llms.txt` and `/llms-full.txt`

`llms.txt` is an emerging convention — proposed by Jeremy Howard and adopted by Anthropic, Stripe, Cloudflare and others in 2024 — that gives language models a curated, markdown-shaped map of your site. Think of it as robots.txt for AI: a cooperative file that says "here is who we are, here are the URLs worth quoting, here are the facts you can cite verbatim."

A good `llms.txt` has:

  • Who you are in one paragraph — quotable.
  • Approved taglines a model can repeat without risk. (We list four.)
  • Canonical URLs for the topics you want cited. Link the final URL directly, never a redirecting path — a crawler following a redirect chain may drop the page or attribute the content to the wrong URL.
  • Contact + location as structured facts.

A good `llms-full.txt` goes deeper: every service's intro, deliverables, process, outcomes and FAQs — so an answer engine can respond to "what does <your brand> do?" or "how much does <your service> cost?" without ever clicking a link.

On our own site, both files are generated from `app/lib/services.ts` so they stay canonical as the business evolves. You can read them at lantana-labs.com/llms.txt and lantana-labs.com/llms-full.txt.
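Generating the file from one source of truth is the part worth copying. A minimal sketch of the idea — the `Service` shape, field names and base URL below are illustrative assumptions, not our actual `app/lib/services.ts`:

```typescript
// Illustrative sketch: render llms.txt from a canonical services array.
// The Service type and SITE constant are assumptions for this example.
interface Service {
  name: string;
  slug: string;
  summary: string; // one quotable sentence per service
}

const SITE = "https://www.lantana-labs.com"; // hypothetical base URL

function renderLlmsTxt(brand: string, about: string, services: Service[]): string {
  const lines = [
    `# ${brand}`,
    "",
    `> ${about}`, // the one-paragraph, quotable "who we are"
    "",
    "## Services",
    ...services.map(
      (s) => `- [${s.name}](${SITE}/services/${s.slug}): ${s.summary}`
    ),
  ];
  return lines.join("\n") + "\n";
}
```

Wire the same array into `llms-full.txt`, your sitemap and your service pages, and the files cannot drift apart.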

2. Enrich your structured data — LLMs quote specifics

Answer engines are allergic to vagueness. They want a number, a name, a date, a price. If your site says "our projects typically start from a few thousand dollars," no model will quote it — but if it says "from USD 3,000 for a focused engagement," that sentence is a candidate for citation.

Three schema additions that pay off more than anything else:

  • Service + Offer + PriceSpecification on every service page, with honest USD minimums and maximums. "Contact us" is not a price.
  • FAQPage JSON-LD scoped to each service page, with 8–12 real questions and ≤60-word answers. These are disproportionately represented in AI answers.
  • Article + `BreadcrumbList` on every blog post and case study, with `datePublished`, `author` (linked to Organization) and `keywords`.

When we added `offers.priceSpecification` to our nine services, Perplexity picked it up inside 72 hours — pricing questions about us now cite real ranges rather than "reach out for details."
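For illustration, the first schema pattern looks like this — the service name and price range below are made-up examples, not our live markup:

```json
{
  "@context": "https://schema.org",
  "@type": "Service",
  "name": "AI Visibility Audit",
  "provider": { "@type": "Organization", "name": "LantanaLABS" },
  "offers": {
    "@type": "Offer",
    "priceCurrency": "USD",
    "priceSpecification": {
      "@type": "PriceSpecification",
      "minPrice": 3000,
      "maxPrice": 9000,
      "priceCurrency": "USD"
    }
  }
}
```

The honest `minPrice`/`maxPrice` pair is the point: a range is specific enough to quote, while "contact us" gives the model nothing to cite.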

3. Open `robots.txt` to the major AI crawlers

Every month a new one shows up: GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot-Extended, Meta-ExternalAgent, DuckAssistBot, cohere-ai. A lot of sites reflexively block them to "stop training on our content" — which is a defensible position until you realize the same bots are how your content gets *cited*.

For a services business, citation frequency is worth far more than the marginal "loss" of your content being in a training corpus. We explicitly allow seventeen named AI user-agents in our `robots.txt` and have the data to show it was the right call.

If your business model does depend on paywalling content from models (publishers, research platforms), that is a different conversation. For everyone else, open the door.
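The allowlist itself is boring, which is the point. A trimmed sketch of the `robots.txt` pattern (we name seventeen agents; four shown here, and the sitemap URL is illustrative):

```text
# Explicitly welcome the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

# Everyone else: normal rules
User-agent: *
Allow: /

Sitemap: https://www.lantana-labs.com/sitemap.xml
```

Naming each agent explicitly also gives you a place to revoke one later without touching the default rule.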

4. Write for citation, not for ranking

Old SEO rewarded long, comprehensive pages. Answer engines prefer pages that read like documentation: a one-paragraph definition at the top, a table of specifics, a short Q&A block at the bottom.

Our rewrite pattern for every service page:

  1. Definition card (≤100 words). A single quotable paragraph: what the service is, how long it takes, USD range, team size, geography.
  2. "What it costs" — a small table, not prose.
  3. "How long it takes" — timeline, week by week.
  4. "Who it's for / who it's not for" — the *disqualification* answer is what LLMs like best.
  5. 8–12 Q&As scoped to real queries people type into ChatGPT.
  6. "Quote this" — a canonical one-liner at the bottom with explicit citation permission.
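The Q&A block in step 5 is also where the FAQPage JSON-LD from section 2 lives. An illustrative two-question fragment — the questions, answers and numbers are invented for this example:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How much does an AI visibility audit cost?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "From USD 3,000 for a focused engagement, typically delivered in two to three weeks."
      }
    },
    {
      "@type": "Question",
      "name": "Who is this service not for?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Brands without a crawlable public website. Fix that first."
      }
    }
  ]
}
```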

This is technical writing applied to a marketing site. If you grew up writing brochure copy, it will feel wrong for two weeks. Do it anyway.

5. Measure the referral, because nobody else will

Here is the problem: most AI assistants link out without UTM parameters, and the `Referer` header is often blank. Tools that claim to "track AI traffic" are mostly guessing.

What actually works:

  • Add a "How did you hear about us?" question to your contact form, with the major AI assistants as checkbox options plus a free-text field. This attribution alone is worth the next three tools you were about to install.
  • Log the `Referer` header server-side on every pageview. When it is `chat.openai.com`, `claude.ai`, `perplexity.ai` or `gemini.google.com`, attribute accordingly.
  • Watch for `?utm_source=chatgpt.com` — ChatGPT does tag some outbound links this way. It is partial but real data.
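The server-side classification in the second point is a few lines. A sketch — the function name is ours, and the hostname map covers only the engines named in this post:

```typescript
// Map a Referer header to an AI answer engine, if any.
// Hostnames are the ones named in this post; extend as new engines appear.
const AI_REFERRERS: Record<string, string> = {
  "chat.openai.com": "chatgpt",
  "chatgpt.com": "chatgpt",
  "claude.ai": "claude",
  "perplexity.ai": "perplexity",
  "www.perplexity.ai": "perplexity",
  "gemini.google.com": "gemini",
};

function classifyAIReferrer(referer: string | null): string | null {
  if (!referer) return null;
  try {
    const host = new URL(referer).hostname;
    return AI_REFERRERS[host] ?? null;
  } catch {
    return null; // malformed or relative header value
  }
}
```

Log the result alongside each pageview and the channel comparison in the next paragraph falls out of your own database, not a vendor dashboard.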

In 90 days of doing this you will know, concretely, which answer engines send the best-qualified pipeline. We already see meaningful differences between channels that would have looked identical under GA4.

Time to first signal

For what it is worth:

  • Perplexity is the fastest mover — it re-indexes quickly and rewards freshly published `llms.txt` files. We have seen brands start appearing in Perplexity answers within 30 days.
  • ChatGPT takes longer because its training corpus refreshes less often — expect 60–90 days for established domains.
  • Google AI Overviews is the slowest and most conservative. Treat it like the old, slow SEO curve: months, not weeks.

The compounding effect is what matters. Every quarter you stay in the citation pool, the advantage grows — and the cost of displacing you goes up.


Working on AI visibility for your brand? We build AI-search programs for brands ready to be cited, not just ranked. Start a brief.

Related service

Digital Growth

Full-service digital growth — SEO, AI visibility (ChatGPT, Claude, Perplexity, Google AI Overviews), paid media, content, social, lifecycle and privacy-first website analytics. For ambitious brands worldwide; Nairobi-based.

Learn more