Written by Lawrence Hitches | AI SEO Consultant | April 19, 2026 | 8 min read

Every agency in Australia right now is doing the same thing: taking their existing SEO service deck, replacing "rankings" with "citations", adding a slide about AI Overviews, and calling it an AEO programme.

Most of those programmes are failing. Not because AI search is impossible to influence, but because the mental model is wrong from the start.

I've been running AI search visibility work across StudioHawk's client portfolio since early 2025. The single biggest pattern I see in failing programmes isn't tactics. It's the assumption that AI visibility is SEO with different metrics. It's not. It's a different game with different rules, and the sooner you rebuild from that premise, the sooner the work starts producing results.

The core trap: importing the SEO mental model into AI search

Here's how traditional SEO works. You target keywords. You optimise pages to rank for those keywords. You track positions. You improve what's falling.

Every part of that model breaks when applied to AI search.

There are no keywords to rank for. LLMs don't maintain a keyword index. There's no position 1 through 10 to occupy. There's no fixed list of queries to optimise against. Every user prompt is different, every LLM response is personalised, and the surface area is effectively infinite.

When I talk to clients who've "tried AEO," the failure mode is almost always the same. They've set up prompt tracking (a list of 20 branded queries, manually checked weekly). They've added FAQ schema to their pages. They've written some "answer-first content." Six months later, the needle hasn't moved, and they've concluded AI search isn't worth the effort.

The problem wasn't the effort. The problem was the frame.

Rankings don't exist in LLMs: what the surface area actually looks like

When someone searches Google for "enterprise SEO agency Melbourne," there's a SERP. There are 10 organic results. Position 1 exists, and you can track it.

When someone asks ChatGPT the same question, there is no SERP. The response is generated from the model's knowledge plus real-time retrieval. Different users asking the same prompt on different days, different devices, or different versions of the model get different answers.

This matters because prompt tracking is not rank tracking.

If you check 20 branded prompts manually each week and record whether your brand appeared, you're not measuring your AI visibility. You're measuring one data point in an infinite distribution. The statistical noise in that method is so large that a brand appearing in 14 of your 20 checks versus 16 of 20 is not a meaningful signal either way.

Research from Graphite confirms this: AI visibility measurement requires 200+ prompt runs per query to produce statistically valid results. The typical "track 20 prompts weekly" approach has confidence intervals so wide that you literally cannot tell if you've improved or not.
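To make the confidence-interval point concrete, here's a quick sketch using the Wilson score interval (my choice of method for illustration; it's a standard way to put error bars on a binomial proportion, not necessarily Graphite's exact methodology). At 20 checks, the intervals around "14 of 20" and "16 of 20" overlap heavily; at 200 runs, the intervals are far narrower.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 14/20 vs 16/20: the two intervals overlap heavily,
# so the "improvement" is indistinguishable from noise.
lo_a, hi_a = wilson_interval(14, 20)    # ≈ (0.48, 0.85)
lo_b, hi_b = wilson_interval(16, 20)    # ≈ (0.58, 0.92)

# Same proportions at ten times the sample: much tighter intervals.
lo_c, hi_c = wilson_interval(140, 200)  # ≈ (0.63, 0.76)
lo_d, hi_d = wilson_interval(160, 200)  # ≈ (0.74, 0.85)
```

Run the numbers and the 20-check interval is roughly three times as wide as the 200-run one, which is the whole argument against manual spot checks in one calculation.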

Visibility can't be manufactured the way SEO rankings could

Here's the uncomfortable part for anyone who built a career on link-building and technical optimisation.

In traditional SEO, you could manufacture rankings. Enough links from enough relevant sites, combined with solid on-page optimisation, and you could move a page up the SERP for a targeted keyword. It wasn't cheap or easy, but it was a direct mechanical process.

AI visibility doesn't work that way. LLMs gravitate toward entities the world already knows about. The training data is the web. The retrieval layer is search results. Both reflect actual brand presence, earned over time, not manufactured on a timeline.

Eli Schwartz put this clearly: "LLMs learn about your brand from the world. If the world doesn't know you, neither will an LLM."

That statement sounds obvious when you read it. But its implications are significant. If 85% of what LLMs learn about your brand comes from earned media (third-party coverage, citations, mentions, reviews, press), and only 13% comes from your own published content, then most of what SEO teams actually control is the minority signal.

The primary lever for AI visibility is digital PR and brand authority building. Not on-page optimisation. For most agencies, that's a service they've never sold as the centrepiece of an AI visibility programme.

Why links are not citations (and who benefits from the confusion)

One of the most common misframings I see in AI search content: "earn citations like you earn links."

It sounds right. Citations are mentions of your brand or content in AI responses. Links are mentions of your site on other pages. Both involve getting referenced somewhere you didn't control. Similar enough?

Not at all.

A link passes PageRank and relevance signal through a specific anchor text relationship. It's a structured, mechanical signal with a clear algorithmic pathway.

An AI citation is a language model deciding that your content best answers a specific prompt. It's downstream of retrieval rank, heading match, content structure, entity association, training data coverage, and a dozen other variables that are not addressable through link building tactics.

The reason the "citations as links" framing is so prevalent is that it gives agencies something familiar to sell. "We'll build you AI citations through our outreach programme." It maps AI search onto an existing service line. But the pathway from a piece of coverage to an AI citation is fundamentally different from the pathway from a backlink to a ranking improvement.

Building brand presence for AI isn't link building with a new name. It's a different brief.

LLMs are personalised: why the same prompt returns different answers

I want to make this concrete because it's the thing most SEO practitioners don't fully internalise until they've run the same prompt 50 times and seen how much the answers vary.

Ask ChatGPT "what's the best SEO agency in Melbourne" today. Write down the answer. Ask it again tomorrow from a different account. Ask it from your phone. Ask it from a colleague's laptop. You'll get different brand combinations, different emphasis, different phrasing.

The personalisation layer in LLMs means that your brand's AI visibility is not a fixed state that can be tracked with a handful of weekly checks. It's a probability distribution across a vast range of prompt variants, user contexts, model versions, and retrieval results.

This is why measuring AI visibility correctly requires statistical methodology, not manual spot checks. And it's why the "we improved from 3 mentions to 7 mentions this month" reports that agencies are sending clients are, in most cases, statistically meaningless.

The terminology shift: what's actually winning

The naming debate in this space is more than semantic. The term you use signals your mental model of the problem.

Aleyda Solis surveyed terminology adoption across the SEO industry in April 2026:

  • AI Search Optimisation: 36% adoption
  • GEO (Generative Engine Optimisation): 18%
  • LLMO (Large Language Model Optimisation): 5%
  • AEO (Answer Engine Optimisation): 4%

"AI Search Optimisation" is winning because it's honest about what the work actually is: optimising for visibility in AI-powered search. It doesn't imply a mechanical ranking process (GEO, AEO) or a technical model-manipulation process (LLMO). It correctly frames the problem as visibility in a new class of search interfaces.

I use "AI Search Optimisation" in client conversations for the same reason. It sets better expectations from the start. We're working on brand visibility in AI search. We are not manufacturing positions in a ranking system that doesn't exist.

AEO needs SEO as its foundation, not as its template

None of this means traditional SEO is irrelevant to AI visibility. It's the opposite.

Google indexing is the entry ticket to ChatGPT retrieval. Aleyda confirmed this experimentally: an unindexed page is invisible to ChatGPT regardless of how well-structured it is. Pages need to rank in Google's top 10 to enter the retrieval pool. That's a traditional SEO problem.

The distinction is: SEO is the foundation for AI visibility, not the model for it. You need ranking to get retrieved. You don't get cited just because you rank. The citation layer requires different work: brand authority, content structure, entity associations, third-party coverage.

SEO and AI search optimisation are not rivals. But they're not the same either. Teams that conflate them end up doing more of the same SEO work and being confused when AI visibility doesn't improve.

What you actually need to measure

If prompt tracking isn't the answer, what is?

The metrics that actually correlate with improving AI visibility are mostly brand marketing metrics.

Branded search volume. If people are searching your brand name on Google, LLMs are seeing that signal reflected in the training data and retrieval layer. Branded search growth is one of the clearest proxies for improving AI visibility.

Third-party mention share. How often does your brand appear in coverage, reviews, forum discussions, and industry content you didn't produce? This is the 85% of training signal you can actually influence at scale.

AI referral traffic in GA4. Real referral traffic from claude.ai, chatgpt.com, and perplexity.ai is the only direct measurement of AI search generating actual visits. It undercounts the real volume, but it's real. Track it properly.

Statistical citation tracking (200+ runs per query). If you're going to track prompt-level visibility, do it with enough sample size to be meaningful. Tools like Graphite run the statistical methodology properly. One-off manual checks are noise.

The irony is that most of these metrics belong to a brand marketing brief, not an SEO brief. That's not a failure of SEO. It's the correct framing of what AI visibility actually requires.

Frequently asked questions

What's the difference between AEO and SEO?

SEO optimises for keyword rankings in search engines with defined result sets. AEO (or more accurately, AI Search Optimisation) works toward brand visibility in LLM-generated responses, which have no fixed rankings, are personalised per user, and are driven primarily by brand authority and earned media rather than on-page signals. SEO is a prerequisite for AI visibility (ranking is the retrieval gate), but the optimisation layer above that is fundamentally different.

Should I use AEO, GEO, LLMO, or AI Search Optimisation?

AI Search Optimisation is the most accurate and most widely adopted term (36% industry adoption vs 4% for AEO). It correctly describes the problem: optimising for visibility in AI-powered search. GEO and AEO both imply a mechanical ranking process that doesn't exist in LLMs. Use the term that sets the right expectations with clients.

Is prompt tracking reliable for measuring AI visibility?

Not at low sample sizes. Research from Graphite shows you need 200+ prompt runs per query to generate statistically valid AI visibility data. Weekly manual checks of 20 prompts produce confidence intervals so wide that positive or negative changes cannot be meaningfully detected. If you're reporting on AI visibility, use proper statistical methodology or accept that you're reporting noise.

Do backlinks work for AI search visibility?

Backlinks help you rank in Google, which helps you get retrieved by ChatGPT. At the retrieval layer, traditional link-building has an indirect effect via ranking. But links don't directly drive AI citations. What drives citations is content structure, entity density, brand authority, and third-party coverage. The 85% of AI brand knowledge that comes from earned media is not the same as link equity.

Why do LLMs show different results to different users?

LLMs generate responses rather than retrieving a fixed result. Personalisation, retrieval variation, model version differences, and the probabilistic nature of language generation all contribute to response variability. This is why the same prompt can return different brand mentions across sessions. AI visibility is a probability, not a position. Measuring it requires statistical sampling, not spot checks.


Lawrence Hitches | AI SEO Consultant, Melbourne

Chief of Staff at StudioHawk, Australia's largest dedicated SEO agency. Specialising in AI search visibility, technical SEO, and organic growth strategy. Leading a team of 115+ across Melbourne, Sydney, London, and the US. Book a free consultation →