I work on AI search visibility for businesses that already have an SEO function but haven't adapted to the citation era. The approach combines traditional SEO foundations (because 88.46% of ChatGPT citations come from sites that already rank in Google per the April 2026 Ahrefs 1.4M-prompt study) with proprietary AI-citation engineering (because the other 11.54% is where the moat lives). Five steps, four frameworks, and a short list of things I refuse to do.
This page is the methodology behind every engagement I run. Read it before booking a call so you know exactly what you're hiring.
The 5-step engagement process
Step 1: AI Visibility Audit
Every engagement opens with a structured probe of how AI engines currently see your brand. I run 20 target prompts across ChatGPT, Perplexity, Claude, and Google Gemini. That is 80 individual probes per month. Each probe logs whether your brand appeared at all, in what position, in what context, and which competitors were named alongside it.
Per the SparkToro and Gumshoe Jan 2026 study (2,961 prompts, 100 runs each), AI ranking position is statistically meaningless; visibility frequency is the metric that matters. Across repeated runs of the same prompt, the same list of brands appears in exactly the same positions less than 1% of the time. The right question is not what position your brand sits at. The right question is whether your brand appears at all across 60 to 100 runs of the same prompt.
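Visibility frequency is simple to compute once the probes are logged. A minimal sketch, assuming each run is logged as the ordered list of brands an engine named for one prompt (the log format and brand names here are hypothetical, not the actual tracker):

```python
def visibility_frequency(runs, brand):
    """Share of runs in which the brand appeared at all,
    regardless of position within the answer."""
    hits = sum(1 for brands in runs if brand in brands)
    return hits / len(runs)

# Hypothetical probe log: one prompt, three runs on one engine.
runs = [
    ["BrandA", "BrandB", "BrandC"],
    ["BrandB", "BrandD"],
    ["BrandA", "BrandB"],
]
print(round(visibility_frequency(runs, "BrandA"), 2))  # 2 of 3 runs -> 0.67
```

Position is deliberately ignored: per the study above, it carries almost no signal, so frequency of appearance is the only number worth tracking over time.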
The audit also covers traditional SEO baseline checks (the 35-item checklist from the SEO baseline framework: title patterns, schema engineering, internal link mechanics, alt text density, robots.txt stance, llms.txt deployment), but the AI visibility data is the lead because it's where the strategic conversation begins.
Step 2: Strategy + Roadmap
The audit gives the data. Strategy translates it into a 90-day execution plan with clear weekly priorities. I write the strategy as a working document, not a deliverable to file. It contains: the central entity declaration, the topical map (Koray Tugberk Gubur's vocabulary: contextual vector, source context, quality node concentration), the cluster bet (deepen one terrain, do not pivot sideways), the funnel target (newsletter, lead form, or direct booking depending on the business), and the explicit list of what we will not do.
The strategy gets revisited monthly. Reality has a vote. If the data shifts, the plan shifts.
Step 3: Schema + Engineering Layer
The technical foundations most consultancies skip. Per the Svalbardi engineering teardown (April 2026, the textbook reference example for topical authority): most ecommerce sites give Google 3 to 5 schema fields. Svalbardi gives Google about 25. The same gap exists on most B2B and consultancy sites.
I work through the 10-item engineering checklist (#26 to #35 in the SEO baseline framework):
- Organization schema with founder.knowsAbout and full sameAs
- Article schema with an articleBody field for clean LLM ingestion
- Product or ProfessionalService schema with complete fields
- Hub and brand-trust pages with WebPage and BreadcrumbList at minimum
- Internal link mechanics in body copy (3 to 7 links, first-link-exact-match rule)
- Cross-cluster bridge links
- Image alt text as topical reinforcement
- Title patterns with consistent brand suffix
- A declared AI crawler stance (Position 1 block training or Position 2 allow training, but deliberate either way)
- The agents.md plus UCP layer for ecommerce sites
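For the first checklist item, the shape of the markup matters more than the tooling. A minimal sketch of an Organization block with the founder.knowsAbout and sameAs fields, built as a Python dict and serialised to JSON-LD; every value here is a placeholder, not a real client's markup:

```python
import json

# Hypothetical Organization schema; names, URLs, and topics
# are placeholders to show the field structure only.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Consultancy",
    "url": "https://example.com",
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://x.com/example",
    ],
    "founder": {
        "@type": "Person",
        "name": "Jane Example",
        "knowsAbout": ["AI search visibility", "Schema engineering"],
    },
}

# Emit the JSON-LD payload that would sit in a <script> tag.
print(json.dumps(organization, indent=2))
```

The gap the Svalbardi teardown exposes is exactly this: most sites stop at name and url, leaving the entity-disambiguation fields (sameAs, founder, knowsAbout) empty.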
Step 4: Content + Topical Authority
This is where most of the time goes. Per Koray's framework, topical authority requires:
- Source-context purity: every page reinforces the central entity
- Quality node concentration: delete pages that don't earn their keep rather than adding more
- Macro-then-micro article structure: flat H2s ordered by prominence, then H2-with-H3 expansion sections
- Knowledge-base vocabulary consistency: one canonical phrasing per entity across the cluster
- AOR-to-Core authority routing: educational pages link into the commercial money page
I work to a strict cap: 2 new pages plus 1 refresh per week maximum. Volume is the trap. Per Lily Ray's repeated theses across 28 SlideShare decks 2018 to 2026: scaling content without editorial oversight is a Helpful Content Update violation. Quality node ratio matters more than page count.
Step 5: Measurement
What gets measured improves. The standing measurement stack:
- Google Search Console weekly review: clicks, impressions, position, click-concentration risk
- Bing Webmaster Tools monthly CSV harvest: Bing surfaces niche, LLM-shaped queries that GSC's volume drowns out
- Monthly AI citation probe: the 20 prompts repeated and tracked over time
- Brand search ratio: S over U, per Google patent US 9,031,929
- AIO presence checks on the top 10 pages each month
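The brand search ratio can be approximated from a standard GSC query export. A minimal sketch, assuming S counts impression volume for queries containing a brand term and U counts total impression volume; that reading of S over U is my working interpretation for monitoring, not a quotation of the patent's exact definitions, and the query data below is invented:

```python
def brand_search_ratio(query_counts, brand_terms):
    """S over U as a monitoring proxy: S = impression volume of
    queries containing a brand term, U = total impression volume.
    Definitions are a working interpretation, not the patent text."""
    s = sum(v for q, v in query_counts.items()
            if any(t in q.lower() for t in brand_terms))
    u = sum(query_counts.values())
    return s / u if u else 0.0

# Hypothetical GSC export: query -> impressions.
gsc_sample = {
    "acme seo consultant": 120,
    "ai seo consultant": 300,
    "acme reviews": 80,
    "schema markup guide": 500,
}
print(brand_search_ratio(gsc_sample, ["acme"]))  # 200 / 1000 = 0.2
```

Tracked monthly, the direction of the ratio matters more than its absolute value: a rising share of branded demand is the signal.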
Reports are short. One page per month. The data, the calls I made on the data, the calls coming up next month.
The 4 proprietary frameworks
Info-gain workflow
The cached structural template I use for every article and every refresh:
- Snippet-lead intro: 50 to 150 bolded words that directly answer the article's primary query (the AI citation lever, per the Svalbardi reference)
- Main H2s ordered by prominence
- An AI search angle H2 specific to the topic
- FAQ section with 5 to 7 question-answer pairs
- Sources & Further Reading block with 3 to 5 authoritative links
- Keep Reading block with 3 to 4 internal links to the cluster
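Because the template's constraints are numeric, conformance is checkable before publish. A minimal sketch of that check, where the dict keys and the draft below are my own illustrative naming, not a standard article format:

```python
def check_template(article):
    """Rough conformance check against the cached template's
    numeric constraints; returns one boolean per rule."""
    intro_words = len(article["snippet_intro"].split())
    return {
        "snippet_intro_50_150_words": 50 <= intro_words <= 150,
        "faq_5_to_7_pairs": 5 <= len(article["faq"]) <= 7,
        "sources_3_to_5": 3 <= len(article["sources"]) <= 5,
        "keep_reading_3_to_4": 3 <= len(article["keep_reading"]) <= 4,
    }

# Hypothetical draft stub that satisfies every constraint.
draft = {
    "snippet_intro": " ".join(["answer"] * 80),   # 80-word intro
    "faq": [("q", "a")] * 6,                      # 6 Q&A pairs
    "sources": ["https://example.com/a"] * 4,     # 4 external links
    "keep_reading": ["/internal"] * 3,            # 3 internal links
}
print(all(check_template(draft).values()))  # True
```

Automating this kind of gate is part of how refresh time dropped from hours to minutes: the structure is verified mechanically, so attention goes to the substance.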
The template plus the build automation took article refresh time from 4 to 7 hours per page down to 15 to 30 minutes. The template is the unlock. Cognitive cycles go to substance instead of structure.
AI Agent UX audit
The 10-step checklist for whether AI agents (Claude Computer Use, OpenAI Operator, Microsoft Copilot Vision, Google Gemini agents) can actually use your website. Built from Google's web.dev April 2026 agent-friendly websites guidance. Available as a free interactive tool at agent-friendliness-audit. Most sites score below 60 on first audit. The fixes are mostly accessibility best practices that have been ignored for a decade.
Bing-as-signal
Pull a fresh Bing Webmaster Tools CSV monthly. Bing surfaces niche, long-tail, and LLM-shaped queries that GSC's volume drowns out. With ChatGPT search and Microsoft Copilot now running on Bing's index, position in Bing correlates with AI-mediated traffic in a way it didn't pre-2024. The May 3 2026 harvest on this site delivered 4 new pages' worth of opportunity from a single CSV export. Triage by tier: positions 1 to 15 with no existing page is a build-new opportunity, clustered queries are an aggregator-page opportunity, and striking-distance queries are refresh candidates.
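The triage rules above reduce to a few lines once the CSV is parsed. A minimal sketch, assuming each export row carries a query, an average position, and the page it currently ranks with (those column names are my assumption about the export shape, and the query clustering step for aggregator pages is omitted here):

```python
def triage(rows, existing_pages):
    """Tier each Bing query per the triage rules. rows are dicts
    with "query", "position", and "page" (empty if no page ranks).
    Striking distance is taken here as positions 11-20, an assumed
    band rather than a quoted definition."""
    tiers = []
    for r in rows:
        has_page = r.get("page") in existing_pages
        if r["position"] <= 15 and not has_page:
            tiers.append((r["query"], "build-new"))
        elif 11 <= r["position"] <= 20 and has_page:
            tiers.append((r["query"], "refresh"))
        else:
            tiers.append((r["query"], "review"))
    return tiers

# Hypothetical export rows and site inventory.
rows = [
    {"query": "llms.txt example", "position": 8, "page": ""},
    {"query": "schema faq markup", "position": 14, "page": "/faq-schema"},
    {"query": "what is seo", "position": 35, "page": ""},
]
print(triage(rows, {"/faq-schema"}))
```

The output is the monthly worklist: build-new queries seed new pages, refresh queries go into the weekly refresh slot, and the rest wait for the next harvest.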
Practitioner aggregator pages
The reusable content format that captures multiple LLM-shaped reference queries in one page. Summarise the official sources (Google Search Central, Bing Webmaster, Schema.org, MDN), add practitioner notes per source, link to the canonical doc. First validated instance was seo-official-guidelines-cheatsheet, which captured 11 LLM-shaped reference queries (positioned 3 to 12 in Bing) in one 1,776-word page. Reusable for AI search documentation, schema reference, ranking factors, tool comparisons, algorithm history.
What I won't do
The contrarian section. I am explicit about this so prospects can self-select.
I won't sell schema as a magic AI citation lever. Per Pedro Dias (theinference.io, May 2026) and the Princeton GEO paper (Aggarwal et al, KDD 2024), schema is not a direct AI citation lever. The transformer architecture has no schema parser inside the model. The 9 optimisation methods Princeton tested and found effective were content-side: citations from credible sources, quotations, statistics, fluency, prose clarity. Schema was not in the tested list. I implement schema for Knowledge Graph and entity disambiguation work (settled), and to reduce parsing tax for retrieval-side LLMs that read JSON-LD (defensible). I do not sell schema as the citation lever it isn't.
I won't push "AEO is the new SEO" as a positioning frame. Per Eli Schwartz (productledseo.com, April 2026): AEO is not SEO 2.0. AEO is a contextual layer on top of foundational SEO. The 88% Ahrefs finding (pages cited by ChatGPT come from the Google search index) confirms the priority order. Strong SEO comes first. AI citation engineering layers on top.
I won't build link farms or run aggressive guest-post campaigns. Google's spam systems are sharper than they were in 2018. The cost of detection compounds. I build links the slow way: digital PR, original data studies, genuinely linkable resources, named relationships with industry publications.
I won't promise sub-30-day citation lift on competitive queries. Per Lily Ray's AWIN keynote 2026 evidence pack: AI search citation cycles take 4 to 6 months minimum to show measurable lift. Anyone promising faster is either lying or working a niche where the queries are uncontested. I'd rather lose the deal than make the promise.
I won't sell scaled AI content production as a service. Lily Ray's repeated thesis: scaled AI content without editorial oversight is a Helpful Content Update violation. Site-wide quality classifier downgrade. Recovery cycles take 12+ months. The downside risk is asymmetric.
How an engagement actually unfolds
Week 1 is the audit and the strategy doc. Weeks 2 to 4 are the engineering layer fixes (schema, robots.txt, llms.txt, internal link audit, alt text expansion). Weeks 5 to 8 are the first content publish wave, usually the AI search and visibility cluster gets the depth treatment. Weeks 9 to 12 are measurement, refinement, and the next strategic call.
I work directly with you. There is no junior account manager between you and me. If you need a 10-person agency team executing across content, technical, PR, and analytics simultaneously, the right move is a hybrid: direct consulting access from me plus the option to flex in StudioHawk's 120-person team for execution. We work out the right model in the initial consultation.
FAQ
How is your methodology different from a generic SEO consultant?
Three differences. First, I lead with AI search visibility data (the 20-prompt probe), not Google rankings. Second, I treat the engineering layer (#26 to #35 of the SEO baseline framework) as foundation work, not optional add-ons. Third, I'm explicit about contested findings (schema as AI lever, AEO framing, scaled AI content) where most consultants oversell. The contrarian early-mover positioning is the differentiator.
Do you have a defined deliverable cadence or is it custom per engagement?
The 5-step process is the spine. Custom inside that. Project audits typically deliver in 3 to 4 weeks. Ongoing retainers run on a monthly cadence with a strategy revisit each quarter.
What does the AI Visibility Audit actually look like as an output?
One-page summary plus the raw data. Summary covers: visibility frequency per engine per prompt, where competitors are appearing, the 5 highest-leverage moves to close gaps. Raw data is the 80 monthly probes logged in a tracker spreadsheet you keep ownership of.
How long until I see results?
Traditional SEO results typically take 3 to 6 months. AI search citation lift takes 4 to 6 months minimum per Lily Ray's evidence pack. Some clients see meaningful citation improvements within 4 to 8 weeks of targeted optimisation, but planning for the longer cycle is more honest.
Do you work with brands outside Australia?
Yes. I'm Melbourne-based. Engagements run remotely with periodic in-person sessions for Melbourne-based clients who want them. International clients (UK, US, APAC) are common.
What's the smallest engagement you'll take?
Free 30-minute consultation always. Project audit and strategy from $5,000. Ongoing consulting from $156,000 + GST per year. Pricing details: see how much does an AI SEO consultant cost.
Sources & Further Reading
- Google web.dev: Build agent-friendly websites (April 2026)
- Ahrefs: 1.4M ChatGPT prompt citation study (April 2026)
- Pedro Dias: The whole point was the mess (May 2026)
Keep Reading
Soaring Above Search
Weekly AI search insights from the front line. One newsletter. Six sections. Everything that actually moved this week, with a practitioner's take.