The Practical Steps For Fitness Operators To Close The AEO Gap

Ian Mullane, CEO and Founder of Keepme, transitions from research to reality by providing a clear, four-step roadmap to fixing the fitness industry’s AI visibility gap. From updating robots.txt to include the 13 essential AI crawlers to moving schema server-side, Ian outlines the low-lift, high-impact changes that ensure your brand is seen by ChatGPT and Perplexity.
Ian Mullane
May 14th, 2026

In my two previous articles, I said AEO (Answer Engine Optimization) was not rocket science and could deliver a near-instant improvement. I meant it. The practical starting point for most operators is considerably simpler than the scale of the problem suggests.

Start with your robots.txt file

This is the fastest change with immediate effect. Go to yourdomain.com/robots.txt. If you have one, you will see a list of rules telling automated visitors what they can and cannot access. What you will almost certainly not see is any of the following names: OAI-SearchBot, PerplexityBot, Google-Extended, ChatGPT-User, ClaudeBot, GPTBot.

Add them. Explicitly, by name, with Allow: / rules. There are 13 AI crawlers worth addressing, split across three categories: search crawlers that directly affect your visibility in AI answers, user-triggered agents that browse on behalf of live users, and training crawlers that feed the underlying models. The format is simple. Your web team, or frankly anyone who can edit a text file and upload it to a server, can do this in under an hour.
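
As a sketch, the additions look like this, showing only the six crawlers named above; the remaining entries on the list of 13 follow the same pattern:

User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /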

The reason it matters is that silence is not the same as permission. Defaults that pre-date the question are not a deliberate stance. Naming crawlers explicitly signals to every AI system that you have thought about this. More immediately, it means you cannot accidentally block the systems that matter. In our audit of 901 operators, 49 had done exactly that: blocked AI search crawlers as a side effect of a Cloudflare toggle, and most of them had no idea.

Build the llms files

An llms.txt file is a plain text document you place at the root of your domain, yourdomain.com/llms.txt. It gives AI agents a direct, structured briefing on what your business is, where it operates, and what it offers. Not a web page. Not a sitemap. A machine-readable document written specifically for the systems that are increasingly deciding whether your business gets recommended.

The companion file, llms-full.txt, is the comprehensive version: every location address and phone number, opening hours for each day, membership tiers with pricing, class types, FAQ content written as explicit question and answer pairs. The content that an AI needs to answer specific questions accurately about your business, without having to crawl every page to find it.
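
A minimal sketch of what the index file might contain, following the emerging llms.txt convention (every name, address, and number below is invented for illustration):

# Example Fitness
> A three-club operator in Manchester, Leeds, and York offering gym floor access, group classes, and personal training.

## Locations
- Manchester: 12 Example Street, M1 1AA, +44 161 000 0000, open 06:00-22:00 daily
- Leeds: 4 Sample Road, LS1 1AA, +44 113 000 0000, open 06:00-22:00 daily

## Memberships
- Flex: monthly rolling, all clubs
- Annual: 12-month term, all clubs plus guest passes

The llms-full.txt file extends the same structure with the complete detail: full opening hours, pricing, class timetables, and FAQ pairs.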

These files do not require developer access. They are text files. You need SFTP access or a file manager, a clear picture of your locations and services, and an hour of structured writing. For operators with multiple clubs, particularly those whose sites load location data via JavaScript that AI crawlers cannot see at all, these files are the primary way to ensure that information is accessible.

If you are already on Cloudflare, turn on Markdown for Agents

Go to your Cloudflare dashboard. Find AI Crawl Control. Toggle Markdown for Agents on. This takes 60 seconds and costs nothing on Pro plans and above. When an AI agent requests your pages with the appropriate content negotiation header, Cloudflare converts the HTML to Markdown before serving it: an 80% reduction in token usage, with your JSON-LD schema preserved in the output. It is an additional layer, not a substitute for everything above, but it is the easiest quick win available to any operator already on Cloudflare.
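
To verify the toggle is working, request a page the way an agent would. My understanding is that Cloudflare keys this off standard content negotiation, an Accept: text/markdown request header, but check Cloudflare's current documentation for the exact trigger:

curl -H "Accept: text/markdown" https://yourdomain.com/

If the toggle is on, the response comes back as Markdown rather than HTML.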

The structural change that makes everything else work harder

Schema injected through Google Tag Manager is invisible to most AI crawlers. GTM fires after JavaScript executes inside a browser. AI crawlers do not use browsers. By the time GTM would have added your ExerciseGym or LocalBusiness schema blocks, those crawlers have already received the raw HTML and moved on.

The fix is server-side injection: move the schema into the raw HTML response, so it is in the page before it is served to anyone. For WordPress operators with Yoast or RankMath already active, this is an extension of the existing schema graph, not a replacement. For WordPress operators without server-side schema, a lightweight plugin hooks into wp_head and outputs the blocks directly, as sketched below. For non-WordPress sites, it is a template change that puts JSON-LD into the head of each relevant page type.
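
For the WordPress case, a minimal sketch of the wp_head approach looks like this. The plugin name and the hard-coded values are placeholders; a real plugin would load per-location data from your own settings:

<?php
/*
Plugin Name: Example Schema Injector
Description: Outputs JSON-LD in the raw HTML response so AI crawlers can see it.
*/

add_action( 'wp_head', function () {
	// Placeholder data; a real plugin would load this per location.
	$schema = array(
		'@context' => 'https://schema.org',
		'@type'    => 'ExerciseGym',
		'name'     => 'Example Fitness Manchester',
	);
	// Printed during wp_head, so the block is in the served HTML itself.
	echo '<script type="application/ld+json">' . wp_json_encode( $schema ) . '</script>';
} );

Because the hook runs on the server, the block is present in the raw HTML that crawlers receive, no JavaScript execution required.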

The schema types that matter for fitness operators are ExerciseGym and LocalBusiness. The former tells AI systems what kind of business this is. The latter, applied per location, tells them where it is, what the hours are, what the phone number is, and what amenities are available. Without these, you are visible to the systems that render JavaScript, primarily Google's own crawlers, and invisible to GPTBot, ClaudeBot, PerplexityBot, and OAI-SearchBot.
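
As an illustration, a per-location block might look like the following. Every detail here is invented, and you should verify property names such as amenityFeature against schema.org's current definitions before shipping:

{
  "@context": "https://schema.org",
  "@type": "ExerciseGym",
  "name": "Example Fitness Manchester",
  "telephone": "+44 161 000 0000",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "12 Example Street",
    "addressLocality": "Manchester",
    "postalCode": "M1 1AA",
    "addressCountry": "GB"
  },
  "openingHoursSpecification": [
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
      "opens": "06:00",
      "closes": "22:00"
    }
  ],
  "amenityFeature": [
    { "@type": "LocationFeatureSpecification", "name": "Swimming pool", "value": true }
  ]
}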

The discipline that makes it hold

Schema gets overwritten by theme updates. robots.txt gets regenerated by plugins. llms files go stale when clubs open, close, or change their hours. Membership pricing changes. Class programmes evolve. The operators who stay visible are not the ones who fixed everything once and walked away. They are the ones who check these signals regularly and correct drift before it compounds.

A monthly check on each of these four signals is enough for most operators. What you are looking for is that robots.txt still names the crawlers you added, that the schema is still in the raw HTML response, that the llms files still return HTTP 200 and remain accurate, and that no new Cloudflare rule has inadvertently blocked access. These are five-minute checks. The alternative is rebuilding your AEO work from scratch every six months because something silently changed.
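
Those checks are easy to script. Assuming your files live at the standard paths, three commands cover most of it; swap in your own domain and crawler names:

curl -s https://yourdomain.com/robots.txt | grep GPTBot
curl -s -o /dev/null -w "%{http_code}\n" https://yourdomain.com/llms.txt
curl -s https://yourdomain.com/ | grep -c 'application/ld+json'

The first confirms the crawler entries survived any plugin regeneration, the second should print 200, and the third confirms the schema is still in the raw HTML rather than injected by JavaScript.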

What we built

Once we had audited 901 operators and published the findings, the natural thing would have been to stop there. The research told the story. The problem was documented. We had done the work.

I did not find that satisfying. The industry had a structural gap and a clear way to close it, and most operators had neither the time to do it themselves nor a clear starting point. We had already built the framework, validated the scoring, and knew exactly what good looked like. The question was whether we would build the tool that automated it.

Beacon is that tool. It is an agent on Antares, our AI agent platform, and it does what this series has described: it visits a fitness operator's website the way an AI would, scores it across the five-step framework, and generates everything needed to close the gap. That means an updated robots.txt, ExerciseGym and LocalBusiness schema for every location, the llms.txt index, the llms-full.txt corpus, a WordPress plugin where the site needs one, and a written brief the operator can send to their web team the same day. For a ten-club operator, the whole process takes under three minutes.

If you want to do this work yourself, everything in this series is what you need. If you want to understand how Antares approaches it for operators who want it done for them, that conversation starts at Keepme.