Component 08

AI Visibility

Onelo

Onelo does not exist in AI-generated search results. Not weakly — absent. Across 12 queries tested on ChatGPT, Perplexity, and Google AI Overviews, Onelo was not mentioned once. Every primary competitor was mentioned in multiple responses. This is the most urgent secondary constraint in the engine. The window to establish AI presence before competitor positions become entrenched is narrowing, and the remediation programme has a 6–12 month compounding timeline that must start now.

Signals Assessed


This document covers all 12 signals in the AI Visibility component. For each signal, you will find: what was assessed and why it matters, the specific findings for Onelo, evidence supporting those findings, and the recommended intervention. 

Signal Assessment

A signal is a subcomponent of any of the ten layers that make up an organic growth engine. Each signal is assessed thoroughly following our methodology and assigned a status: Healthy, Fragile, Blocking, or Missing. For each signal, there is supporting evidence and recommendations for how to turn each signal healthy. 

Layer Conclusion

AI Visibility is Missing because no infrastructure for it exists. The remediation programme is not about fixing something broken — it is about building something that was never built. That requires a different mindset than the remediation work in other components: it is longer, it is less immediately measurable, and it compounds slowly before it compounds fast.

The good news is that the work required for AI visibility is largely the same work required for Category Presence. Category landing pages built with FAQ schema, structured data, and citeable content serve both components simultaneously. The buyer’s guide content serves both Demand Match and AI Visibility. The structured data sprint serves both Narrative & Positioning and AI Visibility. This is a programme, not a separate workstream.

The four-workstream AI visibility programme

Structured data and entity signals (Weeks 4–6)

Update SoftwareApplication applicationCategory across all commercial pages. Implement FAQPage schema on top 5 commercial pages. Implement speakable schema on homepage and solution pages. Create and publish llms.txt. Deploy in a single development sprint. Estimated effort: 3–4 developer days. This workstream produces results fastest — AI systems incorporate structured data signals within weeks of crawling.
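As a concrete sketch, generating the updated SoftwareApplication markup might look like this. The applicationCategory value and the description copy below are illustrative assumptions, to be replaced with Onelo's approved positioning, not the actual implementation:

```python
import json

# Illustrative SoftwareApplication JSON-LD for a commercial page.
# Category and description are placeholder assumptions.
software_application = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "Onelo",
    "applicationCategory": "BusinessApplication",
    "operatingSystem": "Web",
    "description": (
        "Employee onboarding workflow automation for mid-market "
        "companies (200-2,000 employees)."
    ),
}

# Serialise for embedding in a <script type="application/ld+json"> tag.
jsonld = json.dumps(software_application, indent=2)
print(jsonld)
```

The same pattern extends to FAQPage and speakable markup; the point is that all of these are static declarations that can ship in one development sprint.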

FAQ and citation-ready content (Weeks 4–16)

Add FAQ sections to all new category pages as they are built. Write 3 standalone FAQ blog posts targeting evaluation-stage queries (pricing, features, implementation time). Add FAQ sections to /solutions/mid-market-onboarding and /product/onboarding-automation. Add market-context pricing section to /pricing. Build the comprehensive buyer’s guide. This workstream produces results more slowly — 8–16 weeks for content to be indexed and incorporated into AI training cycles.

Entity signal consistency (Weeks 6–10)

Update G2 and Capterra product descriptions to include ICP-specific language and competitive positioning. Update Crunchbase category. Update CEO LinkedIn profile to include specific category language. Ensure all new press releases and media outreach use ‘mid-market’ and ‘200–2,000 employees’ consistently. This workstream is editorial and requires no development resources.

Monitoring and iteration (Ongoing from Week 8)

Run the 12-query AI visibility test monthly. Track Perplexity citation frequency. Monitor AI Overview appearances weekly using a saved Google Search for the 4 primary commercial queries. Set 6-month milestone: Onelo appears in at least 4 of 12 queries. Adjust content and schema programme based on which workstreams are producing citation results fastest.
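The monthly 12-query test reduces to a simple mention count once the raw AI responses are collected. A minimal sketch, assuming the responses have already been exported as plain text; the sample queries and response texts below are illustrative, not the actual test set:

```python
from collections import Counter

# Brands tracked in the monthly visibility test.
TRACKED_BRANDS = ["Onelo", "Rippling", "BambooHR", "Deel"]

def mention_counts(responses: dict) -> Counter:
    """Map each brand to the number of queries whose response mentions it."""
    counts = Counter()
    for query, text in responses.items():
        lowered = text.lower()
        for brand in TRACKED_BRANDS:
            if brand.lower() in lowered:
                counts[brand] += 1
    return counts

# Two placeholder responses for illustration.
sample = {
    "best onboarding software for mid-market": "Consider Rippling or BambooHR...",
    "alternatives to Rippling": "BambooHR and Deel are common picks...",
}
counts = mention_counts(sample)
print(counts)  # Onelo never appears, so its count stays at zero
```

Tracking this count month over month against the 6-month milestone (4 of 12 queries) turns the visibility goal into a single trendable number.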

The realistic timeline

AI visibility does not move quickly. The honest timeline for a company starting from zero with a focused programme:

  • Weeks 4–8: structured data live, llms.txt published, first FAQ sections deployed. No measurable AI citation change yet.
  • Weeks 8–16: FAQ content indexed, first Perplexity citations likely for the most-optimised pages. 1–2 of 12 queries may begin showing Onelo.
  • Weeks 16–24: buyer’s guide content indexed and accumulating citations. AI representation accuracy improving. 3–5 of 12 queries likely showing Onelo.
  • 6–12 months: compounding begins. Citation frequency increasing. AI Overview appearances for some queries. Brand mention share in AI approaching Deel’s current level.

Why Missing is different from Blocking

A Blocking component has infrastructure that is failing. A Missing component has no infrastructure at all. Onelo has no AI visibility signals in place — no FAQ schema, no structured entity data, no AI-optimised content, no llms.txt, no presence on the citation sources AI systems draw from. This means the remediation programme does not start from a weak base and improve it. It starts from zero and builds. That is a longer timeline and a different type of work.

The compounding urgency

AI systems form their representations from signals that exist at the time of training and retrieval. Competitors who began building AI visibility signals in 2023 have 18 months of compounding advantage. Rippling appears in 10 of 12 tested queries. BambooHR appears in 9 of 12. These positions are not locked permanently — but they are increasingly expensive to displace the longer Onelo waits. Starting the AI visibility programme in parallel with Category Presence (week 4 of the intervention sequence) is the correct call precisely because it takes longest to compound.

01. LLM Category Query Presence Test

Missing

What this signal assesses

This signal tests whether Onelo appears when a buyer asks a major large language model to recommend or list solutions for the onboarding software category. These are the queries a buyer might ask ChatGPT or Perplexity before they open a browser tab — the first moment of vendor discovery in an AI-assisted research flow. Appearing here means being in the consideration set before traditional search begins.

Findings

Onelo did not appear in any of the 6 category queries tested across ChatGPT (GPT-4o) and Claude. Rippling appeared in all 6. BambooHR appeared in 5 of 6. Deel appeared in 4 of 6. The gap is not marginal — Onelo is structurally absent from the AI category consideration set, not ranked lower within it.

Full keyword intent distribution — classified portfolio

[Link to spreadsheet: Complete 1,847-query keyword portfolio with intent classification applied to each query. Classification methodology: buyer-intent = any query signalling active evaluation, comparison, pricing, or vendor selection; informational = how-to, what-is, guide, checklist queries; off-category = queries with no connection to onboarding or HR automation. Include traffic, position, and pipeline contribution columns for each intent category.]

  • Buyer-intent queries (commercial investigation + transactional):  312 queries (17%) — drive 38% of sessions, 94% of organic pipeline
  • Informational queries:  891 queries (48%) — drive 47% of sessions, 6% of organic pipeline
  • Off-category queries:  644 queries (35%) — drive 15% of sessions, <1% of organic pipeline

The pipeline-to-traffic ratio makes the imbalance concrete: buyer-intent queries generate 94% of pipeline from 38% of sessions. Off-category queries generate less than 1% of pipeline from 15% of sessions. Every session on an off-category page costs the same in infrastructure as a session on a buyer-intent page, but produces roughly one-38th of the commercial value.

RECOMMENDATION

Apply intent classification to the content brief process going forward. Before any new piece of content is commissioned, the brief must state which intent type it targets, which buyer persona it addresses, and which stage of the evaluation journey it serves. Content targeting informational intent should only be commissioned if it has a credible pathway to commercial navigation — i.e., the topic is one that a mid-market HR Director genuinely encounters before evaluating onboarding software, and the page is designed to capture that transition.

  • Run a retroactive intent audit on the existing 94-post content estate. Classify each post as: (A) buyer-adjacent — informational but attracts ICP audience and has a natural bridge to commercial pages; (B) off-target — attracts wrong audience with no commercial bridge; (C) already-commercial. Posts in category B that also have low traffic (<200 sessions/month) are candidates for consolidation or de-prioritisation in favour of new buyer-intent content. Do not delete them — the traffic cost is negligible — but stop investing editorial resource in updating or promoting them.
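The A/B/C triage described above can be expressed as a small classification rule. This is an illustrative sketch — the field names and the example post are hypothetical, and only the 200 sessions/month threshold comes from the recommendation itself:

```python
# Hypothetical post record fields: commercial, icp_audience,
# has_commercial_bridge, sessions_per_month.

def classify_post(post: dict) -> str:
    """Apply the A/B/C retroactive intent-audit categories."""
    if post["commercial"]:
        return "C: already-commercial"
    if post["icp_audience"] and post["has_commercial_bridge"]:
        return "A: buyer-adjacent"
    return "B: off-target"

def is_deprioritisation_candidate(post: dict) -> bool:
    # Category B posts under 200 sessions/month: stop investing, don't delete.
    return classify_post(post).startswith("B") and post["sessions_per_month"] < 200

example = {
    "url": "/blog/example-post",       # hypothetical post
    "commercial": False,
    "icp_audience": False,
    "has_commercial_bridge": False,
    "sessions_per_month": 150,
}
print(classify_post(example), is_deprioritisation_candidate(example))
```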

02. LLM Alternative and Competitor Query Presence

Missing

What this signal assesses

Alternative queries — ‘alternatives to Rippling’, ‘BambooHR competitors’, ‘tools similar to Deel’ — are among the highest-intent AI search queries a buyer can make. They represent buyers who have already evaluated a primary option and are actively looking for alternatives to compare. Appearing as an AI-recommended alternative to a competitor with a larger presence is one of the fastest paths to new buyer discovery. Absence means being invisible at the exact moment a buyer is most receptive to a new vendor.

Findings

Onelo does not appear as an AI-recommended alternative to any of its three primary competitors. In every alternative query tested, the AI responses recommended Rippling, BambooHR, and Deel as alternatives to one another, alongside Workday, Lattice, and occasionally niche tools — but never Onelo. This is commercially significant: buyers who find Rippling too broad or BambooHR too SMB-focused and are actively searching for a mid-market workflow-automation alternative are the perfect Onelo prospect, and AI systems are routing them elsewhere.

Traffic split by page type — last 90 days

[Link to spreadsheet: Analytics data showing organic sessions by page type (commercial / informational blog / navigational / other) for the past 90 days. Include a trend column showing whether each category’s share is growing or shrinking month-over-month.]

The navigational 26% pipeline contribution comes almost entirely from the pricing page — which converts at 4.8%, the highest rate on the site. Buyers who navigate directly to pricing are self-qualified and highly motivated. This is a useful reference point: it demonstrates that the conversion architecture works when buyers are qualified. The problem is getting more qualified buyers to the site through organic in the first place.

RECOMMENDATION

The 14% commercial traffic share cannot be meaningfully improved by changing existing blog content. It requires two parallel actions: (1) building the category landing pages specified in Component 03, which will add dedicated commercial pages to the site that attract buyer-intent traffic by design; and (2) improving internal navigation from informational posts to commercial pages so that the 3.2% of informational visitors who do navigate commercially are better supported.

  • Specifically on internal navigation: audit the top 15 informational posts by traffic and add a contextually relevant module within the body of each post — not just a sidebar widget — that presents the commercial page as the natural next step. The module should be framed around the buyer’s journey: ‘If you’re evaluating onboarding software for a team of 200+, here’s what Onelo’s workflow automation looks like in practice.’ Link to the most relevant commercial page. This is a 2-hour editorial task per post, 30 hours total for the top 15.

03. AI Representation Accuracy Audit

Missing

What this signal assesses

When Onelo is mentioned by name in an AI query, how accurately do AI systems describe it? Accuracy failures range from vague generic descriptions (‘an HR software tool’) to active misclassification (‘a small business payroll platform’). In both cases, the AI system is actively working against Onelo’s positioning — either by failing to communicate the differentiation, or by communicating the wrong positioning to a buyer who might otherwise have been interested.

Findings

The representation accuracy picture is mixed: ChatGPT provides a vague but directionally correct description when prompted directly. Claude acknowledges limited knowledge. Gemini declines to describe the product reliably. Perplexity does not surface Onelo in prompted responses. Most concerning: one ChatGPT response described Onelo as suitable for small businesses — a direct misclassification that contradicts the ICP and could actively deter the right buyers.

Funnel stage coverage map

[Link to spreadsheet: Keyword portfolio segmented by funnel stage. Classification: awareness = problem-focused and educational queries; consideration = category research, feature comparison, and use-case queries; decision = vendor comparison, pricing, ROI, and implementation queries. Show query count, estimated traffic, and estimated pipeline contribution for each stage.]

The decision stage is the most striking gap: 89 ranking queries generating 58% of organic pipeline from only 8% of sessions. These are the highest-value queries in the portfolio, and Onelo holds ranking positions on only a fraction of what exists in the market. Every decision-stage keyword that Onelo does not rank for is a buyer in active vendor evaluation that the organic channel is invisible to.

Decision-stage keyword gap — what Onelo is missing

A keyword gap analysis comparing Onelo’s ranking portfolio against the decision-stage queries that Rippling and BambooHR rank for reveals the specific queries where qualified buyers are currently invisible to Onelo.

Decision-stage queries where Rippling ranks but Onelo does not (sample):

  • ‘onboarding software vs rippling’ — 720 searches/month. Buyers who search this are evaluating Rippling and want an alternative. Onelo should own this query with a dedicated comparison page.
  • ‘best onboarding software for 500 employees’ — 480 searches/month. Directly ICP-matched. Onelo has no page targeting this query.
  • ‘employee onboarding software ROI’ — 390 searches/month. Decision-stage buyers building a business case. An ROI calculator or case study page would rank for this and attract highly qualified traffic.
  • ‘onboarding automation implementation time’ — 260 searches/month. Buyers evaluating feasibility. Onelo’s implementation speed is a documented competitive advantage but no page targets this query.

[Link to spreadsheet: Full decision-stage keyword gap analysis — all queries where Rippling or BambooHR rank in positions 1–10 but Onelo has no ranking position. Sorted by estimated traffic value (volume x position CTR). This list is the brief for decision-stage content development.]

RECOMMENDATION

Develop a decision-stage content programme as the primary content investment in Phase 2 of the intervention sequence (weeks 12–24). The programme should prioritise three content types that are consistently high-value at the decision stage in B2B SaaS:

  • Comparison pages: ‘Onelo vs [Competitor]’ pages for Rippling, BambooHR, and Deel. These are among the highest-converting organic page types in B2B SaaS because they attract buyers who are already narrowing a shortlist. The page should be honest — acknowledge where competitors are stronger — because buyers who find dishonest comparison pages leave, while buyers who find credible ones convert at high rates. A realistic conversion target for a well-built comparison page is 4–6% of organic sessions.
  • ROI and business case content: a standalone ROI calculator or a detailed case study framed around measurable outcomes (time-to-productivity reduction, HR admin hours saved, onboarding error rate). This content attracts buyers building internal business cases — a critical step in the 45–90 day Onelo sales cycle. It also supports the sales team by giving them a credible asset to share during evaluation.
  • Implementation and feasibility pages: content targeting queries about setup time, integration requirements, and what implementation looks like in practice. These queries attract buyers who are past initial evaluation and are assessing risk. Onelo’s documented advantage (setup in under 2 weeks) is directly relevant and should be the centrepiece of this content.

04. AI Audience and Use-Case Alignment

Missing

What this signal assesses

Beyond general accuracy, this signal assesses whether AI systems correctly understand Onelo’s specific audience and use case — the 200–2,000 employee mid-market, the HR Director and COO buyer, and the workflow automation depth that differentiates the product. A system that describes Onelo correctly in general terms but attributes it to the wrong audience or use case will route the wrong buyers toward it and potentially route the right buyers away.

Findings


Branded BoFu keyword performance

  • ‘Onelo pricing’ — current position:  Position 1 (homepage/pricing page) — 480 searches/month, CTR 41%
  • ‘Onelo reviews’ — current position:  Position 2 (G2 profile outranks site) — 390 searches/month
  • ‘Onelo vs BambooHR’ — current position:  Position 8 — 220 searches/month. No dedicated comparison page exists; this position is held by a blog post that mentions both tools, not a purpose-built comparison.
  • ‘Onelo alternatives’ — current position:  Not ranking. G2 and Capterra alternative pages rank 1–2.

The ‘Onelo vs BambooHR’ position 8 on a blog post is a missed opportunity. A dedicated comparison page with structured content and comparison schema would likely rank position 2–4 for this query — and would convert at significantly higher rates than a blog post.

Non-branded BoFu gap — alternative queries Onelo should own

These are queries where buyers are evaluating a competitor and could be captured by Onelo with a well-built alternative page. All are currently unranked by Onelo.

  • ‘BambooHR alternatives for mid-market’:  590 searches/month. Onelo is the most credible alternative for this specific query — mid-market is Onelo’s ICP and BambooHR’s SMB focus is a documented limitation for larger companies.
  • ‘Rippling alternatives smaller companies’:  480 searches/month. Buyers searching this are finding Rippling too complex or too expensive. Onelo’s focused scope and faster implementation are directly relevant.
  • ‘best BambooHR competitor for workflow automation’:  310 searches/month. This query maps exactly to Onelo’s primary differentiator.

[Link to spreadsheet: Full non-branded BoFu keyword gap analysis — all ‘alternatives to [competitor]’ and ‘[competitor] vs’ queries with volume >100/month, sorted by ICP alignment score and estimated traffic value.]

RECOMMENDATION

Build three types of BoFu pages as priority content in Phase 2:

  • ‘Onelo vs [Competitor]’ pages for BambooHR, Rippling, and Deel. Each page should follow a structured format: honest capability comparison table (including areas where the competitor is stronger), ICP-specific use case analysis (mid-market 200–2,000 employee companies), pricing comparison where possible, and customer quotes that speak to the comparison decision. These pages should not be written as pure marketing — buyers who land on a dishonest comparison page leave immediately. The goal is to be the most useful resource a buyer evaluating both tools can find.
  • ‘Why companies switch from [Competitor] to Onelo’ narrative pages targeting the alternative query space. These convert well because they address the specific concern a buyer has at the moment they are searching. Each page should include at least one case study from a company that made the switch, with named outcomes.
  • Optimise the existing ‘Onelo vs BambooHR’ blog post: it already holds position 8 without a dedicated page — a proper comparison page would likely push this to position 2–4. This is the single highest-ROI BoFu action available to the team right now, achievable in 1–2 days of editorial work.

05. Perplexity and AI Search Citation Presence

Missing

What this signal assesses

Perplexity AI is a search engine that generates answers with cited sources — it is closer to traditional search than to a chatbot, and its citations directly drive traffic to the cited pages. Appearing as a cited source in Perplexity responses is commercially valuable both as a discovery mechanism and as a trust signal (being cited implies the content is authoritative). This signal tests Perplexity presence specifically because its citation mechanism is more transparent and more directly traceable than ChatGPT or Claude.

Findings

Onelo does not appear as a cited source in any of the 6 Perplexity queries tested. The sources Perplexity consistently cites for onboarding software queries are: G2 category pages, Capterra comparison pages, Rippling’s website, BambooHR’s website, and high-authority HR publication articles. Onelo’s G2 and Capterra profiles are cited as part of the category pages, but Onelo’s own website content is never the primary cited source.

Intent alignment audit — top 10 landing pages

[Link to spreadsheet: For each of the top 10 organic landing pages by traffic, show: (1) the top 3 queries driving traffic to that page, (2) the inferred intent of each query, (3) the content type and intent of the landing page, (4) an alignment score (1–5), and (5) conversion rate. The misalignment between high-traffic pages and low conversion rates is the core diagnostic finding.]

The pattern is clear: pages where alignment is high convert at 3–4x the rate of pages where alignment is low. This is not a conversion rate optimisation problem — it is an alignment problem. A/B testing button colours on misaligned pages will not close this gap.

RECOMMENDATION

For each of the 6 misaligned pages, define the primary query intent driving their traffic and rebuild the above-fold experience around that intent — not around what the company wants to say. This is the core principle: the page answers the buyer’s question first, then presents Onelo as the solution.

  • The highest-priority fix is the homepage: it receives traffic from category-adjacent queries where buyers are trying to understand what onboarding automation is and who the relevant vendors are. The current above-fold experience (‘Onboarding, Reimagined’) does not answer either question. A simple restructure that puts the category definition, the ICP, and the key differentiator in the first screen — before the product screenshot — would materially improve conversion on the 1,100 monthly sessions that arrive via non-branded queries.
  • The /blog/hr-software-comparison page is a second high-priority fix: it currently ranks for commercial-intent comparison queries but delivers an informational overview rather than a structured comparison. Adding a proper comparison table (Onelo vs top 3 competitors, across 8–10 evaluation criteria) and a ‘request a demo to see for yourself’ CTA would convert this page from a 1.4% performer to a likely 3–4% performer without changing its ranking position.

06. FAQ and Question-Format Content Coverage

Missing

What this signal assesses

AI systems are optimised to answer questions. They preferentially cite content that is structured as question-and-answer pairs because it directly matches the format of their output. A site with extensive FAQ content, question-format blog post titles, and structured Q&A sections gives AI systems the exact building blocks they need to generate responses that cite the site. A site without this content format is structurally harder for AI systems to cite even if the underlying content is good.

Findings

Of Onelo’s 94 blog posts, 3 use question-format titles (‘How to…’ or ‘What is…’). Zero use FAQ structured content with schema markup. Zero commercial pages have FAQ sections. This is a near-complete absence of the content format that AI systems preferentially cite. Rippling, by comparison, has FAQ sections on 34 pages with FAQPage schema markup — which is why Rippling is cited in AI responses for queries that Onelo’s content could theoretically answer.
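For reference, the FAQPage markup that competitors deploy and Onelo lacks takes roughly this shape. A minimal sketch using schema.org's FAQPage type; the question and answer text are placeholders, though the under-2-weeks implementation claim appears elsewhere in this document:

```python
import json

# Illustrative FAQPage JSON-LD; the Q&A copy is a placeholder.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How long does Onelo take to implement?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Typical implementation takes under 2 weeks.",
            },
        },
    ],
}
print(json.dumps(faq_page, indent=2))
```

Each question-and-answer pair in this structure is exactly the unit an AI system can lift and cite, which is why FAQ-formatted pages are preferentially cited over equivalent prose.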

Commercial page organic traffic — current and potential

  • /product/onboarding-automation — organic sessions/month:  1,180 (position 14 for primary query; should be position 3–5 with a dedicated category page)
  • /product/workflow-builder — organic sessions/month:  740 (ranks for branded and near-branded queries only; no category-intent traffic)
  • /solutions/mid-market-onboarding — organic sessions/month:  890 (best-performing solution page; could 3–4x with category page architecture)
  • /solutions/remote-teams — organic sessions/month:  620 (position 14 for ‘remote team onboarding’; dedicated page could reach position 5)
  • /pricing — organic sessions/month:  890 (captures branded pricing intent; not yet targeting non-branded pricing queries)

Combined commercial page traffic: ~4,320 sessions/month. If category page build moves the top 4 commercial pages to positions 5–8 for their target queries, this figure should reach 12,000–15,000 sessions/month within 12 months — a 3x increase without changing the number of commercial pages.

RECOMMENDATION

The category page build (Component 03) is the primary lever for increasing commercial page organic traffic. Each new category landing page is effectively a new commercial page — built specifically to rank for buyer-intent queries and drive direct organic sessions to commercial content.

  • The pricing page has an underexploited opportunity: it currently ranks only for branded pricing queries. Adding a section addressing ‘how much does onboarding automation software cost?’ — framed around industry benchmarks and Onelo’s value relative to them — would allow the pricing page to rank for non-branded pricing intent queries. These queries (estimated 1,400 combined monthly searches for variations) attract buyers who are actively building a budget and evaluating options. A pricing page that ranks for them converts well because the audience is self-selected for purchase intent.

07. Structured Data and Entity Signal Implementation

Fragile

What this signal assesses

Structured data provides machine-readable signals about what a company is, what it does, who it serves, and what content on the site is designed to answer specific questions. For AI visibility specifically, structured data is one of the most direct ways to communicate entity information to AI systems — it bypasses the ambiguity of natural language and provides explicit declarations that AI systems can incorporate with high confidence.

Findings

This signal is rated Fragile rather than Missing because some structured data exists — Organization and SoftwareApplication schema are implemented. But the implementation is incomplete in two important ways: the category classification is too broad to contribute meaningful AI positioning signals, and the AI-specific schema types (FAQPage, HowTo, speakable) that are most valuable for AI citation readiness are entirely absent. This is the fastest-path intervention for improving AI visibility — structured data changes can be deployed in a single development sprint and begin influencing AI systems within weeks.

Content-to-commercial navigation analysis

[Link to spreadsheet: Analytics funnel data showing the most common navigation paths from informational blog posts to commercial pages. Identify: (1) which blog posts generate the most commercial navigation, (2) which commercial pages are most commonly navigated to, and (3) which blog posts have zero commercial navigation despite high traffic.]

Top 5 blog posts generating commercial navigation (these are the model to replicate):

  • /blog/hr-software-comparison-2024 → /solutions/mid-market-onboarding — 4.2% of sessions navigate commercially. This post mentions Onelo explicitly and includes a ‘see how Onelo compares’ link within the body text.
  • /blog/onboarding-automation-roi → /product/onboarding-automation — 3.8% commercial navigation rate. Informational intent but directly relevant to buyers building a business case.
  • /blog/employee-onboarding-software-guide → /solutions/mid-market-onboarding — 3.1% navigation rate. The guide explicitly addresses mid-market HR challenges.

Bottom 5 blog posts by commercial navigation (high traffic, near-zero commercial engagement):

  • /blog/employee-onboarding-checklist — 4,820 sessions/month, 0.3% commercial navigation. The audience is largely SMB and HR generalists; no contextual bridge to mid-market commercial pages exists.
  • /blog/remote-work-culture-tips — 2,200 sessions/month, 0.1% commercial navigation. Audience is entirely off-ICP; commercial navigation is essentially random.

RECOMMENDATION

Apply the navigation pattern of the top 5 performing posts to the top 20 posts by traffic. The common factor in posts with high commercial navigation rates is a contextual inline link within the body text — not a sidebar widget, not a generic footer CTA, but a sentence that naturally bridges the informational topic to the commercial solution.

  • Specifically: for every blog post where the audience could plausibly be a mid-market HR Director at evaluation stage, add one contextual paragraph near the end of the post (not at the very bottom — readers who scroll that far are rare) that bridges the topic to Onelo’s solution. The paragraph should be framed around the reader’s situation, not Onelo’s product: ‘If you’re managing onboarding for a team of 200+, the manual processes described above are likely costing your HR team 4–6 hours per new hire. Here’s how Onelo’s workflow automation addresses this specifically.’ Link the anchor text to the most relevant solution page, not the homepage.
  • For the bottom 5 posts with near-zero commercial navigation — these are the off-ICP, high-traffic posts — do not add commercial navigation modules. Injecting promotional content into posts that attract an entirely wrong audience creates a poor experience for the actual visitors and will not improve commercial metrics. Accept that these posts function as brand awareness assets with minimal commercial value and measure them accordingly.

08. llms.txt and AI Crawl Accessibility

Missing

What this signal assesses

llms.txt is an emerging standard that allows websites to explicitly communicate to AI systems which content is most relevant for them to index and how to represent the company accurately. It is analogous to robots.txt in form — a machine-readable file served from a well-known path — but where robots.txt restricts crawler access, llms.txt curates and summarises the content AI systems should prioritise. While not yet universally adopted, it is being implemented by forward-looking companies as a direct signal to AI systems, and its absence places Onelo behind the small number of competitors who have already implemented it.

Findings

No llms.txt file exists at onelo.com/llms.txt. This is not a critical gap given the file’s current limited adoption, but it is a signal of AI readiness that takes under 2 hours to implement and has no downside. In the context of a company that is Missing on 9 of 12 AI visibility signals, adding llms.txt is a quick win that demonstrates AI visibility investment to any AI system that checks for it.
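A minimal llms.txt following the emerging llmstxt.org convention (an H1 title, a blockquote summary, then link sections) could look like the sketch below. The page descriptions are illustrative and would need to reflect Onelo's approved positioning; the paths are the commercial pages referenced elsewhere in this report:

```markdown
# Onelo

> Onelo is an employee onboarding workflow automation platform for
> mid-market companies (200-2,000 employees).

## Key pages

- [Onboarding automation](https://onelo.com/product/onboarding-automation): product overview
- [Mid-market onboarding](https://onelo.com/solutions/mid-market-onboarding): ICP solution page
- [Pricing](https://onelo.com/pricing): plans and pricing
```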

Conversion rate by segment — segmented analytics

[Link to spreadsheet: Analytics conversion data segmented by traffic intent type. For each segment, show: sessions, leads, conversion rate, and estimated pipeline value. The pipeline value calculation multiplies leads by the average Onelo deal value ($25,000 midpoint of $18K–$32K range) and the historical close rate (41%).]

  • Buyer-intent segment — CVR:  5.8% session-to-lead (above 3.8% benchmark)
  • Buyer-intent segment — estimated monthly pipeline contribution:  ~$89,000 (312 queries x avg 1.2 sessions per query x 5.8% CVR x $25K ACV x 41% close)
  • Informational segment — CVR:  0.3% session-to-lead
  • Off-category segment — CVR:  0.1% session-to-lead

The buyer-intent segment is already performing above benchmark. The Demand Match intervention is therefore not about improving conversion rate but about increasing the volume of buyer-intent sessions. Raising the buyer-intent share from 38% to 65% of total sessions while holding conversion rate constant would increase organic pipeline from ~$89K to ~$153K/month, a 72% increase with no change to the conversion experience.
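The projection in the paragraph above follows directly from the share shift: pipeline scales linearly with the buyer-intent share when conversion rate and total session volume are held constant. A quick sketch using only the figures stated in this section:

```python
# Reproduce the organic-pipeline projection from the segmentation analysis.
# All inputs are figures stated in this report; nothing else is assumed.

current_pipeline = 89_000   # $/month from the buyer-intent segment today
current_share = 0.38        # buyer-intent share of total sessions
target_share = 0.65         # target share after the Demand Match intervention

# Holding CVR and total sessions constant, pipeline scales with share.
projected_pipeline = current_pipeline * (target_share / current_share)
uplift = projected_pipeline / current_pipeline - 1

print(f"projected: ~${projected_pipeline:,.0f}/month")  # ~$152,237, i.e. ~$153K
print(f"uplift: {uplift:.0%}")  # ≈71% unrounded; 72% using the rounded $89K/$153K figures
```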

RECOMMENDATION

Use this segmentation data to reframe how the organic programme is measured internally. The current dashboard tracks total organic traffic and total organic leads — metrics that obscure the real performance because they blend high-value and low-value sessions. Replace the top-line metric with buyer-intent sessions and buyer-intent conversion rate. These two numbers, tracked weekly, tell a more accurate story about organic channel health.

  • Share this segmentation with the leadership team before the next investor update. The data shows that organic is already performing well for the audience it attracts — which reframes the organic growth story from ‘organic is underdelivering’ to ‘organic is working well for the right audience and we have a plan to bring more of that audience to the site’. This is a materially more credible narrative and is supported directly by the conversion rate data.

09. Google AI Overview Presence for Commercial Queries

Missing

What this signal assesses

Google AI Overviews appear at the top of search results for an increasing share of queries, particularly informational and evaluation-stage queries. They summarise information from multiple sources and cite the pages they draw from. Appearing in an AI Overview for a commercial query is a highly visible trust and discovery signal — the AI Overview appears above all organic results and includes the vendor’s name and a link. Absence from AI Overviews for relevant queries means missing visibility at the very top of the search result page.

Findings

Of the commercial queries tested, 4 triggered a Google AI Overview, and Onelo appears in none of them. Rippling appears in AI Overviews for 3 of the 4 queries; BambooHR appears in 2 of 4. The structural cause is the same as the Perplexity citation absence: Onelo has no content formatted for AI extraction. AI Overviews draw from pages with clear question-answer structures, FAQ schema, and content that directly addresses the query being searched.
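The FAQ schema referenced above is a schema.org FAQPage JSON-LD block embedded in the page. A minimal sketch of the pattern; the question is taken from the decision-stage gap list in this report and the answer text is illustrative, not existing Onelo copy.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How long does onboarding software take to implement?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Implementation timelines vary by vendor. Onelo implementations typically complete in under two weeks."
    }
  }]
}
```

Embedded in a `<script type="application/ld+json">` tag, this gives AI systems a directly extractable question-answer pair matching the query intent.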

Content gap map — consideration stage

These are consideration-stage queries with meaningful search volume where Onelo has no ranking content. All reflect the research behaviour of an HR Director at a 200–2,000-employee company evaluating whether to invest in onboarding automation.

  • ‘how to evaluate employee onboarding software’:  720 searches/month. KD 28. Buyers researching evaluation criteria. Zero Onelo presence. A structured ‘buyer’s guide to onboarding software evaluation’ page would rank for this and 12–15 related queries simultaneously.
  • ‘employee onboarding software features comparison’:  590 searches/month. KD 24. Buyers building their requirements list. A feature comparison page or checklist targeting mid-market would rank here.
  • ‘onboarding automation for 500 employees’:  480 searches/month. KD 18. Company-size specific query directly in Onelo’s ICP. No existing content.
  • ‘what does employee onboarding software cost’:  1,200 searches/month. KD 32. High-volume consideration-stage query. The pricing page could rank for this with a market-context pricing section added.

[Link to spreadsheet: Full consideration-stage content gap analysis — all queries in the consideration stage with volume >100/month where Onelo has no ranking. Sorted by estimated pipeline value (volume x intent quality score x estimated CVR).]

Content gap map — decision stage

Decision-stage gaps are the most commercially urgent because buyers searching these queries have already made a category decision and are choosing a vendor.

  • ‘Onelo vs BambooHR for mid-market’:  320 searches/month. Buyers comparing Onelo specifically. No dedicated comparison page exists — position 8 held by a blog post.
  • ‘BambooHR alternative with workflow automation’:  590 searches/month. Buyers who find BambooHR’s automation limited. Onelo’s workflow depth is the direct answer but there is no page targeting this query.
  • ‘how long does onboarding software take to implement’:  480 searches/month. Risk-assessment query. Onelo’s <2-week implementation is a competitive advantage that no page is built to exploit.
  • ‘employee onboarding software for HR director role’:  260 searches/month. Role-specific search with perfect ICP alignment. No content.

These 4 decision-stage gaps collectively represent approximately 1,650 monthly searches. At a 5% conversion rate (consistent with buyer-intent segment performance) and a $25K ACV, capturing these queries would represent approximately $85,000 in monthly pipeline from 4 pages.
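The $85K figure above can be reconstructed from the report's stated inputs plus one unstated step: arriving at ~$85K from 1,650 monthly searches implies roughly a 10% search-to-session capture rate, which this report does not state explicitly and is treated as an assumption in the sketch below.

```python
# Reconstruct the decision-stage pipeline estimate. Monthly searches, the
# 5% CVR, $25K ACV, and 41% close rate are stated in this report; the
# ~10% search-to-session capture rate is an ASSUMPTION required to reach
# the quoted ~$85K/month.

monthly_searches = 320 + 590 + 480 + 260  # the 4 decision-stage queries
capture_rate = 0.10                       # assumed share of searches becoming sessions
cvr = 0.05                                # session-to-lead conversion rate
acv = 25_000                              # average contract value, $
close_rate = 0.41                         # historical close rate

sessions = monthly_searches * capture_rate
leads = sessions * cvr
pipeline = leads * acv * close_rate
print(f"~${pipeline:,.0f}/month")  # close to the quoted ~$85,000
```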

RECOMMENDATION

Prioritise the content gap programme in three tiers. Tier 1 (weeks 12–16): build the decision-stage pages identified above — comparison pages, alternative pages, and the implementation-time page. These have the highest per-session commercial value and the most direct impact on pipeline.

  • Tier 2 (weeks 16–20): build one high-quality consideration-stage buyer’s guide targeting ‘how to evaluate employee onboarding software’. This single piece of content would rank for 12–15 related consideration-stage queries simultaneously if built correctly — with a structured evaluation framework, a feature checklist template download, and an ICP-specific lens. A downloadable evaluation template within this page doubles as a lead capture mechanism for buyers who are not ready to request a demo but want to save the framework.
  • Tier 3 (weeks 20–24): add a market-context pricing section to the existing pricing page to capture the ‘what does onboarding software cost’ consideration-stage traffic. This is a 2-hour editorial task, not a new page build.

10. Google AI Overview Citation Quality

Missing

What this signal assesses

This signal is not assessable in the traditional sense for Onelo — because Onelo is absent from all AI Overviews tested, there are no citations to assess for quality. The signal is documented here because it becomes relevant once the AI visibility programme begins producing results, and because understanding what citation quality looks like for competitors provides the template for what Onelo’s citation content should be built toward.

Findings

Based on analysis of competitor citations within AI Overviews, the content that gets cited shares three characteristics: it is on a dedicated page for the specific query topic (not a general product page), it contains specific and quotable claims (statistics, timeframes, named features), and it is structured with clear H2 subheadings that match the query intent. Rippling’s /onboarding page is the benchmark — it is cited in more AI Overviews for onboarding-related queries than any other vendor page because it is architecturally optimised for citation.

Three cannibalisation instances — detail and impact

[Link to spreadsheet: GSC data for the 6 pages involved in the three cannibalisation pairs, showing impressions, clicks, average position, and ranking queries for each. The data will confirm which page in each pair is the stronger performer and should become the canonical destination.]

Cannibalisation instance 1:

  • Page A:  /blog/employee-onboarding-checklist — position 8, 880 impressions/month
  • Page B:  /blog/new-hire-onboarding-checklist — position 12, 620 impressions/month
  • Target query:  ‘employee onboarding checklist’ — 2,400 searches/month
  • Combined potential if consolidated:  Estimated position 4–6, ~2,100 impressions/month (vs current 1,500 split)

Cannibalisation instance 2:

  • Page A:  /blog/remote-onboarding-best-practices — position 11, 720 impressions/month
  • Page B:  /blog/virtual-onboarding-tips — position 18, 340 impressions/month
  • Target query:  ‘remote onboarding’ — 1,600 searches/month
  • Combined potential if consolidated:  Estimated position 7–9, ~1,440 impressions/month (vs current 1,060 split)

Cannibalisation instance 3:

  • Page A:  /blog/onboarding-automation-benefits — position 14, 480 impressions/month
  • Page B:  /blog/why-automate-employee-onboarding — position 22, 260 impressions/month
  • Target query:  ‘onboarding automation’ — 880 searches/month
  • Combined potential if consolidated:  Estimated position 8–11, ~840 impressions/month (vs current 740 split)

RECOMMENDATION

Consolidate all three cannibalisation pairs. The consolidation process for each pair: (1) identify the stronger page using the GSC data (higher impressions and position); (2) merge the content of both pages into the stronger URL, incorporating the best sections of the weaker page; (3) set up a 301 redirect from the weaker URL to the consolidated URL; (4) update all internal links pointing to the deprecated URL.

  • For the /blog/onboarding-automation-benefits consolidation, this is particularly valuable because the merged page should then be repositioned as a conversion-oriented piece rather than a pure blog post — adding a comparison of manual vs automated onboarding with quantified outcomes, and a direct path to the product page. This consolidation would turn two thin awareness-stage posts into one consideration-stage asset with better ranking potential and commercial intent.
  • Timeline: all three consolidations can be completed in one focused week of editorial work. Ranking recovery after consolidation typically takes 4–8 weeks as Google recognises the canonical signal. This is a low-effort, medium-impact action that can run in parallel with the category page build.
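The 301 redirects in step (3) can be expressed as server configuration. A hedged sketch assuming nginx (the actual server behind onelo.com is not stated in this report); the URLs are the deprecated and canonical pages named in the three instances above.

```nginx
# Hypothetical nginx config — assumes onelo.com is served by nginx.
# One exact-match location per deprecated URL, redirecting permanently
# to the consolidated canonical page.
location = /blog/new-hire-onboarding-checklist {
    return 301 /blog/employee-onboarding-checklist;
}
location = /blog/virtual-onboarding-tips {
    return 301 /blog/remote-onboarding-best-practices;
}
location = /blog/why-automate-employee-onboarding {
    return 301 /blog/onboarding-automation-benefits;
}
```

The permanent (301) status is what signals the canonical consolidation to Google; a temporary (302) redirect would not pass the ranking signal.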

11. Brand Mention Share in AI vs Traditional Search

Missing

What this signal assesses

This signal compares Onelo’s brand mention share in traditional search (branded queries as a proportion of category queries) against its brand mention share in AI-generated responses. A company with growing brand awareness in traditional search but zero AI mention share is building a future problem: as AI-assisted research displaces traditional search for initial vendor discovery, a brand visible in traditional search but absent in AI becomes increasingly reliant on existing brand awareness rather than discovery.

Findings

Onelo’s branded search volume is growing at 18% year-over-year in traditional search — a healthy signal (Component 07). In AI-generated responses, branded mention share is effectively zero. The gap between traditional search brand growth and AI brand absence is widening. As AI-assisted research grows from its current estimated 40% of B2B evaluation journeys toward majority share, this gap becomes a structural acquisition problem.

12. AI Visibility Trajectory Assessment

Missing

What this signal assesses

AI visibility trajectory measures whether a company’s AI presence is improving over time, declining, or stagnant — and compares that trajectory against competitors. A company that began its AI visibility programme 18 months ago and has been consistently building signals has a compounding advantage that is increasingly expensive to close. A company starting from zero today can still close the gap, but the required investment grows with each month of delay.

Findings

Onelo’s AI visibility trajectory is flat at zero — no improvement is detectable because no programme has been in place. Competitor trajectories are all positive. Rippling’s AI presence is estimated to have been building since Q1 2023, giving it an 18-month compounding advantage. The gap is material but not insurmountable: AI systems update their training data and retrieval indices regularly, and a focused 6-month programme should produce measurable results within that window.
