What began as a MarTech responsibility is evolving into a strategic and fiduciary obligation involving CMOs, CFOs, executive committees, and boards. This article defines that continuum: from operational control (measurement and monitoring) to governance assurance (audit and disclosure).

In his recent interview, Google’s Robby Stein downplayed AEO (answer engine optimisation) and GEO (generative engine optimisation) as distinct disciplines, explaining that Google’s AI still performs “query fan-out”: dozens of background searches governed by the same ranking and quality signals that underpin SEO.
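
A rough sketch of what that fan-out could look like in code. Every function here is an illustrative stand-in, not a Google API: the point is only that one prompt becomes many searches whose pooled results are ranked by one shared signal.

```python
# Illustrative sketch of "query fan-out": one user prompt expanded into
# several background searches, with the pooled results ranked by a single
# shared relevance signal. All names here are hypothetical stand-ins.

def fan_out(prompt: str) -> list[str]:
    """Derive narrower sub-queries from one prompt (invented heuristics)."""
    return [
        prompt,
        f"{prompt} reviews",
        f"{prompt} alternatives",
        f"best {prompt} 2025",
    ]

def search(query: str) -> list[tuple[str, float]]:
    """Stub for a ranked web search; returns (url, relevance) pairs."""
    return [(f"https://example.com/{query.replace(' ', '-')}", 0.8)]

def answer(prompt: str) -> list[str]:
    # Run every sub-query, then rank the pooled hits by the same relevance
    # score: the sense in which classic SEO signals still apply.
    pooled = [hit for q in fan_out(prompt) for hit in search(q)]
    pooled.sort(key=lambda hit: hit[1], reverse=True)
    return [url for url, _ in pooled]

print(answer("project management software"))
```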

Across the emerging field of assistant-visibility analytics—the measurement of how often and prominently brands, products, or entities appear in AI-assistant outputs—one pattern persists: identical prompts, run through different dashboards, rarely produce the same numbers.

In early 2025, the Prompt-Space Occupancy Score (PSOS™) introduced a reproducible way to measure how brands appear within AI assistant responses. It established visibility as a quantifiable, auditable dimension of brand equity.
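
As an illustration only, an occupancy-style score can be computed along the following lines. The rank weighting below is an assumption for the sketch, not the published PSOS formula:

```python
# Hedged sketch of an occupancy-style visibility score: the share of a
# sampled prompt space a brand occupies, weighted by how prominently it
# appears. The 1/(rank+1) weighting is an assumption, not PSOS itself.

def occupancy_score(responses: list[list[str]], brand: str) -> float:
    """responses: one ranked list of mentioned brands per sampled prompt."""
    total = 0.0
    for mentioned in responses:
        if brand in mentioned:
            rank = mentioned.index(brand)   # 0 = first brand cited
            total += 1.0 / (rank + 1)       # earlier mention, more weight
    return total / len(responses) if responses else 0.0

sampled = [
    ["BrandA", "BrandB"],   # prompt 1: BrandA cited first
    ["BrandB"],             # prompt 2: BrandA absent
    ["BrandB", "BrandA"],   # prompt 3: BrandA cited second
]
print(round(occupancy_score(sampled, "BrandA"), 3))  # (1 + 0 + 0.5) / 3 = 0.5
```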

PSOS-C, the conversion-weighted evolution of that metric, extends PSOS beyond exposure. It connects assistant-level visibility to measurable user actions and verified financial outcomes through a transparent, auditable attribution chain. PSOS-C closes the loop between being seen and creating value—transforming AI visibility into a governance-grade financial indicator.
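
Again purely as a sketch, conversion weighting might look like the following, assuming each sampled prompt carries a verified revenue figure from the attribution chain. Field names and the weighting are hypothetical, not the PSOS-C specification:

```python
# Sketch of a conversion-weighted extension: revenue captured on prompts
# where the brand was visible, as a share of all attributed revenue.
# The schema below is invented for illustration.

from dataclasses import dataclass

@dataclass
class PromptOutcome:
    brand_visible: bool   # did the brand appear in the assistant answer?
    converted: bool       # did a tracked user action follow?
    revenue: float        # verified financial outcome, in currency units

def conversion_weighted_score(outcomes: list[PromptOutcome]) -> float:
    visible = sum(o.revenue for o in outcomes if o.brand_visible and o.converted)
    total = sum(o.revenue for o in outcomes if o.converted)
    return visible / total if total else 0.0

sample = [
    PromptOutcome(True, True, 120.0),
    PromptOutcome(False, True, 80.0),
    PromptOutcome(True, False, 0.0),
]
print(conversion_weighted_score(sample))  # 120 / 200 = 0.6
```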


The first edition of Brand Visibility Watch™ confirms what dashboards miss: volatility in AI assistants is already reshaping brand competition. Across four sectors — Auto, Banking, Luxury, and SaaS — we observed sudden swings in recall. Incumbents lost presence inside assistant answers while challengers and substitutes filled the gap. Boards must begin treating AI visibility as a third systemic risk, alongside finance and cyber.

Most marketers still optimise for “AI visibility” as if large language models (LLMs) share a single index.

They do not.

Each assistant has its own ingestion pipeline, retrieval weighting, and update cadence. Treating them as one channel is equivalent to running the same media plan across Google, Meta, and TikTok without accounting for format or audience.

The result: wasted spend, inconsistent presence, and unmeasurable leakage.
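
The practical consequence: measure presence per assistant, never as one blended number. A minimal sketch with the vendor calls stubbed out (real deployments would swap in each vendor's own API client):

```python
# Sketch: because each assistant has its own ingestion pipeline and update
# cadence, presence is reported per channel. The ask() stub stands in for
# per-vendor API calls; the canned answers are invented.

ASSISTANTS = ["chatgpt", "gemini", "claude"]

def ask(assistant: str, prompt: str) -> str:
    canned = {
        "chatgpt": "Top picks: BrandA, BrandB",
        "gemini": "Consider BrandB or BrandC",
        "claude": "BrandA is a common choice",
    }
    return canned[assistant]

def presence_by_channel(prompts: list[str], brand: str) -> dict[str, float]:
    report = {}
    for assistant in ASSISTANTS:
        hits = sum(brand in ask(assistant, p) for p in prompts)
        report[assistant] = hits / len(prompts)
    return report

print(presence_by_channel(["best crm software"], "BrandA"))
# {'chatgpt': 1.0, 'gemini': 0.0, 'claude': 1.0}  (a blended average hides this)
```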


This white paper introduces Siloed Visibility, a new governance concept within the AIVO Standard v3.5 framework. As large language models (LLMs) evolve into agentic ecosystems, user queries are increasingly routed through brand-owned agents that transact and retrieve data inside closed execution layers.

The headlines say Google is collapsing as AI assistants like ChatGPT and Gemini take over. Our tests suggest the truth is more complicated.

Assistants increasingly pre-allocate answers. Brands appear — or vanish — without a click. Google still mediates discovery, but often invisibly. Traffic stability can be misleading. Boards risk assuming safety while competitors substitute inside assistant answers. The verdict: Google is not collapsing, but its power is being hollowed out from above.


For two decades, Q4 meant fighting for search rankings, media slots, and cart conversions. Brands assumed visibility could be bought and managed. That assumption no longer holds.

AI assistants like ChatGPT, Gemini, and Claude now collapse discovery and decision into a single synthesized answer. Their retraining cycles—often landing in September, October, or December—reshuffle which brands appear, with 30–60% recall losses inside 30 days. A competitor takes your slot and captures seasonal spend.
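
The underlying check is simple: measure recall before and after a retraining window and report the relative change. A toy example with invented data:

```python
# Toy recall-swing check behind a claim like "30-60% loss inside 30 days":
# share of sampled answers citing the brand, before vs. after an update.
# Both answer sets are invented for illustration.

def recall_rate(answers: list[str], brand: str) -> float:
    return sum(brand in a for a in answers) / len(answers)

before = ["BrandA leads the category", "Try BrandA or BrandB", "BrandB is solid"]
after  = ["BrandB leads the category", "Try BrandB", "BrandC is rising"]

r0 = recall_rate(before, "BrandA")
r1 = recall_rate(after, "BrandA")
loss = (r0 - r1) / r0 if r0 else 0.0
print(f"recall {r0:.0%} -> {r1:.0%}, relative loss {loss:.0%}")
# recall 67% -> 0%, relative loss 100%
```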


Dashboards claiming to show “millions of AI prompts” are proliferating. The pitch is simple: type in a term and get back a precise count of how many times users asked it in ChatGPT, Gemini, or Claude. These numbers are positioned as observed counts, akin to search volumes in Google Keyword Planner.

The reality is less robust. None of the major LLM vendors provide raw usage logs. What is presented as factual is, in fact, modeled: panel data scaled to population estimates. When these projections are stripped of error margins and displayed as integers, they mislead.
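
The honest presentation keeps the interval. A small sketch of panel-to-population scaling with a 95% margin of error, using invented panel figures:

```python
# Sketch of what a "prompt volume" actually is under the hood: a panel
# proportion scaled to a population estimate. All figures are invented.
import math

panel_size = 5_000          # assumed observed panel of users
panel_hits = 12             # panel members seen asking the term
population = 200_000_000    # assumed assistant user population

p = panel_hits / panel_size
se = math.sqrt(p * (1 - p) / panel_size)   # normal-approximation std. error
low, high = p - 1.96 * se, p + 1.96 * se

print(f"point estimate: {p * population:,.0f} prompts")
print(f"95% interval: {low * population:,.0f} .. {high * population:,.0f}")
# point estimate: 480,000 prompts
# 95% interval: roughly 209,000 .. 751,000 (the bare integer hides this spread)
```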

