Reading Your Results
Once your report finishes running, you’ll see a results page with six tabs. Each one shows a different angle on how AI platforms talk about your brand. Here’s what to look at and why it matters.
The results tabs
Visibility: Your primary dashboard. This tab shows summary cards with top-line metrics: total conversations analyzed, brands mentioned, and share-of-voice breakdowns. Start here for the big picture of how your brand is performing.
Platforms: A breakdown of results by AI model (ChatGPT, Gemini, Claude, and others). This is where you spot platform-specific gaps: you might have strong visibility on one platform but be nearly invisible on another.
Sentiment: How positively or negatively each AI platform talks about your brand, scored on a 0–100 scale across positive, neutral, and negative categories. Being mentioned is good; being mentioned positively is better.
Sources: The web sources AI platforms reference when discussing brands. Shows which URLs are cited, how often, and for which brands. This reveals where AI gets its information — and where your content could be more visible.
Conversations: The raw AI responses. Read the actual conversations to see exactly how AI platforms phrase things about your brand. This is where you find qualitative insights that metrics alone don’t capture — the specific language, comparisons, and framing AI uses.
Settings: Manage your report’s configuration. Rename the report, update sources, or adjust prompts for the next run.
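To make the share-of-voice and sentiment numbers above concrete, here is a minimal sketch of how metrics like these are commonly computed. This is an illustration only — the function names and the exact scoring formula are assumptions, not TrioSens’s documented implementation:

```python
from collections import Counter

def share_of_voice(mentions):
    # Each brand's share of all brand mentions across the analyzed
    # conversations. (Hypothetical formula for illustration.)
    counts = Counter(mentions)
    total = sum(counts.values())
    return {brand: n / total for brand, n in counts.items()}

def sentiment_score(positive, neutral, negative):
    # Map counts of labeled responses onto a 0-100 scale:
    # all-negative -> 0, all-neutral -> 50, all-positive -> 100.
    # (One common convention; the product's actual method may differ.)
    total = positive + neutral + negative
    return 100 * (positive + 0.5 * neutral) / total if total else None

sov = share_of_voice(["Acme", "Acme", "Globex", "Acme", "Initech"])
print(sov["Acme"])               # 0.6
print(sentiment_score(6, 3, 1))  # 75.0
```

The takeaway is that both metrics are relative: a brand’s share of voice only moves when its mentions grow faster than everyone else’s, and a sentiment score blends how often you’re mentioned positively against neutral and negative mentions.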
Start with Visibility for the big picture, then drill into Platforms to see where you’re strongest and weakest. If sentiment is low on a particular platform, switch to Conversations to read what’s actually being said.
Compare results over time. TrioSens shows change percentages between your current and previous report runs — look for trends, not just snapshots. A small dip in one report isn’t a crisis, but a consistent decline across several runs is worth investigating.
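The “trends, not snapshots” advice can be sketched in a few lines. This is a hypothetical illustration — the function names and the three-run threshold for a “consistent decline” are assumptions, not part of the product:

```python
def percent_change(current, previous):
    # Change between the current and previous report runs, as a percentage.
    if previous == 0:
        return None  # no baseline to compare against
    return 100 * (current - previous) / previous

def consistent_decline(scores):
    # True only when every run scored lower than the one before it,
    # across at least three runs -- the pattern worth investigating.
    return len(scores) >= 3 and all(b < a for a, b in zip(scores, scores[1:]))

print(percent_change(45, 50))                # -10.0
print(consistent_decline([52, 48, 45, 41]))  # True
print(consistent_decline([52, 48, 53, 41]))  # False
```

In other words, a single -10% reading is just one data point; the signal is several runs in a row moving the same direction.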