Is Brand Demand Sabotaging Your SEO? Or Hiding Your Weak Strategy?
Organic Search is a catcher’s mitt for demand. If no one throws the ball, nothing gets caught.
This week’s #SEOForLunch sponsor is SEOTESTING.COM.
Check out this “fun” story from ~10 years ago.
I’m staring at a client’s analytics. An SEO miracle. Everything we shipped this year has hit, dashboards glowing green for months on end. Then one week goes red, very red.
YOY drops in site traffic and conversions. The homepage ends up being the culprit. Odd, since it mostly ranks for the brand’s name and still holds the top spot across all search engines. WTF!?
The client and I chalk it up to bad data, but I can’t let it go. Fast forward to the end-of-month readout: someone finally says, “Didn’t we spend five million on TV around this time last year?”
I sit up. “And you definitely didn’t this year, right?” So the drop wasn’t a ranking issue. Last year, demand for the brand spiked, driven by TV ad spend. This year, there was no TV, so no traffic surge. 🥴
We overlay the TV airing dates on the traffic curve. Perfect match. Mystery solved. Except for the part where the loss shows up as “SEO traffic” even though my team did nothing wrong.
This week, I’m breaking down other outside factors that can swing organic performance even when your SEO is airtight.
A Special THANK YOU To This Week's Sponsor: SEOTESTING.COM
Track and Test Your LLM Traffic!
LLMs are now real discovery channels.
With SEOTesting, you can finally quantify them. The LLM Traffic Pages Report shows you which pages attract visits from ChatGPT, Claude, Perplexity, Gemini, and Copilot, and how that changes over time. Then use our new LLM Test Type to run proper time-based tests on content or technical tweaks and measure uplift in LLM visits per day.
This is all valuable evidence that your stakeholders will trust!
Setting Performance Expectations
“SEO takes months to work,”
“SEO is a compounding channel,”
“SEO lifts all other marketing channels.”
All true. Set those expectations early. What you’re really defining is how success gets tracked and reported: conversions, traffic, share of addressable market, and so on.
All of this is still true. But also, still table stakes. Here’s the part most teams ignore, the thing that can mask shitty SEO or send an otherwise successful SEO strategy right off the tracks.
Brand demand.
If nobody’s creating it, organic cannot catch it. If someone pours gasoline on it, organic looks heroic even if nothing changed on-site.
As you saw from my real-life example above, your reporting lives or dies on whether you separate supply-side SEO work from the demand that shows up.
Let’s break down the outside forces that swing “SEO numbers” so you stop taking heat for things you don’t control.
The Outside Forces That Move “SEO Numbers”
I’ve identified four buckets that swing your SEO results, whether you ship genius or hot garbage. For each, I’ve defined the levers, the effect, and how to detect it fast!
1. Demand Generators:
Levers: TV/CTV/YouTube, PR/news, influencer spikes.
Effect: brand queries + homepage surges.
Detect fast: flight sheets, GSC branded impressions, Share of Search.
2. Market & Ops:
Levers: Pricing/promos/merch, inventory/OOS, shipping/returns, policy changes.
Effect: Conversion (CVR) swings wildly despite the same rankings.
Detect fast: Price index vs competitors, Out of stock (OOS) rates on top URLs, and release notes.
3. Measurement & Cannibalization:
Levers: Tag breaks, model flips, consent mode, email/SMS/app popups stealing last click.
Effect: Phantom drops/lifts in performance (don’t forget to look at Direct traffic, as this data can be lumped here)
Detect fast: parallel analytics and tag manager properties, attribution model comparison (we know it screws SEO), Email/SMS send logs, and modal exposure rate (new popups/banners, etc).
4. Surfaces & Competition:
Levers: AIO on/off, SERP modules reshuffle, competitor spend/rebrand, seasonality/weather.
Effect: traffic yo-yo with stable rank.
Detect fast: pixel-level SERP logs (what features have been added/removed or changed), AIO presence/citations, competitor spend context (what are they investing in both online and offline, impacting industry demand), pull a 3-year baseline of channel performance to identify seasonal patterns or anomalies.
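That last detection step, comparing this week against a multi-year baseline, is easy to sketch in code. The function names, tolerance, and session numbers below are all illustrative assumptions, not a prescribed implementation:

```python
# Hypothetical sketch: flag weekly sessions that deviate from a
# 3-year same-week baseline, so seasonality isn't mistaken for an SEO win/loss.
# Data shape assumed: {year: {iso_week: sessions}}.

def weekly_baseline(history: dict[int, dict[int, float]], week: int) -> float:
    """Average sessions for this ISO week across prior years."""
    vals = [year_data[week] for year_data in history.values() if week in year_data]
    return sum(vals) / len(vals)

def flag_anomaly(current: float, baseline: float, tolerance: float = 0.20) -> str:
    """Label a week as in-range or anomalous vs the historical baseline."""
    delta = (current - baseline) / baseline
    if abs(delta) <= tolerance:
        return f"in range ({delta:+.0%} vs baseline)"
    return f"anomaly ({delta:+.0%} vs baseline) -- check the annotation calendar first"

# Three prior years of week-37 organic sessions (made-up numbers)
history = {2021: {37: 10_000}, 2022: {37: 11_000}, 2023: {37: 12_000}}
base = weekly_baseline(history, 37)   # 11,000
print(flag_anomaly(8_500, base))      # ~-23%: anomaly, go check the calendar
```

A 20% tolerance is a placeholder; tune it to your channel's normal week-to-week variance.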
You can rationalize numbers all day. Executives want proof, not poetry.
Next up: Prove it fast with demand-adjusted reporting that separates supply-side SEO from outside noise.
Prove It Fast (Demand-Adjusted Reporting)
Someone is already itching to nitpick. Fine. Demand-adjusted reporting is how you report performance minus the outside noise. Call it a structured take on quantifying the halo effect, but with the numbers that keep that one colleague from picking your report apart and asking stupid questions to hear themselves think.
Metrics
Stabilize KPIs so they don’t swing with off-site activity.
Demand-Adjusted YoY = Non-brand sessions YoY ÷ Branded impressions YoY.
If this holds while topline dips, demand changed, not SEO.
Promo-Adjusted CVR = CVR ÷ promo exposure × price delta factor.
Same rank, different offer. Explains CVR whiplash in one line.
Contribution Index = Non-brand sessions × assisted conversions.
Shows SEO created doors others walked through.
AI Panel = AIO presence rate, citation count, answer delta vs top 3.
Either you appear and get cited or you don’t. Track it.
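The demand-adjusted math above fits in a few lines. This is a minimal sketch with made-up numbers; plug in your own GSC and analytics exports:

```python
# Minimal sketch of the Demand-Adjusted YoY metric. All numbers are
# illustrative: non-brand sessions dipped 10%, but branded impressions
# (a demand proxy) dipped 10% too.

def yoy(current: float, prior: float) -> float:
    """Simple year-over-year ratio (1.0 = flat)."""
    return current / prior

non_brand_yoy = yoy(90_000, 100_000)        # 0.90
branded_impr_yoy = yoy(450_000, 500_000)    # 0.90

# Demand-Adjusted YoY = Non-brand sessions YoY / Branded impressions YoY
demand_adjusted = non_brand_yoy / branded_impr_yoy
print(f"{demand_adjusted:.2f}")  # 1.00 -> topline dipped, but demand changed, not SEO
```

A value holding near 1.0 while topline falls is your two-minute proof that the drop is demand-side.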
Artifacts
Build these once (ideally soon after that expectation-setting meeting) and make them non-optional in the readouts.
Cross-marketing annotation calendar covering TV/CTV, PR, promos, price moves, inventory status, policy changes, tracking tag changes, etc.
Brand vs non-brand dashboards* with separate targets and a demand-adjusted line.
Share of Search panel for you vs your top 3 competitors. (Honestly, I really hate SoS reports, but again, more directional data to leverage)
AI visibility tracker logging AIO presence and citations for critical queries/topics.
*Yes, brand vs non-brand requires assumptions and uses imperfect data, but it is directionally right. If I had this in place, that TV-driven homepage dip at the start of this post would have been a two-minute diagnosis.
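For the brand vs non-brand split itself, a regex bucket over query-level data is the usual starting point. The brand terms and queries below are hypothetical; a real pattern needs misspellings, sub-brands, and product names, and will always be directionally imperfect:

```python
import re

# Hedged sketch: a regex-based brand vs non-brand split for query-level clicks.
# "acme" is a stand-in brand; swap in your own terms and variants.
BRAND_PATTERN = re.compile(r"\b(acme|acme\s*corp)\b", re.IGNORECASE)

def split_queries(rows: list[tuple[str, int]]) -> dict[str, int]:
    """Sum clicks into brand vs non-brand buckets."""
    buckets = {"brand": 0, "non_brand": 0}
    for query, clicks in rows:
        key = "brand" if BRAND_PATTERN.search(query) else "non_brand"
        buckets[key] += clicks
    return buckets

rows = [("acme shoes", 120), ("best running shoes", 300), ("acmecorp login", 80)]
print(split_queries(rows))  # {'brand': 200, 'non_brand': 300}
```

Feed the two buckets into separate dashboard lines so branded surges (TV, PR) stop masquerading as non-brand SEO wins.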
Comprehensive Readouts
Implement this on day one, and the leadership team becomes much more open to “new factors” as they occur. Bring it up for the first time while justifying poor performance, and it comes off as an excuse.
Take this portion of the readout to answer three questions.
What changed outside SEO: dated items pulled from the annotation calendar.
What we control: supply levers shipped and their status.
What we recommend: the capture plan tied to those changes, with owners and due dates.
With proof in hand, you move from defense to offense.
Next up: the weekly playbook so every report is demand-adjusted by default.
The Playbook Checklist + Governance Plan
We talked about why you should implement demand-adjusted reporting. The list below turns that into a step-by-step checklist, plus a governance plan for when your own reports turn red.
Setup
Stand up the cross-marketing annotation calendar.
Create a brand vs non-brand dashboard with a demand-adjusted YoY line.
Add Share of Search and an AI panel for money topics/queries.
Overlay price and inventory on top revenue URLs.
Every Reporting Readout
First slide: “Outside factors” recap with dates.
Footer on key slides: Demand-Adjusted YoY and Promo-Adjusted CVR.
“Keep an eye on this”: SERP modules moved, AIO citation delta, competitor push, variance vs 3-year baseline.
When numbers go red
24h: verify measurement. Check parallel analytics and tag management.
48h: isolate demand vs supply. Brand vs non-brand, price, inventory, surfaces, AIO.
72h: exec update with a capture plan. Owners and due dates.
Targets that do not burn you
Publish Supply KPIs: crawl, indexation, page quality, rank share.
Publish Outcome KPIs: revenue or signups normalized to demand.
Tie goals and incentives to that split.


