Google Ads Quality Score: how it impacts profit and why it is still under-managed in 2026
- Dave Kirby

One of the most common marketing channels for our clients at Coppett Hill is Google Ads. Whether B2B or B2C, paid search is often one of the top three sources of new business pipeline, and frequently the largest single line of marketing spend. It is also one of the channels where we see the widest range of operational maturity and specialist knowledge across in-house teams and agency partners.
Some of our clients are running highly sophisticated accounts, with disciplined ad group structures, granular conversion tracking through to revenue and margin, and active testing programmes. Others have inherited an account that has been on autopilot for years, with smart bidding doing most of the heavy lifting and very little proactive management happening underneath. Both can work to a point, but in almost every case there is one metric that is systematically under-attended to, and that is Quality Score.
How Google Ads Quality Score works (and how to backsolve the formula)
Quality Score is essentially Google’s way of telling each advertiser how good their ads are, on a 1 to 10 scale at the keyword level. The reason it matters is that Google rewards better advertisers with (relatively) cheaper clicks and better positions. This is not Google being charitable. Better ads mean more clicks and repeat visits to Google, more clicks mean more revenue for Google, so the platform has a strong commercial incentive to give an advantage to advertisers who write more relevant and compelling ads and send users to landing pages that address the user’s need. The neat thing is that Google tells you exactly how you are doing, and exactly where you are losing points, for free.
If you want the underlying mechanics, Adalysis has a good write-up of the formula, but in short Quality Score is built from three components that Google reports back to the advertiser:
- Expected Click-Through Rate (CTR), which measures how likely your ad is to be clicked compared to the historical average for that keyword
- Ad Relevance, which measures how closely your ad copy matches the search intent of the keyword
- Landing Page Experience, which measures how relevant and useful your landing page is to someone clicking the ad
Each of these is reported as ‘Above Average’, ‘Average’, or ‘Below Average’ – and this is relative to other advertisers in your auctions, not against an ‘absolute standard’, so it is realistic for all advertisers to aim for the best possible score.
From these three pieces of information you can essentially backsolve what your Quality Score will be. Two ‘Above Average’ ratings will typically push you to a score of 7 or above, all ‘Average’ lands you around a 5 or 6, and any ‘Below Average’ rating drags you down quickly. Once you know which component is letting you down, you also know exactly where to focus.
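The backsolving described above can be sketched in a few lines of code. The component weights below follow the commonly cited Adalysis reconstruction (Expected CTR and Landing Page Experience weighted more heavily than Ad Relevance, on top of a base score of 1) – treat them as an assumption, not an official Google formula.

```python
# Estimate the 1-10 Quality Score from the three component ratings Google
# reports. Weights are the Adalysis-style reconstruction, not official.
WEIGHTS = {
    "expected_ctr": {"Below Average": 0.0, "Average": 1.75, "Above Average": 3.5},
    "ad_relevance": {"Below Average": 0.0, "Average": 1.0,  "Above Average": 2.0},
    "landing_page": {"Below Average": 0.0, "Average": 1.75, "Above Average": 3.5},
}

def backsolve_quality_score(expected_ctr, ad_relevance, landing_page):
    """Estimate Quality Score from the three reported component ratings."""
    score = 1.0  # every keyword starts from a base of 1
    score += WEIGHTS["expected_ctr"][expected_ctr]
    score += WEIGHTS["ad_relevance"][ad_relevance]
    score += WEIGHTS["landing_page"][landing_page]
    return round(score)

# All 'Average' lands around 5-6; two 'Above Average' ratings push you to 7+.
print(backsolve_quality_score("Average", "Average", "Average"))
print(backsolve_quality_score("Above Average", "Average", "Above Average"))
```

Note that the all-'Below Average' case bottoms out at 1 and the all-'Above Average' case reaches 10, which matches the 1 to 10 scale Google reports.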
How to improve your Google Ads Quality Score
The kinds of things advertisers do to improve Quality Score are not particularly mysterious. Ad copy is often the easiest place to start, and a good starting point is competitor research – pulling the top three ads on your priority keywords and asking why they have higher CTRs than yours. Landing pages are another common culprit, particularly when one generic page is being used for a range of unrelated keywords.
Tightening up ad group structure so that each group covers a narrower set of intents, and making sure each group has its own bespoke ad copy and landing page, tends to lift all three components at once. For example, separating ‘virtual PA’ vs ‘virtual receptionist’ into different ad groups, with unique ad copy and landing pages helped a client to improve Quality Score.
Adding negative keywords (specific search terms we ask Google not to show our ads against) also helps, both by reducing wasted spend and by improving the relevance of our ads and landing pages to the user.
Re-running my 2014 Quality Score analysis with 2026 data
Back in 2014, when I was responsible for paid acquisition at CarTrawler, I did a piece of analysis on how changes in Quality Score were impacting our cost per click. At the time, we found that each improvement of one point in Quality Score was reducing our CPCs by somewhere between 10 and 20% on non-brand keywords. That was a meaningful number, and it shaped how we prioritised optimisation work for years afterwards.
Fast forward to 2026, and a lot has changed. Smart bidding is now the default rather than the exception. Targeting has moved from tightly controlled exact and phrase match to much broader match types, with the algorithm doing more of the work. Performance Max has muddied the waters even further. I had a hunch that the relationship between Quality Score and CPC might have weakened, or at least changed shape, and I was curious to see what the data would actually show.
We are also in a much better position now to do this kind of analysis. At Coppett Hill we work with a wide range of clients across different sectors and geographies, and we have a unique approach to integrating full funnel data through to revenue and profit, not just clicks and CPC. So we pulled the data and re-ran a version of the original analysis, but this time looking at profit impact rather than just CPC.
How Quality Score impacts CPC, clicks and profit in 2026
The headline result is that Quality Score still matters, and it matters a lot, but the mechanics of how it impacts your account have changed in an interesting way.
Each one-point improvement in Quality Score is worth on average 18.5% more profit on non-brand keywords, and each one-point reduction costs you about 19.5% of profit. The relationship is roughly symmetrical – Google rewards you when you improve and punishes you when you slip, by similar amounts.
What is different from 2014 is the route through which this impact flows. We are no longer seeing big changes in CPC. In fact, a one-point improvement in Quality Score is associated with a small increase in CPC of around 5% on average, and similar for a one-point reduction. But what we are seeing is a big change in click volumes – a one-point improvement is associated with a 17% uplift in clicks, and a one-point drop is associated with a small 2-3% reduction in clicks (which I’d suggest is more or less flat).

This makes intuitive sense once you think about how smart bidding works. The bidding algorithms are reacting in real time to changes in efficiency. When your Quality Score improves, your ads suddenly become more profitable per click, so the algorithm bids you into more auctions and into higher positions. CPCs go up slightly because you are buying more expensive impressions, but you are getting many more clicks at a still-attractive economic profile. The net effect on profit is strongly positive. The reverse happens when Quality Score drops.
The takeaway from this is that if you are still measuring the impact of Quality Score work purely through CPC, you are likely understating the value significantly. Profit is the right metric to focus on, because that is where the real story is. A small CPC increase combined with a 17% click uplift looks great in profit terms, but might look worrying if you are only watching the cost side.
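To make the point concrete, here is a toy calculation of that trade-off. The conversion rate, customer value and starting volumes are illustrative assumptions, not client data; only the +17% clicks and +5% CPC figures come from the analysis above.

```python
# Toy illustration: a small CPC rise plus a big click uplift is good news
# in profit terms, even though the cost line alone looks worse.
clicks, cpc = 1000, 2.00            # before a one-point QS improvement
conv_rate, customer_value = 0.05, 120.0  # illustrative assumptions

def profit(clicks, cpc):
    return clicks * conv_rate * customer_value - clicks * cpc

before = profit(clicks, cpc)
after = profit(clicks * 1.17, cpc * 1.05)   # +17% clicks, +5% CPC

print(f"profit before: {before:.0f}, after: {after:.0f}, "
      f"uplift: {(after - before) / before:.1%}")
```

With these assumed inputs, profit rises by roughly 14% even though total cost goes up by over 20% – which is exactly why a cost-only view of Quality Score work is misleading.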
How much non-brand spend sits on low Quality Score keywords?
For most clients, Quality Score work is one of the highest-ROI activities you can do in a Google Ads account, and yet it almost never appears as a line item in marketing plans. Part of the reason is that the work is unglamorous: rewriting ad copy, building out tighter ad groups, fixing landing pages, adding negatives. Another reason is that the impact has historically been hard to measure cleanly, particularly with smart bidding sitting between you and the auction.
We have been working on the second part of the problem internally. The feedback loop from Quality Score data is rich and frequent, which makes it a good candidate for automation. We are currently alpha-testing an approach that uses the Quality Score components as a feedback signal to auto-optimise ad copy across an account, focusing first on the keywords where Expected CTR is rated ‘Below Average’. Early results have been encouraging, and we will share more on this in due course.
In the meantime, if you have not looked at your Quality Score distribution recently, it is worth ten minutes of your time. Pull the report at the keyword level, weight it by spend, and see how much of your non-brand budget is sitting on keywords with a Quality Score of 5 or below. Across our own client base, the answer was strikingly varied – the most disciplined accounts have under 10% of non-brand spend on QS ≤ 5, the least disciplined have over 80%, and the average across our clients is around 50%.
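The ten-minute check above is straightforward to script once the keyword-level report is exported. This sketch uses hard-coded illustrative rows; in practice the `(keyword, quality_score, cost)` tuples would come from your own Google Ads export.

```python
# Compute the share of non-brand spend sitting on Quality Score 5 or below.
rows = [  # (keyword, quality_score, cost) - illustrative data only
    ("virtual pa services", 8, 1200.0),
    ("virtual receptionist", 4, 900.0),
    ("answering service",   5, 400.0),
    ("call handling",       7, 500.0),
]

total_spend = sum(cost for _, _, cost in rows)
low_qs_spend = sum(cost for _, qs, cost in rows if qs <= 5)
share = low_qs_spend / total_spend

print(f"{share:.0%} of non-brand spend is on QS <= 5")
```

Weighting by spend rather than counting keywords is the important step: a handful of low-QS keywords can carry the bulk of the budget.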
Most accounts have meaningfully more headroom here than their teams realise. There may be some types of traffic where Quality Score is hard to improve, such as very general keywords (for example, those at the start of a buyer’s research journey) or competitor-related keywords – but these are the exception rather than the rule.
My conclusion: some things have changed since 2014 (!), but Quality Score remains a great value creation lever to pull if paid search is part of your marketing mix.
If you would like to discuss how Quality Score is working in your account, or how we are thinking about automation in this space, please Contact Us.
Methodology
We analysed Google Ads account data for 13 clients (6 B2B, 7 B2C), covering the period between October 2023 and April 2026. Not all clients had data available for the entire time range. For each client we pulled daily Quality Score, clicks, and cost at the keyword and campaign level, and for each day we also calculated total clicks and cost for the 14 days before and after each date. We then excluded all brand keywords.
Using a LAG function, we identified all dates on which Quality Score changed compared to the previous day for a given keyword and campaign combination. From this list of changes we filtered out any events where other Quality Score changes occurred during the previous or following 14 days, to give us clean before-and-after windows.
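The change-detection step can be sketched as follows. The SQL LAG is replicated by comparing each day's Quality Score to the previous day's for the same keyword/campaign series, then discarding events with another change within 14 days either side; the data structures are illustrative assumptions.

```python
# Find "clean" Quality Score change events: changes with no other change
# in the 14 days before or after, giving uncontaminated comparison windows.
from datetime import date, timedelta

def find_clean_changes(daily_qs, window=14):
    """daily_qs: list of (day, qs) tuples sorted by day, for one
    keyword/campaign combination. Returns dates of isolated changes."""
    # LAG-equivalent: compare each day with the previous day's score
    changes = [
        day for (_, prev_qs), (day, qs) in zip(daily_qs, daily_qs[1:])
        if qs != prev_qs
    ]
    # keep only changes with no other change within +/- `window` days
    return [
        c for c in changes
        if all(o == c or abs((o - c).days) > window for o in changes)
    ]

d = date(2026, 1, 1)
series = [(d + timedelta(days=i), 5 if i < 20 else 6) for i in range(40)]
print(find_clean_changes(series))  # one clean change, 20 days in
```

Two changes close together would both be discarded, which is the point: the before/after windows must reflect a single Quality Score shift.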
For each change event we calculated CPC before and after (defined as total cost over the 14-day window divided by total clicks over the same period), the absolute change in CPC, and the percentage change in CPC. We then estimated absolute profit before and after each event using the formula: (Clicks × Conversion Rate × Customer Value) – Cost.
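The per-event metrics reduce to a couple of small functions. The conversion rate and customer value would come from each client's own funnel data; the numbers below are placeholders to show the calculation.

```python
# Window CPC and the profit proxy used per change event. Inputs here are
# placeholder values, not client data.
def window_cpc(total_cost, total_clicks):
    """CPC over a 14-day window: total cost / total clicks."""
    return total_cost / total_clicks

def profit_proxy(clicks, cost, conversion_rate, customer_value):
    """(Clicks x Conversion Rate x Customer Value) - Cost."""
    return clicks * conversion_rate * customer_value - cost

before = profit_proxy(clicks=800, cost=1600.0,
                      conversion_rate=0.04, customer_value=150.0)
after = profit_proxy(clicks=936, cost=1966.0,
                     conversion_rate=0.04, customer_value=150.0)
pct_change = (after - before) / before

print(window_cpc(1600.0, 800), before, after, pct_change)
```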
The metrics presented here are aggregated using simple arithmetic averages, so that larger accounts do not dominate the results, with a minimum threshold of 50 clicks in the ‘before’ period to remove outliers caused by data scarcity.
All views expressed in this post are the author’s own and should not be relied upon for any reason. Clearly.



