Google says Gemini helped block over 99% of bad ads...
Online advertising has become one of the most powerful engines of the modern internet.
It funds websites, promotes businesses, drives e-commerce, and helps users discover products and services. But it also creates a massive attack surface.
That attack surface is now being targeted at industrial scale.
According to MarketingShot’s summary of Google’s 2025 Ads Safety Report, Gemini-powered tools caught over 99% of policy-violating ads before they were served in 2025. Google also said it blocked or removed over 8.3 billion ads and suspended 24.9 million advertiser accounts during the year.
That number should make every business, marketer, advertiser, and cybersecurity professional pause.
Because this is not just a story about Google Ads, but a story about where digital trust is heading.
The Scale Of The Bad Ads Problem
Google’s own 2025 Ads Safety Report says it blocked or removed more than 8.3 billion ads and suspended 24.9 million advertiser accounts. That included 602 million ads and 4 million accounts associated with scams.
That is not a small enforcement issue; it is an ecosystem-level security problem.
Bad ads are no longer just annoying pop-ups or questionable banners. They can be used to distribute scams, impersonate trusted brands, push fake investment opportunities, promote malicious software, or redirect users toward phishing infrastructure.
The UK National Cyber Security Centre describes its malvertising guidance as advice designed to make it harder for cyber criminals to deliver malicious advertising and to reduce the risk of cyber-facilitated fraud.
That is why advertising security is no longer just a marketing compliance issue. It is cybersecurity.
Why Gemini Matters Here
The important part of Google’s announcement is not just the number of ads removed.
The important part is when they were stopped.
Google says Gemini-powered tools helped stop over 99% of policy-violating ads before they were ever seen by users.
That changes the security model.
Traditional enforcement often reacts after harm has already happened:
- A scam ad runs.
- A user clicks.
- A fake landing page collects credentials.
- A victim pays.
- A brand is impersonated.
- A report is filed.
- The platform eventually responds.
Gemini shifts more of that process toward prevention.
Google says its models analyze hundreds of billions of signals, including account age, behavioral cues, and campaign patterns, to stop threats before they reach people. Google also says newer models are better at understanding intent rather than relying only on older keyword-based systems.
That is a very important distinction. Attackers do not always reuse the same words. They reuse intent.
The Move From Keyword Filtering To Intent Detection
Older security systems often relied heavily on indicators:
- Blocked phrases.
- Known bad domains.
- Suspicious URLs.
- Repeated account patterns.
- Known scam templates.
Those signals still matter, but attackers adapt quickly.
- A scammer can change the wording.
- A fake investment ad can use new images.
- A phishing campaign can rotate domains.
- A malicious advertiser can create new accounts.
- A fake brand campaign can look visually polished.
Intent detection is different. Instead of only asking, “Does this ad contain a known bad phrase?”, the system starts asking:
- What is this ad trying to do?
- Does the offer look deceptive?
- Does the advertiser behavior match legitimate activity?
- Does the campaign pattern resemble known abuse?
- Does the landing page match the claim being made?
- Is this a real business offer or a lure?
That is where AI can become powerful. Not because AI is magic, but because it can correlate huge volumes of weak signals faster than a human team could manually review them.
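To make weak-signal correlation concrete, here is a minimal, hypothetical sketch. The feature names, weights, and threshold below are invented for illustration only; a real system like Google's would learn from hundreds of billions of signals with machine-learned models, not a handful of hand-tuned rules.

```python
# Hypothetical weak-signal scorer for ad review.
# Feature names, weights, and the threshold are illustrative assumptions.

WEIGHTS = {
    "account_age_days_under_7": 0.30,   # brand-new advertiser account
    "landing_page_mismatch": 0.35,      # page content differs from the ad's claim
    "brand_term_not_owned": 0.25,       # bids on a brand it cannot verify owning
    "burst_campaign_pattern": 0.20,     # many near-identical ads launched at once
    "payment_profile_reused": 0.15,     # billing details seen on banned accounts
}

REVIEW_THRESHOLD = 0.5  # above this, route to enforcement or human review

def score_ad(signals: dict) -> float:
    """Sum the weights of the signals that fired. Each signal is weak
    on its own; the score reflects their combination."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

def needs_review(signals: dict) -> bool:
    return score_ad(signals) >= REVIEW_THRESHOLD

# A new account with a mismatched landing page crosses the threshold,
# even though neither signal alone would.
suspicious = {"account_age_days_under_7": True, "landing_page_mismatch": True}
print(needs_review(suspicious))  # True (0.30 + 0.35 = 0.65)
```

The point of the sketch is the shape of the decision, not the numbers: no single indicator is damning, but the combination of several weak ones is enough to act on.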
AI Is Being Used On Both Sides
This is the uncomfortable part. Google is using AI to stop harmful ads.
Attackers are using AI to create them.
Google’s own reporting notes that Gemini is being used to detect and stop bad ads at scale, while malicious advertisers are becoming more sophisticated in how they attempt to evade detection.
That is the new cyber arms race. Attackers can now generate:
- Fake celebrity endorsements.
- Convincing product ads.
- Localized scam copy.
- Deepfake-style promotional content.
- Fake investment narratives.
- Phishing landing page text.
- Brand impersonation campaigns.
This links directly with modern phishing. In my post, How to Identify the Latest Phishing Attacks (2025 Guide), I covered how phishing is now being powered by AI-generated emails, deepfake voice calls, QR code baiting, and fake MFA prompts.
Bad ads are another delivery mechanism for the same problem. They are not always the final attack. Sometimes they are the front door.
Why This Matters For Businesses
For businesses, the lesson is simple:
Trust cannot be assumed just because something appears in a paid ad slot.
- A search ad can be malicious.
- A social ad can be malicious.
- A display ad can be malicious.
- A sponsored result can be malicious.
- A fake brand campaign can look professional.
This matters for both sides of the advertising ecosystem.
If You Are A Brand
Your brand can be impersonated.
Attackers can run fake ads pretending to be your business, your product, your support team, or your login portal. This can damage customer trust even if your own infrastructure was never breached.
That is why brand monitoring, take-down processes, verified ad accounts, and domain protection matter.
If You Are A Marketer
Your ad spend is part of a wider supply chain.
The NCSC recommends that brands check advertising partners for:

- Strong know-your-customer checks.
- Good cyber security practices.
- Reputable data sources.
- Industry standards.
- Malvertising detection and removal services.
- Threat intelligence sharing.
- Reliable reporting mechanisms.
- Transparency.
That is a very practical checklist. Marketing teams should not treat ad platforms as a black box. They should be asking security questions.
If You Are A Security Team
Bad ads are a user access problem. A user clicking a malicious ad can become the first step in credential theft, malware delivery, account takeover, or business email compromise.
This connects directly with phishing awareness, Zero Trust thinking, endpoint protection, DNS filtering, browser hardening, and security monitoring.
That Zero Trust mindset matters here. In Embracing Zero Trust Security for a Resilient Digital Future, I explained why organizations need to move away from outdated perimeter-based thinking and continuously verify access requests instead of assuming trust.
The same principle applies to advertising. Just because something appears in a familiar place does not mean it is safe.
AI Enforcement Still Needs Human Oversight
There is another important detail in Google’s announcement.
MarketingShot notes that Gemini helped reduce incorrect advertiser suspensions by 80% and helped teams take action on more than four times as many user reports in 2025 compared with the previous year.
That matters because automated enforcement can create collateral damage. If an AI system blocks malicious advertisers but also wrongly suspends legitimate businesses, trust still suffers.
This is why AI security needs balance:
- Automation for scale.
- Human judgement for edge cases.
- Appeals for legitimate businesses.
- Clear policies for advertisers.
- Transparency for users.
- Continuous monitoring for abuse.
This is also where AI governance becomes critical.
In NIST AI Risk Management Framework: A Practical Guide, I covered how organizations can use structured AI risk management to identify, assess, prioritize, and manage AI-related risks. The same thinking applies to AI-powered ad enforcement: the goal is not just to use AI, but to use it responsibly, transparently, and with proper controls.
AI should not become a blind enforcement machine. It should become a force multiplier for security teams.
The Bigger Lesson: Prevention Beats Cleanup
The strongest part of this story is prevention.
Stopping a bad ad before it runs is far better than removing it after damage is done.
This mirrors a broader cybersecurity principle.
- Blocking a malicious login attempt is better than responding to an account takeover.
- Stopping phishing before inbox delivery is better than cleaning up compromised mailboxes.
- Preventing malware execution is better than restoring from backup.
- Detecting scam infrastructure early is better than investigating fraud after money is gone.
Google’s use of Gemini is a reminder that security has to move earlier in the chain.
Not just faster response, but earlier prevention.
AI-Driven Scams Are Becoming A Real Financial Problem
The FBI’s 2025 IC3 Annual Report makes the AI threat very clear.
The report states that artificial intelligence can be used for legitimate purposes or criminal motives, and that AI enables the creation of convincing synthetic content such as social media profiles and personalized conversations at scale. In 2025, IC3 received more than 22,000 complaints reporting AI-related information, with adjusted losses exceeding $893 million.
That is the same pattern we see with bad ads. The attacker does not need to compromise the platform itself; they only need to abuse trust.
They need to look legitimate long enough for the victim to click, sign in, pay, download, or share sensitive information.
This is why AI-powered ad security matters. It is not only about blocking misleading adverts, but about reducing the number of people who ever reach the scam in the first place.
What Organizations Should Take From This
Businesses should not read this news and think, “Google has solved bad ads.”
That would be a mistake.
A better takeaway is this:
The advertising ecosystem is becoming part of the security perimeter.
That means organizations should:
- Review how their brand is being used in ads.
- Monitor for impersonation campaigns.
- Use verified advertiser accounts where possible.
- Educate users that sponsored results can still be risky.
- Strengthen DNS, browser, and endpoint protections.
- Ask ad partners about malvertising controls.
- Treat marketing platforms as part of the digital supply chain.
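One of the items above, DNS-level filtering, can be sketched in a few lines. The blocklist entries below are made up for illustration; a real deployment would pull feeds from a threat-intelligence provider or use a protective DNS service rather than a hand-maintained set.

```python
# Illustrative sketch of DNS-layer filtering for ad-delivered threats.
# The blocklist entries are hypothetical examples, not real domains.

BLOCKLIST = {
    "login-veriffy.example",     # typosquatted brand login page
    "ad-track-payout.example",   # malvertising redirector
}

def is_blocked(hostname: str) -> bool:
    """Block a hostname if it, or any parent domain, is on the list,
    so subdomains of a listed domain are also caught."""
    parts = hostname.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("cdn.ad-track-payout.example"))  # True: parent domain is listed
print(is_blocked("example.com"))                  # False
```

The design choice worth noting is the parent-domain walk: malicious campaigns rotate subdomains cheaply, so matching only exact hostnames would miss most of them.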
Security teams, marketing teams, and leadership need to meet in the middle, because customer trust can be damaged by a malicious campaign even when the company’s own website, servers, and infrastructure remain secure.
Final Thoughts
Gemini blocking over 99% of bad ads before they ran is impressive, but the bigger story is not just the technology.
The bigger story is that trust online is becoming harder to defend manually.
- Attackers are scaling with AI.
- Scammers are scaling with AI.
- Fraud campaigns are scaling with AI.
- Brand impersonation is scaling with AI.
So defenders must scale too.
The future of digital security will not be humans versus AI.
It will be humans with AI, against attackers with AI.
Organizations that understand this early will be in a much better position to protect their users, their customers, and their reputation.
Call To Action
Have you ever clicked a sponsored result and later realized something felt off?
That moment is exactly why ad security matters.
As AI-powered scams become more convincing, businesses need to treat digital advertising as part of their cybersecurity strategy, not just their marketing strategy.
Share your thoughts in the comments below.
Do you trust sponsored results as much as organic links, or are you becoming more cautious?
Related Reading On Eagle Eye Technology
If you want to go deeper into the connected topics behind this story, these posts are worth reading next:
How to Identify the Latest Phishing Attacks (2025 Guide)
Embracing Zero Trust Security for a Resilient Digital Future