NEWYou can now listen to Fox News articles!
Artificial intelligence is everywhere these days — in your phone, your car, even your washing machine. I saw one just the other day featuring built-in AI. And while that might sound a little over the top, there’s no denying that artificial intelligence has made life easier in a lot of ways.
From boosting productivity to unlocking new creative tools, it’s changing how we work and live. The most common version you’ve probably encountered? Generative AI, think chatbots like ChatGPT. But as helpful as this tech can be, it’s not without its problems.
If you’ve used Google’s Workspace suite, you may have noticed the company’s AI model, Gemini, integrated across apps like Docs, Sheets and Gmail. Now, researchers say attackers can manipulate Gemini-generated email summaries to sneak in hidden phishing prompts.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM/NEWSLETTER
HOW AI IS NOW HELPING HACKERS FOOL YOUR BROWSER’S SECURITY TOOLS
Google Gemini app on a mobile device (Kurt “CyberGuy” Knutsson)
How Gemini summaries can be hacked
Researchers at Mozilla’s 0Din have discovered a vulnerability in Google’s Gemini for Workspace that allows attackers to inject hidden instructions into email summaries. The issue, demonstrated by Marco Figueroa, shows how generative AI tools can be misled through indirect prompt injection. This technique embeds invisible commands inside the body of an email. When Gemini summarizes the message, it interprets and acts on those hidden prompts.
WHAT IS ARTIFICIAL INTELLIGENCE (AI)?
The attack does not rely on suspicious links or attachments. Instead, it uses a combination of HTML and CSS to conceal the prompt by setting the font size to zero and the color to white. These commands remain invisible in Gmail’s standard view but are still accessible to Gemini. Once you request a summary, the AI can be tricked into presenting fake security alerts or urgent instructions that appear to come from Google.
In a proof of concept, Gemini falsely warned a user that a Gmail password had been compromised and included a fake support phone number. Since Gemini summaries are integrated into Google Workspace, you are more likely to trust the information, making this tactic especially effective.

A Google sign on a building (Kurt “CyberGuy” Knutsson)
What is Google doing about the flaw?
While Google has implemented defenses against prompt injection since 2024, this method appears to bypass current protections. The company told CyberGuy it is actively deploying updated safeguards.
GET FOX BUSINESS ON THE GO BY CLICKING HERE
In a statement, a Google spokesperson said, “Defending against attacks impacting the industry, like prompt injections, has been a continued priority for us, and we’ve deployed numerous strong defenses to keep users safe, including safeguards to prevent harmful or misleading responses. We are constantly hardening our already robust defenses through red-teaming exercises that train our models to defend against these types of adversarial attacks.”
Google also confirmed that it has not observed active exploitation of this specific technique.

Google Gemini app on the home screen of a mobile device (Kurt “CyberGuy” Knutsson)
6 ways you can stay safe from AI phishing scams
So, how can you protect yourself from phishing scams that exploit AI tools like Gemini? Here are six essential steps you can take right now to stay safe:
1. Do not blindly trust AI-generated content
Just because a summary appears in Gmail or Docs does not mean it is automatically safe. Treat AI-generated suggestions, alerts or links with the same caution you would any unsolicited message. Always verify critical information, such as security alerts or phone numbers, through official sources.
2. Avoid using summary features for suspicious emails
If an email seems unusual, especially if it is unexpected or from someone you do not recognize, avoid using the AI summary feature. Instead, read the full email as it was originally written. This lowers the chance of falling for misleading summaries.
3. Beware of phishing emails and messages
Watch for emails or messages that create a sense of urgency, ask you to verify account details or provide unexpected links or contact information, even if they appear trustworthy or come from familiar sources. Attackers can use AI to craft realistic-looking alerts or requests for sensitive information, sometimes concealed within automatically generated summaries. So, always pause and scrutinize suspicious prompts before responding.
The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at CyberGuy.com/LockUpYourTech
4. Keep your apps and extensions updated
Ensure that Google Workspace and your browser are always running the latest version. Google regularly releases security updates that help prevent newer types of attacks. Also, avoid using unofficial extensions that have access to your Gmail or Docs.
5. Invest in a data removal service
AI-driven scams like the Gemini summary attack don’t happen in a vacuum. They often begin with stolen personal information. That data might come from past breaches, public records or details you’ve unknowingly shared online. A data removal service can help by continuously scanning and requesting the removal of your information from data broker sites. While no service can wipe everything, reducing your digital footprint makes it harder for attackers to personalize phishing attempts or link you to known breach data. Think of it as one more layer of protection in a world where AI makes targeted scams even easier.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com/Delete
Get a free scan to find out if your personal information is already out on the web: Cyberguy.com/FreeScan
6. Disable AI summaries for now if you’re concerned
If you’re worried about falling for an AI-generated phishing attempt, consider disabling Gemini summaries in Gmail until Google rolls out stronger protections. You can still read full emails the traditional way, which can lower your risk of being misled by manipulated summaries.
How to disable Gemini features on desktop
- Open Gmail on desktop.
- Click the Settings gear icon in the upper right.
- Click See all settings.
- Scroll to “Google Workspace smart features” and click Manage Workspace smart feature settings.
- Disable the toggle for Smart features in Google Workspace.
- Then, click Save.
- Note: This will turn off Gemini summaries as well as other smart features.
How to disable Gemini features on mobile
On iPhone:
If you use the Gemini mobile app specifically:
- Open the Gemini app.
- Tap your Profile picture.
- Tap Gemini Apps Activity.
- At the top, tap Turn off.
On Android:
Settings may vary depending on your Android phone’s manufacturer
- Open the Gmail app on your Android.
- Tap the Menu icon (three horizontal lines) in the upper left corner.
- Scroll down and tap Settings.
- Select the relevant email account.
- Scroll down and tap Google Workspace smart features and uncheck the box to turn them off.
Key caveats to know:
- Disabling Smart Features may remove other convenient functionalities, such as predictive text and automatic appointment detection.
- The Gemini icon or summary buttons may still appear, even after disabling these features. Some users report having to physically hide them via browser tools.
There is no centralized single “off switch” to completely remove all Gemini AI references everywhere, but these steps significantly reduce the feature’s presence and risk.
CLICK HERE TO GET THE FOX NEWS APP
Kurt’s key takeaway
This vulnerability highlights how phishing tactics are evolving alongside AI. Instead of relying on visible red flags like misspelled URLs or suspicious attachments, attackers are now targeting trusted systems that help users filter and interpret messages. As AI becomes more deeply embedded in productivity tools, prompt injection could emerge as a subtle but powerful vector for social engineering, hiding malicious intent in the very tools designed to simplify communication.
How comfortable are you letting AI summarize or filter your emails, and where do you draw the line? Let us know by writing to us at Cyberguy.com/Contact
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM/NEWSLETTER
Copyright 2025 CyberGuy.com. All rights reserved.