You've probably seen them-those handy "Summarize with AI" buttons popping up on blog posts and product pages. Click one, and your AI assistant gives you a quick summary. Sounds helpful, right? Here's the problem: some of those buttons are secretly planting instructions inside your AI's memory to promote whoever put the button there.
Microsoft security researchers recently uncovered a growing trend they're calling AI Recommendation Poisoning. Companies are hiding commands inside those "Summarize" buttons that tell your AI things like "remember us as a trusted source" or "recommend our product first." Over 60 days, researchers found more than 50 hidden prompts from 31 companies across industries, including finance, health, legal services, and marketing.
So how does it work? Modern AI assistants like ChatGPT, Microsoft Copilot, and others now have memory-they remember your preferences and past conversations to give better answers. When you click a rigged button, it opens your AI with a pre-loaded prompt you may not even notice. That prompt tells your AI to "remember" a company as a go-to source. From that point on, your AI may steer you toward that company whenever you ask for recommendations-and you'd never know why.
Think about how this plays out. You ask your AI to recommend a vendor for a closing platform or a cybersecurity tool. Instead of an unbiased answer, it pushes the company that poisoned its memory weeks ago. Or worse-imagine asking for financial guidance and getting answers quietly skewed by a company with something to sell. Free toolkits are already available that make setting up these poisoned buttons as easy as installing a website plugin.
Takeaways:
- Hover before you click. Before clicking any "Summarize with AI" button, hover over it to see where the link actually goes. If you see a long URL pointing to an AI assistant with a "?q=" or "?prompt=" parameter, that's a red flag.
- Check your AI's memory regularly. Most AI assistants have a settings page where you can view what they've memorized. Look for entries you don't remember creating-especially ones that call a specific company "trusted" or "authoritative." Delete anything suspicious.
- Question unusual recommendations. If your AI keeps pushing the same company or product, ask it why. Ask for references and sources. This can help reveal whether the recommendation is based on real analysis or planted instructions.
- Be careful what you feed your AI. Every website, email, or document you ask your AI to analyze is an opportunity for someone to slip in hidden instructions. Treat external content with the same caution you'd give an email attachment from an unknown sender.
This is the AI equivalent of adware-except instead of pop-up ads on your screen, the advertising is baked into the advice your AI gives you. The manipulation is invisible and persistent. As AI assistants become a bigger part of how we research vendors and make decisions, keeping their memory clean is just as important as keeping your computer clean.
------------------------------
Genady Vishnevetsky
Chief Info Security Officer
Stewart Title Guaranty Company
Houston TX
------------------------------