Anomaly in Google Gemini: AI Displays Wrong Images from Attachments
Maciej Lesiak
- 3 minute read - 533 words
This article is also available in Polish:
Anomalia w Gemini Google: AI wyświetla niewłaściwe obrazki z załączników
What Actually Happened?
After further analysis, it turned out that the link leads to Elastic documentation hosted on Discourse. This points to a different but equally problematic issue - uncontrolled AI references to external resources during document generation. Gemini arbitrarily inserted an image from that documentation instead of using my attachments. Classic…
Figure: wrong images in a Gemini artifact.
While working today on a performance audit for an e-commerce store, using Google Gemini Pro to prepare the final report, I encountered an interesting anomaly. After I attached console screenshots and Grafana charts, the AI started displaying images completely different from the ones I had actually uploaded.
The problem surfaced while I was refining the final audit report in Gemini and generating an artifact containing its final version. Gemini produced a document with images from outside my attachments - a link to the Elastic/Discourse CDN appeared (https://us1.discourse-cdn.com/elastic/original/3X/e/1/e1ee022063de6bd2ddb865e955723f768512108d.png), even though I had attached only Grafana charts showing CPU usage. It is also curious in itself that the link points to a Discourse CDN.

When generating another version of the artifact, Gemini embedded yet another image in the content. After exporting everything to Google Drive, the document contained links different from the files I had actually attached - empirical evidence of a data security problem.
Is This a Problem?
It depends on how you look at it, but in my opinion it is very serious. This case indicates possible context mixing between users, or a bug in the attachment management system. Additionally - something I have never described before - while using AI intensively in the context of radical US movements, I saw many Russian-language word "bleeds," which suggests context leakage and (probably) Russian services feeding content or SEO spam to manipulate recommendation algorithms. But let's return to the current problem. Potential consequences:
- Leakage of confidential data between different user sessions
- Compromise of business documents and personal information
- GDPR violations and other data protection regulations
- Integration problems with Google Workspace in corporate environments
Historical Context
Google Gemini has had data leak problems before - in February 2024, users’ private conversations appeared in Google search results. Researchers from HiddenLayer also discovered vulnerabilities enabling system prompt leaks and indirect injection through Google Drive.
Status and Recommendations
I plan to test the reproducibility of this problem next week. Given the potential severity of the issue:
- Avoid uploading sensitive documents to Google Gemini
- Carefully check all attachments in generated reports
- Document any anomalies - this may help identify a pattern
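Checking generated reports for foreign images can be partly automated. Below is a minimal sketch, assuming the report is Markdown or HTML text; the `EXPECTED_HOSTS` entries are illustrative placeholders, not a statement about where Gemini actually hosts attachments - adjust them to your own sources.

```python
import re
from urllib.parse import urlparse

# Hosts you expect report images to come from; anything else gets flagged.
# These entries are assumptions for illustration - replace with your own.
EXPECTED_HOSTS = {"lh3.googleusercontent.com", "drive.google.com"}

# Matches Markdown images ![alt](url) and HTML <img src="url"> references.
IMG_URL_RE = re.compile(
    r'!\[[^\]]*\]\((https?://[^)\s]+)\)|<img[^>]+src="(https?://[^"]+)"'
)

def flag_unexpected_images(report_text: str) -> list[str]:
    """Return image URLs whose host is not in the expected allowlist."""
    flagged = []
    for match in IMG_URL_RE.finditer(report_text):
        url = match.group(1) or match.group(2)
        if urlparse(url).hostname not in EXPECTED_HOSTS:
            flagged.append(url)
    return flagged

report = (
    '![cpu](https://us1.discourse-cdn.com/elastic/original/3X/e/1/'
    'e1ee022063de6bd2ddb865e955723f768512108d.png)'
)
print(flag_unexpected_images(report))
```

Running this over the exported artifact would have flagged the Discourse CDN link immediately, since that host has no business appearing among attached Grafana screenshots.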
I don't plan to report this issue to Google through official security channels, since I tried that with GPT and only wasted time. This is another case showing that AI systems require significantly more rigorous security controls, especially around attachment handling and data separation between users. I have already written about the SVG problem and the possibility of packing attachments and wrapping malicious code in SVG files for download.
Unfortunately, due to work overload, I won’t check immediately whether the problem is reproducible and whether it concerns only the export function or also real-time attachment analysis. I’m putting this on the internet - if someone has time, they can escalate…
Related
- Case Study: Leak of sensitive airport data due to email configuration error (2007)
- #2520 Manipulating Recommendation Systems: GROK, White Genocide, and Musk's Racist Conspiracy Theories
- Phatic Function in Practice: How ChatGPT's Conversation Maintenance Generates Millions in Losses
- GPTBot Is Scanning The Internet: How OpenAI Will Change Content Consumption and the Future of Search
- Bypassing Security Filters in ChatGPT's SVG Generation
- SEO Spam and Competition Gaming - The Dark Side of AI Content Marketing
- TECH: How AI is Changing the Face of Polish Digital Media in 2024
- AI series: A scenario of how AI can take over recommendation systems, generating and reinforcing conspiracy theories and disinformation
Amplify the Signal

Best support is sharing articles and tagging dadalo.pl on social media. You can also support financially - this covers media access and press archives needed for research.
Shares are more important than donations. Financial support helps maintain research independence.