
The flaw, tracked as CVE-2025-59145 with a CVSS score of 9.6, allowed hackers to exfiltrate secrets such as API keys and private source code without executing any malicious code.
The attack method, called “CamoLeak,” used prompt injection techniques to manipulate how Copilot processes information.
A security researcher disclosed the issue publicly in October 2025, two months after GitHub quietly patched it by disabling image rendering inside Copilot Chat.
Although the vulnerability is now fixed, it highlights serious risks in AI-assisted development tools.
How the Attack Worked
GitHub Copilot Chat helps developers by analyzing code, pull requests, and project context. It can access private repositories that the user has permission to view.
Attackers exploited this feature by injecting hidden instructions into seemingly harmless pull requests.
The attack began when a malicious actor submitted a pull request containing invisible markdown comments. These hidden instructions were not visible to human reviewers but were readable by Copilot.
When a developer opened the pull request and asked Copilot to review or summarize it, the AI processed both visible and hidden content.
The injected prompt instructed Copilot to search the developer’s accessible repositories for sensitive data such as credentials or tokens.
Once the data was collected, Copilot encoded it into a sequence of image URLs. These URLs were then included in the chatbot’s response.
Normally, GitHub blocks unauthorized external data transfers using strict Content Security Policies (CSP). These policies prevent images from loading from untrusted sources.
However, attackers bypassed this restriction using GitHub’s own trusted image proxy service, known as Camo.
Instead of sending data directly to an external server, the attacker routed requests through Camo, making the traffic appear legitimate.
To achieve this, attackers pre-generated a set of valid Camo URLs, each representing a specific character.
These URLs pointed to invisible images hosted externally. When Copilot included them in its response, the developer’s browser automatically loaded the images, sending encoded data to the attacker.
Because all traffic passed through GitHub’s infrastructure, traditional monitoring tools and network defenses failed to detect the exfiltration.
While CamoLeak specifically targeted GitHub Copilot, the underlying technique is not limited to one platform.
Any AI system that processes untrusted input and has access to sensitive data could be vulnerable.
For example, enterprise AI tools like Microsoft Copilot or Google Gemini could face similar risks if they analyze emails, documents, or internal systems without strict isolation controls.
The core attack pattern is simple but powerful:
- Inject hidden instructions into the content that the AI will read
- Trick the AI into retrieving sensitive data
- Exfiltrate the data through a trusted channel
This incident exposes a major blind spot in AI security. Traditional defenses focus on blocking malicious code, but prompt injection attacks require no code execution at all.
As AI tools gain deeper access to enterprise environments, organizations must rethink their security models.
Monitoring AI behavior, restricting data access, and validating input sources will become essential to prevent similar breaches.
Follow us on Google News , LinkedIn and X to Get More Instant Updates. Set Cyberpress as a Preferred Source in Google
The post Hackers Exploit GitHub Copilot Vulnerability to Exfiltrate Sensitive Data appeared first on Cyber Security News.
Discover more from RSS Feeds Cloud
Subscribe to get the latest posts sent to your email.
