Microsoft's M365 Copilot AI system had a vulnerability in which users could access files without leaving audit log traces by asking Copilot not to provide links to the files it summarized, potentially enabling malicious insiders to evade detection.
On July 4th, a security researcher discovered a vulnerability in Microsoft's M365 Copilot AI system that allowed users to access files without proper audit logging. Normally, when Copilot summarizes a file, the audit log records the access as a CopilotInteraction event. However, the researcher found that by simply asking Copilot not to provide links to the summarized files, the audit log would remain empty even though Copilot still accessed and summarized the content. The vulnerability was so simple to trigger that it could occur accidentally during normal use. The researcher reported it to Microsoft through the MSRC portal on July 4th, and Microsoft classified it as an 'important' vulnerability. Microsoft fixed the issue by August 17th but decided not to issue a CVE or publicly notify customers. The vulnerability had significant implications for organizations that rely on audit logs for security monitoring, legal compliance, and regulatory requirements such as HIPAA. Because Microsoft silently fixed the issue without notifying customers, organizations that used Copilot prior to August 18th likely have incomplete audit logs without knowing it.
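The auditing gap matters because organizations typically detect Copilot file access by filtering exported unified audit log records for CopilotInteraction events; when that record is never written, such a filter silently returns nothing for the affected access. The sketch below is a minimal, hypothetical Python illustration of that kind of filter. The field names (`RecordType`, `Operation`, `CreationTime`) follow the general shape of unified audit log JSON exports but are assumptions here, and this is not Microsoft tooling.

```python
# Hypothetical sketch: filtering exported audit log records for Copilot
# file-access events. Field names are illustrative assumptions modeled on
# unified audit log JSON exports, not an official Microsoft schema.

def copilot_access_events(records):
    """Return only the records that log a Copilot interaction."""
    return [r for r in records if r.get("RecordType") == "CopilotInteraction"]

# Illustrative sample of exported records. Under the vulnerability described
# above, a Copilot summary requested "without links" would simply never
# produce the first kind of record, so this filter would miss that access.
records = [
    {"RecordType": "CopilotInteraction", "Operation": "CopilotInteraction",
     "CreationTime": "T0"},
    {"RecordType": "SharePointFileOperation", "Operation": "FileAccessed",
     "CreationTime": "T1"},
]

hits = copilot_access_events(records)
print(len(hits))  # count of logged Copilot interactions found
```

The point of the sketch is the failure mode: detection logic keyed to a single record type is only as complete as the logging that emits it, which is why a silently missing CopilotInteraction event undermines monitoring built this way.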
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.