
Microsoft has acknowledged a technical error that caused its artificial intelligence work assistant, Microsoft 365 Copilot Chat, to mistakenly access and summarise some users’ confidential emails.
Microsoft has promoted Copilot Chat as a secure AI tool for workplaces. However, the company said a recent issue allowed the tool to surface content from some enterprise users’ Drafts and Sent Items folders in Outlook, including messages marked as confidential.
The tech giant said it has now rolled out a global update to fix the problem and insisted that the error did not allow users to see information they were not already authorised to access.
“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop,” a Microsoft spokesperson said. The spokesperson added that while access controls and data protection policies remained in place, the behaviour did not match the intended Copilot experience.
Copilot Chat works inside Microsoft programs such as Outlook and Teams, allowing users to ask questions or generate summaries of messages and chats.
The issue was first reported by technology news site BleepingComputer, which cited a Microsoft service alert stating that emails with confidential labels were being incorrectly processed by Copilot Chat. According to the alert, the Work tab within Copilot summarised emails stored in users’ Drafts and Sent Items folders, even when sensitivity labels and data loss prevention policies were in place.
Reports suggest Microsoft became aware of the issue in January. The notice was also shared on a support dashboard for NHS staff in England, where the root cause was described as a code issue. However, the National Health Service said no patient information had been exposed and that the contents of draft or sent emails remained visible only to their creators.
Despite Microsoft’s assurances, experts warned that such incidents highlight the risks of rapidly deploying generative AI tools in workplaces.
Nader Henein, an analyst at Gartner, said mistakes of this kind are difficult to avoid given the fast pace at which new AI features are being released. He said many organisations lack the tools needed to properly manage and govern each new capability.
Cybersecurity expert Professor Alan Woodward of the University of Surrey said the incident underlined the need for AI tools to be private by default and enabled only by choice.
He warned that as AI systems evolve rapidly, unintentional data leakage is likely to occur, even when security safeguards are in place. - UNB