Three companies working on a government-backed project to detect child sexual abuse material (CSAM) before it reaches encrypted environments claim that pre-encryption scans for such content can be carried out without compromising privacy.
Launched in September 2021, the Safety Tech Challenge Fund is designed to boost innovation in artificial intelligence (AI) and other technologies that can scan, detect and flag illegal child abuse imagery without breaking end-to-end encryption (E2EE).
The five winning projects were announced in November 2021, and each will receive £85,000 to help them advance their technical proposals.
The challenge fund is part of the government’s wider effort to combat harmful online behaviour and promote internet safety through the draft Online Safety Bill, which aims to establish a statutory “duty of care” for technology companies by legally obliging them to proactively identify, remove and limit the spread of both illegal and legal but harmful content, including CSAM.
In response to the challenge, end-to-end email encryption platform Galaxkey, biometrics firm Yoti and AI-based content moderation software provider Image Analyzer are collaborating on a system that will be able to scan content for CSAM on users’ devices before it is encrypted. They claim that on-device scanning is the best way to protect users’ privacy.
The companies will be developing their system until March 2022, when they will present their proof of concept to the Department for Digital, Culture, Media and Sport (DCMS). Further funding of £130,000 will then be made available to the strongest projects.
The proposed project
According to a government press release, the three companies will work “to develop software focusing on user privacy, detection and prevention of CSAM and predatory behaviour, and age verification to detect child sexual abuse before it reaches an E2EE environment, preventing it from being uploaded and shared”.
The firms have said any CSAM detected by the system will be reported to moderators for further action to be taken. When CSAM is discovered by the AI algorithm, the information given to moderators will be tracked and audited to prevent any misuse.
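Neither firm has published how that audit trail would be implemented, but the pattern is well established. The Python sketch below, with entirely hypothetical names, shows one common approach: a hash-chained, append-only log in which each disclosure to a moderator records only metadata, so altering or deleting a record breaks the chain.

```python
import hashlib
import json
import time

# Hypothetical sketch of a tamper-evident audit trail: each disclosure to
# a moderator is logged as an entry whose hash incorporates the previous
# entry’s hash, so altering or deleting a record breaks the chain. All
# names are illustrative; the firms have not published their design.

class AuditLog:
    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value

    def record_disclosure(self, moderator_id: str, case_id: str, score: float):
        """Log what was shown to whom; note that no content is stored."""
        entry = {
            "timestamp": time.time(),
            "moderator_id": moderator_id,
            "case_id": case_id,
            "score_disclosed": score,
            "prev_hash": self.last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; False means a record was tampered with."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or digest != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```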
The developers claim there are currently no products on the market that provide this kind of pre-encryption content filtering alongside end-to-end encryption.
Speaking to Computer Weekly, Galaxkey CEO Randhir Shinde said the company’s existing architecture is built in such a way that when users – mostly enterprise clients – set up the infrastructure, they can generate and hold the encryption keys within their own environments, giving them greater security and control than other methods.
“When you talk about encryption keys, the person who controls the key effectively controls the data,” he said. “So the entire architecture of Galaxkey is built on solving that one problem. The Galaxkey architecture is built in such a way that corporates, when they set up the infrastructure, have the keys with them in their own environment – giving complete end-to-end encryption.”
Shinde said this meant Galaxkey was completely unable to access any of the keys, and security was further maintained within client organisations through the additional use of an identity-based encryption model, whereby every end-user must present the correct identity and authorisation to access any encrypted data within the wider environment.
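Galaxkey has not made its implementation public, but the client-held-key pattern Shinde describes can be illustrated in a few lines. The Python sketch below, using the widely available cryptography library, shows the general idea only – key material is generated and used entirely within the customer’s environment, so the provider only ever handles ciphertext – and is not a representation of Galaxkey’s actual code.

```python
from cryptography.fernet import Fernet

# Minimal sketch of the client-held-key pattern described above: the key
# is generated and stored inside the customer’s own environment, so the
# provider only ever handles ciphertext. An illustration of the general
# idea, not Galaxkey’s actual (unpublished) implementation.

def generate_local_key() -> bytes:
    """Generated on-premises; never transmitted to the provider."""
    return Fernet.generate_key()

def encrypt_locally(key: bytes, plaintext: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)

def decrypt_locally(key: bytes, ciphertext: bytes) -> bytes:
    return Fernet(key).decrypt(ciphertext)

# Usage: only the ciphertext ever leaves the client environment.
key = generate_local_key()
token = encrypt_locally(key, b"confidential message")
assert decrypt_locally(key, token) == b"confidential message"
```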
Galaxkey has previously built email encryption, file encryption and file exchange products using this architecture, and was already building its own encrypted instant messaging platform, known as Lock Chat, before the challenge fund announcement in September.
Shinde added that because Galaxkey already had longstanding relationships with Image Analyzer and Yoti – which are providing AI-powered CSAM detection and age verification algorithms, respectively – it made sense for them to collaborate on the government challenge.
According to Image Analyzer CEO Cris Pikes, the company has previously worked with UK police forces to train its CSAM detection algorithm.
However, client-side scanning (CSS) of communications prior to their encryption raises a number of concerns – primarily that it would render the encryption meaningless, because the content would already have been scanned.
Undermining encryption?
In August 2021, Apple announced its plan to introduce scans for CSAM on its US customers’ devices, which would work by performing on-device matching against a database of known CSAM image hashes provided by the National Center for Missing and Exploited Children (NCMEC) and other child safety organisations.
“Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes,” said the company. “This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result.
“The device creates a cryptographic safety voucher that encodes the match result, along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.”
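A faithful implementation of private set intersection is well beyond a short example, but the basic on-device matching step Apple describes can be sketched in simplified form. The Python below uses an ordinary cryptographic hash and a plain set lookup purely for illustration; Apple’s real design uses a perceptual hash (NeuralHash) and cryptographic machinery that hides the match result from the device itself, neither of which is reproduced here.

```python
import hashlib

# Simplified illustration of on-device matching against known hashes.
# Apple’s actual design uses a perceptual hash (NeuralHash) and private
# set intersection, so the device never learns the match result; neither
# is reproduced here. The database below is a hypothetical placeholder.

KNOWN_CSAM_HASHES = {
    "0" * 64,  # placeholder; real entries come from NCMEC and others
}

def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash. SHA-256 is used purely for
    illustration: it only matches byte-identical files, whereas a
    perceptual hash also matches visually similar ones."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_hash(image_bytes: bytes) -> bool:
    """True would trigger creation of a safety voucher in Apple’s scheme."""
    return image_hash(image_bytes) in KNOWN_CSAM_HASHES
```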
Asked whether the ability to scan content before it is encrypted would fundamentally render encryption useless, Galaxkey’s Shinde said that would only be the case if the information were sent to a back-end server. He added that, unlike Apple’s proposal, Galaxkey’s current architecture has no mechanism that would allow this to happen.
“That’s because there is no controlling server in the back end,” he said. “Everything is on the device. And that’s where the potential of Image Analyzer comes, as you can scan with AI on the phone itself.”
Shinde added that Image Analyzer’s algorithm will relay to moderators only the probability, expressed as a percentage, that a message contains CSAM. “We have a mechanism where it will create triggers, but it won’t disclose any information – that’s not going to be possible because of the architecture.”
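The three firms have not published their design, but the trigger mechanism Shinde describes amounts to a simple pattern: run a classifier on the device, and let only a score – never the content – cross the trust boundary. The hedged Python sketch below illustrates that pattern; the classify() function and the threshold are placeholders, since Image Analyzer’s model and its interface are proprietary.

```python
from typing import Optional

# Hedged sketch of the trigger pattern Shinde describes: an on-device
# classifier yields a probability, and only that percentage – never the
# message content – leaves the device. classify() and the threshold are
# hypothetical stand-ins; Image Analyzer’s model is proprietary.

CSAM_THRESHOLD = 0.90  # illustrative cut-off, not a published figure

def classify(media: bytes) -> float:
    """Placeholder for the proprietary on-device model; returns a dummy
    value here. The real model would infer P(CSAM) from the media."""
    return 0.0

def scan_before_encryption(media: bytes) -> Optional[dict]:
    """Run the scan prior to encryption; report a percentage or nothing."""
    score = classify(media)
    if score >= CSAM_THRESHOLD:
        # Trigger fires: only the probability crosses the trust boundary.
        return {"csam_probability_pct": round(score * 100, 1)}
    return None  # below threshold: nothing is reported off-device
```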
In a paper on the risks of CSS written in response to Apple’s proposal, 14 cryptographic experts, including Ross Anderson and Bruce Schneier, said: “Technically, CSS allows end-to-end encryption, but this is moot if the message has already been scanned for targeted content. In reality, CSS is bulk intercept, albeit automated and distributed.”
The experts added that while existing device-scanning products, such as antivirus software or ad blockers, act to protect the user, CSS does the opposite. “Its surveillance capabilities are not limited to data in transit; by scanning stored data, it brings surveillance to a new level,” they wrote.
“Only policy decisions prevent the scanning expanding from illegal abuse images to other material of interest to governments, and only the lack of a software update prevents the scanning expanding from static images to content stored in other formats, such as voice, text or video.”
Asked what technical measures could be put in place to prevent government surveillance, Shinde said: “Rogue governments have used technology to monitor communication. What we can do is limit or restrict that by adding layers of protection. We have worked with [British signals intelligence agency] GCHQ and one of the biggest concerns we had is, because we don’t give [encryption] keys to anybody, and neither do we have it, will we get approval?
“We were really surprised that they said ‘no, we don’t want the keys, but we want a mechanism where we can approach the end-user and say look, give us the keys right now’. We have pushed that responsibility to the end-user, so if the end-user feels ‘yes, I am safe giving my key to the government’, they can do so.”
Concern over repurposing
Another concern raised by critics of CSS is that it could easily be repurposed to search for other kinds of content.
“While proposals are typically phrased as being targeted to specific content, such as CSAM, or content shared by users, such as text messages used for grooming or terrorist recruitment, it would be a minimal change to reconfigure the scanner on the device to report any targeted content, regardless of any intent to share it or even back it up to a cloud service,” wrote Anderson, Schneier and others.
“That would enable global searches of personal devices for arbitrary content in the absence of warrant or suspicion. Come the next terrorist scare, a little push will be all that is needed to curtail or remove the current protections.”
As regards repurposing, Shinde said: “There is no way to stop that. Obviously, government is always going to try and ask for more, but it’s responsible companies that have to push back.”
Image Analyzer’s Pikes said the company has had tools on the market since 2017 to detect the likes of weapons, drugs and self-harm in images. “We very much put them as a solution to be used in a positive manner, to detect and remove these things,” he said. “We did obviously always consider that they could be used to harvest other material, but we do not license our technology for those reasons.
“We check the clients that we supply to. We heavily encrypt our models so there are levels of protection in there, but ultimately… yes, they could be used in a negative fashion, but unfortunately that’s the world we live in today.”
Pikes added: “You will always find that, as within any market or any geography, there will always be those that abuse it. You’ve obviously got to mitigate that as far as you can. That’s why legislation is there.”
Given the highly centralised nature of social media firms’ architecture and the legal obligation the Online Safety Bill will place on them to monitor for CSAM, it remains unclear whether users of services such as Facebook would be able to control the decryption keys if the system being developed by the three firms were rolled out.
Shinde, for example, said it is for the government to mandate how social media companies use the system within the context of the Online Safety Bill. “We cannot control that – it’s a very big problem,” he said.