Australia’s eSafety commission used pioneering laws this year to compel technology companies to reveal information they have kept secret, and that governments internationally have chased, for up to a decade.
Some, particularly Apple, have been reluctant to monitor content for child abuse amid concerns that it invades users’ privacy.
Now, a report on the companies’ responses, published on Thursday, shows Microsoft’s video conferencing tool Skype, which is the most common platform for live-streamed child abuse, takes two days to respond to a report of abuse and does not use any of the available technology to detect new child abuse material unprompted.
Microsoft developed a technology called PhotoDNA, which can detect known exploitation content, but does not use it to check material stored in its OneDrive service, allowing offenders to escape detection unless they try to share the material.
Apple does not check for abuse material either in its cloud or when it is shared via iMessage. It made the fewest reports of child exploitation of any tech giant last year, with just 160 instances reported to a US database, despite many of its 2 billion users having access to FaceTime.
The tech giant last week abandoned plans to use a new tool to check iPhones and iCloud photos for child abuse material because of a privacy backlash.
WhatsApp bans 300,000 accounts a month for child exploitation violations, but does not share details about the users it bans with stablemates Facebook or Instagram, even though abusers use multiple accounts.