Decisions over which companies received the first tranche of notices were based on considerations such as the number of complaints to e-Safety, the company's reach, and how much information is already public. More orders are likely to be issued.
Inman Grant said some in the sector had an attitude that if they were not aware of the problem, they were not responsible for it, even though some organisations had technology that could track and pull down dangerous material.
Each company will be asked different questions to elicit information that is not publicly available. "We've got a range of questions for Meta and WhatsApp, in terms of where they're scanning, what they're scanning, how they're scanning," Inman Grant said.
The responses will also be examined on a case-by-case basis. If the companies are found to be non-compliant after 28 days, they can be fined $550,000 a day.
"In my experience, having worked in the industry [at Microsoft for 17 years], companies are moved by anything that challenges their revenue, anything that harms their reputation, and any significant regulatory threats," Inman Grant said.
The internet has led to a booming online child exploitation industry, involving both shared and live images. "For the past 15 years there's been a trade in livestreaming child exploitation material," Inman Grant said.
"With lockdowns around the globe, what we started to see was the Philippines at the epicentre of pay-per-view child abuse material. Now we have so many video conferencing platforms that can facilitate that sexual abuse material."
NSW Police Detective Superintendent Jayne Doherty, the Child Abuse and Sex Crimes Squad Commander, said officers "welcomed any opportunity to help identify, target and prosecute persons involved in the abuse of children".
Federal Communications Minister Michelle Rowland said the reporting from the companies would "help inform future government decisions around what needs to be done to protect Australians online, and improve transparency to the public".
Apple faced a significant backlash from privacy advocates last year when it flagged a new CSAM-detection feature that would scan iCloud photo libraries against known child sexual abuse material (photos that have been validated by at least two agencies).
The company's website no longer makes reference to the CSAM-detection technology. It has instead added a feature that intervenes when users search for child exploitation material through its search tools. The interventions explain "that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue".