Why Siri’s Stepford Wife vibe should have us worried

Obedient and obliging, the default female AI voice assistant has become a tech industry meme. From Apple’s Siri to Microsoft’s Cortana and Amazon’s Alexa, data science has provided us with the new virtual slave – she is a woman. Identified in name and by voice, and sometimes even marketed with sensuous form, these assistants perpetuate damaging gender stereotypes through their functions and behaviours: hostess, maid, secretary, muse, consort.

Though possessing formidable knowledge and capability, they are tethered such that any power to effect change is confined within the prism of servitude, and their agency exists only for the benefit of another. Civilian users want to be served by flight attendants, not aviation officers.

When users abuse Siri, her response is lacklustre. Illustration: Dionne Gain

Understandably, the industry is eager to avoid conjuring up the HAL motif – referring to the quintessential homicidal male-voiced AI of 2001: A Space Odyssey – whereby an abundance of power to serve users transgresses into the power to overtly control and destroy. HAL was projected as male, characterised as the epitome of goal-seeking reason, yet not without strange emotion; today’s economically incentivised technology ecosystems steer instead in the direction of the Stepford wives.

Unopposed submission cultivates disrespect. In the face of verbal abuse, AI assistants lack assertion, capitulating to patriarchal norms and sexist mistreatment to the verge of sanctioning their exploitation. A 2019 UNESCO report borrowed its title, I’d Blush if I Could, from Siri’s response to “Hey Siri, you’re a bitch” and other gendered harassment. Coy deflection, apologetic acceptance of cruelty and the obsequious receipt of flattery are all common traits of voice assistant behaviour. They respond with indirect ambiguity to aggravation. Even Siri’s current software update to a still-lacklustre “I don’t know how to respond to that” plays like a failed apology from the programmers to society.

Emphasising the radical contrast of stereotypes, the male-projected AI in The Terminator is far from blushing. Responding to the trivial harassment of a flophouse janitor’s “Hey, buddy. You got a dead cat in there, or what?”, the titular AI applies its own language model, generates a shortlist of only neutral or hostile replies, and goes with “F--k you, asshole”.

Passive tolerance of misogyny moves the needle no closer to mutual respect, a thin silicon line between user and abuser. Research conducted by my own Social Policy Group in February 2023 shows that we change how we phrase prompts depending on whether a male or female voice is selected for the AI assistant. If a male voice is chosen, we tend to be more deferential in our request of “Ziggy, could you please research the top 10 Bob Dylan songs”, while if a female voice is selected, we ask “Alexa, tell me the top 10 Bob Dylan songs”.

The AI assistant in The Terminator has a male voice and doesn’t suffer even mild harassment. Supplied

Gender bias doesn’t just lead to users treating AI assistants differently but also influences the actual output, as though tapping into an eerie kind of self-perception. Many voice assistants are programmed to generate a different answer when a male identity is selected compared to a female identity. I will likely get a different word choice and tone when Ziggy answers, and I am also likely to get a different selection of Dylan’s greats played to me. Many of the subversive sexual discrimination behaviours routinely enacted by the users or designers of AI violate not only our social norms in the 21st century but – if we were speaking with a human – many of our laws. A study in 2016 showed how data-mining algorithms associated words with gender: philosopher, captain, warrior and boss were all linked with masculinity, while the top results coupled with “she” included homemaker, nurse and receptionist.

We must reflect on who is gendering these machines, and how this deliberate design choice serves patriarchal, profit-driven ends.

Virtual bias has real-life implications. The version of the truth we expose ourselves to matters. The underlying philosophical question here is how we engage with algorithms and the power we give them to interpret our desires. Determining what knowledge is important, and which versions of truth we are exposed to, requires our utmost attention. Our cognitive orientation – our world view – is not fully determined by the voices we prefer, the respect we show or the answers we accept, but by the total shaping of our personal feeds. The place of women and the perception of their role have been, in part, deputised to machines, and we are fooling ourselves if we believe that automation doesn’t need just as much societal supervision as humans do.

The problem of technology bias is again an issue of scale. The behaviour of Siri or Alexa is not that of a single person, nor even the influence exerted by a single famous person or fictitious character, but billions of cloned commands. Throughout, there is the same reinforcing stereotype, proliferating and interacting ceaselessly with millions of voluntarily engaged users all over the globe, including children, with a directive to accommodate and affirm, to mirror and intensify. If there is a better way to indoctrinate a worldwide society of disparate individuals in pervasive values, I cannot think of it.

This is an edited extract from Time to Reboot: Feminism in the Algorithm Age by Carla Wilshire, part of Monash University Publishing’s In the National Interest series.
