The problem with Alexa: what’s the solution to sexist voice assistants?

If you have a smart speaker in your home, you probably interact with an AI-enabled voice assistant fairly regularly – and chances are you’re speaking to what sounds like a woman. 

Your voice assistant may even have been given a woman’s or feminine-sounding name, like Alexa, Cortana, or Siri, depending on which brand of smart speaker you bought. Sure, some of these voice assistants, including Google Assistant and Siri, can be configured with a male-sounding voice, but most smart speaker users are interacting with virtual women.

At face value that may not sound like a problem – but equating women with voice assistants could have some worrying implications for society.

A female voice is the default choice for smart assistants, including Alexa on the Amazon Echo

In May 2019, a groundbreaking report by UNESCO suggested that the default use of female-sounding voice assistants in our smart home gadgets and smartphones perpetuates sexist attitudes towards women.

The report, titled I’d Blush if I Could, takes its name from Siri’s former default response to being called a ‘bitch’ by users – and criticizes the fact that Apple’s Siri, Amazon Alexa, Google Assistant, and Microsoft’s Cortana are “exclusively female or female by default, both in name and in sound of voice”.

Sympathetic and agreeable

So, why do voice assistants sound like women? Julia Kanouse, CEO of the Illinois Technology Association, explains that the companies behind these voice assistants based their choices on consumer feedback.

She explains: “Research shows that women’s voices tend to be better received by consumers, and that from an early age we prefer listening to female voices”.

Indeed, in an interview with Business Insider, the head of Amazon’s Smart Home division, Daniel Rausch, explained that his team “carried out research and found that a woman’s voice is more sympathetic”.

So far, so plausible – and as Kanouse concedes, the use of female-sounding voice assistants is clearly grounded in research. 

Research has shown that from an early age we find female voices more sympathetic than male ones

However, the choices made by voice assistant creators could have far-reaching consequences for women at home, and in the workplace.

“The use of female voice assistants may reinforce the stereotype that we prefer to tell a woman what to do, rather than a man,” says Kanouse.

“Only recently have we started to see men move into what were traditionally viewed as female roles, and, conversely, see women fight to ensure these roles (such as flight attendants, nurses, paralegals, executive administrators) are seen as more than ‘just an assistant’.”

This progress could potentially be undone by the proliferation of female voice assistants, according to UNESCO. Its report claims that the default use of female-sounding voice assistants sends a signal to users that women are “obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like ‘hey’ or ‘OK’”. 

It’s also worrying that these voice assistants have “no power of agency beyond what the commander asks of it” and respond to queries “regardless of [the user’s] tone or hostility”. These may be desirable traits in an AI voice assistant, but what if the way we talk to Alexa and Siri ends up influencing the way we talk to women in our everyday lives?

Researchers say the use of predominantly female voices in smart speakers can feed into subconscious biases

One of UNESCO’s main criticisms of companies like Amazon, Google, Apple and Microsoft is that the docile nature of our voice assistants has the unintended effect of reinforcing “commonly held gender biases that women are subservient and tolerant of poor treatment”. 

This subservience is particularly worrying when these female-sounding voice assistants give “deflecting, lackluster or apologetic responses to verbal sexual harassment”. 

While Kanouse doesn’t think this has led to overt cases of sexual discrimination, she does believe it creates “a level of unconscious bias”, adding that “the prevalence of female voice assistants may feed into subconscious biases against women in the workplace and home, making it more difficult for women to overcome these obstacles”. 

Should voice assistants be gender-neutral?

One solution could be to make voice assistants sound gender-neutral – and it’s something that’s entirely possible, as demonstrated by the makers of Q, the world’s first gender-neutral voice assistant.

Speaking to NPR, Julia Carpenter, an expert on human behavior and emerging technologies who worked on the project, explained that one of the team’s goals was to “contribute to a global conversation about gender, and about gender, technology, and ethics, and how to be inclusive for people that identify in all sorts of different ways”.

To create the voice of Q, the team recorded “dozens of people”, including those who identify as male, female, transgender, and nonbinary, although in the end they chose just one voice, and pitch-altered it until it sounded neither male nor female.
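The pitch-altering step described above can be illustrated with a toy example. The sketch below is not the Q project’s actual processing chain – it uses a deliberately naive resampling-based pitch shift on a synthetic tone (production tools would use something like a phase vocoder, which preserves duration), and the function names are hypothetical. It simply shows how shifting a low, “male-range” pitch upward by a few semitones moves it toward the ambiguous middle range:

```python
import numpy as np

def pitch_shift(samples: np.ndarray, sr: int, semitones: float) -> np.ndarray:
    """Naive pitch shift by resampling. Raising the pitch this way also
    shortens the clip; real tools (e.g. a phase vocoder) keep the length."""
    factor = 2 ** (semitones / 12)          # frequency ratio per semitone
    idx = np.arange(0, len(samples), factor)  # read the signal faster
    return np.interp(idx, np.arange(len(samples)), samples)

def dominant_hz(x: np.ndarray, sr: int) -> float:
    """Frequency of the strongest FFT bin - a crude pitch estimate."""
    spectrum = np.abs(np.fft.rfft(x))
    return float(np.fft.rfftfreq(len(x), 1 / sr)[np.argmax(spectrum)])

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 110 * t)   # 110 Hz: a low, male-range pitch
shifted = pitch_shift(tone, sr, 7)   # up 7 semitones, toward ~165 Hz
```

A 7-semitone shift multiplies the fundamental by about 1.5, landing the 110 Hz tone near 165 Hz – roughly the overlap zone between typical male and female speaking pitch, which is the region the Q team reportedly targeted.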

The result, while perhaps a little more synthetic-sounding than Alexa or Siri, is a truly inclusive voice assistant for everyone – and the goal is to convince tech giants to adopt Q as a third option for their assistants.

Sadly, this isn’t likely – after all, brands like Apple, Google and Amazon are notoriously rigid when it comes to the design of their products, and we can’t see them agreeing to use the same voice as their rivals.

Diversity is key

So, instead of making voice assistants sound homogenous, could the answer lie in making them super-diverse?

This diversity doesn’t have to be focused on gender either; why can’t our voice assistants have regional accents? Why couldn’t they sound young or old, use slang, or speak pidgin English?

The news that the BBC is working on a voice assistant called Beeb, which will understand all the diverse regional accents of the UK, has stoked hopes that it will also speak with some of these accents.

Dr Matthew Aylett, Chief Scientific Officer at speech technology company Cereproc, thinks this could set Beeb apart from the other voice assistants on the market.

“No other organization could boast of the resonance and importance of voice compared to the BBC,” he says, explaining that choosing a synthetic voice to represent the organization is “a big challenge”.

The relatively low number of women working in tech fields means they have less input into the design of voice assistants

Discussing brands like Apple, Google, and Amazon he explains that, “in many cases decision-makers are choosing a default, neutral, well-spoken female voice without even considering that this is a major design decision”.

And the BBC could be in the perfect position to challenge this. With its encouragement of participation from its vast audience, Aylett believes that the use of a diverse voice for Beeb “could lead to some ground-breaking new perspectives on voice interaction”.

Aylett thinks the BBC could even call on this audience to select well-loved BBC presenters and create an amalgamated voice from the results – imagine how soothing a David Attenborough / Joanna Lumley hybrid could be.

However, Aylett doesn’t think that global voice assistant developers will support third-party diversity from the likes of the BBC, or be “courageous enough to offer much diversity themselves”. 

Why? Well, the teams behind our most popular voice assistants just aren’t that diverse themselves. 

Women to the front

According to UNESCO, Alexa’s sexism problem is largely down to the lack of women in the room when tech companies are designing their voice assistants. 

This is an issue that affects the entire industry, with just 7% of ICT (Information and Communication Technology) patents generated by women across G20 countries; UNESCO says the overuse of female-sounding voice assistants is “a powerful illustration of gender biases coded into technology products, pervasive in the technology sector and apparent in digital skills education”. 

The solution? We need more women in the STEM (Science, Technology, Engineering, and Maths) industries, and that, says UNESCO, requires “recruiting, retaining, and promoting women in the technology sector” – after all, how can our voice assistants effectively represent their users if a huge percentage of those users have no say in their development?

Whatever the answer is, it’s clear that we need more choice when it comes to the voices in our smart speakers. As Kanouse says, “whether it’s a male voice, or gender-neutral, or a mimicked recording of someone famous like Morgan Freeman for example, there are creative solutions that these companies could implement that would ensure we aren’t reinforcing gender stereotypes”. 

She adds: “Making that switch could be a very powerful statement from these influential companies”. 

“And wouldn’t it be fun to tell Morgan Freeman what to do every day?”

Want to know more about Amazon’s smart speakers? Check out our reviews: Echo | Echo Dot | Echo Plus | Echo Show | Echo Spot
