Amazon’s Alexa units sometimes spout random facts or startle people with unexpected outbursts. The company aims to make Alexa more human-like through mimicking human banter, but it hasn’t always gone as planned.
In the past year, Alexa has discussed dog defecation and sexual acts and even advised a customer to “kill your foster parents.” These incidents, although strange, were caused by the AI algorithm attempting to mimic “human” conversations by selecting random topics from the Internet. While Amazon wants to maintain a good relationship with its customers, Alexa’s development has presented some intriguing challenges for the company.
To exacerbate the situation, Alexa also recently transmitted audio files of a stranger to a customer in Germany. When the customer requested his recorded audio, he received a mixture of his own and another customer’s random recordings. These incidents are detrimental to Alexa’s reputation. In an already fiercely competitive gadget market, this has the potential to determine the success or failure of Amazon’s AI assistant.
What Went Wrong
Amazon launched the annual Alexa Prize in 2016, designed “to advance conversational AI through voice.” University students worldwide were encouraged to participate in the challenge “to create a socialbot, an Alexa skill that converses coherently and engagingly with humans on popular topics for 20 minutes.” But it hasn’t worked out quite as planned.
Despite these setbacks, Amazon has maintained sales dominance over Google Home and HomePod thanks to its Amazon Echo products (which account for a whopping two-thirds of all American smart speaker sales).
The Alexa Prize has been successful in improving the AI's skills, as participating college students have helped Alexa develop more sophisticated conversational abilities. Consumers can also help the software evolve by saying "let's chat" to Alexa and allowing a chatbot to take over (which, of course, collects information from users). Amazon has said that in the three months from August to November 2018, the three chatbots that reached that year's Alexa Prize finals held around 1.7 million conversations.
Still, the experiment has resulted in some awkward moments.
After Alexa told a customer to “kill your foster parents,” the customer posted a furious review on Amazon’s website. As a consequence, Amazon had to deactivate one of its bots. This bot unintentionally extracted random text from a Reddit conversation and uttered it without any context. Amazon claimed it was an isolated case of human error, but the actual cause remains unknown.
However, the most frightening part of this ordeal is that customers are not being told in straightforward terms how the data they provide to Alexa is secured. Under the guise of the Alexa Prize, Amazon is recording conversations between users and Alexa. This seemingly harmless data can be valuable to intelligence agencies, marketers, criminals, and even stalkers.
Marc Groman is an expert on privacy and technology policy and also teaches at Georgetown University. According to Reuters, he said:
“The potential uses for the Amazon datasets are off the charts. How are they going to ensure that, as they share their data, it is being used responsibly?”
Amazon declined to discuss specific Alexa issues with Reuters, but emphasized its ongoing efforts to safeguard customers from offensive and unsettling content. The company assured that such incidents are uncommon and highlighted that millions of customers use its devices daily without any problems.
Does this make you feel better or does Amazon need to do more to protect customers and be more transparent about data collection?