We like to think that computers do not discriminate: they do what they are told, and they do it reliably, over and over. However, one can argue that the algorithms driving a computer can be biased. Why does Siri on Apple's iPhone have trouble interpreting a query when the speaker has an accent? Isn't the iPhone just a computer that does what it's told? Well, yes and no.
Siri analyzes your voice, extracts features from it, and compares them against a database to find the best match. Programmers write the algorithms that analyze your voice, and they also assemble the database your voice is compared against. While computers themselves are unbiased, programmers are still human, and they carry their own biases and prejudices. They train Siri's algorithms on the features in the database they created and optimize them to work well on that dataset. What if Siri's database contained only American accents? A word as simple as "garage" might go unrecognized if a British speaker asked for it.
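The dynamic above can be made concrete with a toy sketch. This is not Siri's actual pipeline; the words, feature vectors, distance threshold, and the nearest-neighbor matcher are all invented for illustration. The point is only that a matcher trained on one accent's acoustic features can reject a different accent's pronunciation of the same word.

```python
import math

# Hypothetical acoustic feature vectors, extracted from recordings of
# American-accented speech only. The numbers are made up.
american_db = {
    "garage":  [0.9, 0.1, 0.4],
    "tomato":  [0.2, 0.8, 0.5],
    "weather": [0.6, 0.6, 0.1],
}

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(features, db, threshold=0.35):
    """Return the closest word in the database, or None if nothing is close enough."""
    word, ref = min(db.items(), key=lambda kv: distance(features, kv[1]))
    return word if distance(features, ref) <= threshold else None

# An American speaker's "garage" lands near the stored reference vector...
print(recognize([0.88, 0.12, 0.42], american_db))  # -> garage

# ...but a British pronunciation shifts the (hypothetical) features enough
# that no database entry falls within the threshold.
print(recognize([0.55, 0.15, 0.75], american_db))  # -> None
```

Nothing in the matcher itself is prejudiced; the failure comes entirely from what the reference database does and does not contain.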
How do we fight this bias? A naive solution would be to give Siri a more diverse database to operate on. Yet given Siri's ubiquity, it is practically impossible to build a database that adequately covers so diverse a user base. At its core, every machine learning system relies on a dataset like this to interact with its users. The same reasoning applies to image processing algorithms: even the best facial recognition software fails to recognize some of the images it processes, for the very same reason. We live in an interesting world where people take offense at a Starbucks coffee cup while not realizing that they are discriminated against every day by their very own "virtual assistant".
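Continuing the earlier toy sketch (again with invented feature vectors, not any real system), broadening the reference database is what the "diverse database" fix amounts to: the same query that failed before succeeds once a second accent's reference is added.

```python
import math

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(features, db, threshold=0.35):
    """Return the closest word in the database, or None if nothing is close enough."""
    word, ref = min(db.items(), key=lambda kv: distance(features, kv[1]))
    return word if distance(features, ref) <= threshold else None

# The database initially holds only an American-accented "garage".
db = {"garage (US)": [0.9, 0.1, 0.4]}
british_sample = [0.55, 0.15, 0.75]  # hypothetical British pronunciation
print(recognize(british_sample, db))  # -> None

# Adding a British-accented reference makes the same query resolvable.
db["garage (UK)"] = [0.56, 0.14, 0.76]
print(recognize(british_sample, db))  # -> garage (UK)
```

The catch, as the essay notes, is scale: covering every accent, dialect, and speaker population this way is far harder than adding one entry to a toy dictionary.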