The Download: political AI models, and a wrongful arrest
How they did it: The team asked language models where they stand on various topics, such as feminism and democracy. They used the answers to plot them on a political compass, then tested whether retraining models on even more politically biased training data changed their behavior and ability to detect hate speech and misinformation (it did).
Why it matters: As AI language models are rolled out into products and services used by millions, understanding their underlying political assumptions could not be more important. That’s because they have the potential to cause real harm. A chatbot offering health-care advice might refuse to offer advice on abortion or contraception, for example. Read the full story.
—Melissa Heikkilä
Read next: AI language models have recently become mixed up in the US culture wars, with some calling for developers to create unbiased, purely fact-based AI chatbots. In her weekly newsletter all about AI, The Algorithm, Melissa delves into why it’s a nice concept—but technically impossible to build. Read it to find out more, and if you don’t already, sign up to receive it in your inbox every Monday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 A woman was wrongfully arrested after a false face recognition match
It’s notable that every person we know of to whom this has happened has been Black. (NYT $)
+ The movement to limit face recognition tech might finally get a win. (MIT Technology Review)
2 AI startups are fighting dirty