The rapid proliferation of parochial AI systems
How do we design for personalised, not polarised, systems?
A comment by Casey Newton on the latest episode of the Hard Fork podcast struck a chord with me.
He argued that the future of these AI tools and chatbots is likely to be more personalised: tuned to individual preferences and principles, they will become much more helpful to the people who use them.
“If you believe that these AIs are going to become tutors and teachers to our students of the future in at least some ways, different states have different curricula, right? And there will be some chatbots that believe in evolution, and there will be some that absolutely do not. And it’ll be interesting to see whether students wind up using VPNs just to get a chatbot that’ll tell them the truth about the history of some awful part of our country’s history.”
(Emphasis mine)
This raises a pressing concern: how do we prevent personalised chatbots and learning models from becoming closed-off filter bubbles that entrench bias and preferred narratives?
The prospect of students struggling to break out of localised “truth bubbles” imposed by AI infrastructure is a serious provocation.