What is the Filter Bubble?
If our digital identities are guided solely by recommendations served up by artificial intelligence, we risk creating a virtual world of polarized opinions.
On December 4, 2009, Google extended its personalized search feature to all users worldwide. Previously, this personalization only applied to searches run while signed in to a Gmail account. To roll out the update, the Mountain View-based company had to adjust its algorithm with a series of changes, which are explained in the opening chapters of 'The Filter Bubble', a book by Eli Pariser published in 2011 that delves into the potential drawbacks of letting artificial intelligence decide all the recommendations fed to our digital identities.
Seven years later, this “filter bubble” (also known as the “social bubble”), as Pariser’s book termed it, remains a highly topical issue; one that, on many occasions, shapes the behavior of our digital identities, even if we are not aware of it.
But what exactly is the filter bubble? In the early days, social media websites gave their users complete freedom to choose their contacts, the news sources they followed and the topics that were relevant to them. But this selectiveness can lead to an insularity that is further exacerbated by the algorithms on which Facebook, Twitter, Instagram, Spotify and other platforms rely to tailor their content recommendations, algorithms that factor in our behavior and preferences as well as the actions of our friends and acquaintances.
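To make that feedback loop concrete, here is a minimal, hypothetical sketch of an engagement-driven feed ranker. The catalog, topic labels and scoring rule are all invented for illustration; real platforms combine far more signals and far more sophisticated models. The point is only that ranking by past engagement alone quickly narrows what gets shown:

```python
from collections import Counter

# Hypothetical illustration of a naive engagement-driven feed ranker.
# Item IDs, topics and the scoring rule are invented for this example;
# real recommendation systems use many more signals and complex models.

CATALOG = {
    "a1": "politics-left", "a2": "politics-left", "a3": "politics-right",
    "a4": "sports",        "a5": "science",       "a6": "politics-left",
}

def rank_feed(click_history, catalog, top_n=3):
    """Score each item by how often the user clicked its topic before."""
    topic_affinity = Counter(catalog[item] for item in click_history)
    scored = sorted(catalog,
                    key=lambda item: topic_affinity[catalog[item]],
                    reverse=True)
    return scored[:top_n]

# After a few clicks on one topic, the feed converges on that topic:
clicks = ["a1", "a2", "a1"]          # the user engaged with one viewpoint
print(rank_feed(clicks, CATALOG))    # -> only 'politics-left' items remain
```

Every click raises the score of its topic, so each refresh of the feed is more homogeneous than the last, which is precisely the bubble effect Pariser describes.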
"One day you wake up and you find that everyone agrees with what you think"
And this is where the idea of the filter bubble comes into play: a concept that refers to the circumstances under which our online activity ends up confined to a limited space where everything we encounter seems to confirm our views. According to the most ardent supporters of this idea, it is the internet that decides what we read and what we think, which could ultimately give rise to the following dystopia: one day you wake up and find that everyone agrees with what you think.
And living within these bubbles has a huge downside: the polarization of opinions. In other words, it leads to the creation of virtual environments where all members share similar views on specific topics, and where differing opinions – ones unanimously agreed on in other subnetworks – are automatically dismissed as far-fetched or baseless.
Bursting the bubble
Over the course of his research, Pariser concluded that, to burst these bubbles, the following conditions must be met:
- That those who have helped shape the Internet as we know it – Facebook, Google and the like – instill some sense of civic responsibility into their AI-powered engines.
- That these players also make their algorithms genuinely transparent, explaining the rules they follow.
- That users are allowed to tweak – to a certain extent – the filters applied to the pages they visit, to decide what does and does not happen in them (a minimal sketch of such a control follows this list).
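As a rough illustration of that third condition, the hypothetical sketch below adds a user-facing "exploration" dial that blends personalized picks with items from outside the user's usual sources. The parameter name and the blending rule are invented for the example, not drawn from any real platform:

```python
import random

# Hypothetical sketch of a user-adjustable filter: an "exploration" dial
# that mixes personalized items with items from outside the user's bubble.
# The parameter name and blending rule are invented for illustration.

def build_feed(personalized, outside_bubble, exploration=0.3, size=10, seed=0):
    """Blend ranked personalized items with out-of-bubble items.

    exploration=0.0 reproduces a fully filtered feed;
    exploration=1.0 ignores personalization entirely.
    """
    rng = random.Random(seed)
    n_outside = round(size * exploration)
    feed = personalized[: size - n_outside]
    feed += rng.sample(outside_bubble, min(n_outside, len(outside_bubble)))
    rng.shuffle(feed)  # avoid segregating the two kinds of items visually
    return feed

# A user could raise `exploration` to let more differing views through:
inside = [f"agreeable-{i}" for i in range(10)]
outside = [f"challenging-{i}" for i in range(10)]
print(build_feed(inside, outside, exploration=0.5))
```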
Pariser also recommends fostering personal inquisitiveness by creating spaces where people can engage in respectful, peer-to-peer discussions with people who hold different views. The key would be to create and promote tools that encourage debate and the exchange of opinions, similar to the online forums that thrived in the late 1990s and early 2000s.
For this purpose, in May 2016, Google launched Spaces, a social app that allowed users to create groups to share impressions and content on specific topics, aiming to structure and enrich online conversations in ways that are no longer easy to find in other apps, given the alleged clustering of users around like-minded opinions. However, the app’s failure to build a viable user base led to its demise: it was officially taken offline on April 17, 2017, less than a year after its debut.