Anna Normark and Rebecca Oskarsson presented their interesting master's thesis on filter bubbles today. In their work they answer several research questions through an experiment with bots on a social media platform, and a qualitative literature study on filter bubbles.
In their presentation they concluded that filter bubbles can be a threat to society, and that our awareness of filter bubbles needs to be raised. Interestingly enough, however, they did not find clear evidence of filter bubbles in the experiment they ran on one of the existing social media platforms, though this does not mean that filter bubbles do not exist elsewhere.
Anna Normark and Rebecca have previously presented their work twice on the blog, and you can read these blog posts here and here. I also hope that they will write one last blog post presenting some highlights from their results. 🙂
I was one of the supervisors of this master's thesis work and I am very impressed. These students combine technical skills with an interest in society and ethics, a combination that is needed in our digital society. And today they also proved that they have excellent presentation skills!
Are you aware that even the smallest actions you take and the likes you give online can put you in a filter bubble? In our investigation of filter bubbles we use automated bots as our own test subjects. If you don’t know what a filter bubble is, read our first blog post to find out more (https://www.htogroup.org/2018/02/13/what-is-a-filter-bubble/). This Wednesday, 14th of March, we are speaking about our work at the Women in Data Science conference in Stockholm.
For the mission of creating filter bubbles we are using a large social media platform as our tool. A user of this platform has access to a flow of information, individualized for each user based on their actions and behavior on the platform. We are creating 14 unique accounts on this site that are extremely similar to one another, with the exception of username, email and IP address. The purpose is to have the individualized flows exactly alike in the beginning. For each of the 14 accounts, we are creating a bot (14 bots in total). A bot is a piece of automated software, designed to click and use the website just like a human would. In this case, each bot hits a like button for a certain type of information, a certain number of times per day. This simulates a real user’s actions on the site. The information liked by the bot is uploaded to cloud storage, which we use to investigate the behavior potentially leading up to a filter bubble.
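To make the idea concrete, here is a minimal sketch of what one such like-bot could look like. The thesis does not publish its bot implementation, so the class name, the post structure and the topic/like parameters below are our own assumptions for illustration only:

```python
import json
from dataclasses import dataclass, field

@dataclass
class LikeBot:
    """Hypothetical sketch of one of the 14 like-bots.

    `account`, `topic` and `likes_per_day` are illustrative parameters,
    not the thesis's actual configuration.
    """
    account: str
    topic: str
    likes_per_day: int
    liked: list = field(default_factory=list)

    def run_day(self, feed):
        """Like up to `likes_per_day` posts in the feed that match the bot's topic."""
        count = 0
        for post in feed:
            if count >= self.likes_per_day:
                break
            if post["topic"] == self.topic:
                self.liked.append(post["id"])
                count += 1

    def export(self):
        """Serialize the liked posts, e.g. for upload to cloud storage."""
        return json.dumps({"account": self.account, "liked": self.liked})

# A toy feed standing in for the platform's individualized flow
feed = [{"id": i, "topic": t}
        for i, t in enumerate(["cats", "politics", "cats", "sports", "cats"])]

bot = LikeBot(account="user01", topic="cats", likes_per_day=2)
bot.run_day(feed)
print(bot.liked)  # → [0, 2], the first two matching posts
```

In the real experiment each of the 14 bots would run such a routine against the live platform; the JSON export mirrors the step where liked information is uploaded to cloud storage for later analysis.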
In order to get the data from the individualized flow, we use a crawler. The crawler goes through the individualized flow and saves the important parts to a file, which is then uploaded to the cloud. The data is later used to evaluate the content of the flow and establish whether the user has been put in a filter bubble or not. To get a deeper understanding of filter bubbles and whether they can be harmful, we are conducting a literature study as well.
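A crawler of this kind could be sketched as below. The platform's actual markup is not published in the thesis, so the `<div class="post">` structure and the output filename are assumptions made purely for illustration:

```python
import json
from html.parser import HTMLParser

class FeedCrawler(HTMLParser):
    """Minimal sketch of a crawler that pulls post texts out of a feed page.

    The markup (flat <div class="post"> elements) is a hypothetical stand-in
    for the real platform's feed structure.
    """
    def __init__(self):
        super().__init__()
        self.posts = []
        self._in_post = False

    def handle_starttag(self, tag, attrs):
        # Enter "post" mode only for divs marked as posts
        if tag == "div" and ("class", "post") in attrs:
            self._in_post = True

    def handle_endtag(self, tag):
        if tag == "div":
            self._in_post = False

    def handle_data(self, data):
        # Keep only the important parts: the text of each post
        if self._in_post and data.strip():
            self.posts.append(data.strip())

crawler = FeedCrawler()
crawler.feed('<div class="post">Cat video</div>'
             '<div class="ad">Buy now</div>'
             '<div class="post">Election news</div>')

# Save the extracted posts to a local file, the step before the cloud upload
with open("feed_snapshot.json", "w") as f:
    json.dump(crawler.posts, f)

print(crawler.posts)  # → ['Cat video', 'Election news']
```

Note how the ad is filtered out: only post content reaches the snapshot file, which keeps the later analysis of the flow's content focused on what the user was actually shown.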
Our names are Anna Normark and Rebecca Oskarsson. We are two master's students in the IT engineering programme, currently working on our master's thesis. Our thesis investigates filter bubbles and their effects, and has the title “Individualizing Without Excluding: Ethical And Technical Challenges”. We were invited to write some blog posts here by our reviewer Åsa Cajander, and this is our second post.
Since the late 40’s, every president of the United States has received multiple documents of information several times a day. The documents are adjusted to each new president’s desires. Donald Trump desires to receive a document that is unofficially called “The Propaganda Document”. On orders from Mr. Trump, it contains only positive news, flattering tweets about himself and, in some cases when there is no happy news, just pictures of him looking powerful. It is also said that the news and national security issues in the other documents are ranked with the sole purpose of not upsetting him.
Filter bubbles can happen to everyone, not just President Trump. They form when we use Facebook, Google or almost any other social media platform that adapts its content to its users. By adapting the content, users will only see information on subjects they have earlier shown interest in. One example: if a coder and a zookeeper Google “python”, they will probably receive different results. We believe that the effect of filter bubbles might be much more than just getting different Google results. If people only find information that matches their interests and opinions, they will not see things from several points of view. In our project, we are investigating whether there is such a thing as a harmful filter bubble and what it might look like.
Our names are Anna Normark and Rebecca Oskarsson. We are two master's students in the IT engineering programme, currently working on our master's thesis. Our thesis investigates filter bubbles and their effects. We were invited to write some blog posts here by our reviewer Åsa Cajander; this is our first blog post on this topic and there will be two more coming up this spring.