The proliferation of controversial issues on social media platforms, especially during the Presidential elections of the last decade, has given rise to online disinformation and cast doubt on the credibility of what we read online. In many cases, online disinformation is enabled by the technologies themselves, specifically informational and social platform websites. Since most platforms put little effort into stopping the spread of misinformation, the best method of resisting online disinformation comes down to the individual: learn to be more skeptical, and view information from as many perspectives as possible.
Social media platforms have little incentive to provide fact checking; most platforms simply want traffic, meaning more and more individuals using their site. Platforms cost money to run, so they are incentivized to do anything in their power to keep individuals browsing. As Sunstein discusses, platforms love to give you the information you want to see, because there is profit in keeping you on the platform. Since most individuals are easily clouded by their own biases, filtering information to match those biases is the platforms' most effective way of keeping them engaged, as Carr discusses. Individuals who see information already filtered to their beliefs feel comfortable, which is why I believe the best way to stop misinformation from spreading starts with the individual: be skeptical.
As Anthony and Stark mention, companies like Uber and Facebook manipulate users into staying on their platforms. If users begin questioning systems like these, and learn to be more skeptical of the technologies behind them, they can bypass the "infinite loop of addiction," see the system from a holistic view, and make their judgments properly, which is key to resisting online disinformation.
In addition to resisting online disinformation, it is also important to view and understand technologies from different perspectives. Being able to see an issue from multiple perspectives, and bringing other individuals into that view, can be incredibly impactful, as shown in Ludlow's article. A single person cannot change everything, but an individual who sees the multiple perspectives of a piece of information, and shares them, can attract attention and bring the issue to light. Additionally, forcing yourself to see information from as many perspectives as possible is uncomfortable, and it keeps you questioning the information itself, which is what gives rise to research.
Some might argue that the disinformation problem cannot be solved this way, and suggest tackling it from the government's side instead. My question is: how would you incentivize the government to pass laws that force platforms to fact check? If a group of individuals who see the issue from a different perspective cannot guarantee change, how can change be imposed from the top of the chain? Checking disinformation must remain an individual practice, because you cannot change the mind of someone who is unwilling to see the issue from a different perspective.
This writing is for a humanities Computer Science class. Any feedback would be appreciated!
Nicholas Carr. 2008. Is Google Making Us Stupid? The Atlantic.
Cass Sunstein. 2001. The Daily Me. In Republic.com, 1–22. Princeton, NJ: Princeton University Press.
Denise Anthony and Luke Stark. Don’t quit Facebook, but don’t trust it, either. The Conversation.
Peter Ludlow. 2013. The Banality of Systemic Evil. New York Times.