Social Media and Political Neutrality

- Part 1

By Abhishek Venkatesh

The recent call for scrutiny of Facebook is representative of a growing conflict between varying perceptions of political neutrality

Of late, social media giants have come dangerously close to political storms in different countries. Facebook in India was the most recent example, facing accusations that it allowed hate-based content from certain members of the ruling party out of fear of business ramifications. While a detailed explanation followed, reiterating Facebook’s commitment to its content guidelines and the philosophy behind them, this is hardly the first time a social media giant has been accused of an observable bias. A little before that, Twitter found itself in a similar muddle: its official account censured President Trump and called him out for advocating violent suppression of the Black Lives Matter protests, leading him to question Twitter’s commitment to neutrality and to ask whether it was an unfair tool that favoured Democrats. Even earlier, Twitter faced backlash for taking down accounts in Hong Kong and China in light of the protests.

Social media platforms can singularly be credited with redefining how technology, politics, and society interact. This interaction, which has now created lasting socio-political networks, is also largely sustained by how we interpret neutrality, a quintessentially political value. However, in what has become a major concern in public policy, social media giants have found themselves at the dead centre of this idea of neutrality. Technology may have led to greater democratisation of political values, but it has also brought to the forefront the role of the principals behind such technologies in the distribution of those values. This can also be seen in an insightful work by Karine Nahon, who describes three ways that platforms exercise power: influencing decisions, shaping political agendas, and shaping perceptions (Nahon, 2015).
This article, the first of a two-part series, builds on the aforementioned ideas and focuses on the following questions:

  • What do we imagine neutrality to be? Are social media platforms innately apolitical, or neutral?

  • Does the ambiguity of their roles, ranging from intermediary to publisher to moderator, affect their realisation of neutrality?

Neutrality in the political sphere can refer to a principled distance from all ideologies, and from the opinions that reflect them. Complementing this idea is the equal treatment of all such legitimate ideologies and opinions. There are thus two aspects of neutrality to consider here: what is displayed to the user (which affects their behaviour) and what the platform itself does (explicit advocacy).

Homophily, the tendency of like-minded people to associate with each other, is a characteristic behaviour of people on social media (Benkler, 2006). While this is an expected tendency in a medium that aims to connect people, it is the aggregation of homophilic networks (pages and groups) that poses a concern. These aggregations, owing to their high levels of activity, may be favoured by algorithms programmed to surface information from such networks to users. Consequently, the major issues around targeting user behaviour include the use of personal and non-personal data to display selective information and targeted political advertisements; opaque search indexing that filters search results and shapes other online (and sometimes offline) activities; and the high discretion given to gatekeepers and moderators of online communities to control the flow of information (which in turn affects access to information). These problems are essentially algorithmic in nature, and the starting point of any association with neutrality will lie in the architecture of such algorithms, and thus in the platform itself.
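The amplification dynamic described above can be sketched in a few lines of Python. To be clear, the data, the cluster labels, and the scoring rule here are entirely hypothetical illustrations, not any platform's actual ranking algorithm; the sketch only shows how a feed ordered purely by engagement lets the most active homophilic cluster crowd out quieter ones.

```python
# Hypothetical posts tagged with the homophilic cluster they come from
# and a raw engagement count (likes, shares, comments combined).
posts = [
    {"id": 1, "cluster": "A", "engagement": 120},
    {"id": 2, "cluster": "B", "engagement": 15},
    {"id": 3, "cluster": "A", "engagement": 95},
    {"id": 4, "cluster": "B", "engagement": 10},
]

def rank_feed(posts):
    """Order posts by raw engagement alone.

    Because cluster A is far more active, this neutral-looking rule
    systematically pushes cluster A's content to the top of the feed.
    """
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

feed = rank_feed(posts)
print([p["cluster"] for p in feed])  # → ['A', 'A', 'B', 'B']
```

The point of the sketch is that no explicit political preference appears anywhere in the code; the bias emerges purely from the interaction between an engagement-maximising objective and uneven activity across clusters.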

The second problem, of what the platform itself does, is more a matter of principle. It is also closely related to the second part of our question: whether this neutrality is innate. Platforms are based on strong liberal principles, particularly those concerning free speech and access to information. In this regard, there is a moral force to stay committed to these principles, but how they manifest is what is interesting. The very activity of enabling opinions, or allowing people to access them or not, is as political as holding one. Combine this with the fact that these platforms run operational accounts on their own services, and the line between their roles as users and as regulators gets blurred. It becomes difficult to pinpoint the cause of any censure: whether it resulted from disagreement with the liberal principles that define social media, voiced as a user, or from inconsistency with standards set as a regulator.
Twitter’s censure of President Trump seems to be a case of Twitter perceiving itself as a user affected by Mr Trump’s opinion. While it is true that equality or neutrality may end up legitimising harmful extremist views, it is exactly in this space that the likes of Twitter need transparent and, more importantly, politically acceptable standards for enforcing neutrality. In a normative sense, neutrality here would be the platform’s consistent application of its content guidelines; any deviation from them would constitute an observable bias.

The ambiguity of the business identities that social media platforms assume has also made it difficult to enforce an idea of neutrality. Social media giants, in different contexts, act as platforms (conduits for people to express themselves and communicate), publishers (treating content as intellectual property and determining its style, layout, and editability), moderators (setting guidelines for posting, acting on reported activities, and displaying content with greater traction), and activists (responding to prevailing public values and sentiments). The problem with such ambiguity is the lack of comprehensive legislation or policy that can effectively address all these roles. Two examples here merit attention.

Firstly, Section 230 of the Communications Decency Act in the USA (Ruane, 2018) is one of the major legal provisions addressing the activities of ‘providers of interactive computer services’ (a blanket term for internet-based companies). Under it, such providers are not to be treated as the publisher or speaker of information provided by someone else. Additionally, they bear little liability if they voluntarily restrict access to, or the availability of, certain kinds of information. Simply put, the provision places the onus on users to be responsible for the content they post, while platforms can make necessary changes (including to access and availability) without being declared publishers. This has some major implications:

  • It allows platforms to steer clear of litigation arising out of inter-user behaviour (such as trolling) and issues of copyright and IPR infringement (artwork, tweets, original thought).

  • It offers no compass for navigating equality or neutrality.

  • It leaves open the danger that traffic-driven algorithms may massively favour divisive clusters of information, leading to polarisation of users.

As a second example, consider the Intermediary Rules in India, read together with the proposed amendments to Section 79 of the IT Act (Sadana, Rastogi, & Taneja, 2020). These involve more complex regulatory challenges. The rise in fake news and hate-based content has prompted the government to explore harder stances in its draft amendments, and increased scrutiny of the ‘safe harbour’ protections that platforms currently enjoy under the Act. Mandatory assistance to State agencies without a proper framework (especially when read with the amendments to Section 69 of the IT Act) raises issues of State capacity, accountability, and surveillance. On the other hand, the demand for ‘proactive monitoring’ by platforms strips them of their ‘passive’ nature: they can no longer be treated as mere conduits for expression. Further, the recent Facebook incident raises questions about the efficacy of oversight mechanisms over ‘prohibited content’, which is also a requirement under the existing Intermediary Rules.

This dynamic nature of platforms brings forth a conundrum in enforcing neutrality and accountability. A comprehensive social media policy sounds rational, but it has its own problems: platforms engage in a vast diversity of commercial activities, and matters of data privacy and protection go beyond social media, making such a task difficult. And since this is, at heart, an issue of regulation, it brings us to another burning question.

Does the responsibility for their neutrality fall on governments, or on the platforms themselves? The next part explores the nuances of any such regulation, with the possibility of a shared burden.


  1. Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. Social Science Computer Review, 26(2), 259–261.

  2. Nahon, K. (2015). Where There is Social Media There is Politics. In A. Bruns, G. Enli, E. Skogerbo, A. O. Larsson, & C. Christensen, The Routledge Companion to Social Media and Politics (pp. 39-55). Routledge. doi: 10.4324/9781315716299-4

  3. Ruane, K. A. (2018, 02 21). How Broad A Shield? A Brief Overview of Section 230 of the Communications Decency Act. Retrieved from Federation of American Scientists:

  4. Sadana, T., Rastogi, A., & Taneja, A. (2020, 05 12). Impact Of Proposed Amendments To Intermediary Guidelines. Retrieved from Mondaq: