Fake News Identification


Automating Fake News identification - an imperative!

"A lie can travel halfway around the world while the truth is putting on its shoes" - a saying often attributed to Mark Twain
Word of the Year: 'Fake news' has earned the dubious distinction of being named 'Word of the Year' in the last couple of years. Its ill effects range from partisan influence on democratic elections to the stoking of ethnic conflict and even economic losses.

Hyper Propagation: With 4 out of 10 people using social media as a channel for news consumption, the propagation of fake news is hard to contain given the hyper-connectedness of today's digital world. Anyone with access to a smartphone can don the role of citizen journalist, however inimical their intent may be; the rapid propagation and the ensuing detrimental effects can reach a scale that creates human tragedies.

Distrust of Real News: Beyond these major disturbances, fake news has also created a sense of distrust that makes even real, factual news seem unbelievable to the consuming public. This lack of trust has permeated to the point where it has become difficult to mobilize public opinion even on fact-based issues such as the environment, public health, or the economy.

Human authentication will not scale: The veracity of news items could be ascertained by human intervention, with experts drawn from specific domains and specialties. However, given the rate at which news (both real and fake) is created and propagated, this human-centric approach cannot scale to any adequate level of debunking fake news or authenticating the real.

Automating the authentication: Automating authentication, without depending on non-scalable, linear human expertise, has become an imperative. Fortunately, Artificial Intelligence (AI) and Natural Language Processing (NLP) techniques offer a first-level solution for automating this verification.

Fake-O-Meter: The Information Retrieval and Extraction Lab (iREL) at the International Institute of Information Technology, Hyderabad (IIIT Hyderabad) has used Deep Learning methods and NLP to create 'Fake-O-Meter®', a mechanism to separate real news from fake news. The computing models, designed and programmed to learn continuously, are today capable of authenticating news in the political domain, especially for the USA.

Extensibility: The model can be extended to diverse domains: political news, worldwide or for specific regions, finance, public health, and arguably any domain of interest. Its applicability is limited only by the availability of past data for a given domain. That past data is used to 'teach' the model, which then applies itself to future items from the same domain.
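To make the "teach on past data, apply to future items" idea concrete, here is a minimal sketch, assuming a toy dataset of labelled headlines. This is not iREL's actual Fake-O-Meter model (which uses Deep Learning); it is a simple bag-of-words Naive Bayes classifier, shown only to illustrate the supervised-learning workflow. All headlines and labels below are invented for illustration.

```python
# Minimal illustration of the train-then-classify workflow:
# estimate word probabilities per class from labelled past
# articles, then score unseen headlines from the same domain.
import math
from collections import Counter

# Hypothetical toy training data: (headline, label)
TRAIN = [
    ("senate passes budget bill after vote", "real"),
    ("president signs trade agreement with allies", "real"),
    ("election results certified by state officials", "real"),
    ("shocking secret cure banned by government", "fake"),
    ("celebrity reveals miracle trick doctors hate", "fake"),
    ("anonymous sources claim shocking election fraud cover up", "fake"),
]

def train(examples):
    """Count word frequencies per class and documents per class."""
    counts = {"real": Counter(), "fake": Counter()}
    docs = Counter()
    for text, label in examples:
        docs[label] += 1
        counts[label].update(text.lower().split())
    return counts, docs

def classify(text, counts, docs):
    """Return the class with the highest log posterior probability."""
    vocab = set(counts["real"]) | set(counts["fake"])
    best, best_score = None, float("-inf")
    for label in counts:
        # log prior + sum of log likelihoods, add-one smoothing
        score = math.log(docs[label] / sum(docs.values()))
        total = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / total)
        if score > best_score:
            best, best_score = label, score
    return best

counts, docs = train(TRAIN)
print(classify("shocking miracle cure doctors hate", counts, docs))
print(classify("senate certifies election results", counts, docs))
```

Extending such a model to a new domain means only swapping in labelled past data from that domain, which is exactly the constraint described above: applicability is bounded by data availability, not by the algorithm.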

Economic Value: Media houses stand to gain immensely if this automation becomes intrinsic to their news production process. Authentication ratings can also be used to alleviate distrust among the consuming public, helping retain and grow patronage in a competitive environment. Both directly lead to economic value.

Experience it! IIIT Hyderabad is reaching out to showcase the solution and extend it to general applicability across diverse domains. As a first step, a demonstration of this automated news authentication ('Fake-O-Meter®') will be arranged for discerning media and business leaders. Our contact: meetwithram@gmail.com

Get in touch

For a demonstration, please contact us at meetwithram@gmail.com.

iREL, IIIT Hyderabad