IJFANS International Journal of Food and Nutritional Sciences

ISSN PRINT 2319-1775 Online 2320-7876

Exposing Internet Bullies on Social Media to Preserve User Integrity over Twitter Trolls


Mohammed Owaisuddin, Md. Ateeq Ur Rahman and Ganesh Mani

Abstract

Given that online content often persists on the Web for a long time and is difficult to control, cyberbullying is one of the most detrimental effects of social media and tends to be more insidious than traditional bullying. In this paper, we introduce BullyNet, a three-phase algorithm for identifying cyberbullies on the Twitter social network. We exploit bullying characteristics by proposing a reliable method for constructing a cyberbullying signed network (SN). We analyze tweets to determine their relation to cyberbullying, taking the context of each tweet into account in order to optimize its bullying score. We also propose a centrality measure and show that it outperforms existing measures at detecting cyberbullies from a cyberbullying SN. Our experiments on a dataset of 5.6 million tweets show that the proposed approach identifies cyberbullies with high accuracy while remaining scalable with respect to tweet volume.

As social media platforms and microblogging sites have grown rapidly, direct contact between people of different psychological and cultural backgrounds has increased, leading to a rise in "virtual" confrontations between them. As a result, hate speech is used more frequently, to the point that it has seriously disrupted these public venues. Hate speech is the use of aggressive, violent, or offensive language directed at a particular group of individuals who share a characteristic, such as race, gender, or ethnicity (i.e., racism), or faith, values, etc. Although most online social networks and microblogging sites prohibit offensive speech, the sheer scale of these platforms makes it nearly impossible to moderate all of their content. It therefore becomes necessary to detect such speech automatically and to filter any content that contains inflammatory language. In this paper, we propose an approach for detecting hate speech on Twitter. Our approach is based on unigrams and patterns that are dynamically collected from the training dataset.
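The abstract does not give the paper's actual scoring function, signed-network construction, or centrality measure, so the Python sketch below is only a toy illustration of the three-phase idea it describes: score tweets for bullying signals, build a signed network of user interactions, and rank users with a centrality-style score. The lexicons, thresholds, and ranking formula are hypothetical placeholders, not the method defined in the paper.

# Hypothetical sketch of the three-phase pipeline described in the abstract.
# The lexicons and the centrality formula are placeholders for illustration only.

from collections import defaultdict

BULLY_TERMS = {"loser", "idiot", "ugly"}      # placeholder bullying lexicon
SUPPORT_TERMS = {"sorry", "support", "love"}  # placeholder supportive lexicon

def tweet_score(text):
    """Signed score: positive for bullying cues, negative for supportive ones."""
    words = set(text.lower().split())
    return len(words & BULLY_TERMS) - len(words & SUPPORT_TERMS)

def build_signed_network(tweets):
    """tweets: iterable of (sender, receiver, text); edge weight sums tweet scores."""
    edges = defaultdict(int)
    for sender, receiver, text in tweets:
        edges[(sender, receiver)] += tweet_score(text)
    return edges

def bully_centrality(edges):
    """Toy centrality: sum of positive (bullying) out-edge weights per user."""
    scores = defaultdict(int)
    for (sender, _receiver), weight in edges.items():
        if weight > 0:
            scores[sender] += weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    sample = [
        ("alice", "bob", "you are such a loser and an idiot"),
        ("carol", "bob", "sorry you had a bad day, love you"),
        ("alice", "dave", "ugly idiot"),
    ]
    network = build_signed_network(sample)
    print(bully_centrality(network))  # "alice" ranks highest in this toy example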
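Similarly, the second contribution, a hate-speech detector based on unigrams and dynamically collected patterns, could be sketched as below under stated assumptions: the tiny labeled sample, the use of 2-3 grams as a stand-in for the paper's "patterns", and the scikit-learn LogisticRegression classifier are all placeholders rather than the authors' actual feature set or model.

# Hypothetical unigram-plus-pattern feature sketch; data, pattern definition,
# and classifier are illustrative placeholders.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

train_texts = [
    "i hate those people they should leave",
    "go back to your country",
    "great game last night everyone played well",
    "looking forward to the weekend with friends",
]
train_labels = [1, 1, 0, 0]  # 1 = hateful, 0 = clean (toy labels)

# Unigram features plus short word-sequence "patterns" (approximated as 2-3 grams).
features = FeatureUnion([
    ("unigrams", CountVectorizer(ngram_range=(1, 1))),
    ("patterns", CountVectorizer(ngram_range=(2, 3))),
])

model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(train_texts, train_labels)

print(model.predict(["they should all leave this country"]))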
