By: Kyle James | 12-18-2018 | News
Photo credit: Krissy Eliot

Berkeley Scientists Are Developing Artificial Intelligence To "Police" Free Speech

In 2018, free speech has been a hot topic, coming under attack like never before. Comedians are being fined for jokes, YouTubers are losing their livelihoods for espousing conservative viewpoints, and people are being branded racists and bigots for using the cartoon frog "Pepe". The left has been on a rampage against free speech, and the big tech giants in particular are essentially regulating the flow of information and free speech for the entire world.

These left-wing big tech companies and universities have found a way to further their political agenda by silencing their opposition. How do they silence their opposition? They declare their opposition's words "hate speech". Now, scientists at the University of California, Berkeley, are creating an artificial intelligence to police speech and combat whatever they deem to be "hate speech" on social media. The College Fix reported, "Ten students of diverse backgrounds helped to develop the algorithm".

Related coverage: Wisconsin Police "Investigating" Photo of High School Class "Sieg Heil," Where Is The Crime?

This new tool will rely on artificial intelligence to identify "hate speech" on social media, but who gets to decide what "hate speech" is? I would argue that all speech is protected: in America, you have the right to say anything you want and to express any opinion you may hold. That precious right has been under attack for several years, but the war against free speech will be taken to a new level if this artificial intelligence is adopted by the social media giants.

The scientists hope that the program could one day out-perform human beings when it comes to identifying bigoted comments on social media platforms like Twitter, Reddit, and YouTube. Researchers at Berkeley's D-Lab "are working in cooperation with the [Anti-Defamation League] on a ‘scalable detection’ system—the Online Hate Index (OHI)—to identify hate speech," according to the California Alumni Association.

Related coverage: 4chan Admin Prefers User Arrests Over Post Deletions

The new tool will utilize artificial intelligence as well as several new techniques that will supposedly detect offensive speech online. The problem is, anyone can claim to be offended by anything, and that claim is being used by the left as a tool to silence conservatives who have done nothing wrong other than exercise their right to free speech. This new tool is a direct threat to that freedom and just another example of how left-wing-dominated universities are slowly chipping away at our rights.

The report also said that the tool will utilize techniques including "machine learning, natural language processing, and good old human brains." Researchers hope that one day "major social media platforms" will adopt the technology in order to detect what they deem "hate speech" and eliminate the account of the user who posted it. The current technology being used to regulate speech on big social media platforms involves "keyword searches", according to one researcher, who also described this method as "fairly imprecise and blunt" since users can get around the algorithms by spelling "offensive" words differently.
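The keyword-search weakness the researcher describes is easy to demonstrate. The sketch below is purely illustrative: the `BLOCKLIST`, `LEET_MAP`, and helper functions are assumptions for this example, not the actual systems used by any platform.

```python
# Illustrative sketch of why plain keyword matching is "imprecise and blunt":
# a fixed blocklist misses trivially obfuscated spellings. All names here
# (BLOCKLIST, LEET_MAP, the helpers) are hypothetical.

BLOCKLIST = {"slur"}  # stand-in for a real keyword list

# Common character substitutions users employ to dodge filters.
LEET_MAP = str.maketrans({"0": "o", "1": "l", "3": "e", "4": "a", "5": "s", "$": "s", "@": "a"})

def naive_match(text: str) -> bool:
    """Flag text only if a blocklisted word appears verbatim."""
    return any(word in BLOCKLIST for word in text.lower().split())

def normalized_match(text: str) -> bool:
    """Undo the character substitutions before matching."""
    cleaned = text.lower().translate(LEET_MAP)
    return any(word in BLOCKLIST for word in cleaned.split())

print(naive_match("that was a slur"))       # True
print(naive_match("that was a 5lur"))       # False: obfuscation slips past
print(normalized_match("that was a 5lur"))  # True: normalization catches it
```

Even with normalization, the method stays blunt: it has no notion of context, which is presumably why the OHI researchers turned to machine learning instead.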

Related coverage: NBC, Facebook and FOX Censor President Trump's Ad One Day Before Midterm Elections

The OHI intends to address these deficiencies. Already, their work has attracted the attention and financial support of the platforms that are most bedeviled—and that draw the most criticism—for hate-laced content: Twitter, Google, Facebook, and Reddit…

D-Lab initially enlisted ten students of diverse backgrounds from around the country to “code” the posts, flagging those that overtly, or subtly, conveyed hate messages. Data obtained from the original group of students were fed into machine learning models, ultimately yielding algorithms that could identify text that met hate speech definitions with 85 percent accuracy, missing or mislabeling offensive words and phrases only 15 percent of the time.
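The pipeline described above—human coders label posts, then models learn from those labels—can be sketched with a toy text classifier. The tiny Naive Bayes model and training set below are assumptions for illustration only; the D-Lab/ADL models, data, and labels are not public in this report.

```python
# Illustrative sketch of the labeled-data -> machine learning pipeline:
# coders assign labels (1 = flagged, 0 = not), and a simple Naive Bayes
# model generalizes from them. Training examples are invented stand-ins.
import math
from collections import Counter

# Stand-in for coder-labeled posts: (text, label) pairs.
TRAIN = [
    ("group X are vermin and should leave", 1),
    ("all of group Y are criminals", 1),
    ("i hate mondays so much", 0),
    ("this movie was terrible", 0),
]

def train(examples):
    """Count word frequencies per label and documents per label."""
    counts = {0: Counter(), 1: Counter()}
    docs = Counter()
    for text, label in examples:
        docs[label] += 1
        counts[label].update(text.lower().split())
    return counts, docs

def predict(text, counts, docs):
    """Return the label with the highest log-probability for the text."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for label in (0, 1):
        total = sum(counts[label].values())
        score = math.log(docs[label] / sum(docs.values()))
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out a class.
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

counts, docs = train(TRAIN)
print(predict("group X are criminals", counts, docs))    # 1 (flagged)
print(predict("i hate this terrible movie", counts, docs))  # 0
```

A model like this is only as good as its labels, which is exactly the concern raised later in the article: whatever biases the ten coders carried are baked into the resulting algorithm.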

Though the initial ten coders were left to make their own evaluations, they were given survey questions (e.g. “…Is the comment directed at or about any individual or groups based on race or ethnicity?”) to help them differentiate hate speech from merely offensive language. In general, “hate comments” were associated with specific groups while “non-hate” language was linked to specific individuals without reference to religion, race, gender, etc. Under these criteria, a screed against the Jewish community would be identified as hate speech while a rant—no matter how foul—against an African-American celebrity might get a pass, as long as his or her race wasn’t cited.

Another researcher acknowledged that the tool could also be used to censor free speech. "Unless real restraint is exercised, free speech could be compromised by overzealous and self-appointed censors," the researcher said, adding that the team is "working to minimize bias with proper training and online protocols that prevent operators from discussing codes or comments with each other." You can read the full report here.

On Twitter:
Tips? Info? Send me a message!

Follow The Entire Goldwater Team via Twitter!


Twitter: #University #Berkeley #ArtificialIntelligence #FreeSpeech #HateSpeech #Censorship #MAGA
