Recently, the Biden administration stirred up a heated debate by allocating more than half a million dollars in grants to develop an artificial intelligence model that can detect and suppress microaggressions on social media. In effect, the government is funding technology to monitor and potentially limit how people communicate online, even when those conversations are not explicitly offensive.
At first glance, this may seem like a positive step toward creating a safer online environment. People should not be subject to discrimination, and this technology could theoretically help prevent it. However, there are serious concerns about the proposed system, particularly its potential to infringe on free speech.
The first issue is the accuracy of the artificial intelligence system. It is unclear how reliably the system will identify microaggressions, or whether it can distinguish genuinely offensive language from harmless banter. If the system is inaccurate, it risks blocking conversations that are not actually offensive.
I can tell you the AI Facebook uses to censor ‘harmful’ content is busted. I’ve personally been tossed in FB jail several times over completely harmless comments.
The second concern is the potential for the system to be used as a tool for censorship. If it is used to suppress conversations or opinions the government disagrees with, it could lead to a dangerous erosion of the right to free speech.
Finally, there are questions about who will have access to the data collected by the AI system. Will the data be open to the public, or will it be kept private? If it is kept private, how will it be used, and who will have control over it?
Judicial Watch president Tom Fitton likened the Biden administration’s funding of the artificial intelligence research to the Chinese Communist Party’s efforts to “censor speech unapproved by the state.” For the Biden administration, Fitton said, the research is a “project to make it easier for their leftist allies to censor speech.”
A spokesman for the National Science Foundation, which issued the research grant, rebuffed criticism of the project, which he said “does not attempt to hamper free speech.” The project, the spokesman said, creates “automated ways of identifying biases in speech” and addresses the biases of human content moderators.
Right… That sounds legit…
The crap doesn’t work on social media platforms and only creates a terrible experience for everyone. Not to mention, it constantly tramples on free speech, but at least that’s done by a private company. This would be the freaking government violating the Constitution.
Erica Carlin is an independent journalist, opinion writer and contributor to several news and opinion sources. She is based in Georgia.