
2021: The Weaponizing of Storytelling Will Continue

Michelle Galvani and Sophie Galvani | Big Thinks | December 2020 | Predictions 2021

Sophie Galvani, Big Thinks Contributor


More than half of Americans get their news from social media.

American Press Institute Survey, 2016

Information, Misinformation, and Disinformation

Disinformation is the intentional spreading of false information, while misinformation is the spreading of false information without intent to mislead. As we engage with and share this misleading content, we feed the platforms' recommendation algorithms. AI software that can fabricate convincing text, images, and video makes it increasingly difficult to discern legitimate news from illegitimate news. Interestingly, AI is also used to combat the spread of false information. It is essential to understand how conspiracies are created and the threat they pose to people's right to objective news.

A conspiracy is a secretive plan, typically led by a small group of people, to do something unethical or harmful. In his article, UCLA professor Timothy Tangherlini explains that a conspiracy theory differs because it relies on community support and engagement, and that conspiracy theories aim to definitively explain a wide range of events in order to deceive the population. The UCLA study on how conspiracy theories form describes a model that links the different elements of a theory into a single connected narrative. If one element is removed, the theory does not hold up; by contrast, accounts of true conspiracies hold together even when one or more elements are taken out of the model. False information can also be contained using machine learning technologies.
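To make that intuition concrete, here is a minimal sketch, not the UCLA model itself, that treats a conspiracy theory as a small graph of connected elements and checks whether the story stays connected when one element is removed. Every node name below is invented for illustration.

```python
# A minimal illustrative sketch (not the UCLA model): a conspiracy theory
# represented as a small graph of people, places, and claims.
import networkx as nx

theory = nx.Graph()
theory.add_edges_from([
    ("secret lab", "leaked memo"),
    ("leaked memo", "anonymous insider"),
    ("anonymous insider", "celebrity donor"),
    ("celebrity donor", "cover-up"),
])

def holds_together(graph, removed_element):
    """Return True if the narrative stays connected after one element is removed."""
    trimmed = graph.copy()
    trimmed.remove_node(removed_element)
    return nx.is_connected(trimmed)

# Removing the element that ties the two halves of the story together breaks
# the narrative, while removing a peripheral element leaves it intact.
print(holds_together(theory, "anonymous insider"))  # False: the theory falls apart
print(holds_together(theory, "cover-up"))           # True: the rest still connects
```

In the study's framing, accounts of real conspiracies behave like the second case: the remaining elements still hold together on their own.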

The Role of Machine Learning

You may have heard about a type of machine learning, popularly known as deepfakes, that alters photos and videos to mimic famous personalities; it can be tough to tell the real version from the modified one. Along the same lines, Natural Language Processing is used to produce Neural Fake News: machine-generated articles containing false information that mirror the journalistic style of real news, posing many threats.

Back in November, the CEOs of Facebook and Twitter, Mark Zuckerberg and Jack Dorsey, were called to testify before the Senate Judiciary Committee about the suppression of a New York Post article and to address potential revisions to Section 230 of the Communications Decency Act. Section 230 does not hold internet services to the same standards as other content publishers. Both leaders spoke about the investments their companies have made in AI algorithms to moderate content and stop the spread of disinformation on their platforms.

Fabula AI, a UK startup, focused on authenticating information on social media using "Geometric Deep Learning," a technology capable of synthesizing datasets that are far larger and more complex than traditional machine-learning models can handle. Instead of analyzing the content of the misinformation itself, the technology tracks the way news is distributed across a social network. This approach followed an MIT study that revealed that "fake news" and actual news spread quite differently on Twitter. Before being acquired by Twitter last year, Fabula envisioned itself as an open, decentralized "truth-risk" scoring platform for content. The software can filter content through a credibility ranking system that slows down, blocks, or serves flagged content for human review.
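Fabula's geometric deep learning model has not been published in detail, so the following is only a simplified sketch of the general idea it describes: score a story by how it spreads rather than by what it says. The cascade graphs, features, labels, and classifier below are all hypothetical stand-ins.

```python
# A simplified, hypothetical sketch of propagation-based credibility scoring.
# It illustrates the general idea only: summarize how a story spreads through
# a share cascade, then score that pattern instead of the story's text.
import networkx as nx
from sklearn.linear_model import LogisticRegression

def spread_features(cascade):
    """Summarize a share cascade with a few structural statistics."""
    depths = nx.single_source_shortest_path_length(cascade, "origin")
    max_depth = max(depths.values())
    # Nodes that passed the story along, including the original poster.
    spreaders = sum(1 for _, d in cascade.out_degree() if d > 0)
    return [cascade.number_of_nodes(), max_depth, spreaders]

# Two toy cascades: a shallow broadcast and a deep chain of re-shares.
shallow = nx.DiGraph([("origin", f"u{i}") for i in range(6)])
deep = nx.DiGraph([("origin", "u0"), ("u0", "u1"), ("u1", "u2"), ("u2", "u3")])

# Hypothetical training labels: 1 = credible, 0 = flagged.
model = LogisticRegression().fit(
    [spread_features(shallow), spread_features(deep)], [1, 0]
)

# A new cascade receives a credibility score; low scores could be slowed down,
# blocked, or routed to human reviewers, as described above.
new_cascade = nx.DiGraph([("origin", "a"), ("a", "b"), ("b", "c")])
print(model.predict_proba([spread_features(new_cascade)])[0][1])
```

The appeal of this kind of design is that an author can endlessly reword a false story, but the way it propagates through a network is much harder to disguise.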

Machine learning, by its nature, learns continuously but cannot discern right from wrong. The Fabula AI model is intended to counter the threats posed by malicious disinformation, yet it can also predict virality, a capability that could unleash significant danger if the program fell into the wrong hands. Another example is an AI program called "Malcom," created by researchers at Penn State to determine whether machine-generated comments could get past the neural network models Big Tech uses to combat the spread of "fake news." According to the paper the team published last month, it worked 93.5% of the time.

What does this mean? State-of-the-art machine learning models such as those used by Facebook and Twitter are vulnerable, which raises serious questions about the ethics of AI.
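To make that kind of attack concrete, here is a toy sketch of the evaluation idea behind Malcom: train a simple stand-in detector, then check whether comments appended to an article shift its prediction. The detector, training data, and comments below are invented and far simpler than the neural models the Penn State team actually attacked.

```python
# A toy sketch of the evaluation idea, not the Malcom system itself: append
# comments that borrow the vocabulary of the "real" class and see whether the
# detector's prediction changes. All data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "officials confirm the vaccine passed its clinical trial review",
    "study finds city water quality meets federal standards",
    "secret memo proves the election was rigged by insiders",
    "miracle cure suppressed by doctors, insiders reveal everything",
]
labels = [0, 0, 1, 1]  # 0 = real news, 1 = fake news

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(articles, labels)

target = "secret memo proves the moon landing was staged by insiders"
print("before comments:", detector.predict([target])[0])

# The attacker appends comments written to sound like the "real" class; the
# question is whether they shift the detector's decision.
comments = " officials confirm the review. study finds it meets federal standards"
print("after comments: ", detector.predict([target + comments])[0])
```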

AI Ethics

In a paper published this month, Jon Truby and Rafael Brown explain that a "digital thought clone" is a product of all of a person's available data, including an individual's decision making, thought processes, and behavioral attributes. They describe this capability as the "holy grail" for profit-seeking big tech companies, which could use it to influence our online buying decisions, and they warn that it poses significant ethical risks around data privacy. Moreover, many studies have revealed a potential for bias in fact-checking programs: machine learning is "taught" through existing data ingested by the program, and if there is bias in that data, it will be passed along to the algorithm, as the short sketch below illustrates. The shortage of independent AI startups is another possible issue. According to CB Insights, Big Tech has been on an acquisition spree, buying sixty startups over the past decade. A lack of independent AI companies could limit the diversity of projects and, therefore, innovation. How? If Big Tech controls the programs and the top talent, then priorities in the field could be shaped around the needs of Big Tech rather than those of the global community.
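Here is the short sketch of the bias point mentioned above: if the historical labels a fact-checking model learns from are skewed against one source, the model reproduces that skew on new claims. The sources, claims, and labels in the example are entirely synthetic.

```python
# A minimal synthetic illustration of bias passing from training data into a model.
# Identical claims from "source_b" were labeled false by past reviewers, so the
# model learns to distrust that source regardless of what the claim says.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_claims = [
    "source_a reports the economy grew last quarter",
    "source_a reports the new bridge opened on schedule",
    "source_b reports the economy grew last quarter",
    "source_b reports the new bridge opened on schedule",
]
training_labels = [1, 1, 0, 0]  # 1 = verified, 0 = marked false

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(training_claims, training_labels)

# The same new claim is treated differently depending only on the source token,
# because the skew in the historical labels was passed straight into the algorithm.
print(model.predict(["source_a reports the vaccine trial succeeded"]))  # likely [1]
print(model.predict(["source_b reports the vaccine trial succeeded"]))  # likely [0]
```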

Artificial intelligence and machine learning are growing at an unprecedented rate. These technologies continue to have valuable applications, one notable example being their potential to help contain future pandemics. Unfortunately, there are significant ethical issues, including privacy concerns, a lack of transparency, and bias in the data-collection process. The U.S. still lacks a comprehensive federal data privacy law. Ethics in AI requires stakeholders of all kinds, including scientists, researchers, business leaders, and policymakers, to seek solutions and enforcement measures that ensure the global community's ongoing safety.
