Big Thinks is the Digital Magazine of the Global Mastermind Group

Artificial Intelligence Shifts Power

Interview with David Binnion, AI Expert

Companies hire David to solve complex problems using AI and machine learning.

The discussion around artificial intelligence (AI) continues to heat up as more reports emerge about how AI affects every area of our lives. The following are some examples of how AI is already being used.

  • AI is used to search billions of data points, including public social media profiles, LinkedIn, Behance, GitHub, speaking engagements, patents, books, and whitepapers, to help recruiters and hiring managers find top candidates to fill jobs.
  • Numerous people use AI-enabled voice assistants such as Amazon's Alexa, Google Home, and Apple's Siri to find information and learn about businesses and services.
  • Marketing and media companies use AI to serve personalized information and customize your online experience.

But as AI continues to grow, people have been questioning whether AI is fair or good. Does the AI discriminate against certain candidates? Are we creating echo chambers or filter bubbles where new ideas are limited and confirmation biases are strengthened?

Maybe we shouldn't just be asking, "Is AI good or fair?" The even bigger question is, "Who does AI transfer the power to when used?"

Tracy Levine, Forbes Coaches Council

I asked David Binnion, a software engineer and AI researcher who uses artificial intelligence to solve complex problems, to weigh in on the question: "Who does AI transfer the power to when used?"

Artificial Intelligence

What is Artificial Intelligence?

David: Artificial intelligence is a system created with the specific intention of providing a natural reaction to some stimulus. It can be as simple as the automated responses to a question, as with Alexa, or as complicated as a self-driving car. It draws on many different areas, including machine learning, big data, bioinformatics, and psychology.

Currently, the pursuit of a general AI (one able to learn and solve any given task) is still ongoing, but we are getting closer. In common usage, however, the term AI really refers to machine learning or general algorithms, so for the remaining questions I'll use the term in that sense.

How does Artificial Intelligence contribute to a shift in power, if at all?

David: Artificial intelligence itself has no bias, so it doesn't really contribute to shifts in power on its own. It is, however, an amplifier. This applies to any technology, even something as simple as a knife: a knife amplifies the force applied by concentrating it on a smaller area.

AI allows processing power to be applied to a specific problem while minimizing the human interaction needed to solve it. This lets any person or organization in control of the technology use it to advance their own goals ahead of any other group's.

This is seen today in the automatic generation of ratings and comments to shield specific properties ("The Last of Us 2," "Captain Marvel") from negative feedback that would sway others away from purchasing them. The use of artificial intelligence to further one's own goals is also seen in YouTube's use of natural language processing to flag videos based on the words used in them.

Flagged terms include "MGTOW," "Feminism," "Coronavirus," and other politically charged terms, which allow YouTube to find and manually review specific accounts. Accounts deemed "problematic" can then be removed, and when one group is the only one allowed to talk, those who are less informed can only learn from that one side.
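The kind of keyword flagging David describes can be sketched in a few lines. This is a minimal illustration, not any platform's actual system; the watchlist below simply reuses the terms mentioned above, and the function name is hypothetical.

```python
import re

# Example watchlist drawn from the interview; a real system's list
# and matching logic would be far more sophisticated.
WATCHLIST = {"mgtow", "feminism", "coronavirus"}

def flag_for_review(transcript: str) -> set:
    """Return the watchlist terms that appear in a video transcript."""
    words = set(re.findall(r"[a-z]+", transcript.lower()))
    return words & WATCHLIST

hits = flag_for_review("Today we discuss feminism and the coronavirus response.")
# Any non-empty result would queue the video for manual review.
```

In practice, platforms layer classifiers and context models on top of simple term matching, but the power dynamic is the same: whoever controls the list controls what gets reviewed.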

What (other) problems are likely to come up with Artificial Intelligence?

David: Many like to ask about the possibility of artificial intelligence taking over and subjugating humanity, as in Terminator. Even Elon Musk is afraid of this possibility. I don't think that's the issue that will come up first. Currently we have issues with artificial intelligence being used to control who can talk and who can't. It's used to push the religion of "equality." This is why you'll find some "experts" talking about how to solve bias in AI by ensuring representation among the developers.
They completely skip over the only thing the artificial intelligence runs on and cares about: the data.
These "experts" skip it because talking about the data, and making sure it fully covers all possibilities, takes more work and keeps them from getting praised for talking about diversity. AI doesn't care about the colors of the butts in the seats; it cares about the data it trains on. We're not going to reach any level of AI that could possibly take over humanity if we stick to using it to screen out anyone with an idea contrary to the new religion.
We have other issues coming as well. One is the believability of any given piece of evidence. With deepfakes and speech synthesis, we now have ways to create entire speeches for a presidential candidate covering things they never said. We can have people admit to crimes they never committed without ever talking to them directly. We could cause wars just by fabricating a video of one country's leader declaring one. And of course, like any other problem in AI, the problem isn't the artificial intelligence itself. It's who's using it and for what. You can use a knife to stab someone or you can use it to feed people. It depends on the person using it.

… like any other problem in AI, the problem isn't the AI itself. It's who's using it and for what. You can use a knife to stab someone or you can use it to feed people. It depends on the person using it.

David Binnion, AI Professional

Artificial intelligence is only good or bad based on the humans who wield the power of this tool.

How can these problems be mitigated?

David: Not every problem will be avoided. Psychopaths exist. Much as you can't make a government that won't have corruption, you can't make a technology that people can't use for harmful ends. The same language-parsing system YouTube uses to try to control conversation on its "platform" is the same system that could be used to translate videos into other languages and allow a platform to reach anyone in the world.
Now, can some AI problems be mitigated? Yes; that's the process of development. But we have to be willing to accept the solutions rather than try to bend them to our own ideologies. Can I make a system that removes my own biases when selecting candidates for a job or determining prison sentences? Yes. All I have to do is normalize out any factors that aren't helpful, such as gender, age, and race.
Unfortunately, doing so removes factors that may actually contribute to the outcome. For example, women have a higher rate of non-reciprocal domestic violence than men. That's something we tend not to believe due to our own bias, but it is supported by the fact that lesbian relationships have higher rates of domestic violence than any other relationship type. Removing sex as a factor when trying to estimate domestic violence probability would skew the data.
If I don't agree with the outcome because it doesn't align with my uninformed opinion or my ideology, then I will want to change the model until it aligns with what I think it should be. Most believe men are far more likely to be violent in a relationship. Taking in the data and then seeing it predict women as more likely will make us want to change the model. Problems with artificial intelligence come from the people involved. As long as the model is accurate, that's all that should matter, but we're people. We're fallible, and we often believe we're the only ones who aren't.
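The "normalize out" step David mentions, removing protected attributes so a model never sees them as features, can be sketched simply. This is an illustrative sketch only; the field names and the `strip_protected` helper are hypothetical, and real debiasing also has to account for proxy variables that correlate with the removed ones.

```python
# Hypothetical set of attributes to withhold from the model.
PROTECTED = {"gender", "age", "race"}

def strip_protected(record: dict) -> dict:
    """Return a copy of the record with protected attributes removed,
    so a downstream model never trains on them directly."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

candidate = {"years_experience": 7, "gender": "F", "age": 41, "skills_score": 88}
features = strip_protected(candidate)
```

As the interview notes, the trade-off is real: dropping a column also drops whatever genuine predictive signal it carried.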

Are there other issues Artificial Intelligence is likely to solve?

David: As I said, artificial intelligence can solve many issues as long as we're willing to accept the solutions. Currently, social media puts people in bubbles; we can make algorithms that introduce some variability into those bubbles. We have many illnesses that don't have cures yet; we can use artificial intelligence to explore possibilities based on similar illnesses that already have treatments. We have biases within the court system and hiring system; we can have artificial intelligence offer solutions in those areas. The only real question is whether people will try to use artificial intelligence to steer things toward what they think the world should be rather than what it is.
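The idea of introducing variability into a recommendation bubble has a well-known minimal form: epsilon-greedy exploration, where a system occasionally surfaces content outside a user's usual interests. The sketch below is illustrative; the function and parameter names are hypothetical, not any platform's API.

```python
import random

def recommend(in_bubble: list, out_of_bubble: list, epsilon: float = 0.2) -> str:
    """Mostly recommend familiar items, but with probability `epsilon`
    pick from outside the user's usual bubble to add variability."""
    pool = out_of_bubble if random.random() < epsilon else in_bubble
    return random.choice(pool)

# epsilon=0.0 never leaves the bubble; epsilon=1.0 always does.
```

Tuning `epsilon` is exactly the kind of value judgment David is pointing at: the algorithm can widen the bubble, but people decide how far.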

David, thank you for sharing your insights with Big Thinks.  You can follow or connect with David Binnion on LinkedIn.