How Public Opinion Shapes the Future of AI

Darren Beauchamp
12 Min Read
Image: Shutterstock licensed photo by Syda Productions

Anxiety about AI is very visible today. Stephen Hawking and Elon Musk have expressed doubts that humanity will be able to control AI, while Mark Zuckerberg has brushed off these dire concerns as “irresponsible”. The humanoid robot Sophia made headlines and raised a few eyebrows when the Kingdom of Saudi Arabia granted her citizenship. The debate attracts massive public attention and has both immediate and long-term consequences.

Contents
  • Uncontrolled Development
  • Fatal Error
  • Uncanny Valley
  • Institutionalized Bias
  • Abuse by Humans
  • Takeaways

Public opinion may well put the brakes on AI adoption. As long as people believe the technology is dangerous, whether it actually is or not, the development of AI will be hamstrung.

However, public sentiment is influenced more by popular culture than by experts. Bleak doomsday scenarios are particularly impactful. When Elon Musk called AI “a fundamental existential risk for human civilization”, it resonated with fears and doubts people already had.

What do they fear exactly? Let’s take a look at the most popular patterns.

Uncontrolled Development

AI will transcend being a mere tool of humanity and become powerful enough to be master of its own destiny. According to this myth, once AI evolves to think independently, it will deem itself superior to humans, who must then be subdued or exterminated. This is the classic Western dystopia. Many experts (including Elon Musk, Bill Gates, and Stephen Hawking) share this dark view of the singularity problem.

The creation destroying its creator is a very old theme. Its first prominent example in Western culture is Frankenstein’s monster; later interpretations (HAL 9000, the Terminator, and the Matrix) continue the tradition. The topic remains controversial, however. Even those in the AI-wary camp agree that technology of such power isn’t arriving soon.

However, Elon Musk has a point on at least one thing: experts have to be proactive. Staying on top of things in the industry is not enough; we must look ahead. Whether AI will actually pose a threat or not, we must think through all possible scenarios to reassure the public.

While taking preventive steps, we must be careful not to overshadow the positive cases of AI implementation. One high-profile slip-up could bring ill-conceived legislation down on the entire industry, arresting AI development for years to come.

Fatal Error

Another popular scenario suggests that it’s dangerous to put too much trust in technology. If AI lacks emotional intelligence or data, it may interpret our requests erroneously. Alternatively, due to a bug or a virus, AI can become erratic and potentially dangerous. As O’Reilly Media founder Tim O’Reilly puts it: “Our algorithmic systems are a little bit like the genies in Arabian mythology. We ask them to do something but if we don’t express the wish quite right, they misinterpret it and give us unexpected and often alarming results.”

People aren’t ready to handle unpredictable AI. One hundred percent safety is a necessary condition for consumers to let virtual assistants and robots into their daily lives. The safety of AI comes under scrutiny, for example, whenever a self-driving car is involved in a road accident.

So far there has been only one case in which an AI’s mistake caused a death, yet it was enough to stir controversy. People fail to weigh this isolated incident against the huge number of times AI has resolved dangerous situations. Each year, over 421,000 accidents involving distracted drivers lead to severe injuries, and over 78% of those drivers are texting. That is something an AI would never do.

Moreover, in the aforementioned tragic accident involving Tesla’s Autopilot, the system failed to recognize a white truck against the bright sky, but so did the human driver. He had at least 7 seconds to hit the brakes; apparently, he was not paying attention. The driver believed the vehicle was 100% self-driving when it was not. A human error (wishful thinking) caused that accident.

The court’s ruling to clear Tesla’s self-driving system was fortunate for the future of the industry, but it was a close shave. Experts must address the issue of overly optimistic naming. If an AI’s capability is limited, it must not be marketed as something that “drives a car for you” or “understands your every wish”. If the name of a product or one of its features implies autonomy (“AI”, “Smart”, “Self”, “Intelligent”), there must be a guide that explains its features and capabilities. Make clear which steps your users must take themselves.

Uncanny Valley

The “uncanny valley” represents the metaphysical and existential fears humanity has about AI. Artificial intelligence in humanoid form poses a threat to our distinctiveness and identity and challenges human notions of “specialness”. People fear that these doppelgangers will replace them in their jobs, in relationships, and so on. Sci-fi writers actively explored this notion in the ’50s and ’60s.

Since most of today’s AIs are disembodied components of virtual assistants and navigation systems, this fear is wearing off. However, the power of technology to isolate people and decrease the number of available jobs is still worth considering. Almost a third of people fear that AI will replace them in their workplace.

Another very important thing to consider in the context of the “uncanny valley” is the emotional intelligence of AI. The androids from Do Androids Dream of Electric Sheep? are eerie because they are incapable of empathy. They do feel emotions, but only for self-preservation. They have no respect for the pain of living things, and compassion is alien to them.

The latter concern is not merely existential. Many experts believe that emotional intelligence (EI) is vital if we want to create an AI that will truly improve quality of life. It is hard to predict the future, but equipping machines with emotional intelligence is certainly a step in the right direction.

Institutionalized Bias

More recently, people have begun to fear that AI will formalize and reproduce the racism, sexism, and other biases of its creators. This is the most realistic fear so far. Although machines are not programmed to be prejudiced, they are programmed to learn from people and act accordingly.

The ill-fated experiment with Microsoft’s chatbot Tay provided an illustrative example of how machine-learning mechanisms can backfire. There are many less egregious but still disturbing cases involving search results and face recognition. In 2015, Google Photos tagged two African-American friends as gorillas. In 2009, Hewlett-Packard’s video-tracking software failed to detect dark-skinned faces, and that same year Nikon’s camera software mislabeled East Asian faces as blinking.
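
The common thread in these failures is that bias hides inside aggregate accuracy numbers. As a minimal sketch of how a team might audit for it, the Python snippet below compares a model’s error rate per demographic group rather than overall; the file name, column names, and the 10-point gap threshold are illustrative assumptions, not details from any of the systems above.

    # Minimal per-group error audit for a classifier's test predictions.
    # "test_predictions.csv" and its columns (group, label, prediction)
    # are assumed names for this sketch, not a real dataset.
    import csv
    from collections import defaultdict

    def error_rates_by_group(rows):
        errors, totals = defaultdict(int), defaultdict(int)
        for row in rows:
            totals[row["group"]] += 1
            if row["prediction"] != row["label"]:
                errors[row["group"]] += 1
        return {g: errors[g] / totals[g] for g in totals}

    with open("test_predictions.csv", newline="") as f:
        rates = error_rates_by_group(csv.DictReader(f))

    for group, rate in sorted(rates.items()):
        print(f"{group}: {rate:.1%} errors")

    # The disparity matters more than the average: a model that is 95%
    # accurate overall can still fail one group far more often than another.
    if max(rates.values()) - min(rates.values()) > 0.10:  # assumed threshold
        print("Warning: error-rate gap across groups exceeds 10 points.")

A check like this, run on a demographically balanced test set before release, is the kind of gate that could have surfaced the failures above in the lab rather than in the press.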

Neil Davidson, the managing director of HeyHuman, believes brands should step up and save the day. Companies cannot afford to let their customers down, so brands must do everything in their power to create AI that is unbiased and harmless. “As the field advances, we need to learn from previous failings and ensure machines function with a degree of consciousness and intuition,” says Davidson.

Abuse by Humans

Lastly, people are afraid that other humans will use AI to deceive them and trick them into trust, love, and so on. Even AI-assisted marketing is seen as manipulation. Nine in ten Americans feel that the use of AI in marketing should be regulated by a legally binding code of conduct, and three-quarters think brands should obtain their explicit consent before using AI to market to them. This is the latest trend, partly driven by recent scandals over social media targeting.

Anxieties have shifted from machines to people, which is definitely a step in the right direction. This worldview mostly represents the Eastern dystopian tradition expressed in Japanese popular culture. Ghost in the Shell (the manga and TV series) depicts a future where crimes are still committed by people. Criminals have very human motives and drives, whereas robots are merely tools, albeit sophisticated ones.

We can argue about whether this sober attitude and Japan’s success in robotics are coincidental. One thing is certain: this view is slowly becoming mainstream in Western culture as well. People care less about AI and more about its human creators.

The film Ex Machina and the Doctor Who episode “Oxygen” represent this shift in popular culture. Both show how human greed and selfishness can be more destructive than any advanced AI.

In fact, people can trust AI even more than they trust other humans. Dr. Michael Paasche-Orlow of Boston Medical Center shares his observations of terminally ill patients. According to him, they embraced the system that was designed to guide them through the end of life and sometimes preferred it to human caregivers. “It turns out that patients were very happy to talk with a computer about it,” he said. “They were very explicit in telling us, ‘The doctor never asked me about these things.'”

Takeaways

  • Engineers have a responsibility to make sure AI is not designed in a way that reflects back the worst of humanity. For online services, anti-abuse measures and filtering should be in place before you invite everyone to join, and you cannot skip teaching a bot what not to say (a minimal filtering sketch follows this list).
  • To train your AI, feed it the most diverse data possible. For voice recognition, use a variety of accents; for face recognition, use images of people of different ages and ethnicities. Test all features in various conditions before releasing the end product to the public. Letting your algorithms “learn on the go” can be too costly a mistake.
  • Overly optimistic naming for AI products causes misunderstanding and harms the industry as a whole. Be very clear about what your AI can and cannot do.
  • AI needs emotional intelligence to gain acceptance. EI is in high demand in both the human and the robotic workplace.
  • To change public perception, we must make positive cases of AI implementation prominent. For decades, the public has been conditioned not to trust AI, and these deeply rooted fears cannot be dismissed easily.
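
To make the first takeaway concrete, here is a minimal Python sketch of an output gate that checks a bot’s reply against a deny-list before it ever reaches users. The pattern entries and the example strings are placeholders invented for illustration; a real deployment would pair a curated, regularly reviewed list with trained toxicity classifiers.

    # Minimal "what not to say" gate for a chatbot's outgoing messages.
    # BLOCKED_PATTERNS is a placeholder; populate it from a curated,
    # regularly reviewed list of abusive or dangerous phrases.
    import re

    BLOCKED_PATTERNS = [
        re.compile(p, re.IGNORECASE)
        for p in (r"\bplaceholder_slur\b", r"\bplaceholder_threat\b")
    ]

    FALLBACK = "Sorry, I can't respond to that."

    def safe_reply(candidate: str) -> str:
        """Pass a generated reply through the deny-list before sending it."""
        if any(p.search(candidate) for p in BLOCKED_PATTERNS):
            return FALLBACK
        return candidate

    print(safe_reply("Hello there!"))                # sent as-is
    print(safe_reply("some placeholder_slur here"))  # replaced by FALLBACK

Even a crude gate like this runs before launch, which is the point: Tay’s lesson was that filtering added after the screenshots circulate is too late.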
Tagged: artificial intelligence, Future of AI
By Darren Beauchamp
Darren Beauchamp is a freelance graphic designer from Minnesota. He specializes in big data visual representation for many IT projects, including MacFly Pro. Darren is currently working on his own visual recognition AI.
