Predictive Analytics

Yes, Computers Can Stereotype Now

Ari Amster
6 Min Read

The very thought that computers may be subject to some of the same biases and prejudices as humans can be frightening, but it is an unsettling reality we as a society need to confront before the problem proliferates. We have entered an era where big data, machine learning, and adaptive algorithms are becoming commonplace among businesses, learning institutions, healthcare providers, and more. In many cases, these learning algorithms and the data behind them are meant to overcome the shortcomings of human judgment, eliminating the biases that creep into our minds without our even realizing it. Sadly, many of the same prejudices can be copied over into how computers solve problems. Even worse, those biases may actually be magnified, and the best of intentions won't stop them from spreading.

On the surface, using big data and machine learning would seem like an excellent way to get rid of bias. After all, if a computer is only dealing with hard data, how could bias even factor into the equation? Unfortunately, the use of big data doesn't always lead to impartial outcomes. In machine learning, algorithms look for ways to solve specific problems or come up with innovative solutions. But while the process is a learning one, the algorithms are still written by human programmers, and unless they are specifically monitored and controlled for bias, they may learn the wrong lessons, against the very intentions of the people who built them.

Let's take a look at some examples of how this can happen. One area where people want to use learning algorithms is what is often called predictive policing. The idea is to use machine learning to figure out where to best allocate police resources in an effort to reduce crime. In this way, police would deploy officers and other resources according to what the data indicates, not based on racial profiling. While the goal is admirable, the algorithms are only as good as the data behind them, and if that data shows police already targeting a certain race and getting arrests out of it, the algorithms will tell police to keep focusing on the neighborhoods where that race is prevalent. Think of it as a form of confirmation bias, only made worse, because police would now believe they're focusing their attention on the right spots simply because a computer told them to; the toy simulation below shows the loop in action.
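To make that feedback loop concrete, here is a minimal, entirely hypothetical Python simulation. The neighborhoods, crime rates, and arrest counts are all invented for illustration; the only point is that when patrols are allocated based on past arrests, an initial skew in the data sustains itself even though the two neighborhoods have identical underlying crime rates.

```python
import random

random.seed(42)

# Two hypothetical neighborhoods with identical true crime rates.
TRUE_CRIME_RATE = {"A": 0.05, "B": 0.05}

# Historical over-policing of A seeds the arrest record with a skew.
arrests = {"A": 60, "B": 40}

TOTAL_PATROLS = 100

for year in range(10):
    total_arrests = sum(arrests.values())
    # "Predictive" allocation: patrols proportional to past arrests.
    patrols = {n: round(TOTAL_PATROLS * arrests[n] / total_arrests)
               for n in arrests}
    # Officers only observe crime where they patrol, so new arrests
    # track patrol presence rather than the true crime rate.
    for n in arrests:
        arrests[n] += sum(random.random() < TRUE_CRIME_RATE[n]
                          for _ in range(patrols[n]))
    share_a = 100 * patrols["A"] / TOTAL_PATROLS
    print(f"year {year}: {share_a:.0f}% of patrols sent to neighborhood A")
```

Because arrests can only happen where officers are sent, the data keeps "confirming" the original allocation, and nothing inside the loop ever corrects the initial imbalance.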

Another example comes from a British university that adopted a computer model to help with the admissions process. The program based its selection criteria on historical patterns indicating which candidates had been accepted and which had been rejected, allowing the university to filter out candidates deemed to have little chance of acceptance during the first round of admissions. Again, the goal was an admirable one, yet the results ended up being troubling. Trained on the historical data fed into it, the algorithm showed significant bias against female candidates and those with non-European-sounding names. While the problem was discovered and addressed, the fact that a machine learning algorithm came to that conclusion should raise more than a few eyebrows.
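The mechanism is easy to reproduce. The sketch below is a deliberately simplified, synthetic stand-in for that kind of admissions model: the "historical" decisions carry an unfair scoring penalty against one group, and a model fit purely to imitate those decisions learns the penalty right along with everything else. All numbers and the scoring rule are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical historical admissions data: each record is
# (test_score, is_female) -> accepted. The synthetic "committee"
# historically applied an unfair 15-point penalty to female candidates.
def historical_decision(score, is_female):
    return score - (15 if is_female else 0) > 60

data = [(random.uniform(0, 100), random.random() < 0.5)
        for _ in range(5000)]
labels = [historical_decision(score, f) for score, f in data]

# "Train" a naive model that simply imitates the historical pattern:
# recover the acceptance threshold observed for each group.
def fitted_threshold(group_female):
    accepted = [score for (score, f), y in zip(data, labels)
                if f == group_female and y]
    return min(accepted)

print("learned threshold, male:  ", round(fitted_threshold(False), 1))
print("learned threshold, female:", round(fitted_threshold(True), 1))
# The model faithfully reproduces the discriminatory gap: female
# candidates need roughly 15 more points to clear the first-round filter.
```

No one tells such a model that gender matters; it infers that from who was accepted in the past, which is exactly the trap the university fell into.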

There are many other examples that could be cited, but the news isn't all doom and gloom. Many experts and groups are raising awareness of the problem before it becomes more widespread. And since the bias is accidental rather than deliberate, there's hope that steps can be taken to keep more of it from creeping into machine learning. First, it's possible to test algorithms for bias, and doing so is relatively easy (see the sketch below). Second, many are pushing for engineers and programmers to become more involved in these policy debates, since they are the ones who best understand the problem and how to solve it. Third, there's a growing call for "algorithmic transparency," which opens the underlying algorithmic mechanisms up to review and scrutiny, ensuring that bias is kept out as much as possible.
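As a taste of what such a bias test can look like, here is a minimal sketch of a disparate-impact audit. It computes each group's selection rate and compares it to a reference group, in the spirit of the "four-fifths rule" used in US employment law; the decisions and group labels are toy data invented for illustration.

```python
def selection_rates(decisions, groups):
    """Selection rate (fraction approved) per group."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def disparate_impact(decisions, groups, reference):
    """Ratio of each group's selection rate to the reference group's.
    Values well below 1.0 (e.g. under the common 0.8 rule of thumb)
    flag the model for closer review."""
    rates = selection_rates(decisions, groups)
    return {g: r / rates[reference] for g, r in rates.items()}

# Toy audit: model decisions (1 = accept) alongside a protected attribute.
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]
print(disparate_impact(decisions, groups, reference="m"))
# {'m': 1.0, 'f': 0.33...} -- well under 0.8, a red flag worth investigating
```

A check like this doesn't prove a model is fair, but it is cheap to run and catches the most glaring disparities before a system is deployed.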

Perhaps it shouldn't be surprising that computers can stereotype unwittingly. The major push is to develop artificial intelligence, which is meant to mimic the human mind in the first place. That bias and prejudice come along for the ride should be expected, and controlled for. As more organizations adopt big data, related tools like Apache Spark, and machine learning techniques, they'll need to take care that the results they uncover reflect reality rather than unintentional bias.


Tagged: smart data