
Challenges of Chinese Natural Language Processing – Segmentation

Ken Hu

As the Chinese consumer market takes center stage in the world economy, the rush to adapt business tools for the Chinese market is equally frenzied. Fortunately, despite what my friend Ben might say, most of the adaptations are confined to the interface layer. That means the majority of the challenges are limited to character encoding, font style, and static text translation.
More Read

6 Questions to Ask for Real Insight From Big Data
6 Predictions About the Future of Predictive Analytics
Harvard Gets Access to Twitter Data Stream to Predict Foodborne Illness Outbreaks
5 Important Ways Artificial Intelligence Improves Sales
What’s in Store for Big Data Analytics in 2016

As the Chinese consumer market takes the center stage in the world economy, the rush to adapt business tools for the Chinese market is equally as frenzy. Fortunately, despite what my friend Ben might say, most of the adaptions are confined to the interface layer. That means, the majority of the challenges are limited to character encoding, font style, and static text translation.

However, the same cannot be said for analysis tools, which are very sensitive to the source and quality of their data. Many developers are aware of this and have applied the same adaptation strategy to their data: automated translation. To them, here is my response (in short: Don't). It is a no-brainer that Chinese and English are two very different languages. Therefore, a system designed under the English paradigm will find itself ill-suited for Chinese text.

Segmentation

Consider these two pieces of text: “Edinburgh is a beautiful city” and “愛丁堡是個很漂亮的城市”. We can quickly notice that the Chinese text is not separated by spaces the way the English text is. So how do you extract terms for your analytics? (Term is used here as a unit of text that has a meaning or definition.) The naive approach is to treat each character as an individual term, because in English most terms are uni-grams (single words). This, however, does not apply to Chinese: most Chinese terms are bi-grams (two-character phrases). Here is an interesting paper on n-gram statistics by Google researchers, who made this discovery:

The trend of the total number of unique N-grams as a function of N is similar in English and Chinese, but the Chinese version is shifted to the right. The curves indicate that, on average, 1.5 Chinese characters correspond to 1 English word.
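
To make the contrast concrete, here is a minimal sketch in plain Python (my own illustration, not code from the original post or its tooling) comparing whitespace tokenization of the English sentence with naive character uni-grams and bi-grams for the Chinese one:

```python
# Contrast whitespace tokenization (works for English) with naive
# character n-gram extraction for Chinese. The example strings are the
# ones from the text above; the helper name is illustrative.

def ngrams(chars, n):
    """Return all overlapping character n-grams of a string."""
    return [chars[i:i + n] for i in range(len(chars) - n + 1)]

english = "Edinburgh is a beautiful city"
chinese = "愛丁堡是個很漂亮的城市"

# English: spaces already mark term boundaries.
print(english.split())
# ['Edinburgh', 'is', 'a', 'beautiful', 'city']

# Chinese: no spaces, so the naive options are character uni-grams or bi-grams.
print(ngrams(chinese, 1))  # one term per character; loses multi-character words
print(ngrams(chinese, 2))  # overlapping bi-grams; closer, since most terms are 2 characters
# ['愛丁', '丁堡', '堡是', '是個', '個很', '很漂', '漂亮', '亮的', '的城', '城市']
```

Note that neither naive option cleanly recovers 愛丁堡 (Edinburgh), a three-character term, which is exactly the kind of case proper segmentation has to handle.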

The problem of properly selecting the correct n-grams to use as terms is called Segmentation. It is an important problem to solve for character-based languages such as Chinese and Korean. Yahoo! Taiwan offers a nice API for developers to use here. (Sorry guys, I can't find English documentation for it.) Besides segmentation, it also applies part-of-speech tagging to the terms. It is not perfect, however; I have observed many should-be 4-grams being segmented as 3-grams. Furthermore, it can be inferred that the API is driven by syntax and a dictionary. This means that slang and emoticons will not be covered unless they are manually added to the dictionary on the server. The API is also rate limited, making it great for research but poor for commercial use.
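
For readers who want to experiment without the rate-limited Yahoo! API, here is a rough sketch using the open-source jieba library instead (my substitution, not the API discussed above). It illustrates the same ideas: dictionary-driven segmentation, part-of-speech tagging, and manually extending the dictionary for terms it misses.

```python
# Dictionary-driven Chinese segmentation with part-of-speech tagging,
# using the open-source jieba library rather than the Yahoo! Taiwan API
# described above. Assumes `pip install jieba`.
import jieba
import jieba.posseg as pseg

text = "愛丁堡是個很漂亮的城市"

# Segment the text and tag each resulting term with a part of speech.
for term, pos in pseg.cut(text):
    print(term, pos)

# Terms missing from the built-in dictionary (slang, new named entities, etc.)
# have to be added by hand, mirroring the manual-dictionary limitation above.
jieba.add_word("愛丁堡")  # keep "Edinburgh" as a single term
print(list(jieba.cut(text)))
```

Because jieba, like the Yahoo! API, relies on a dictionary and statistics, out-of-vocabulary slang and emoticons still need the same manual treatment.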

 
