Challenges of Chinese Natural Language Processing – Segmentation

Ken Hu
4 Min Read
As the Chinese consumer market takes center stage in the world economy, the rush to adapt business tools for the Chinese market is equally frenzied. Fortunately, despite what my friend Ben might say, most of the adaptations are confined to the interface layer. That means the majority of the challenges are limited to character encoding, font style, and static text translation.

However, the same cannot be said for analysis tools, which are very sensitive to data source and quality. Many developers are aware of this and have applied the same adaptation strategy to their data: automated translation. To them, here is my response (in short: don't). It is a no-brainer that Chinese and English are two very different languages. Therefore, a system designed under the English paradigm will find itself ill-suited for Chinese text.

Segmentation

Consider these two pieces of text: “Edinburgh is a beautiful city” and “愛丁堡是個很漂亮的城市”. We quickly notice that the Chinese text is not separated by spaces the way the English text is. So how do you extract terms for your analytics? (Term is used here as a unit of text that carries a meaning or definition.) The naive approach is to treat each character as an individual term, because in English most terms are unigrams (single words). This, however, does not apply to Chinese: most Chinese terms are bigrams (two-character phrases), as the short sketch after the quote below illustrates. Google researchers published an interesting paper on Chinese n-gram statistics, in which they made the following discovery:

The trend of the total number of unique N-grams as a function of N is similar in English and Chinese, but the Chinese version is shifted to the right. The curves indicate that, on average, 1.5 Chinese characters correspond to 1 English word.
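To make the contrast concrete, here is a minimal Python sketch (my own illustration, not code from the paper) that extracts character unigrams and bigrams from the example sentence above; char_ngrams is just a throwaway helper.

```python
def char_ngrams(text, n):
    """Return all overlapping character n-grams of the text."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

sentence = "愛丁堡是個很漂亮的城市"  # "Edinburgh is a beautiful city"

# Naive approach: one character per term.
print(char_ngrams(sentence, 1))
# ['愛', '丁', '堡', '是', '個', '很', '漂', '亮', '的', '城', '市']

# Two-character windows, closer to how most Chinese terms actually look.
print(char_ngrams(sentence, 2))
# ['愛丁', '丁堡', '堡是', '是個', '個很', '很漂', '漂亮', '亮的', '的城', '城市']
```

The unigrams split real terms such as 城市 (“city”) in half, while the overlapping bigrams mix genuine terms (漂亮, 城市) with junk pairs (堡是, 的城), and the place name 愛丁堡 (“Edinburgh”) is really a 3-gram. Picking the right pieces out of this soup is exactly the problem described next.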

The problem of properly selecting the correct n-grams to use as terms is called Segmentation. It is an important problem to solve for character-based languages such as Chinese and Korean. Yahoo! Taiwan offers a nice API for developers to use here. (Sorry guys, I can't find English documentation for it.) Besides segmentation, it also applies part-of-speech tagging to the terms. It is not perfect, however: I have observed many should-be 4-grams being segmented as 3-grams. Furthermore, it can be inferred that the API is driven by syntax and a dictionary, which means that slang and emoticons will not be covered unless they are manually added to the dictionary on the server. The API is also rate limited, making it great for research but poor for commercial use.
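As a point of comparison, here is a hedged sketch using jieba, a popular open-source Chinese segmenter (my substitution for illustration, not the Yahoo! Taiwan API discussed above). It is likewise dictionary- and statistics-driven, so it shares the blind spot for slang, but it lets you register new terms locally instead of waiting for a server-side dictionary update. Its default dictionary targets Simplified Chinese, so the exact output on this Traditional-Chinese sentence may vary.

```python
# pip install jieba
import jieba
import jieba.posseg as pseg  # segmentation plus part-of-speech tagging

sentence = "愛丁堡是個很漂亮的城市"

# Plain segmentation: terms are chosen from jieba's built-in dictionary and statistics.
print(jieba.lcut(sentence))

# Segmentation with part-of-speech tags, similar in spirit to what the Yahoo! API adds.
for word, flag in pseg.cut(sentence):
    print(word, flag)

# Terms missing from the dictionary (slang, product names, or here the place name
# 愛丁堡, which may or may not be covered by default) can be added manually so they
# are kept whole instead of being split into smaller pieces.
jieba.add_word("愛丁堡")
print(jieba.lcut(sentence))
```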

 
