Challenges of Chinese Natural Language Processing – Segmentation


As the Chinese consumer market takes center stage in the world economy, the rush to adapt business tools for that market is equally frenzied. Fortunately, despite what my friend Ben might say, most of the adaptations are confined to the interface layer. That means the majority of the challenges are limited to character encoding, font style, and static text translation.

However, for analysis tools, which are very sensitive to the data source and its quality, the same cannot be said. Many developers are aware of this and have applied the same adaptation strategy to their data: automated translation. To them, here is my response (in short: don't). It is a no-brainer that Chinese and English are two very different languages. Therefore, a system designed under the English paradigm will find itself ill-suited for Chinese text.

Segmentation

Consider these two pieces of text: “Edinburgh is a beautiful city” and “愛丁堡是個很漂亮的城市”. Very quickly we notice that the Chinese text is not separated by spaces the way the English is. So how do you extract terms for your analytics? (A term is used here to mean a unit of text that has a meaning/definition.) The naive approach is to treat each character as an individual term, because in English we observe that most terms are unigrams (single words). This, however, does not apply to Chinese: most Chinese terms are bigrams (two-character phrases). Here is an interesting paper on n-gram statistics by Google researchers, who made this discovery:

The trend of the total number of unique N-grams as a function of N is similar in English and Chinese, but the Chinese version is shifted to the right. The curves indicate that, on average, 1.5 Chinese characters correspond to 1 English word.
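To make the difference concrete, here is a minimal Python sketch (my own illustration, not tied to any particular library) that pulls character unigrams and overlapping bigrams out of the example sentence. The bigrams include genuine terms such as 城市 ("city"), but also meaningless pairs that straddle term boundaries, which is exactly what segmentation has to sort out.

```python
# A minimal sketch contrasting naive character unigrams with overlapping
# bigrams. Neither is real segmentation; it only shows why most Chinese
# terms span more than one character.

def char_ngrams(text, n):
    # Return all overlapping character n-grams of the input string.
    return [text[i:i + n] for i in range(len(text) - n + 1)]

sentence = "愛丁堡是個很漂亮的城市"

print(char_ngrams(sentence, 1))
# ['愛', '丁', '堡', ...] -- single characters, many of which mean little on their own

print(char_ngrams(sentence, 2))
# ['愛丁', '丁堡', '堡是', ...] -- contains real terms like '城市' (city)
# alongside nonsense pairs that cross term boundaries
```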

The problem of properly selecting the correct n-grams to use as terms is called Segmentation. It is an important problem to solve for character-based languages such as Chinese and Japanese. Yahoo! Taiwan offers a nice API for developers to use here. (Sorry guys, I can't find English documentation for it.) Besides segmentation, it also applies part-of-speech tagging to the terms. It is not perfect, however, as I have observed many should-be 4-grams being segmented as 3-grams. Furthermore, it can be inferred that the API is driven by syntax and a dictionary. This means that slang and emoticons will not be covered unless they are manually added to the dictionary on the server. The API is also rate-limited, making it great for research but poor for commercial use.
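To give a sense of what "driven by syntax and a dictionary" implies, below is a hedged Python sketch of forward maximum matching, a classic dictionary-based segmentation strategy. The tiny dictionary and the slang example are my own inventions for illustration, not the Yahoo! API's actual lexicon or algorithm; the point is just to show why anything missing from the dictionary degrades to single characters.

```python
# Forward maximum matching: at each position, greedily take the longest
# string that appears in the dictionary, falling back to a single
# character when nothing matches.

DICTIONARY = {"愛丁堡", "是", "個", "很", "漂亮", "的", "城市"}  # toy lexicon for illustration
MAX_TERM_LEN = max(len(w) for w in DICTIONARY)

def forward_max_match(text, dictionary=DICTIONARY):
    terms = []
    i = 0
    while i < len(text):
        # Try the longest candidate first, shrinking until we hit the
        # dictionary or are left with a single character.
        for length in range(min(MAX_TERM_LEN, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if candidate in dictionary or length == 1:
                terms.append(candidate)
                i += length
                break
    return terms

print(forward_max_match("愛丁堡是個很漂亮的城市"))
# ['愛丁堡', '是', '個', '很', '漂亮', '的', '城市']

print(forward_max_match("愛丁堡超好玩"))
# '超好玩' (slang for "super fun") is not in the toy dictionary, so it
# degrades to single characters: ['愛丁堡', '超', '好', '玩']
```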

 
