Key Words Through Graph Entropy Hierarchical Clustering


In the last post I showed how to extract key words from a text through a principle called graph entropy.
Today I’m going to show another application of graph entropy: extracting clusters of key words.

Why
The key words of a document depict its main topic, but if the document is large there are often many sub-topics related to the main one.

From this perspective, clusters of key words should make it easier for the reader to identify the key points of a document.


Moreover, imagine implementing a search engine based on clusters of relevant words instead of the usual indexing of atomic words: it would enable document comparison, taxonomy definition, and much more!

How
The definition of graph entropy I’m studying assigns to each word of the document a relevance score and a sub-graph of words topologically close to it.

The clustering should maximize the relevance score obtained by merging two words into the same cluster.

It’s easy to see that we are facing a combinatorial maximization problem.

The idea is to take advantage of simulated annealing (slightly revisited and adapted to this purpose) to identify a sub-optimal merging solution at each step of the merging phase of the hierarchical clustering.
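
To make the merging step concrete, here is a minimal Python sketch of how a single annealing-driven merge search could look. It is not the exact procedure used in the experiment: the merge_score function is a hypothetical stand-in for the graph-entropy relevance gained by merging two clusters, and the schedule parameters are arbitrary defaults.

import math
import random

def anneal_merge_step(clusters, merge_score, t0=1.0, cooling=0.95, iters=500):
    """Search for a good pair of clusters to merge in one hierarchical step.

    `clusters` is a list of sets of words; `merge_score(a, b)` is a
    hypothetical function returning the relevance score obtained by
    merging clusters `a` and `b` (assumed to be derived from the
    graph-entropy scores described above).
    """
    # Start from a random candidate pair of distinct clusters.
    current = tuple(random.sample(range(len(clusters)), 2))
    current_score = merge_score(clusters[current[0]], clusters[current[1]])
    best, best_score = current, current_score
    t = t0
    for _ in range(iters):
        # Propose a neighbouring pair by replacing one of the two indices.
        candidate = list(current)
        candidate[random.randint(0, 1)] = random.randrange(len(clusters))
        if candidate[0] == candidate[1]:
            continue
        score = merge_score(clusters[candidate[0]], clusters[candidate[1]])
        # Always accept improvements; accept worse pairs with a
        # temperature-dependent (Boltzmann) probability.
        if score > current_score or random.random() < math.exp((score - current_score) / t):
            current, current_score = tuple(candidate), score
            if score > best_score:
                best, best_score = current, score
        t *= cooling  # simple geometric cooling
    return best, best_score

Repeating this step, merging the returned pair each time and stopping when the desired number of clusters is left, gives the kind of sub-optimal hierarchical clustering described above.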

Experiment
I decided to adopt as the test document the complete version of the file we used in the last post: Nuclear_weapon.
Here are the clusters of the first 100 most relevant words extracted:

The three clusters obtained.

It’s interesting to highlight the following considerations:

  • The first cluster merged together words such as “material, uranium, plutonium, isotope”, “war, attack, arm”, and also “proliferation, movement, control, development”.
  • The second cluster (which has the lowest rank) aggregates words such as “japan, japanese, place, israel, iraq, american” and “ton, tnt, yield”.
  • The third cluster (which has the highest rank) describes the primary topic quite well, merging all the most important words of the document!

Of course, the procedure is still in an “incubator” phase, and the accuracy of the clusters rests on the performance of the annealing clustering (maybe different algorithms would perform better in this context, but to show a rough solution I guess it’s enough :D).

This is the optimization process for the last merging stage (I presume the temperature schedule requires some adjustment; a couple of alternative schedules are sketched after the figure):

Optimization curve through Simulated Annealing Hierarchical Clustering (last merging stage)
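
If the curve suggests the cooling is too aggressive, a natural tweak is to try a slower schedule. The two schedules below are standard textbook choices, sketched here in Python just as an illustration; they are not the schedule actually used in the experiment.

def geometric_schedule(t0, alpha, k):
    # Exponential decay: T_k = t0 * alpha**k (fast, can freeze too early).
    return t0 * alpha ** k

def lundy_mees_schedule(t0, beta, k):
    # Lundy-Mees style decay: T_k = t0 / (1 + beta * k * t0) (slower tail).
    return t0 / (1.0 + beta * k * t0)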


Next steps:
Looking forward to receiving comments and suggestions.
…It would be interesting to use such a methodology to create a new kind of full-text search engine, totally independent of word frequency and visit frequency.

The doc
Here is the document, parsed and colored according to the clustering assignment (only the first 100 relevant features, ranked through the graph entropy method, have been highlighted).
Stay tuned
cristian.

