Analyzing Logs and More – A Big Data Architecture


Big data and log files

Splunk’s great success in providing the tools for a sysadmin to delve into previously inaccessible log files has opened up the market for deeper analysis of the data in log files.

Getting value out of log files starts with understanding their structure, and the right approach depends on both the volume of data and the variety of formats involved.

Volume and variety in log files

What is commonly referred to as “log files” is a set of machine- or user-logged text data containing information about the behavior of an application or a device. These log files come in various formats and hold the truth about how a product or piece of software is actually being used. Traditional log file mining has centered on weblogs and syslogs, but over the last few years game companies like Zynga and Disney (where I was responsible for analytics infrastructure) have perfected the art of logging usage data at regular intervals and mining it to understand customer usage patterns, performance issues, and longer-term trends in product usage, such as which features work and which don’t.

The infrastructure and tools are now available to capture logs from any device or piece of software – think storage arrays or servers in data centers, medical devices and sensors – they all produce data. Increasingly, the people who build and support these products have to deal not only with the volume of data that comes out of these devices and software components but also with the variety of information needed to understand customer usage, diagnose problems, analyze the installed base, and so on.

Traditional log file vendors like Splunk focus on problem resolution – in Splunk’s case by providing a platform that indexes all log data and surfaces patterns through a very intuitive UI. This search-and-index approach may not be suitable for complex log bundles and other non-time-series, semi-structured data, or for use cases that require longer-term trend analysis and reporting.

A new reference architecture for mainstream log file analysis

Another approach, more relevant to product or app owners, is to use a stack that not only collects and indexes the log files but goes a step further and derives structure from unstructured logs and log bundles for enterprise business intelligence. The reference architecture for accomplishing this is as follows:

  1. Applying context to data: A language for defining the semantics of the data is the first step in preparing to analyze the data sets in the log files. Using this language, one can delineate different sections within a log file, or across multiple files, and describe how they relate to each other. The language can also define and tag the various elements of a log file in a repeatable, scalable, and flexible manner, to accommodate changing log file formats and new sections and attributes introduced with new versions. A DSL (Domain Specific Language) – a Scala DSL, for example – is one way to go about this; a small sketch follows this list.
  2. Collecting and routing data: Data arrives from various sources over various transports, and it needs to be routed and processed centrally. See Apache Camel as an example of a tool to handle this. Whether you build it yourself or build on top of Camel, this is a key step that today gets buried as a pre-processing stage made up of a myriad of custom scripts that are difficult to maintain. A sketch of a Camel-style route appears after this list.
  3. Scalable backend: Data needs to be collected and stored in a NoSQL, column-family-based data store – Apache HBase or Cassandra – where structure is created from the raw data. The advantage with Cassandra is that you can store the raw logs as a blob and integrate with Lucene/Solr for search on the files; DataStax, for example, offers the Solr/Cassandra combination as Solandra. A NoSQL data store like Cassandra provides the flexibility to create schemas on the fly and accommodate both structured and unstructured data in one place (see the schema sketch after this list).
  4. Rules and alerts: A way to define rules that identify common patterns when problems occur, so that an automatic alert can be triggered when a similar pattern of issues is seen with a different customer. Some of these rules operate on structured data within a log – for example, ‘count the number of errors in a section where the line contains the string “Error in device format”’. Many times, however, the rules are more complex, allowing lookups on data across multiple sections in a file, and even across multiple files, combining simple lookups with regular expression searches. Providing a tool to define these rules centrally allows you to manage and extend your knowledge base over time; a minimal rule-engine sketch follows this list.
  5. Reporting and analysis: Middleware plus a set of apps and reporting infrastructure for predefined queries that satisfy common business cases – for example installed base analytics, performance analysis, product usage analysis, and capacity management. You can develop the middleware/app using a framework like Play, and expose common business queries as web services so you can plug common BI tools like Tableau into the data (a bare-bones service sketch closes out the examples below).
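
To make step 1 concrete, here is a minimal sketch of what a log-semantics definition could look like in plain Scala. The section names, markers, and field patterns are hypothetical, not taken from any real product’s log format; a real DSL would be richer, but the idea is the same: declare sections and fields once, then parse any bundle against that declaration.

```scala
import scala.util.matching.Regex

// Hypothetical "log semantics" spec: each section of a log bundle is described
// by the marker that opens it and the named fields to extract from its lines.
case class Field(name: String, pattern: Regex)
case class Section(name: String, startMarker: String, fields: Seq[Field])

object DeviceLogSpec {
  val spec = Seq(
    Section("header", "=== DEVICE INFO ===", Seq(
      Field("serial",   """Serial:\s+(\S+)""".r),
      Field("firmware", """Firmware:\s+(\S+)""".r))),
    Section("errors", "=== ERROR LOG ===", Seq(
      Field("error",    """ERROR\s+(.+)""".r)))
  )

  // Walk the raw lines once, switching sections on markers and collecting
  // (section, field, value) triples for every field pattern that matches.
  def parse(lines: Seq[String]): Seq[(String, String, String)] = {
    var current: Option[Section] = None
    lines.flatMap { line =>
      spec.find(s => line.contains(s.startMarker)) match {
        case Some(s) => current = Some(s); Nil
        case None =>
          current.toSeq.flatMap(_.fields.flatMap(f =>
            f.pattern.findFirstMatchIn(line)
              .map(m => (current.get.name, f.name, m.group(1)))))
      }
    }
  }
}
```

Feeding a bundle through it is one line, e.g. `DeviceLogSpec.parse(scala.io.Source.fromFile("bundle.log").getLines().toSeq)`, and the resulting triples are what the downstream store and rules engine work against.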
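For step 2, a route on top of Apache Camel might look like the following sketch. It assumes camel-core on the classpath; the drop directory, queue name, and output directory are illustrative, and a real deployment would add transports (FTP, HTTP, message queues) as additional `from` endpoints feeding the same central queue.

```scala
import org.apache.camel.builder.RouteBuilder
import org.apache.camel.impl.DefaultCamelContext

object LogCollector extends App {
  val context = new DefaultCamelContext()

  context.addRoutes(new RouteBuilder {
    override def configure(): Unit = {
      // Pick up incoming log bundles from a drop directory and hand them to a
      // single in-memory queue, regardless of how they arrived.
      from("file:/var/incoming/logs?noop=true")
        .to("seda:parse")

      // Central processing leg: log the bundle name and move it on for parsing.
      from("seda:parse")
        .log("Routing log bundle ${file:name} for parsing")
        .to("file:/var/processed/logs")
    }
  })

  context.start()
  Thread.sleep(60000) // keep the routes running for a minute in this sketch
  context.stop()
}
```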
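For the scalable backend in step 3, a Cassandra schema that keeps both the derived structure and the raw log blob in one place could look like this. It is a sketch only, assuming a local Cassandra node and the pre-4.x DataStax Java driver on the classpath; the keyspace, table, and column names are made up for illustration.

```scala
import com.datastax.driver.core.Cluster

object LogStore extends App {
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
  val session = cluster.connect()

  session.execute(
    """CREATE KEYSPACE IF NOT EXISTS logs
      |WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}""".stripMargin)

  // One row per customer/bundle/section: parsed attributes as a flexible map
  // (schema-on-the-fly) alongside the raw log kept as a blob for search/replay.
  session.execute(
    """CREATE TABLE IF NOT EXISTS logs.log_bundles (
      |  customer_id  text,
      |  collected_at timestamp,
      |  section      text,
      |  attributes   map<text, text>,
      |  raw_log      blob,
      |  PRIMARY KEY ((customer_id), collected_at, section)
      |)""".stripMargin)

  cluster.close()
}
```

The map column is what gives you the flexibility to absorb new attributes as log formats evolve, without a schema migration for every new field.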
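Step 4 can start very small. The sketch below defines rules over the (section, field, value) triples produced by a parser like the one above, using the “Error in device format” example from the list; the rule name and threshold are illustrative, and a real engine would add cross-section and cross-file lookups.

```scala
import scala.util.matching.Regex

// A centrally defined rule: count matching values in a section and alert
// once a threshold is crossed.
case class Rule(name: String, section: String, pattern: Regex, threshold: Int)

object AlertEngine {
  val rules = Seq(
    Rule("device-format-errors", "errors", """Error in device format""".r, threshold = 5)
  )

  def evaluate(parsed: Seq[(String, String, String)]): Seq[String] =
    rules.flatMap { rule =>
      val hits = parsed.count { case (section, _, value) =>
        section == rule.section && rule.pattern.findFirstIn(value).isDefined
      }
      if (hits >= rule.threshold)
        Some(s"ALERT [${rule.name}]: $hits matching lines (threshold ${rule.threshold})")
      else None
    }
}
```

Because the rules live in one place rather than in scattered scripts, a pattern learned from one customer’s escalation can immediately be checked against every other customer’s bundles.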
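Finally, for step 5, the point is simply that common business queries become web services that BI tools can hit. Rather than a full Play application, the sketch below uses the JDK’s built-in HttpServer to keep the example self-contained; the endpoint path and the hard-coded response are placeholders for a query against the backend store.

```scala
import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}
import java.net.InetSocketAddress

object QueryService extends App {
  val server = HttpServer.create(new InetSocketAddress(8080), 0)

  // Hypothetical installed-base query exposed as a JSON web service.
  server.createContext("/installed-base", new HttpHandler {
    override def handle(exchange: HttpExchange): Unit = {
      // Placeholder result; in practice this would run a predefined query
      // against the NoSQL store and serialize the rows.
      val body = """{"product":"example-array","active_installs":1234}"""
      exchange.getResponseHeaders.add("Content-Type", "application/json")
      exchange.sendResponseHeaders(200, body.getBytes.length.toLong)
      val os = exchange.getResponseBody
      os.write(body.getBytes)
      os.close()
    }
  })

  server.start()
}
```

Tableau or any other BI tool can then be pointed at these services (or at the underlying store) without knowing anything about log formats.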

Conclusion

Parsing and processing log files tactically is still a grep/awk/sed or scripting exercise done by a lone super ranger in an IT department or elsewhere. But with the growing strategic value of the data in log files, major product and software vendors are looking to put together a robust technology stack to leverage this information across the enterprise. Done right, this becomes a very powerful and unique “Big Data” app, providing meaningful insights across the enterprise – from product support to engineering and marketing – and delivering both operational and business intelligence from machine logs.
