
Centralized Logging Architecture

http://jasonwilder.com/blog/2013/07/16/centralized-logging-architecture/

No single tool covers every use case, which means you need to use several of them together to build a robust solution.

The main aspects you will need to address are collection, transport, storage, and analysis. In some cases, you may also want an alerting capability.

Applications create logs in different ways: some log through syslog, others log directly to files. If you consider a typical web application running on a Linux host, there will be a dozen or more log files in /var/log as well as a few application-specific logs in home directories or other locations.

If you are supporting a web-based application and your developers or operations staff need access to log data quickly in order to troubleshoot live issues, you need a solution that is able to monitor changes to log files in near real-time. If you are using a file-replication approach, where files are copied to a central server on a fixed schedule, then you can only inspect logs as frequently as the replication runs. A one-minute rsync cron job might not be fast enough when your site is down and you are waiting for the relevant log data to be replicated.

On the other hand, if you need to analyze log data offline for calculating metrics or other batch-related work, a file replication strategy might be a good fit.
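To make the near-real-time case concrete, here is a minimal sketch of a collector that follows a log file much like `tail -f` does. The file path and polling interval are placeholders, and a real collector would also handle log rotation and truncation:

```python
import time

def follow(path, poll_interval=1.0):
    """Yield lines appended to a log file, similar to `tail -f`."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at the end so only new entries are emitted
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(poll_interval)  # no new data yet; wait and retry

if __name__ == "__main__":
    # Hypothetical path; a real collector would watch many files at once.
    for entry in follow("/var/log/nginx/access.log"):
        print(entry)
```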

Log data accumulates quickly across multiple hosts, and transporting it to your centralized location may require additional tooling to transmit it efficiently and ensure data is not lost.

Several frameworks are designed for transporting large volumes of data from one host to another reliably. Although each of these frameworks addresses the transport problem, they do so quite differently.

Some require clients to log data via their API. Typically, application code is written to log directly to these sources, which reduces latency and improves reliability. If you want to centralize typical log file data, you would need something to tail the files and stream the entries via their respective APIs. If you control the app that is producing the data you want to collect, this approach can be much more efficient.

Others provide a number of input sources and also natively support tailing files and transporting them reliably. These are a better fit for more general log collection.
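As a rough illustration of the "tail and stream" pattern, the sketch below follows a local file and forwards each new line to a central collector over a plain TCP socket. The host name, port, and file path are assumptions, and real transport frameworks add buffering, batching, retries, and acknowledgements that this sketch leaves out:

```python
import socket
import time

def follow(path, poll_interval=1.0):
    """Yield lines appended to a file, like `tail -f`."""
    with open(path, "r") as f:
        f.seek(0, 2)  # only ship entries written after startup
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(poll_interval)

def ship(path, host="logs.example.internal", port=5140):
    """Forward new log lines to a central collector over TCP."""
    with socket.create_connection((host, port)) as conn:
        for line in follow(path):
            # A real shipper would spool to disk and retry on failure
            # so log data is not lost while the collector is unreachable.
            conn.sendall(line.encode("utf-8"))

if __name__ == "__main__":
    ship("/var/log/app/example.log")  # hypothetical application log path
```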

Now that your log data is being transferred, it needs a destination. Your centralized storage system needs to be able to handle growth in data over time. Each day adds an amount of storage roughly proportional to the number of hosts and processes generating log data.
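A back-of-the-envelope sizing calculation can make that growth concrete; all of the numbers below are made-up placeholders for illustration:

```python
# Hypothetical capacity estimate for a centralized log store.
hosts = 200                  # hosts shipping logs (assumed)
mb_per_host_per_day = 500    # average daily log volume per host (assumed)
retention_days = 30          # how long logs stay on fast storage (assumed)
replication_factor = 2       # copies kept by the storage system (assumed)

daily_gb = hosts * mb_per_host_per_day / 1024
total_tb = daily_gb * retention_days * replication_factor / 1024

print(f"~{daily_gb:.0f} GB/day, ~{total_tb:.1f} TB over {retention_days} days")
```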

How you store the data depends on a few things:

How long should it be stored - If the logs are for long-term, archival purposes and do not need to be queried right away, cheaper offline or cold storage may be sufficient; logs needed for day-to-day troubleshooting should stay on faster, searchable storage.

Your environment's data volume - A day's worth of logs for Google is much different than a day's worth of logs for ACME Fishing Supplies. The storage system you choose should allow you to scale out horizontally if your data volume will be large.

How will you need to access the logs - Some storage is not suitable for real-time or interactive access, and archival systems can take hours to retrieve data. For very large volumes, batch-oriented processing is often the only practical option, and higher-level query tools can make analyzing the data easier than writing native MapReduce jobs. Storing parsed log data in a searchable index instead allows more real-time, interactive access to the data, but it is not really suited for mass batch processing.

The last component that is sometimes nice to have is the ability to alert on log patterns or calculated metrics based on log data. Two common uses for this are error reporting and monitoring.

Most log data is not interesting, but errors almost always indicate a problem. It's much more effective to have the logging system email or notify the respective parties when errors occur instead of having someone repeatedly watch for the events. There are several error-reporting tools that can aggregate and de-duplicate errors, which can give you an idea of how frequently an error is occurring.
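A minimal sketch of that idea, assuming a hypothetical log format where error lines contain a literal " ERROR " marker and using a placeholder notify function; real error-reporting tools also group stack traces and rate-limit notifications:

```python
import collections

def notify(message):
    """Placeholder notification hook; a real system would email or page someone."""
    print(f"ALERT: {message}")

def watch_for_errors(lines, report_every=100):
    """Count error lines and notify on the first and every Nth occurrence."""
    counts = collections.Counter()
    for line in lines:
        if " ERROR " not in line:
            continue
        # Use the message after the level as a crude de-duplication key.
        key = line.split(" ERROR ", 1)[1].strip()[:80]
        counts[key] += 1
        if counts[key] == 1 or counts[key] % report_every == 0:
            notify(f"{counts[key]} occurrence(s): {key}")

if __name__ == "__main__":
    sample = [
        "2013-07-16 10:00:01 INFO request handled",
        "2013-07-16 10:00:02 ERROR database connection refused",
        "2013-07-16 10:00:03 ERROR database connection refused",
    ]
    watch_for_errors(sample)
```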

Another use case is monitoring. For example, you may have hundreds of web servers and want to know if they start returning 500 status codes. If you can parse your web log files and record a metric on the status code, you can then trigger alerts when that metric crosses a threshold.
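As a sketch of that kind of check, the snippet below assumes the common combined access-log format (the status code is the first field after the quoted request line) and a made-up 1% threshold:

```python
import sys

def count_5xx(lines):
    """Return (number of 5xx responses, total requests) for access-log lines."""
    errors = total = 0
    for line in lines:
        parts = line.split('"')
        if len(parts) < 3:
            continue  # not a combined-format access log line
        fields = parts[2].split()
        if not fields:
            continue
        total += 1
        if fields[0].startswith("5"):
            errors += 1
    return errors, total

if __name__ == "__main__":
    # e.g. `python check_5xx.py < access.log`; the threshold is a placeholder.
    errors, total = count_5xx(sys.stdin)
    if total and errors / total > 0.01:
        print(f"ALERT: {errors}/{total} requests returned 5xx")
```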

Hopefully this helps provide a basic model for designing a centralized logging solution for your environment.
