Chapter 7: Mapped Diagnostic Context
Lock the doors.
—LEROY CAIN, Flight Director, Columbia Mission Control
One of the design goals of logback is to audit and debug complex distributed applications. Most real-world distributed systems need to deal with multiple clients simultaneously. In a typical multithreaded implementation of such a system, different threads will handle different clients. A possible but discouraged approach to differentiate the logging output of one client from another consists of instantiating a new and separate logger for each client. This technique promotes the proliferation of loggers and considerably increases their management overhead.
A lighter technique consists of uniquely stamping each log request servicing a given client. Neil Harrison described this method in "Patterns for Logging Diagnostic Messages," a chapter of Pattern Languages of Program Design 3, edited by R. Martin, D. Riehle, and F. Buschmann (Addison-Wesley, 1997). Logback leverages a variant of this technique included in the SLF4J API: Mapped Diagnostic Contexts (MDC).
To uniquely stamp each request, the user puts contextual information into the MDC, short for Mapped Diagnostic Context. The public interface of the MDC class is shown below.
package org.slf4j;

public class MDC {
  // Put a context value as identified by key
  // into the current thread's context map.
  public static void put(String key, String val);

  // Get the context identified by the key parameter.
  public static String get(String key);

  // Remove the context identified by the key parameter.
  public static void remove(String key);

  // Clear all entries in the MDC.
  public static void clear();

  // Returns the keys in the MDC as a Set. The returned value can be null.
  public static Set<String> getKeys();
}
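As a quick illustration of these methods, the fragment below sketches the basic semantics of put(), get() and remove(); the class name MDCBasics is just a placeholder.

import org.slf4j.MDC;

public class MDCBasics {
  public static void main(String[] args) {
    // Associate a value with a key in the current thread's context map.
    MDC.put("userid", "alice");

    // get() returns the value previously stored by this thread.
    System.out.println(MDC.get("userid")); // prints "alice"

    // A second put() with the same key overwrites the previous value.
    MDC.put("userid", "bob");
    System.out.println(MDC.get("userid")); // prints "bob"

    // After remove(), get() returns null for that key.
    MDC.remove("userid");
    System.out.println(MDC.get("userid")); // prints "null"
  }
}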
The MDC class contains only static methods. It lets the developer place information in a diagnostic context that can be subsequently retrieved by certain logback components. The MDC manages contextual information on a per-thread basis. Typically, while starting to service a new client request, the developer will insert pertinent contextual information, such as the client id, the client's IP address, request parameters, etc. into the MDC. Logback components, if appropriately configured, will automatically include this information in each log entry.

The next application, named SimpleMDC, demonstrates this basic principle.
package chapter7;

import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.PatternLayout;
import ch.qos.logback.classic.spi.LoggingEvent;
import ch.qos.logback.core.ConsoleAppender;

public class SimpleMDC {

  static public void main(String[] args) throws Exception {
    // You can put values in the MDC at any time. We first put the
    // first name
    MDC.put("first", "Dorothy");

    // Configure logback
    PatternLayout layout = new PatternLayout();
    layout.setPattern("%X{first} %X{last} - %m%n");
    layout.start();

    ConsoleAppender<LoggingEvent> appender = new ConsoleAppender<LoggingEvent>();
    appender.setLayout(layout);
    appender.start();

    Logger root = (Logger) LoggerFactory.getLogger("root");
    root.addAppender(appender);

    // get a logger
    Logger logger = (Logger) LoggerFactory.getLogger(SimpleMDC.class);

    // We now put the last name
    MDC.put("last", "Parker");

    // The most beautiful two words in the English language according
    // to Dorothy Parker:
    logger.info("Check enclosed.");
    logger.debug("The most beautiful two words in English.");

    MDC.put("first", "Richard");
    MDC.put("last", "Nixon");
    logger.info("I am not a crook.");
    logger.info("Attributed to the former US president. 17 Nov 1973.");
  }
}
The main method starts by associating the value Dorothy with the key first in the MDC. You can place as many key/value associations in the MDC as you wish. Multiple insertions with the same key will overwrite older values. The code then proceeds to configure logback.

Note the usage of the %X conversion specifier within the PatternLayout conversion pattern. The %X conversion specifier is employed twice, once for the key first and once for the key last. After configuring the root logger, the code associates the value Parker with the key last. It then invokes the logger twice with different messages. The code finishes by setting the MDC to different values and issuing several logging requests. Running SimpleMDC yields:
Dorothy Parker - Check enclosed.
Dorothy Parker - The most beautiful two words in English.
Richard Nixon - I am not a crook.
Richard Nixon - Attributed to the former US president. 17 Nov 1973.
The SimpleMDC application illustrates how logback layouts, if configured appropriately, automatically output MDC information. Moreover, the information placed into the MDC can be used by multiple logger invocations.
Advanced Use
Mapped Diagnostic Contexts shine brightest within client-server architectures. Typically, multiple clients will be served by multiple threads on the server. Although the methods in the MDC class are static, the diagnostic context is managed on a per-thread basis, allowing each server thread to bear a distinct MDC stamp. MDC operations such as put() and get() affect the MDC of the current thread only; the MDC of other threads remains unaffected. Given that MDC information is managed on a per-thread basis, each thread will have its own copy of the MDC. Thus, there is no need for the developer to worry about thread safety or synchronization when programming with the MDC, because it handles these issues safely and transparently.
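To make the per-thread behavior concrete, consider the following minimal sketch in which two threads put different values under the same key. Each thread sees only the value it put itself; the class name and key are merely illustrative.

import org.slf4j.MDC;

public class ThreadLocalMDCDemo {

  static Runnable task(final String clientId) {
    return new Runnable() {
      public void run() {
        // Each thread stamps its own copy of the MDC.
        MDC.put("client", clientId);
        // get() only ever sees the value put by the current thread.
        System.out.println(Thread.currentThread().getName()
            + " sees client=" + MDC.get("client"));
        MDC.remove("client");
      }
    };
  }

  public static void main(String[] args) throws InterruptedException {
    Thread t1 = new Thread(task("alice"), "thread-1");
    Thread t2 = new Thread(task("bob"), "thread-2");
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    // The main thread never put anything, so its MDC has no "client" entry.
    System.out.println("main sees client=" + MDC.get("client")); // null
  }
}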
The next example is somewhat more advanced. It shows how the MDC can be used in a client-server setting. The server side implements the NumberCruncher interface shown in Example 7.2 below. The NumberCruncher interface contains a single method named factor(). Using RMI technology, the client invokes the factor() method of the server application to retrieve the distinct factors of an integer.
package chapter7;
import java.rmi.Remote;
import java.rmi.RemoteException;
/**
* NumberCruncher factors positive integers.
*/
public interface NumberCruncher extends Remote {
/**
* Factor a positive integer number and return its
* distinct factors as an integer array.
*/
int[] factor(int number) throws RemoteException;
}
The NumberCruncherServer application, listed in Example 7.3 below, implements the NumberCruncher interface. Its main method creates an RMI registry on the local host that accepts requests on a well-known port.
package chapter7;

import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;
import java.util.Vector;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.joran.JoranConfigurator;
import ch.qos.logback.core.joran.spi.JoranException;

/**
 * A simple NumberCruncher implementation that logs its progress when
 * factoring numbers. The purpose of the whole exercise is to show the
 * use of mapped diagnostic contexts in order to distinguish the log
 * output from different client requests.
 */
public class NumberCruncherServer extends UnicastRemoteObject implements
    NumberCruncher {

  private static final long serialVersionUID = 1L;

  static Logger logger = LoggerFactory.getLogger(NumberCruncherServer.class);

  public NumberCruncherServer() throws RemoteException {
  }

  public int[] factor(int number) throws RemoteException {
    // The client's host is an important source of information.
    try {
      MDC.put("client", NumberCruncherServer.getClientHost());
    } catch (java.rmi.server.ServerNotActiveException e) {
      logger.warn("Caught unexpected ServerNotActiveException.", e);
    }

    // The information contained within the request is another source
    // of distinctive information. It might reveal the user's name,
    // date of request, request ID etc. In servlet type environments,
    // useful information is contained in the HttpRequest or in the
    // HttpSession.
    MDC.put("number", String.valueOf(number));

    logger.info("Beginning to factor.");

    if (number <= 0) {
      throw new IllegalArgumentException(number + " is not a positive integer.");
    } else if (number == 1) {
      return new int[] { 1 };
    }

    Vector<Integer> factors = new Vector<Integer>();
    int n = number;

    for (int i = 2; (i <= n) && ((i * i) <= number); i++) {
      // It is bad practice to place log requests within tight loops.
      // It is done here to show interleaved log output from
      // different requests.
      logger.debug("Trying " + i + " as a factor.");

      if ((n % i) == 0) {
        logger.info("Found factor " + i);
        factors.addElement(new Integer(i));

        do {
          n /= i;
        } while ((n % i) == 0);
      }

      // Placing artificial delays in tight loops will also lead to
      // sub-optimal results. :-)
      delay(100);
    }

    if (n != 1) {
      logger.info("Found factor " + n);
      factors.addElement(new Integer(n));
    }

    int len = factors.size();
    int[] result = new int[len];

    for (int i = 0; i < len; i++) {
      result[i] = ((Integer) factors.elementAt(i)).intValue();
    }

    // clean up
    MDC.remove("client");
    MDC.remove("number");

    return result;
  }

  static void usage(String msg) {
    System.err.println(msg);
    System.err.println("Usage: java chapter7.NumberCruncherServer configFile\n"
        + "   where configFile is a logback configuration file.");
    System.exit(1);
  }

  public static void delay(int millis) {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException e) {
    }
  }

  public static void main(String[] args) {
    if (args.length != 1) {
      usage("Wrong number of arguments.");
    }

    String configFile = args[0];

    if (configFile.endsWith(".xml")) {
      try {
        LoggerContext lc = (LoggerContext) LoggerFactory.getILoggerFactory();
        JoranConfigurator configurator = new JoranConfigurator();
        configurator.setContext(lc);
        lc.shutdownAndReset();
        configurator.doConfigure(args[0]);
      } catch (JoranException je) {
        je.printStackTrace();
      }
    }

    NumberCruncherServer ncs;

    try {
      ncs = new NumberCruncherServer();
      logger.info("Creating registry.");

      Registry registry = LocateRegistry.createRegistry(Registry.REGISTRY_PORT);
      registry.rebind("Factor", ncs);
      logger.info("NumberCruncherServer bound and ready.");
    } catch (Exception e) {
      logger.error("Could not bind NumberCruncherServer.", e);
      return;
    }
  }
}
The implementation of the factor(int number) method is of particular relevance. It starts by putting the client's hostname into the MDC under the key client. The number to factor, as requested by the client, is put into the MDC under the key number. After computing the distinct factors of the integer parameter, the result is returned to the client. Before returning the result, however, the values for the client and number keys are cleared by calling the MDC.remove() method. Normally, a put() operation should be balanced by a corresponding remove() operation; otherwise, the MDC will contain stale values for certain keys. We would recommend that, whenever possible, remove() operations be performed within finally blocks, ensuring their invocation regardless of the execution path of the code.
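As a sketch of this recommendation, a request-handling method could balance its put() calls as follows; the handleRequest() and doWork() names are only illustrative placeholders.

import org.slf4j.MDC;

public class BalancedMDCSketch {

  void handleRequest(String clientHost, int number) {
    MDC.put("client", clientHost);
    MDC.put("number", String.valueOf(number));
    try {
      doWork(number);
    } finally {
      // Executed on both normal and exceptional exit, so no stale
      // values remain in this thread's MDC.
      MDC.remove("client");
      MDC.remove("number");
    }
  }

  void doWork(int number) {
    // ... actual processing, e.g. factoring the number ...
  }
}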
After these theoretical explanations, we are ready to run the number cruncher example. Start the server with the following command:
java chapter7.NumberCruncherServer src/main/java/chapter7/mdc1.xml
The mdc1.xml configuration file is listed below:
Example 7.4: Configuration file (logback-examples/src/main/java/chapter7/mdc1.xml)

<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="ch.qos.logback.classic.PatternLayout">
      <Pattern>%-4r [%thread] %-5level C:%X{client} N:%X{number} - %msg%n</Pattern>
    </layout>
  </appender>

  <root>
    <level value="debug"/>
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>
Note the use of the %X conversion specifier within the Pattern option.
The following command starts an instance of the NumberCruncherClient application:

java chapter7.NumberCruncherClient hostname

where hostname is the host where the NumberCruncherServer is running.
Executing multiple instances of the client and asking the server to factor the number 129 from the first client and, shortly thereafter, the number 71 from the second client, the server outputs the following:
70984 [RMI TCP Connection(4)-192.168.1.6] INFO C:orion N:129 - Beginning to factor.
70984 [RMI TCP Connection(4)-192.168.1.6] DEBUG C:orion N:129 - Trying 2 as a factor.
71093 [RMI TCP Connection(4)-192.168.1.6] DEBUG C:orion N:129 - Trying 3 as a factor.
71093 [RMI TCP Connection(4)-192.168.1.6] INFO C:orion N:129 - Found factor 3
71187 [RMI TCP Connection(4)-192.168.1.6] DEBUG C:orion N:129 - Trying 4 as a factor.
71297 [RMI TCP Connection(4)-192.168.1.6] DEBUG C:orion N:129 - Trying 5 as a factor.
71390 [RMI TCP Connection(4)-192.168.1.6] DEBUG C:orion N:129 - Trying 6 as a factor.
71453 [RMI TCP Connection(5)-192.168.1.6] INFO C:orion N:71 - Beginning to factor.
71453 [RMI TCP Connection(5)-192.168.1.6] DEBUG C:orion N:71 - Trying 2 as a factor.
71484 [RMI TCP Connection(4)-192.168.1.6] DEBUG C:orion N:129 - Trying 7 as a factor.
71547 [RMI TCP Connection(5)-192.168.1.6] DEBUG C:orion N:71 - Trying 3 as a factor.
71593 [RMI TCP Connection(4)-192.168.1.6] DEBUG C:orion N:129 - Trying 8 as a factor.
71656 [RMI TCP Connection(5)-192.168.1.6] DEBUG C:orion N:71 - Trying 4 as a factor.
71687 [RMI TCP Connection(4)-192.168.1.6] DEBUG C:orion N:129 - Trying 9 as a factor.
71750 [RMI TCP Connection(5)-192.168.1.6] DEBUG C:orion N:71 - Trying 5 as a factor.
71797 [RMI TCP Connection(4)-192.168.1.6] DEBUG C:orion N:129 - Trying 10 as a factor.
71859 [RMI TCP Connection(5)-192.168.1.6] DEBUG C:orion N:71 - Trying 6 as a factor.
71890 [RMI TCP Connection(4)-192.168.1.6] DEBUG C:orion N:129 - Trying 11 as a factor.
71953 [RMI TCP Connection(5)-192.168.1.6] DEBUG C:orion N:71 - Trying 7 as a factor.
72000 [RMI TCP Connection(4)-192.168.1.6] INFO C:orion N:129 - Found factor 43
72062 [RMI TCP Connection(5)-192.168.1.6] DEBUG C:orion N:71 - Trying 8 as a factor.
72156 [RMI TCP Connection(5)-192.168.1.6] INFO C:orion N:71 - Found factor 71
The clients were run from a machine called orion, as can be seen in the above output. Even though the server processes the requests of both clients near-simultaneously in separate threads, the logging output pertaining to each client request can be distinguished by studying the MDC output. Note, for example, the stamp associated with the key number, i.e. the number to factor.
The attentive reader might have observed that the thread name could also have been used to distinguish each request. However, the thread name can cause confusion if the server-side technology recycles threads. In that case, it may be hard to determine the boundaries of each request, that is, when a given thread finishes servicing one request and when it begins servicing the next. Because the MDC is under the control of the application developer, MDC stamps do not suffer from this problem.
Automating access to the MDC
As we've seen, the MDC is very useful when dealing with multiple clients. In the case of a web application that manages user authentication, one simple solution could be to set the user's name in the MDC and remove it once the user logs out. Unfortunately, it is not always possible to achieve reliable results with this technique. Since the MDC manages information on a per-thread basis, a server that recycles threads might end up carrying stale, and therefore incorrect, information in the MDC.
To ensure that the information contained in the MDC is correct at all times while a request is processed, a solution is to store the username at the beginning of request processing and to remove it at the end. A servlet Filter is a good tool for this task. Using a servlet filter, one can access the request, extract the relevant user information, and store it in the MDC. Then, once the response has been created, one just needs to remove the user information from the MDC.
Here is an implementation of such a filter:
Example 7.5: User servlet filter (logback-examples/src/main/java/chapter7/UserServletFilter.java)

package chapter7;

import java.io.IOException;
import java.security.Principal;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

import org.slf4j.MDC;

public class UserServletFilter implements Filter {

  private final String userKey = "username";

  public void destroy() {
  }

  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) throws IOException, ServletException {

    HttpServletRequest req = (HttpServletRequest) request;
    Principal principal = req.getUserPrincipal();
    // The registration status is kept in a local variable so that
    // concurrent invocations of this filter do not interfere with
    // one another.
    boolean userRegistered = false;

    if (principal != null) {
      userRegistered = registerUsername(principal.getName());
    } else {
      HttpSession session = req.getSession();
      String username = (String) session.getAttribute(userKey);
      userRegistered = registerUsername(username);
    }

    try {
      chain.doFilter(request, response);
    } finally {
      if (userRegistered) {
        MDC.remove(userKey);
      }
    }
  }

  public void init(FilterConfig arg0) throws ServletException {
  }

  private boolean registerUsername(String username) {
    if (username != null && username.trim().length() > 0) {
      MDC.put(userKey, username);
      return true;
    }
    return false;
  }
}
When the filter's doFilter() method is called, it first looks for a java.security.Principal object in the request. This object contains the name of the currently authenticated user. If the user principal is not set, the filter looks for a session attribute matching a given key (here username). If user information is found, it is registered in the MDC. Once the filter chain has completed, the filter removes the user information from the MDC.
With this filter in place, the user information is present in the MDC only for the time it takes to process the request. The thread may then be reused to process a request for another user without the risk of displaying stale information in the logs.
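For the filter to take effect, it must also be declared in the web application's deployment descriptor. The following web.xml fragment is a minimal sketch of such a declaration; the filter name and the catch-all URL pattern are illustrative choices, not mandated by logback.

<filter>
  <filter-name>UserServletFilter</filter-name>
  <filter-class>chapter7.UserServletFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>UserServletFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>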