I have been struggling with this problem for the past few days.
Requirement
In our application, a file is uploaded through a UI built with Node.js, and the file's records are then processed via Amazon Simple Workflow (SWF). The call to SWF is made by a Spring application, which the Node.js app invokes once the file is uploaded. The requirement is that, for every file processed, the application must create a log file detailing what happened to each record as it was processed.
How I have implemented it
In the Spring application that triggers SWF, I created a FileLogger class that maintains a static StringBuffer. The FileLogger is set to workflow scope, meaning an instance is created for every workflow execution and destroyed at the end of it. As the file is processed, I keep appending log lines to the StringBuffer, and at the end of processing I write the buffer's contents to a file and save it.
Problem description
This solution worked fine as long as only one instance of the application was running. As soon as we deployed the application to multiple Amazon EC2 instances, incomplete logs started showing up in the saved file. Looking further into it revealed that each instance of the application has its own StringBuffer holding part of the log (a static field is unique per JVM, not shared across machines), and the final write only reads one of those buffers, hence the incomplete log. Which records land in which buffer is, needless to say, random. In short, deploying N instances of the application produces N separate StringBuffers.
Here is the FileLogger class:
public class FileLogger {

    private static final Logger logger = LoggerFactory.getLogger(FileLogger.class);
    private static final SimpleDateFormat logFileDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
    private static final SimpleDateFormat fileNameDateFormat = new SimpleDateFormat("yyyyMMddHHmmss");
    private static final String COLON = ":";

    // Static: there is exactly one buffer per JVM, i.e. one per deployed instance.
    private static StringBuffer logAppender = null;

    public synchronized void debug(Date date, String logMessage) {
        if (appConfig.getLogLevel().equalsIgnoreCase(LogLevel.DEBUG.name())) {
            this.log(LogLevel.DEBUG.name(), date, logMessage);
        }
    }

    public synchronized void info(Date date, String logMessage) {
        if (appConfig.getLogLevel().equalsIgnoreCase(LogLevel.INFO.name()) ||
                appConfig.getLogLevel().equalsIgnoreCase(LogLevel.DEBUG.name())) {
            this.log(LogLevel.INFO.name(), date, logMessage);
        }
    }

    public synchronized void error(Date date, String logMessage) {
        if (appConfig.getLogLevel().equalsIgnoreCase(LogLevel.ERROR.name()) ||
                appConfig.getLogLevel().equalsIgnoreCase(LogLevel.DEBUG.name())) {
            this.log(LogLevel.ERROR.name(), date, logMessage);
        }
    }

    private synchronized void log(String logLevel, Date date, String logMessage) {
        // Use the accessor so the buffer is initialized before its hash code is logged.
        logger.info("logAppender Hashcode: " + getLogAppender().hashCode());
        if (!logLevel.equalsIgnoreCase(LogLevel.NONE.name())) {
            getLogAppender().append(getLogAppender().hashCode());
            getLogAppender().append(COLON);
            getLogAppender().append(getFormattedDate(date, logFileDateFormat));
            getLogAppender().append(COLON);
            getLogAppender().append(logLevel);
            getLogAppender().append(COLON);
            getLogAppender().append(logMessage);
            getLogAppender().append(System.getProperty("line.separator"));
        }
    }

    private synchronized StringBuffer getLogAppender() {
        logger.info("Getting logAppender .. " + logAppender);
        if (logAppender == null) {
            logger.info("Log appender is null");
            logAppender = new StringBuffer();
        }
        return logAppender;
    }
}
Question
How do I make sure there is only one StringBuffer (logAppender) shared across all instances of my application, so that every instance can keep appending to it and the complete contents can be read and written to a file at the end?
Update: I just wanted to come back and mention the solution I finally chose: Amazon ElastiCache (the Redis implementation) for storing the logs temporarily, and then, at the end of the operation, reading everything from the cache and writing it to a file in Amazon S3. Hope this helps.
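To illustrate the shape of that approach, here is a minimal sketch. The interface and class names (SharedLogStore, InMemoryLogStore) are mine, not from the original application, and the in-memory implementation stands in for the Redis-backed one so the flow can be shown without a running Redis instance; with a Redis client such as Jedis, append would map to RPUSH on a list keyed by the workflow execution id, and readAll to LRANGE over the whole list.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Shared, append-only log store keyed by workflow execution id.
// In production this would be backed by ElastiCache (Redis), so every
// EC2 instance appends to the same list instead of its own StringBuffer.
interface SharedLogStore {
    void append(String workflowId, String line);
    List<String> readAll(String workflowId);
    void clear(String workflowId);
}

// In-memory stand-in for the Redis-backed store, for demonstration only.
class InMemoryLogStore implements SharedLogStore {
    private final Map<String, List<String>> lists = new ConcurrentHashMap<>();

    @Override
    public void append(String workflowId, String line) {
        // Redis equivalent: jedis.rpush(workflowId, line)
        lists.computeIfAbsent(workflowId,
                k -> Collections.synchronizedList(new ArrayList<>())).add(line);
    }

    @Override
    public List<String> readAll(String workflowId) {
        // Redis equivalent: jedis.lrange(workflowId, 0, -1)
        return new ArrayList<>(lists.getOrDefault(workflowId, new ArrayList<>()));
    }

    @Override
    public void clear(String workflowId) {
        // Redis equivalent: jedis.del(workflowId)
        lists.remove(workflowId);
    }
}
```

At the end of the workflow, a single activity reads the full list for that execution id, writes the lines to a file, uploads it to S3, and deletes the key, so it no longer matters which EC2 instance processed which records.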