python, logging, aws-glue, amazon-cloudwatchlogs

AWS Glue logs are corrupted


I have an issue with the logs from my AWS Glue jobs being corrupted.

The logs are emitted from Glue Python scripts and go to CloudWatch. I recently started posting structured JSON logs with AWS EMF and noticed that the logs were being corrupted. Some log statements in excess of 1024 bytes are split into multiple CloudWatch log entries, while at other times smaller log statements are grouped into a single CloudWatch log entry. This obviously breaks the JSON messages and confuses other log analysis queries.

The issue seemed to begin when many log messages were emitted in quick succession. Near the beginning of one log I saw an entry with a large (~6000 byte) JSON payload that was unsplit, before everything devolved into 1024-byte chunks once a lot of lines were printed in quick succession.

The jobs are Spark 3.3 jobs written in Python 3.9 and running on Glue 4.0. The affected logs are the ones produced by the Python script, not the ones produced by Spark.

I did some testing and wrote a script that does nothing except initialize Glue and then output log messages using either Python's print function, a Python logger, or the logger that comes from calling getLogger() on a GlueContext. My conclusion is that either the CloudWatch agent on the host where the Python is executed is misconfigured, or some intermediary process between the running script and the CloudWatch agent is.

The script I used for testing is the following: https://gist.github.com/mhvelplund/961c8868fbfdf857dcd0a623a353870b
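For context, the test script boils down to something like the sketch below (the full version is in the gist above). The --TYPE and --DELAY parameter names match the flags referenced in this post; the loop size, message padding, and exact GlueContext wiring are illustrative and may differ from the gist.

    import logging
    import sys
    import time

    from awsglue.context import GlueContext
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    # TYPE selects the output mechanism; DELAY is a per-message pause in milliseconds.
    opts = getResolvedOptions(sys.argv, ["TYPE", "DELAY"])
    delay_seconds = int(opts["DELAY"]) / 1000.0

    glue_context = GlueContext(SparkContext.getOrCreate())

    logging.basicConfig(level=logging.INFO)
    python_logger = logging.getLogger(__name__)
    glue_logger = glue_context.get_logger()

    for i in range(200):
        message = f"test message {i:04d} " + "x" * 200  # pad to vary payload size
        if delay_seconds:
            time.sleep(delay_seconds)
        if opts["TYPE"] == "print":
            print(message)
        elif opts["TYPE"] == "log":
            python_logger.info(message)
        else:  # "glue"
            glue_logger.info(message)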

Running the script with continuous logging enabled and using the Glue logger (--TYPE=glue), everything works fine, but the logs go into the same log stream as the Spark logs, which contain a lot of Java noise.

Running with a Python logger (--TYPE=log) or with print statements (--TYPE=print) is where problems arise. Log lines are grouped or split, seemingly arbitrarily, and not necessarily in the same manner from one run to another, which indicates that the issue is timing-related. Using a version of the script without the output delegation, just raw print statements, I was able to get every printed line in a single CloudWatch message.

[Screenshot: CloudWatch log entries with no delay]

After inserting a slight delay of as little as 100 ms (--DELAY=100) before each output statement, the grouping and splitting issues disappeared.

[Screenshot: CloudWatch log entries with a 100 ms delay]

Unfortunately, using the Glue logger is not a good solution, since my real scripts contain legacy code that uses the Python loggers and raw print statements, and changing that would be painful.

Has anyone encountered this problem with CloudWatch or Glue before, and if so, how did you solve it without monkeypatching sys.stdout? 😉


Solution

  • So, I posted this question here and to AWS support. The latter acknowledged that there was a problem but did not provide a work-around.

    Since the logger provided by the GlueContext is actually a wrapper for a Java logger, it doesn't work with the normal Python logging semantics. It's also spammed with log messages from Spark.

    My eventual solution was to use Watchtower to create a normal logger, and to hack the aws-embedded-metrics-python package to use that logger.

    """Utility methods an classes for handling logging to Watchtower."""
    
    import logging
    import sys
    
    import watchtower
    from aws_embedded_metrics.environment.environment_detector import EnvironmentCache
    from aws_embedded_metrics.environment.local_environment import LocalEnvironment
    from aws_embedded_metrics.logger.metrics_context import MetricsContext
    from aws_embedded_metrics.serializers import Serializer
    from aws_embedded_metrics.serializers.log_serializer import LogSerializer
    from aws_embedded_metrics.sinks import Sink
    from awsglue.utils import getResolvedOptions
    
    logger = logging.getLogger(__name__)
    
    
    class LoggerSink(Sink):
        """A sink that logs serialized metrics to the logger."""
    
        def __init__(self, serializer: Serializer = LogSerializer()):
            self.serializer = serializer
    
        def accept(self, context: MetricsContext) -> None:
            for serialized_content in self.serializer.serialize(context):
                if serialized_content:
                    logger.info(serialized_content)
    
        @staticmethod
        def name() -> str:
            return "LoggerSink"
    
    
    opts = getResolvedOptions(sys.argv, [])
    
    # Write to the same log group and stream that Glue itself uses for the job's output,
    # so everything ends up in one place.
    watchtower_handler = watchtower.CloudWatchLogHandler(
        log_group_name="/aws-glue/jobs/output",
        log_stream_name=opts["JOB_RUN_ID"],
    )
    
    watchtower_handler.setFormatter(
        logging.Formatter(
            fmt="%(asctime)s,%(msecs)d %(levelname)s [%(filename)s:%(lineno)d#%(funcName)s] %(message)s",
            datefmt="%Y-%m-%d:%H:%M:%S",
        )
    )
    
    logging.basicConfig(
        handlers=[watchtower_handler],
        datefmt="%Y-%m-%d:%H:%M:%S",
        level=logging.INFO,
        force=True,  # Nuke any existing loggers
    )
    
    # Point aws-embedded-metrics at a local environment whose sink writes through
    # the Python logger (and therefore Watchtower) instead of stdout.
    logger_env = LocalEnvironment()
    logger_env.sink = LoggerSink()
    EnvironmentCache.environment = logger_env
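
    With that module in place, the rest of the job can keep using ordinary logging and EMF metrics. A hypothetical usage sketch, assuming the module above is importable as log_setup (the module name and the process_batch function are illustrative, not part of my actual job):

    import logging

    from aws_embedded_metrics import metric_scope

    import log_setup  # noqa: F401  # applies the Watchtower handler and EMF sink override

    logger = logging.getLogger(__name__)


    @metric_scope
    def process_batch(records, metrics):
        # The EMF payload is serialized by LoggerSink and emitted through Watchtower,
        # so it reaches CloudWatch as one intact JSON document.
        metrics.put_metric("RecordsProcessed", len(records), "Count")
        logger.info("Processed %d records", len(records))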
    

    I write to the same log stream that Glue would use. The end result is that the stuff I care about is routed through the Watchtower logger, while the stuff that legacy code just dumps on standard out is handled by Glue's default capture of the stdout and stderr streams.

    That means that at least the properly logged data and the metrics won't be messed up. Not the best solution.