Tags: node.js, reactjs, yarnpkg, bitbucket-pipelines

yarn build - error Command failed with exit code 137 - Bitbucket Pipelines out of memory - using max memory 8192 MB


Our React app is configured to build and deploy using the Create React App (CRA) scripts and Bitbucket Pipelines.

Most of our builds are failing when running yarn build, with the following error:

error Command failed with exit code 137.

Exit code 137 means the process was killed with SIGKILL (128 + 9), which in a CI container almost always indicates it ran out of memory.

We tried setting GENERATE_SOURCEMAP=false as a deployment environment variable, per the CRA advanced configuration docs (https://create-react-app.dev/docs/advanced-configuration/), but that did not fix the issue.
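
For reference, the same flag can also be set inline on the build command instead of as a deployment variable (a minimal sketch; GENERATE_SOURCEMAP itself is documented in the CRA link above):

GENERATE_SOURCEMAP=false yarn build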

We also tried raising the maximum heap size available to the Node process for the build step by running the following:

node --max-old-space-size=8192 scripts/build.js
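
For context, here is a sketch of how such a command might be wired into package.json so that yarn build picks it up; the scripts/build.js path comes from the command above, and the rest of the file is assumed:

{
  "scripts": {
    "build": "node --max-old-space-size=8192 scripts/build.js"
  }
}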

Increasing the memory limit did not resolve the issue.

This is blocking our development, and we aren't sure how to resolve it.

We could move to a different CI/CD service, but that is considerably more work than we'd like to take on.

Are there other solutions that could solve this problem?

Below is our bitbucket-pipelines.yml file:

image: node:14

definitions:
  steps:
    - step: &test
        name: Test
        script:
          - yarn
          - yarn test --detectOpenHandles --forceExit --changedSince $BITBUCKET_BRANCH
    - step: &build
        name: Build
        size: 2x
        script:
          - yarn
          - NODE_ENV=${BUILD_ENV} yarn build
        artifacts:
          - build/**
    - step: &deploy_s3
        name: Deploy to S3
        script:
          - pipe: atlassian/aws-s3-deploy:0.3.8
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              S3_BUCKET: $S3_BUCKET
              LOCAL_PATH: "./build/"
              ACL: 'public-read'
    - step: &auto_merge_down
        name: Auto Merge Down
        script:
          - ./autoMerge.sh stage || true
          - ./autoMerge.sh dev || true
  caches:
    jest: /tmp/jest_*
    node-dev: ./node_modules
    node-stage: ./node_modules
    node-release: ./node_modules
    node-prod: ./node_modules


pipelines:
  branches:
    dev:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-dev
                  - jest
                <<: *test
            - step:
                caches:
                  - node-dev
                <<: *build
                deployment: Dev Env
      - step:
          <<: *deploy_s3
          deployment: Dev
    stage:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-stage
                  - jest
                <<: *test
            - step:
                caches:
                  - node-stage
                <<: *build
                deployment: Staging Env
      - step:
          <<: *deploy_s3
          deployment: Staging
    prod:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-prod
                  - jest
                <<: *test
            - step:
                caches:
                  - node-prod
                <<: *build
                deployment: Production Env
      - parallel:
          steps:
            - step:
                <<: *deploy_s3
                deployment: Production
            - step:
                <<: *auto_merge_down

Solution

  • It turns out the terser-webpack-plugin package (https://www.npmjs.com/package/terser-webpack-plugin) spawns jest-worker processes during yarn build, and by default it uses the maximum number of workers, which is what caused the out-of-memory error.

    Removing that plugin from our package.json fixed the build; the jest workers are no longer spawned during yarn build.

    You can also set parallel to false in the TerserWebpackPlugin configuration so that it does not spawn workers at all; see the sketch below.

    This default seems too aggressive for memory-constrained CI containers and is likely pushing other pipelines out of memory as well.
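
    A minimal sketch of that option, assuming an ejected or otherwise customized webpack config (CRA does not expose webpack options without ejecting or a wrapper tool; the file below is illustrative, not our exact setup):

    const TerserPlugin = require('terser-webpack-plugin');

    module.exports = {
      mode: 'production',
      optimization: {
        minimize: true,
        minimizer: [
          new TerserPlugin({
            // parallel defaults to true (os.cpus().length - 1 jest-worker
            // processes); false minifies in-process, trading build speed
            // for a much smaller memory footprint.
            parallel: false,
          }),
        ],
      },
    };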