Tags: github, github-actions, super-linter

How to configure the dialect in SQLFluff for the Super-linter action in a GitHub Actions YML file?


I have a bunch of SQL files in the main branch of my private GitHub repo. I would like to lint all of them before copying them to the DEV branch. For this I am using the Super-linter action from the Actions Marketplace; my workflow file currently looks like the one below. The workflow triggers when an issue is created, and if all linting passes (not shown in the code below) I would like to close the issue. However, I am receiving the error below.

How do I configure the dialect for Snowflake? I have Snowflake queries that I would like to lint and copy to my DEV branch, and afterwards close the issue. I tried setting SQL_LINTER_DIALECT: snowflake as an environment variable, but unfortunately it did not work.

Main.yml:

name: Linting-Closing the issue
on:
  issues:
    types: [opened]
jobs:
  close-issue:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      contents: read
      packages: read
      statuses: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v4.2.2
        with:
          fetch-depth: 0
      - name: Super-linter
        uses: super-linter/super-linter@v7.2.1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          VALIDATE_CHECKOV: false
          VALIDATE_YAML_PRETTIER: false
          VALIDATE_MARKDOWN_PRETTIER: false
          SQL_LINTER_DIALECT: snowflake
      - name: Close
        env: 
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          REPO: ${{ github.repository }}
          ISSUE: ${{ github.event.issue.number }}
        run: >
          gh issue close "$ISSUE"
          --repo "$REPO"
          --comment "Autoclosing issue $ISSUE"

Error:

"User Error: No dialect was specified. You must configure a dialect or specify one on the command line using --dialect after the command. Available dialects:
  ansi, athena, bigquery, clickhouse, databricks, db2, duckdb, exasol, greenplum, hive, mariadb, materialize, mysql, oracle, postgres, redshift, snowflake, soql, sparksql, sqlite, teradata, trino, tsql, vertica " 

Solution

  • SQL_LINTER_DIALECT is not a valid Super-linter configuration option.

    To configure the dialect, use SQLFluff's own configuration file, .sqlfluff:

    .sqlfluff

    [sqlfluff]
    dialect = snowflake
    

    You may add it to your project root, to individual subprojects/directories, or to the .github/linters directory, depending on your use case.

    See the template provided by super-linter/super-linter.

    With super-linter/super-linter, the SQLFLUFF_CONFIG_FILE environment variable can be used to point Super-linter at the configuration file if it is not picked up by default.
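
    Putting it together, the Super-linter step from your workflow could look like the sketch below. This assumes you placed the .sqlfluff file in .github/linters; adjust the path and value to match where you actually keep the file.

    ```yaml
      - name: Super-linter
        uses: super-linter/super-linter@v7.2.1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          VALIDATE_CHECKOV: false
          VALIDATE_YAML_PRETTIER: false
          VALIDATE_MARKDOWN_PRETTIER: false
          # No SQL_LINTER_DIALECT here: the dialect lives in .sqlfluff,
          # which is assumed to sit in .github/linters alongside the
          # other linter configuration files.
          SQLFLUFF_CONFIG_FILE: .sqlfluff
    ```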