scala, amazon-web-services, apache-spark, amazon-emr, amazon-deequ

Scala Spark: how to add list of generated methods to a function


I am using Amazon Deequ to generate constraint suggestions for my data. It returns the following list of check methods, which I want to pass to a later function instead of coding each one individually.

var rows = suggestionDataFrame.select("_3").collect().map(_.getString(0)).mkString(" ")
// var rows = suggestionDataFrame.select("_3").collect.map { row => row.toString() .mkString("")} 

The rows variable contains the following list of methods:

.hasCompleteness("Id", _ >= 0.95, Some("It should be above 0.95!"))
.isNonNegative("Id")
.isComplete("LastModifiedDate")
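Note that rows is a single String of Scala source, not executable code; an illustrative value, reconstructed from the output above:

    // rows is plain text; it cannot be spliced into a method chain
    val rows: String =
      """.hasCompleteness("Id", _ >= 0.95, Some("It should be above 0.95!")) .isNonNegative("Id") .isComplete("LastModifiedDate")"""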

Further on, in the next function, I want to pass these values like this:

 val verificationResult: VerificationResult = {
      VerificationSuite()
        .onData(datasource)
        .addCheck(
          Check(CheckLevel.Error, "Data Validation Check")
             // this is how I want to pass them
            .hasCompleteness("Id", _ >= 0.95, Some("It should be above 0.95!"))
            .isNonNegative("Id")
            .isComplete("LastModifiedDate"))
          .run()
    }

When I pass rows directly, as below, it throws an error:

 val verificationResult: VerificationResult = {
      VerificationSuite()
        .onData(datasource)
        .addCheck(
          Check(CheckLevel.Error, "Data Validation Check")
           rows).run() //throwing error here
    }

Is there any way to do this?

Reference: https://aws.amazon.com/blogs/big-data/test-data-quality-at-scale-with-deequ/
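For background on why the direct splice cannot compile: Check accepts Constraint objects (or its own builder methods), never a String of source code. If the checks were available as functions rather than strings, a dynamic list could be folded onto a base check. A minimal sketch of that pattern, with the adders hand-written purely for illustration:

    import com.amazon.deequ.checks.{ Check, CheckLevel }

    // Check is an immutable builder: every constraint method returns a new Check,
    // so a Seq of Check => Check functions can be folded onto a base check.
    val constraintAdders: Seq[Check => Check] = Seq(
      check => check.hasCompleteness("Id", _ >= 0.95, Some("It should be above 0.95!")),
      check => check.isNonNegative("Id"),
      check => check.isComplete("LastModifiedDate")
    )

    val combinedCheck = constraintAdders.foldLeft(
      Check(CheckLevel.Error, "Data Validation Check")
    ) { (check, addConstraint) => addConstraint(check) }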

This is what I have tried so far:

package com.myorg.dataquality

import org.apache.spark.sql.SparkSession
import com.amazon.deequ.suggestions.{ ConstraintSuggestionRunner, Rules }
import com.amazon.deequ.{ VerificationSuite, VerificationResult }
import com.amazon.deequ.VerificationResult.checkResultsAsDataFrame
import com.amazon.deequ.checks.{ Check, CheckLevel }

object DataVerification2 {
  def main(args: Array[String]) {

    val spark = SparkSession.builder.appName("Sample")
      .master("local")
      .getOrCreate()

    val datasource = spark.read.format("jdbc")
      .option("url", "jdbc:sqlserver://host:1433;database=mydb")
      .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
      .option("dbtable", "dbo.table")
      .option("user", "myuser")
      .option("password", "password")
      .option("useSSL", "false")
      .load()
    datasource.printSchema()

    val datadestination = spark.read.format("jdbc")
      .option("url", "jdbc:sqlserver://host:1433;database=mydb")
      .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
      .option("dbtable", "dbo.table")
      .option("user", "myuser")
      .option("password", "password")
      .option("useSSL", "false")
      .load()
    // datadestination.printSchema()

    import spark.implicits._
    //Compute constraint suggestions for us on the data
    val suggestionResult = {
      ConstraintSuggestionRunner()
        // data to suggest constraints for
        .onData(datasource)
        // default set of rules for constraint suggestion
        .addConstraintRules(Rules.DEFAULT)
        // run data profiling and constraint suggestion
        .run()
    }

    // We can now investigate the constraints that Deequ suggested.
    val suggestionDataFrame = suggestionResult.constraintSuggestions.flatMap {
      case (column, suggestions) =>
        suggestions.map { constraint =>
          (column, constraint.description, constraint.codeForConstraint)
        }
    }.toSeq.toDS()

    suggestionDataFrame.toJSON.collect.foreach(println)

    var rows = suggestionDataFrame.select("_3").collect().map(_.getString(0)).mkString(" ")
    // var rows = suggestionDataFrame.select("_3").collect.map { row => row.toString() .mkString("")}
    // var rows = suggestionDataFrame.select("_3").collect().map(t => println(t))
    // var rows = suggestionDataFrame.select("_3").collect.map(_.toSeq)

    var checks = Array[Check]()
    var checkLevel = "Check(CheckLevel.Error)"
    var finalcheck = checkLevel.concat(rows)

    checks :+ finalcheck // note: :+ returns a new array, so checks itself stays empty

    // I am expecting a validation result, but this returns an empty result
    val verificationResult: VerificationResult = {
      VerificationSuite().onData(datadestination).addChecks(checks).run()

    }

    val resultDataFrame = checkResultsAsDataFrame(spark, verificationResult)
    resultDataFrame.show()
    resultDataFrame.filter(resultDataFrame("constraint_status") === "Failure").toJSON.collect.foreach(println)

  }
}

This returns an empty result:

+-----+-----------+------------+----------+-----------------+------------------+
|check|check_level|check_status|constraint|constraint_status|constraint_message|
+-----+-----------+------------+----------+-----------------+------------------+
+-----+-----------+------------+----------+-----------------+------------------+

It looks like I am either failing to actually add the element to the array or implementing this in the wrong way, and I am looking for suggestions.

Update 1:

I have tried the code below; however, it throws an error:

val constraints = suggestionResult.constraintSuggestions.flatMap {
  case (column, suggestions) =>
    suggestions.map { constraint =>
      constraint.codeForConstraint
    }
}

val generatedCheck = Check(CheckLevel.Warning, "generated constraints", constraints)

val verificationResult = VerificationSuite()
  .onData(datadestination)
  .addChecks(generatedCheck)
  .run()

Error:

type mismatch; found : scala.collection.immutable.Iterable[String] required: Seq[com.amazon.deequ.constraints.Constraint]

Update 2:

var rows = suggestionDataFrame.select("_3").collect.map(_.toSeq)
var checks: Seq[Check] = Seq()
checks :+ rows

val generatedCheck = Check(CheckLevel.Warning, "generated constraints", checks)

val verificationResult = VerificationSuite()
  .onData(datadestination)
  .addChecks(generatedCheck)
  .run()

Error:

type mismatch; found : Seq[com.amazon.deequ.checks.Check] required: Seq[com.amazon.deequ.constraints.Constraint]


Solution

  • If I understand your question correctly, you want to add the suggested constraints to your verification run. Here is a link to a code snippet inside Deequ that does something similar:

    https://github.com/awslabs/deequ/blob/master/src/main/scala/com/amazon/deequ/suggestions/ConstraintSuggestionRunner.scala#L294

    I hope this can serve as a template for you on how to proceed. You need to collect the constraints from the constraint suggestions (not the dataframe) and create a check based on them.
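    Concretely, each suggestion exposes both codeForConstraint (the Scala source as a String, which is why the attempt above produced an Iterable[String]) and constraint (the Constraint object a Check can consume). A quick way to see both:

        suggestionResult.constraintSuggestions.foreach { case (column, suggestions) =>
          suggestions.foreach { suggestion =>
            println(s"$column: ${suggestion.codeForConstraint}") // display-only String
            // suggestion.constraint is the com.amazon.deequ.constraints.Constraint to build checks from
          }
        }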

    Update 1:

    We actually provide the constraint objects with the suggestion result; if you replace the lines above as follows, your code should work:

        val allConstraints = suggestionResult.constraintSuggestions
          .flatMap { case (_, suggestions) => suggestions.map { _.constraint } }
          .toSeq

        val generatedCheck = Check(CheckLevel.Error, "generated constraints", allConstraints)

        val verificationResult = VerificationSuite()
          .onData(datasource)
          .addChecks(Seq(generatedCheck))
          .run()
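
    The result can then be rendered and filtered exactly as in the question:

        val resultDataFrame = checkResultsAsDataFrame(spark, verificationResult)
        resultDataFrame.show()
        resultDataFrame
          .filter(resultDataFrame("constraint_status") === "Failure")
          .toJSON.collect.foreach(println)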