
com.holdenkarau.spark.testing

DataFrameSuiteBaseLike

trait DataFrameSuiteBaseLike extends SparkContextProvider with TestSuiteLike with Serializable


Abstract Value Members

  1. abstract def assert[U](message: String, expected: U, actual: U)(implicit CT: ClassTag[U]): Unit
    Definition Classes
    TestSuiteLike
  2. abstract def assert[U](expected: U, actual: U)(implicit CT: ClassTag[U]): Unit
    Definition Classes
    TestSuiteLike
  3. abstract def assertEmpty[U](arr: Array[U])(implicit CT: ClassTag[U]): Unit
    Definition Classes
    TestSuiteLike
  4. abstract def assertTrue(expected: Boolean): Unit
    Definition Classes
    TestSuiteLike
  5. abstract def fail(message: String): Unit
    Definition Classes
    TestSuiteLike
  6. abstract def sc: SparkContext
    Definition Classes
    SparkContextProvider
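These abstract members are the test-framework hooks a concrete suite must supply; in practice they are provided by mixing in a ScalaTest- or JUnit-backed implementation from the library. As a rough, framework-free sketch of what satisfying this contract involves (the trait name below is illustrative, not part of the library):

```scala
import scala.reflect.ClassTag

// Illustrative stand-in for the TestSuiteLike contract. The real trait
// lives in com.holdenkarau.spark.testing and is normally satisfied by a
// ScalaTest- or JUnit-backed mixin rather than hand-rolled like this.
trait MiniTestSuite {
  def fail(message: String): Unit = throw new AssertionError(message)

  def assertTrue(expected: Boolean): Unit =
    if (!expected) fail("expected condition to be true")

  def assert[U](expected: U, actual: U)(implicit CT: ClassTag[U]): Unit =
    if (expected != actual) fail(s"expected $expected but got $actual")

  def assert[U](message: String, expected: U, actual: U)(implicit CT: ClassTag[U]): Unit =
    if (expected != actual) fail(message)

  def assertEmpty[U](arr: Array[U])(implicit CT: ClassTag[U]): Unit =
    if (arr.nonEmpty) fail(s"expected empty array, found ${arr.length} element(s)")
}
```

The `sc: SparkContext` member is supplied separately by `SparkContextProvider`.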

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def appID: String
    Definition Classes
    SparkContextProvider
  5. def appName: String
    Definition Classes
    SparkContextProvider
  6. def approxEquals(r1: Row, r2: Row, tol: Double, tolTimestamp: Duration): Boolean
  7. def approxEquals(r1: Row, r2: Row, tolTimestamp: Duration): Boolean
  8. def approxEquals(r1: Row, r2: Row, tol: Double): Boolean
  9. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  10. def assertDataFrameApproximateEquals(expected: DataFrame, result: DataFrame, tol: Double, tolTimestamp: Duration, customShow: (DataFrame) ⇒ Unit = _.show()): Unit

    Compares two DataFrames for equality, checking that the schemas are the same. When comparing inexact fields, uses tol and tolTimestamp.

    tol

    max acceptable numeric tolerance, should be less than 1.

    tolTimestamp

    max acceptable timestamp tolerance.

    customShow

    unit function to customize the show method when DataFrames are not equal, e.g. df.show(false) or df.toJSON.show(false).

  11. def assertDataFrameDataEquals(expected: DataFrame, result: DataFrame): Unit

    Compares two DataFrames for equality without caring about the order of rows, by finding elements in one DataFrame that are not in the other. The resulting DataFrame should be empty, implying the two DataFrames have the same elements. Does not compare the schema.

  12. def assertDataFrameEquals(expected: DataFrame, result: DataFrame, customShow: (DataFrame) ⇒ Unit = _.show()): Unit

    Compares two DataFrames for equality: checks the schema and, if that matches, checks whether the rows are equal.

    customShow

    unit function to customize the show method when DataFrames are not equal, e.g. df.show(false) or df.toJSON.show(false).

  13. def assertDataFrameNoOrderEquals(expected: DataFrame, result: DataFrame): Unit

    Compares two DataFrames for equality without caring about the order of rows, by finding elements in one DataFrame that are not in the other. The resulting DataFrame should be empty, implying the two DataFrames have the same elements. Also verifies that the schema is identical.

  14. def assertSchemasEqual(expected: StructType, result: StructType): Unit

    Compares two schemas for equality, ignoring autoGeneratedAlias magic.

  15. def assertSmallDataFrameDataEquals(expected: DataFrame, result: DataFrame): Unit

    Compares two DataFrames for equality without caring about the order of rows, by finding elements in one DataFrame that are not in the other. Similar to assertDataFrameDataEquals, but for small DataFrames that can be collected in memory for the comparison.

  16. def builder(): Builder

    Constructs a configuration for Hive or Iceberg, with the metastore located in a temp directory.

  17. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @IntrinsicCandidate()
  18. def conf: SparkConf
    Definition Classes
    SparkContextProvider
  19. def enableHiveSupport: Boolean
    Attributes
    protected
  20. def enableIcebergSupport: Boolean
    Attributes
    protected
  21. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  22. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  23. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @IntrinsicCandidate()
  24. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @IntrinsicCandidate()
  25. val icebergWarehouse: String
  26. implicit def impSqlContext: SQLContext
    Attributes
    protected
  27. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  28. lazy val localMetastorePath: String
  29. lazy val localWarehousePath: String
  30. val maxUnequalRowsToShow: Int
  31. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  32. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @IntrinsicCandidate()
  33. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @IntrinsicCandidate()
  34. def setup(sc: SparkContext): Unit

    Setup work to be called when creating a new SparkContext. The default implementation currently sets a checkpoint directory.

    This _should_ be called by the context provider automatically.

    Definition Classes
    SparkContextProvider
  35. lazy val spark: SparkSession
    Annotations
    @transient()
  36. def sqlBeforeAllTestCases(): Unit
  37. lazy val sqlContext: SQLContext
    Annotations
    @transient()
  38. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  39. lazy val tempDir: File
  40. def toString(): String
    Definition Classes
    AnyRef → Any
  41. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  42. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  43. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
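Several of the assertions above hinge on two ideas: per-field approximate equality within tol/tolTimestamp, and order-insensitive data comparison in which each DataFrame minus the other must come out empty. Both can be sketched in plain Scala (illustrative helpers only; the library's actual row-level implementation differs):

```scala
import java.sql.Timestamp
import java.time.Duration

// Numeric fields match when within `tol`, as documented for approxEquals /
// assertDataFrameApproximateEquals (illustrative, not the library code).
def numericWithin(tol: Double)(a: Double, b: Double): Boolean =
  math.abs(a - b) <= tol

// Timestamp fields match when within `tolTimestamp`.
def timestampWithin(tolTimestamp: Duration)(a: Timestamp, b: Timestamp): Boolean =
  math.abs(a.getTime - b.getTime) <= tolTimestamp.toMillis

// Order-insensitive equality in the spirit of assertDataFrameDataEquals:
// compare as multisets, so duplicates count but row order does not.
def unorderedEqual[A](expected: Seq[A], result: Seq[A]): Boolean = {
  def counts(xs: Seq[A]): Map[A, Int] =
    xs.groupBy(identity).map { case (k, v) => (k, v.size) }
  counts(expected) == counts(result)
}
```

In the real assertions these checks run per row and per column against the DataFrame schema; the `maxUnequalRowsToShow` member above suggests the library also bounds how many mismatching rows it reports.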

Deprecated Value Members

  1. def assertDataFrameApproximateEquals(expected: DataFrame, result: DataFrame, tol: Double): Unit

    Compares two DataFrames for equality, checking that the schemas are the same. When comparing inexact fields, uses tol.

    tol

    max acceptable tolerance for both numeric fields (between 0 and 1) and timestamp fields (millis).

    Annotations
    @deprecated
    Deprecated

    (Since version 1.5.0) Use assertDataFrameApproximateEquals with timestamp tolerance

  2. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated

Inherited from Serializable

Inherited from Serializable

Inherited from TestSuiteLike

Inherited from SparkContextProvider

Inherited from AnyRef

Inherited from Any
