
Scala Spark: flatMap does not accept a function returning Option [duplicate]

  •  1
  • lodo  · 6 years ago

    I am using Spark 2.2 and running into trouble when trying to call spark.createDataset on a Seq of Map.

    The code and output from a Spark shell session follow:

    // createDataSet on Seq[T] where T = Int works
    scala> spark.createDataset(Seq(1, 2, 3)).collect
    res0: Array[Int] = Array(1, 2, 3)
    
    scala> spark.createDataset(Seq(Map(1 -> 2))).collect
    <console>:24: error: Unable to find encoder for type stored in a Dataset.  
    Primitive types (Int, String, etc) and Product types (case classes) are 
    supported by importing spark.implicits._
    Support for serializing other types will be added in future releases.
           spark.createDataset(Seq(Map(1 -> 2))).collect
                              ^
    
    // createDataSet on a custom case class containing Map works
    scala> case class MapHolder(m: Map[Int, Int])
    defined class MapHolder
    
    scala> spark.createDataset(Seq(MapHolder(Map(1 -> 2)))).collect
    res2: Array[MapHolder] = Array(MapHolder(Map(1 -> 2)))
    

    I have tried import spark.implicits._, although I am fairly sure it is implicitly imported by the Spark shell session.

    Is this a case that is not covered by the current encoders?

        1
  •  7
  •   zero323    7 years ago

    It is not covered in 2.2, but it is easy to work around. You can add the required Encoder using ExpressionEncoder, either explicitly:

    import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder  
    import org.apache.spark.sql.Encoder
    
    spark
      .createDataset(Seq(Map(1 -> 2)))(ExpressionEncoder(): Encoder[Map[Int, Int]])
    

    or implicitly:

    implicit def mapIntIntEncoder: Encoder[Map[Int, Int]] = ExpressionEncoder()
    spark.createDataset(Seq(Map(1 -> 2)))
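
    The implicit variant works through ordinary Scala type-class resolution: createDataset demands an implicit Encoder[T], and any matching instance in scope satisfies it. A minimal sketch of the same mechanism with a toy type class (the Show trait and describe method below are hypothetical illustrations, not Spark's API):

```scala
// Toy type class mirroring how createDataset locates an Encoder:
// the call compiles only when an implicit instance is in scope.
trait Show[A] { def show(a: A): String }

def describe[A](a: A)(implicit ev: Show[A]): String = ev.show(a)

// Without this implicit, describe(Map(1 -> 2)) would fail to compile,
// analogous to "Unable to find encoder for type stored in a Dataset".
implicit val mapIntIntShow: Show[Map[Int, Int]] =
  m => m.map { case (k, v) => s"$k -> $v" }.mkString("Map(", ", ", ")")

val rendered = describe(Map(1 -> 2))
```

    Declaring mapIntIntEncoder as an implicit def plays the same role as mapIntIntShow here: the compiler fills in the Encoder argument of createDataset automatically.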
    
        2
  •  2
  •   Jacek Laskowski    7 years ago

    FYI, the expression above works only in Spark 2.3 (as of this commit, if I am not mistaken).

    scala> spark.version
    res0: String = 2.3.0
    
    scala> spark.createDataset(Seq(Map(1 -> 2))).collect
    res1: Array[scala.collection.immutable.Map[Int,Int]] = Array(Map(1 -> 2))
    

    I think that is because newMapEncoder is now part of spark.implicits.

    scala> :implicits
    ...
      implicit def newMapEncoder[T <: scala.collection.Map[_, _]](implicit evidence$3: reflect.runtime.universe.TypeTag[T]): org.apache.spark.sql.Encoder[T]
    

    You can "disable" the implicit using the following trick, and then try the expression above (which will result in an error):

    trait ThatWasABadIdea
    implicit def newMapEncoder(ack: ThatWasABadIdea) = ack
    
    scala> spark.createDataset(Seq(Map(1 -> 2))).collect
    <console>:26: error: Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases.
           spark.createDataset(Seq(Map(1 -> 2))).collect
                              ^
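
    The trick works through name shadowing: a locally defined newMapEncoder hides the one imported from spark.implicits, and an implicit that is no longer accessible by its simple name drops out of implicit search. A self-contained sketch of the same effect (the Render type class and object names are hypothetical, not Spark's):

```scala
// An implicit imported into scope is eligible for implicit search...
trait Render[A] { def apply(a: A): String }

object Instances {
  implicit val intRender: Render[Int] = i => s"int $i"
}

def render[A](a: A)(implicit r: Render[A]): String = r(a)

object WithImport {
  import Instances._
  val ok = render(42) // resolves to Instances.intRender
}

object Shadowed {
  import Instances._
  // ...but a same-named local definition shadows the import, so the
  // implicit is no longer accessible by its simple name:
  trait ThatWasABadIdea
  def intRender(ack: ThatWasABadIdea) = ack
  // render(42) here would not compile: no implicit Render[Int] found
}

val check = WithImport.ok
```

    Inside Shadowed, the imported intRender can no longer be referred to by its simple name, which is exactly the condition an imported implicit must satisfy to participate in resolution; that is why the REPL definition above makes createDataset fail again.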