
java.sql.SQLException: other error: request outdated when connecting from Spark to TiDB using the mysql-connector-java 5.1.6 connector

  •  0
  • Arnab Biswas  ·  6 years ago

    I am getting the following error while connecting from Spark to TiDB using the mysql-connector-java 5.1.6 connector.

    Spark then breaks the read down into (number of partitions) queries by splitting the range between the lower and upper bound of the partition column into equally sized chunks.
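
    For concreteness, a partitioned JDBC read of this kind typically looks like the sketch below; the table name events, the partition column id, the bounds and the connection details are placeholders for illustration, not values taken from the question.

    import org.apache.spark.sql.SparkSession

    object TidbJdbcRead {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("tidb-jdbc-read").getOrCreate()

        // TiDB speaks the MySQL protocol, so the read goes through the MySQL JDBC driver.
        val df = spark.read
          .format("jdbc")
          .option("driver", "com.mysql.jdbc.Driver")       // mysql-connector-java 5.1.x
          .option("url", "jdbc:mysql://tidb-host:4000/mydb")
          .option("dbtable", "events")
          .option("user", "root")
          .option("password", "")
          .option("partitionColumn", "id")                 // Spark cuts [lowerBound, upperBound]
          .option("lowerBound", "1")                       // into numPartitions equal ranges
          .option("upperBound", "10000000")                // and issues one SELECT per range
          .option("numPartitions", "16")
          .load()

        println(df.count())
        spark.stop()
      }
    }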

    java.sql.SQLException: other error: request outdated.
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1055)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3536)
    at com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:1551)
    at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1407)
    at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2861)
    at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:474)
    at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2554)
    at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1755)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2165)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2648)
    at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2086)
    at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2237)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:301)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:337)
    at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:335)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1092)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1083)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1018)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1083)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:809)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
    
    1 Answer  |  6 years ago
  •  1
  •   birdstorm    6 years ago

    other error: request outdated. is an error thrown by TiKV, indicating that the query exceeded the timeout limit end-point-request-max-handle-duration.

    Since Spark receives this error through JDBC, it means the requests are too heavy for TiDB, so some of them wait too long before being handled. This is mainly because Spark splits the read into one request per partition, which puts a heavy workload on TiDB; using parallel connections makes it even worse.
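
    To make that concrete, here is a simplified illustration (not Spark's exact implementation) of how the column bounds are cut into per-partition WHERE clauses; each predicate becomes its own SELECT, and all of them can hit TiDB at the same time when the stage starts.

    // Simplified sketch of Spark's range partitioning for a JDBC read.
    def partitionPredicates(column: String, lower: Long, upper: Long, numPartitions: Int): Seq[String] = {
      val stride = (upper - lower) / numPartitions
      (0 until numPartitions).map { i =>
        val lo = lower + i * stride
        val hi = lo + stride
        if (i == 0)                      s"$column < $hi OR $column IS NULL"  // open-ended first range
        else if (i == numPartitions - 1) s"$column >= $lo"                    // open-ended last range
        else                             s"$column >= $lo AND $column < $hi"
      }
    }

    // partitionPredicates("id", 1, 10000000, 16) yields 16 predicates,
    // i.e. 16 concurrent range queries against TiDB/TiKV.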

    In fact, TiSpark is the solution for querying TiDB with Spark. It currently supports Spark 2.1 and will support Spark 2.3 in a few days. Give it a try!
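
    Below is a minimal TiSpark sketch based on the TiSpark 1.x API that was current for Spark 2.1 at the time; treat the class and method names (TiContext, tidbMapDatabase) and the spark.tispark.pd.addresses setting as assumptions to verify against the TiSpark docs for your version. TiSpark reads from TiKV through PD directly rather than through the MySQL/JDBC protocol, so the per-partition JDBC queries above are avoided.

    import org.apache.spark.sql.{SparkSession, TiContext}

    object TiSparkExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("tispark-example")
          .config("spark.tispark.pd.addresses", "pd-host:2379") // placeholder PD endpoint
          .getOrCreate()

        // Expose TiDB's `mydb` tables to Spark SQL through TiSpark (names are placeholders).
        val ti = new TiContext(spark)
        ti.tidbMapDatabase("mydb")

        spark.sql("SELECT COUNT(*) FROM events").show()
        spark.stop()
      }
    }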
