
A pitfall when using pandas_udf in pyspark

I was using the example code from the official pyspark documentation.


The error output was as follows:

19/11/14 15:59:36 ERROR TaskSetManager: Task 44 in stage 10.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/spark-2.4.4-bin-hadoop2.7/python/pyspark/sql/dataframe.py", line 380, in show
    print(self._jdf.showString(n, 20, vertical))
  File "/opt/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/opt/spark-2.4.4-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/opt/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o64.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 44 in stage 10.0 failed 1 times, most recent failure: Lost task 44.0 in stage 10.0 (TID 133, localhost, executor driver): java.lang.IllegalArgumentException
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:334)
        at org.apache.arrow.vector.ipc.message.MessageSerializer.readMessage(MessageSerializer.java:543)
        at org.apache.arrow.vector.ipc.message.MessageChannelReader.readNext(MessageChannelReader.java:58)
        at org.apache.arrow.vector.ipc.ArrowStreamReader.readSchema(ArrowStreamReader.java:132)
        at org.apache.arrow.vector.ipc.ArrowReader.initialize(ArrowReader.java:181)

At first I suspected the JDK version was too new, so I switched to the officially recommended JDK 8, but the error persisted.

So I kept searching, and searching, and searching...

Eventually it turned out that the pyarrow version was too new.

Installing pyarrow 0.14.1 or lower (e.g. pip install pyarrow==0.14.1) solves the problem: pyarrow 0.15.0 changed the Arrow IPC stream format, which Spark 2.4.x cannot read.
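If downgrading pyarrow is not an option, the Spark 2.4.x documentation also describes an environment-variable workaround, ARROW_PRE_0_15_IPC_FORMAT=1, which tells pyarrow >= 0.15.0 to keep writing the legacy IPC format. A minimal sketch of setting it on the driver:

```python
import os

# Workaround documented for Spark 2.3.x/2.4.x with pyarrow >= 0.15.0:
# make pyarrow emit the legacy (pre-0.15) Arrow IPC stream format.
# It must also reach the executor Python workers; the Spark docs
# recommend setting it in conf/spark-env.sh for that reason.
os.environ["ARROW_PRE_0_15_IPC_FORMAT"] = "1"
```

Set this before the SparkSession (and its Python workers) is created, otherwise the workers will not pick it up.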

Reference: https://stackoverflow.com/questions/58458415/pandas-scalar-udf-failing-illegalargumentexception?newreg=0945d0b1ec2e434d96c8d76f55792a30
