AttributeError: 'NoneType' object has no attribute '_jvm' - PySpark UDF
Problem description
I have data on magazine subscriptions and when they were created, along with a column containing an array of all the expiration dates associated with a given user:
user_id created_date expiration_dates_for_user
202394 '2018-05-04' ['2019-1-03', '2018-10-06', '2020-07-05']
202394 '2019-01-04' ['2019-1-03', '2018-10-06', '2020-07-05']
202394 '2016-05-04' ['2019-1-03', '2018-10-06', '2020-07-05']
I'm trying to create a new column consisting of all the expiration dates that fall within 45 days of created_date, like this:
user_id created_date expiration_dates_for_user near_expiration_dates
202394 '2018-05-04' ['2019-1-03', '2018-10-06', '2020-07-05'] []
202394 '2019-01-04' ['2019-1-03', '2018-10-06', '2020-07-05'] ['2019-1-03']
202394 '2016-05-04' ['2019-1-03', '2018-10-06', '2020-07-05'] []
Here is the code I'm using:
def check_if_sub_connected(created_at, expiration_array):
    if not expiration_array:
        return []
    if created_at == None:
        return []
    else:
        close_to_array = []
        for i in expiration_array:
            if datediff(created_at, i) < 45:
                if created_at != i:
                    if datediff(created_at, i) > -45:
                        close_to_array.append(i)
        return close_to_array

check_if_sub_connected = udf(check_if_sub_connected, ArrayType(TimestampType()))
But when I apply the function to create the column...
df = df.withColumn('near_expiration_dates', check_if_sub_connected(df.created_date, df.expiration_dates_for_user))
I get this crazy error:
AttributeError: 'NoneType' object has no attribute '_jvm'
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:317)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:83)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:66)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:271)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage17.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:620)
at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:49)
at org.apache.spark.sql.execution.collect.Collector$$anonfun$2.apply(Collector.scala:126)
at org.apache.spark.sql.execution.collect.Collector$$anonfun$2.apply(Collector.scala:125)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:112)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:384)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1747)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1735)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1734)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1734)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:962)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:962)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:962)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1970)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1918)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1906)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:759)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2141)
at org.apache.spark.sql.execution.collect.Collector.runSparkJobs(Collector.scala:237)
at org.apache.spark.sql.execution.collect.Collector.collect(Collector.scala:247)
at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:64)
at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:70)
at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:497)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollectResult(limit.scala:48)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectResult(Dataset.scala:2775)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3350)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2504)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2504)
at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3334)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withCustomExecutionEnv$1.apply(SQLExecution.scala:89)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:175)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:84)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:126)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3333)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2504)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2718)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:259)
at sun.reflect.GeneratedMethodAccessor472.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/databricks/spark/python/pyspark/worker.py", line 262, in main
process()
File "/databricks/spark/python/pyspark/worker.py", line 257, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/databricks/spark/python/pyspark/worker.py", line 183, in <lambda>
func = lambda _, it: map(mapper, it)
File "<string>", line 1, in <lambda>
File "/databricks/spark/python/pyspark/worker.py", line 77, in <lambda>
return lambda *a: toInternal(f(*a))
File "/databricks/spark/python/pyspark/util.py", line 55, in wrapper
return f(*args, **kwargs)
File "<command-30583>", line 9, in check_if_sub_connected
File "/databricks/spark/python/pyspark/sql/functions.py", line 1045, in datediff
return Column(sc._jvm.functions.datediff(_to_java_column(end), _to_java_column(start)))
AttributeError: 'NoneType' object has no attribute '_jvm'
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:317)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:83)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:66)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:271)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage17.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:620)
at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:49)
at org.apache.spark.sql.execution.collect.Collector$$anonfun$2.apply(Collector.scala:126)
at org.apache.spark.sql.execution.collect.Collector$$anonfun$2.apply(Collector.scala:125)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:112)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:384)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Is the datediff function not allowed inside a UDF? Or is this some kind of import error? I'm running Spark on Databricks on the latest runtime. Thanks!
Accepted answer
import pandas as pd
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, TimestampType

def check_if_sub_connected(created_at, expiration_array):
    # Use plain Python timedelta arithmetic inside the UDF instead of
    # pyspark.sql.functions.datediff, which only works on the driver.
    if not expiration_array or created_at is None:
        return []
    close_to_array = []
    for i in expiration_array:
        if created_at - i < pd.Timedelta(days=45):
            if created_at - i > pd.Timedelta(days=-45):
                close_to_array.append(i)
    return close_to_array

check_if_sub_connected = udf(check_if_sub_connected, ArrayType(TimestampType()))
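Applying the corrected UDF then mirrors the call in the question (with the missing closing parenthesis restored); this sketch assumes created_date is a timestamp column and expiration_dates_for_user is an array of timestamps:

df = df.withColumn(
    'near_expiration_dates',
    check_if_sub_connected(df.created_date, df.expiration_dates_for_user),
)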
As @user10465355 pointed out, pyspark.sql.functions don't work inside a UDF, so the above is my alternative solution. Thanks!
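The traceback also shows the root cause: pyspark.sql.functions.datediff builds a JVM expression through the driver's SparkContext (sc._jvm), and on an executor that context is None, hence the AttributeError. As a minimal UDF-free sketch, assuming Spark 2.4+ (where SQL higher-order functions are available) and the column names from the question, the whole computation can stay in the JVM:

from pyspark.sql import functions as F

# Filter the array with a SQL higher-order function, so datediff
# runs as a JVM expression rather than inside a Python worker.
df = df.withColumn(
    'near_expiration_dates',
    F.expr(
        "filter(expiration_dates_for_user, "
        "x -> x != created_date AND abs(datediff(x, created_date)) < 45)"
    ),
)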