Does the ignore option of Pyspark DataFrameWriter's jdbc function ignore the entire transaction or just the offending rows?


Problem description

The Pyspark DataFrameWriter class has a jdbc function for writing a dataframe to SQL. This function has an ignore option that the documentation says will:

Silently ignore this operation if data already exists.

But will it ignore the entire transaction, or will it only ignore inserting the rows that are duplicates? What if I were to combine ignore with the append mode? Would the behavior change?
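Note that ignore and append are save modes rather than command-line flags: they are passed through the mode argument of jdbc (or an earlier .mode(...) call on the writer). A minimal sketch, where the URL, table name, and credentials are placeholders:

```python
def write_table(df, url, table, mode, properties):
    """Write `df` to a JDBC table with the given save mode.

    With mode="ignore" the entire write is silently skipped when `table`
    already exists; with mode="append" rows are appended, duplicates and all.
    The url/table/properties values passed in are placeholders.
    """
    df.write.jdbc(url=url, table=table, mode=mode, properties=properties)

# Example call (placeholder connection details):
# write_table(df, "jdbc:postgresql://localhost:5432/mydb", "target_table",
#             "ignore", {"user": "...", "password": "..."})
```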

Recommended answer

mode("ignore") is just a NOOP if the table (or another sink) already exists, and write modes cannot be combined. If you're looking for something like INSERT IGNORE or INSERT INTO ... WHERE NOT EXISTS ..., you'll have to do it manually, for example with mapPartitions.
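The manual route the answer points to can be sketched as follows. Everything database-facing here is hypothetical: get_connection, the target_table name, the key/value columns, and the %s-style SQL placeholders assume a DB-API driver of your choosing; the partition function would typically be handed to df.rdd.foreachPartition.

```python
def filter_new_rows(rows, existing_keys, key_index=0):
    """Keep only rows whose key is not already present in the target table,
    emulating INSERT IGNORE on the Spark side."""
    return [r for r in rows if r[key_index] not in existing_keys]

def insert_partition(rows):
    """Sketch of a per-partition insert; intended for
    df.rdd.foreachPartition(insert_partition)."""
    rows = list(rows)
    if not rows:
        return
    conn = get_connection()  # hypothetical helper: open a DB-API connection
    cur = conn.cursor()
    cur.execute("SELECT id FROM target_table")     # fetch existing keys
    existing = {row[0] for row in cur.fetchall()}
    cur.executemany(
        "INSERT INTO target_table (id, value) VALUES (%s, %s)",
        filter_new_rows(rows, existing),
    )
    conn.commit()
    conn.close()
```

Fetching all existing keys per partition is only reasonable for small tables; for large ones a staging table plus a server-side INSERT ... WHERE NOT EXISTS would usually be preferable.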
