Hadoop: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
Problem description
My MapReduce job runs fine when assembled in Eclipse, with all possible Hadoop and Hive jars included in the Eclipse project as dependencies. (These are the jars that come with the single-node, local Hadoop installation.)
Yet when I try to run the same program assembled with the Maven project below, I get:
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
This exception happens when the program is assembled using the following Maven project:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.bigdata.hadoop</groupId>
  <artifactId>FieldCounts</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>FieldCounts</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.2.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.2.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
      <version>2.2.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hive.hcatalog</groupId>
      <artifactId>hcatalog-core</artifactId>
      <version>0.12.0</version>
    </dependency>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>16.0.1</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
        <configuration>
          <source>${jdk.version}</source>
          <target>${jdk.version}</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-assembly-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>attached</goal>
            </goals>
            <phase>package</phase>
            <configuration>
              <descriptorRefs>
                <descriptorRef>jar-with-dependencies</descriptorRef>
              </descriptorRefs>
              <archive>
                <manifest>
                  <mainClass>com.bigdata.hadoop.FieldCounts</mainClass>
                </manifest>
              </archive>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
Please advise: where and how can I find compatible Hadoop jars?
[update_1] I am running Hadoop 2.2.0.2.0.6.0-101
As I found here: https://github.com/kevinweil/elephant-bird/issues/247
Hadoop 1.0.3: JobContext is a class
Hadoop 2.0.0: JobContext is an interface
In my pom.xml I have three jars with version 2.2.0:
hadoop-hdfs 2.2.0
hadoop-common 2.2.0
hadoop-mapreduce-client-jobclient 2.2.0
hcatalog-core 0.12.0
The only exception is hcatalog-core, whose version is 0.12.0; I could not find any more recent version of this jar, and I need it!
How can I find which of these 4 jars produces java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected?
Please give me an idea how to solve this. (The only solution I see is to compile everything from source!)
[/update_1]
Full text of my MapReduce job:
package com.bigdata.hadoop;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.util.*;
import org.apache.hcatalog.mapreduce.*;
import org.apache.hcatalog.data.*;
import org.apache.hcatalog.data.schema.*;
import org.apache.log4j.Logger;

public class FieldCounts extends Configured implements Tool {

    public static class Map extends Mapper<WritableComparable, HCatRecord, TableFieldValueKey, IntWritable> {

        static Logger logger = Logger.getLogger("com.foo.Bar");
        static boolean firstMapRun = true;
        static List<String> fieldNameList = new LinkedList<String>();

        /**
         * Return a list of field names not containing `id` field name
         * @param schema
         * @return
         */
        static List<String> getFieldNames(HCatSchema schema) {
            // Filter out `id` name just once
            if (firstMapRun) {
                firstMapRun = false;
                List<String> fieldNames = schema.getFieldNames();
                for (String fieldName : fieldNames) {
                    if (!fieldName.equals("id")) {
                        fieldNameList.add(fieldName);
                    }
                }
            } // if (firstMapRun)
            return fieldNameList;
        }

        @Override
        protected void map( WritableComparable key,
                            HCatRecord hcatRecord,
                            //org.apache.hadoop.mapreduce.Mapper
                            //<WritableComparable, HCatRecord, Text, IntWritable>.Context context)
                            Context context)
                throws IOException, InterruptedException {

            HCatSchema schema = HCatBaseInputFormat.getTableSchema(context.getConfiguration());
            //String schemaTypeStr = schema.getSchemaAsTypeString();
            //logger.info("******** schemaTypeStr ********** : "+schemaTypeStr);

            //List<String> fieldNames = schema.getFieldNames();
            List<String> fieldNames = getFieldNames(schema);
            for (String fieldName : fieldNames) {
                Object value = hcatRecord.get(fieldName, schema);
                String fieldValue = null;
                if (null == value) {
                    fieldValue = "<NULL>";
                } else {
                    fieldValue = value.toString();
                }
                //String fieldNameValue = fieldName+"."+fieldValue;
                //context.write(new Text(fieldNameValue), new IntWritable(1));
                TableFieldValueKey fieldKey = new TableFieldValueKey();
                fieldKey.fieldName = fieldName;
                fieldKey.fieldValue = fieldValue;
                context.write(fieldKey, new IntWritable(1));
            }
        }
    }

    public static class Reduce extends Reducer<TableFieldValueKey, IntWritable,
                                               WritableComparable, HCatRecord> {

        protected void reduce( TableFieldValueKey key,
                               java.lang.Iterable<IntWritable> values,
                               Context context)
                //org.apache.hadoop.mapreduce.Reducer<Text, IntWritable,
                //WritableComparable, HCatRecord>.Context context)
                throws IOException, InterruptedException {

            Iterator<IntWritable> iter = values.iterator();
            int sum = 0;
            // Sum up occurrences of the given key
            while (iter.hasNext()) {
                IntWritable iw = iter.next();
                sum = sum + iw.get();
            }

            HCatRecord record = new DefaultHCatRecord(3);
            record.set(0, key.fieldName);
            record.set(1, key.fieldValue);
            record.set(2, sum);

            context.write(null, record);
        }
    }

    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        args = new GenericOptionsParser(conf, args).getRemainingArgs();

        // To fix Hadoop "META-INFO" (http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file)
        conf.set("fs.hdfs.impl",
            org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
        conf.set("fs.file.impl",
            org.apache.hadoop.fs.LocalFileSystem.class.getName());

        // Get the input and output table names as arguments
        String inputTableName = args[0];
        String outputTableName = args[1];
        // Assume the default database
        String dbName = null;

        Job job = new Job(conf, "FieldCounts");

        HCatInputFormat.setInput(job,
            InputJobInfo.create(dbName, inputTableName, null));
        job.setJarByClass(FieldCounts.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        // An HCatalog record as input
        job.setInputFormatClass(HCatInputFormat.class);

        // Mapper emits TableFieldValueKey as key and an integer as value
        job.setMapOutputKeyClass(TableFieldValueKey.class);
        job.setMapOutputValueClass(IntWritable.class);

        // Ignore the key for the reducer output; emitting an HCatalog record as
        // value
        job.setOutputKeyClass(WritableComparable.class);
        job.setOutputValueClass(DefaultHCatRecord.class);
        job.setOutputFormatClass(HCatOutputFormat.class);

        HCatOutputFormat.setOutput(job,
            OutputJobInfo.create(dbName, outputTableName, null));
        HCatSchema s = HCatOutputFormat.getTableSchema(job);
        System.err.println("INFO: output schema explicitly set for writing:"
            + s);
        HCatOutputFormat.setSchema(job, s);

        return (job.waitForCompletion(true) ? 0 : 1);
    }

    public static void main(String[] args) throws Exception {
        String classpath = System.getProperty("java.class.path");
        //System.out.println("*** CLASSPATH: "+classpath);
        int exitCode = ToolRunner.run(new FieldCounts(), args);
        System.exit(exitCode);
    }
}
The class for the composite key:
package com.bigdata.hadoop;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

import com.google.common.collect.ComparisonChain;

public class TableFieldValueKey implements WritableComparable<TableFieldValueKey> {

    public String fieldName;
    public String fieldValue;

    public TableFieldValueKey() {} //must have a default constructor

    public void readFields(DataInput in) throws IOException {
        fieldName = in.readUTF();
        fieldValue = in.readUTF();
    }

    public void write(DataOutput out) throws IOException {
        out.writeUTF(fieldName);
        out.writeUTF(fieldValue);
    }

    public int compareTo(TableFieldValueKey o) {
        return ComparisonChain.start().compare(fieldName, o.fieldName)
            .compare(fieldValue, o.fieldValue).result();
    }
}
Recommended answer
Hadoop has gone through a huge code refactoring from Hadoop 1.0 to Hadoop 2.0. One side effect is that code compiled against Hadoop 1.0 is not compatible with Hadoop 2.0, and vice versa. However, the source code is mostly compatible, so one just needs to recompile the code against the target Hadoop distribution.
The exception "Found interface X, but class was expected" is very common when you are running code that was compiled for Hadoop 1.0 on Hadoop 2.0, or vice versa.
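To pinpoint which copy of JobContext is actually picked up at runtime, and whether that copy is the Hadoop 1.x class or the Hadoop 2.x interface, a small reflection probe can help. This is only a minimal sketch assuming the standard JDK reflection API; the class name JobContextProbe is illustrative:

import java.security.CodeSource;

public class JobContextProbe {
    public static void main(String[] args) throws Exception {
        // Load JobContext through the same classpath the failing job uses
        Class<?> jc = Class.forName("org.apache.hadoop.mapreduce.JobContext");
        // Hadoop 1.x ships JobContext as a class, Hadoop 2.x as an interface
        System.out.println("isInterface = " + jc.isInterface());
        // The code source is the jar (or directory) the class was loaded from
        CodeSource src = jc.getProtectionDomain().getCodeSource();
        System.out.println("loaded from = " + (src == null ? "<unknown>" : src.getLocation()));
    }
}

If this is run with the individual dependency jars on the classpath (rather than the assembled jar-with-dependencies, where everything is merged into a single archive), the reported location points at the specific jar that supplies the winning copy.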
You can find the correct Hadoop version used in the cluster, then specify that Hadoop version in the pom.xml file. Build your project with the same version of Hadoop that the cluster uses and deploy it.
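The cluster's version can be read with the hadoop version command, or programmatically. The sketch below is illustrative only; it assumes hadoop-common is on the classpath and uses its org.apache.hadoop.util.VersionInfo helper:

import org.apache.hadoop.util.VersionInfo;

public class PrintHadoopVersion {
    public static void main(String[] args) {
        // Plain Apache builds report e.g. "2.2.0"; vendor builds report strings
        // such as "2.2.0.2.0.6.0-101"
        System.out.println("Hadoop version : " + VersionInfo.getVersion());
        System.out.println("Build details  : " + VersionInfo.getBuildVersion());
    }
}

Once the cluster version is known, declaring it once as a Maven property and referencing that property from every Hadoop dependency in pom.xml keeps the hadoop-* artifacts in sync with the cluster.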