Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable
Problem description
I am trying to run a map/reduce job in Java. Below are my files.
WordCount.java
package counter;

public class WordCount extends Configured implements Tool {

    public int run(String[] arg0) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path("counterinput"));
        // Erase previous run output (if any)
        FileSystem.get(conf).delete(new Path("counteroutput"), true);
        FileOutputFormat.setOutputPath(job, new Path("counteroutput"));

        job.waitForCompletion(true);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new WordCount(), args);
        System.exit(res);
    }
}
WordCountMapper.java
public class WordCountMapper extends
        Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException, InterruptedException {
        System.out.println("hi");
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            output.collect(word, one);
        }
    }
}
WordCountReducer.java
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException, InterruptedException {
        System.out.println("hello");
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
    }
}
I am getting the following error:
13/06/23 23:13:25 INFO jvm.JvmMetrics: Initializing JVM Metrics with
processName=JobTracker, sessionId=
13/06/23 23:13:25 WARN mapred.JobClient: Use GenericOptionsParser for parsing the
arguments. Applications should implement Tool for the same.
13/06/23 23:13:26 INFO input.FileInputFormat: Total input paths to process : 1
13/06/23 23:13:26 INFO mapred.JobClient: Running job: job_local_0001
13/06/23 23:13:26 INFO input.FileInputFormat: Total input paths to process : 1
13/06/23 23:13:26 INFO mapred.MapTask: io.sort.mb = 100
13/06/23 23:13:26 INFO mapred.MapTask: data buffer = 79691776/99614720
13/06/23 23:13:26 INFO mapred.MapTask: record buffer = 262144/327680
13/06/23 23:13:26 WARN mapred.LocalJobRunner: job_local_0001
java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text,
recieved org.apache.hadoop.io.LongWritable
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:845)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:541)
at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:124)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
13/06/23 23:13:27 INFO mapred.JobClient: map 0% reduce 0%
13/06/23 23:13:27 INFO mapred.JobClient: Job complete: job_local_0001
13/06/23 23:13:27 INFO mapred.JobClient: Counters: 0
I think it is not able to find the Mapper and Reducer classes. I have set them in the main class, but it is picking up the default Mapper and Reducer classes.
Recommended answer
Add these two lines to your code:
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
You are using TextInputFormat, which produces LongWritable keys and Text values by default, but you are emitting Text as the key and IntWritable as the value. You need to tell this to the framework.
HTH
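As a side note, the stack trace above shows org.apache.hadoop.mapreduce.Mapper.map (the base class) being invoked. That is what happens when a map method's parameter list uses the old-API OutputCollector and Reporter, so it does not override the new-API Mapper.map(key, value, Context) and the inherited identity map passes the LongWritable offset through as the key. Below is a sketch of the mapper and reducer written against the new (org.apache.hadoop.mapreduce) API, keeping the Text/IntWritable output types from the question; each class would live in its own file:

    // WordCountMapper.java -- Context replaces OutputCollector/Reporter in the new API
    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);   // emits Text key, IntWritable value
            }
        }
    }

    // WordCountReducer.java -- values arrive as an Iterable in the new API
    // (imports: java.io.IOException, org.apache.hadoop.io.*, org.apache.hadoop.mapreduce.Reducer)
    public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }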