Hadoop: java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text
My program looks like this:
public class TopKRecord extends Configured implements Tool {

    public static class MapClass extends Mapper<Text, Text, Text, Text> {
        public void map(Text key, Text value, Context context) throws IOException, InterruptedException {
            // your map code goes here
            String[] fields = value.toString().split(",");
            String year = fields[1];
            String claims = fields[8];
            if (claims.length() > 0 && (!claims.startsWith("\""))) {
                context.write(new Text(year), new Text(claims));
            }
        }
    }

    public int run(String[] args) throws Exception {
        Job job = new Job();
        job.setJarByClass(TopKRecord.class);
        job.setMapperClass(MapClass.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setJobName("TopKRecord");
        job.setMapOutputValueClass(Text.class);
        job.setNumReduceTasks(0);

        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int ret = ToolRunner.run(new TopKRecord(), args);
        System.exit(ret);
    }
}
The data looks like this:
"PATENT","GYEAR","GDATE","APPYEAR","COUNTRY","POSTATE","ASSIGNEE","ASSCODE","CLAIMS","NCLASS","CAT","SUBCAT","CMADE","CRECEIVE","RATIOCIT","GENERAL","ORIGINAL","FWDAPLAG","BCKGTLAG","SELFCTUB","SELFCTLB","SECDUPBD","SECDLWBD" 3070801,1963,1096,,"BE","",,1,,269,6,69,,1,,0,,,,,,, 3070802,1963,1096,,"US","TX",,1,,2,6,63,,0,,,,,,,,, 3070803,1963,1096,,"US","IL",,1,,2,6,63,,9,,0.3704,,,,,,, 3070804,1963,1096,,"US","OH",,1,,2,6,63,,3,,0.6667,,,,,,,
When I run this program, I see the following on the console:
12/08/02 12:43:34 INFO mapred.JobClient: Task Id : attempt_201208021025_0007_m_000000_0, Status : FAILED
java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text
    at com.hadoop.programs.TopKRecord$MapClass.map(TopKRecord.java:26)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
I believe the class types are mapped correctly in my Mapper class. What am I doing wrong here?
When you read a file with a MapReduce program using the default TextInputFormat, the mapper's input key is the byte offset of the line within the file (a LongWritable), and the input value is the whole line (a Text). So what is happening here is that you are trying to receive that offset as a Text object, which is wrong: you need a LongWritable so that Hadoop doesn't complain about the types.
Try this:
public class TopKRecord extends Configured implements Tool {

    public static class MapClass extends Mapper<LongWritable, Text, Text, Text> {
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            // your map code goes here
            String[] fields = value.toString().split(",");
            String year = fields[1];
            String claims = fields[8];
            if (claims.length() > 0 && (!claims.startsWith("\""))) {
                context.write(new Text(year), new Text(claims));
            }
        }
    }

    ...
}
One thing you may want to reconsider in your code: it creates two new Text objects for every record processed. Create those two objects once, up front, and in the mapper set their values with the set method. That will save you a good amount of time if you are processing a large volume of data.
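A sketch of that pattern, applied to the same mapper (illustrative only; it assumes the corrected LongWritable key from above and is not tested against a running cluster):

```java
public static class MapClass extends Mapper<LongWritable, Text, Text, Text> {
    // Allocated once per task attempt and reused for every record,
    // instead of creating two new Text objects per call to map()
    private final Text outKey = new Text();
    private final Text outValue = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split(",");
        String year = fields[1];
        String claims = fields[8];
        if (claims.length() > 0 && !claims.startsWith("\"")) {
            outKey.set(year);
            outValue.set(claims);
            context.write(outKey, outValue);
        }
    }
}
```

Reusing the objects is safe here because context.write serializes the key and value before map is called again.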
You need to set the input format class:

job.setInputFormatClass(KeyValueTextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
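A caveat with this approach: KeyValueTextInputFormat splits each line at the first tab by default, so for this comma-separated data the separator would have to be changed as well. A configuration sketch (the property name has varied across Hadoop versions, so verify it against your release):

```java
// Newer Hadoop releases use this property name;
// older ones used "key.value.separator.in.input.line"
Configuration conf = job.getConfiguration();
conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", ",");
```

Note also that the first CSV field (PATENT) then becomes the key, so the indices the mapper sees in the value shift by one: `value.toString().split(",")[0]` would be GYEAR, not PATENT.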