Coreference resolution with Stanford CoreNLP – can't load the parser model

I want to do a very simple job: given a string containing pronouns, I want to resolve them.

For example, I want to turn the sentence "Mary has a little lamb. She is very cute." into "Mary has a little lamb. Mary is very cute."

I have tried to use Stanford CoreNLP. However, I can't seem to get the parser to start. I have imported all the included jars into my project using Eclipse, and I have allocated 3GB to the JVM (-Xmx3g).

The error is really awkward:

    Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;

I don't understand where that L comes from; I think it is the root of my problem… It's quite weird. I have tried looking into the source files, but there is no wrong reference there.

Code:

    import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation;
    import edu.stanford.nlp.dcoref.CorefChain;
    import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefChainAnnotation;
    import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
    import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
    import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
    import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
    import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
    import edu.stanford.nlp.ling.CoreLabel;
    import edu.stanford.nlp.pipeline.*;
    import edu.stanford.nlp.semgraph.SemanticGraph;
    import edu.stanford.nlp.trees.Tree;
    import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
    import edu.stanford.nlp.util.CoreMap;
    import java.io.IOException;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    public class Coref {

        /**
         * @param args the command line arguments
         */
        public static void main(String[] args) throws IOException, ClassNotFoundException {
            // creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution
            Properties props = new Properties();
            props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
            StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

            // read some text in the text variable
            String text = "Mary has a little lamb. She is very cute."; // Add your text here!

            // create an empty Annotation just with the given text
            Annotation document = new Annotation(text);

            // run all Annotators on this text
            pipeline.annotate(document);

            // these are all the sentences in this document
            // a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
            List<CoreMap> sentences = document.get(SentencesAnnotation.class);

            for (CoreMap sentence : sentences) {
                // traversing the words in the current sentence
                // a CoreLabel is a CoreMap with additional token-specific methods
                for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
                    // this is the text of the token
                    String word = token.get(TextAnnotation.class);
                    // this is the POS tag of the token
                    String pos = token.get(PartOfSpeechAnnotation.class);
                    // this is the NER label of the token
                    String ne = token.get(NamedEntityTagAnnotation.class);
                }

                // this is the parse tree of the current sentence
                Tree tree = sentence.get(TreeAnnotation.class);
                System.out.println(tree);

                // this is the Stanford dependency graph of the current sentence
                SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
            }

            // This is the coreference link graph
            // Each chain stores a set of mentions that link to each other,
            // along with a method for getting the most representative mention
            // Both sentence and token offsets start at 1!
            Map<Integer, CorefChain> graph = document.get(CorefChainAnnotation.class);
            System.out.println(graph);
        }
    }

Full stack trace:

    Adding annotator tokenize
    Adding annotator ssplit
    Adding annotator pos
    Loading POS model [edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger] ... Loading default properties from trained tagger edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger
    Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [2.1 sec].
    done [2.2 sec].
    Adding annotator lemma
    Adding annotator ner
    Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [4.0 sec].
    Loading classifier from edu/stanford/nlp/models/ner/english.muc.distsim.crf.ser.gz ... done [3.0 sec].
    Loading classifier from edu/stanford/nlp/models/ner/english.conll.distsim.crf.ser.gz ... done [3.3 sec].
    Adding annotator parse
    Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;
        at edu.stanford.nlp.pipeline.ParserAnnotator.loadModel(ParserAnnotator.java:115)
        at edu.stanford.nlp.pipeline.ParserAnnotator.<init>(ParserAnnotator.java:64)
        at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:603)
        at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:585)
        at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:62)
        at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:329)
        at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:196)
        at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:186)
        at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:178)
        at Coref.main(Coref.java:41)

Yes, the L is just a strange Sun thing going all the way back to Java 1.0. (In JVM method descriptors, `Lsome/ClassName;` denotes an object type, so that signature just spells out the method's parameter and return types.)

LexicalizedParser.loadModel(String, String...) is a new method added to the parser, and it is not being found. I suspect this means that there is another, older version of the parser somewhere on your classpath that is being used instead.
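Since a NoSuchMethodError usually means a stale copy of a class is shadowing the newer one, one way to check is to ask the JVM which location it actually loaded the class from. A minimal, self-contained sketch (in your case you would inspect `LexicalizedParser.class` instead of `WhichJar.class`, which stands in here only so the example runs without CoreNLP on the classpath):

```java
import java.security.CodeSource;

public class WhichJar {
    public static void main(String[] args) {
        // Substitute the class you suspect is duplicated on the classpath,
        // e.g. edu.stanford.nlp.parser.lexparser.LexicalizedParser.class.
        Class<?> suspect = WhichJar.class;
        CodeSource cs = suspect.getProtectionDomain().getCodeSource();
        // Classes from the bootstrap classpath report no code source (null);
        // anything loaded from a jar or directory reports its URL, which
        // tells you exactly which copy the JVM picked.
        System.out.println(suspect.getName() + " loaded from: "
                + (cs == null ? "<bootstrap>" : cs.getLocation()));
    }
}
```

If the printed jar is not the one you expect, that jar is the stale version to remove from the build path.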

Try this: in a shell outside of any IDE, run these commands (giving the path to stanford-corenlp appropriately, and changing : to ; if you're on Windows):

    javac -cp ".:stanford-corenlp-2012-04-09/*" Coref.java
    java -mx3g -cp ".:stanford-corenlp-2012-04-09/*" Coref

The parser loads and your code runs correctly – you just need to add some print statements so you can see what it has done :-).