Contents
Abstracts
1. Towards Minimal Supervision BERT-based Grammar Error Correction [PDF] Back to contents
Yiyuan Li, Antonios Anastasopoulos, Alan W Black
Abstract: Current grammatical error correction (GEC) models typically treat the task as sequence generation, which requires large amounts of annotated data and limits their applicability in data-limited settings. We try to incorporate contextual information from a pre-trained language model to make better use of limited annotation and to benefit multilingual scenarios. Results show the strong potential of Bidirectional Encoder Representations from Transformers (BERT) for the grammatical error correction task.
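To make the minimal-supervision idea concrete, the sketch below uses an off-the-shelf masked language model to propose in-context replacements for a suspect token, with no GEC-specific training. This is only an illustration, not the authors' system: the `transformers` fill-mask pipeline, the `bert-base-cased` checkpoint, and the example sentence are stand-ins.

```python
# A minimal sketch (not the paper's pipeline): let a pre-trained masked LM
# propose contextually likely replacements for a token suspected to be wrong.
from transformers import pipeline  # assumes the `transformers` package is installed

fill_mask = pipeline("fill-mask", model="bert-base-cased")

def suggest_corrections(tokens, suspect_index, top_k=5):
    """Mask one token and return BERT's top replacement candidates with scores."""
    masked = list(tokens)
    masked[suspect_index] = fill_mask.tokenizer.mask_token
    predictions = fill_mask(" ".join(masked), top_k=top_k)
    return [(p["token_str"].strip(), p["score"]) for p in predictions]

# "go" is the suspect token; the model will typically rank "goes"/"went" highly
print(suggest_corrections("He go to school every day .".split(), suspect_index=1))
```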
2. Co-evolution of language and agents in referential games [PDF] Back to contents
Gautier Dagan, Dieuwke Hupkes, Elia Bruni
Abstract: Referential games offer a grounded learning environment for neural agents that accounts for the functional aspects of language. However, they fail to account for another fundamental aspect of human language: because languages are transmitted from generation to generation, they have to be learnable by new language users, which makes them subject to cultural evolution. Recent work has shown that incorporating cultural evolution in a referential game results in considerable improvements in the properties of the languages that emerge in the game. In this work, we first substantiate this claim with a different data set and a wider array of evaluation metrics. Then, drawing inspiration from linguistic theories of human language evolution, we consider a scenario in which not only cultural but also genetic evolution is integrated. As our core contribution, we introduce the Language Transmission Engine, in which cultural evolution of the language is combined with genetic evolution of the agents' architecture. We show that this co-evolution scenario leads to across-the-board improvements on all considered metrics. These results stress that cultural evolution is important for language emergence studies, but also that the suitability of the architecture itself should be considered.
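The following toy loop illustrates, at a structural level only, how cultural transmission (the language is re-learned by fresh agents each generation) can sit inside genetic selection over agent hyperparameters. The `train_pair` and `fitness` functions are placeholders standing in for referential-game training and emergent-language metrics; nothing here reproduces the paper's Language Transmission Engine.

```python
# Structural sketch only: cultural evolution (new agents re-learn the language each
# generation) combined with genetic evolution (selection over agent "genomes").
# `train_pair` and `fitness` are stand-ins, not the paper's method.
import random

def random_genome():
    return {"hidden_size": random.choice([32, 64, 128]),
            "embedding_size": random.choice([16, 32])}

def mutate(genome):
    child = dict(genome)
    child["hidden_size"] = random.choice([32, 64, 128])
    return child

def train_pair(genome, parent_language):
    # Placeholder: a real implementation would train a sender/receiver pair with
    # the architecture in `genome`, seeded by data from `parent_language`.
    return {"topographic_similarity": random.random()}

def fitness(language):
    return language["topographic_similarity"]   # one possible emergent-language metric

population = [random_genome() for _ in range(6)]
language = None
for generation in range(5):
    scored = sorted(((fitness(train_pair(g, language)), g) for g in population),
                    key=lambda pair: pair[0], reverse=True)
    best_score, best_genome = scored[0]
    language = train_pair(best_genome, language)             # cultural step
    population = [mutate(best_genome) for _ in population]   # genetic step
    print(f"generation {generation}: best fitness {best_score:.3f}")
```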
3. Machine Learning Approaches for Amharic Parts-of-speech Tagging [PDF] Back to contents
Ibrahim Gashaw, H L. Shashirekha
Abstract: Part-of-speech (POS) tagging is considered one of the basic but necessary tools required for many Natural Language Processing (NLP) applications such as word sense disambiguation, information retrieval, information processing, parsing, question answering, and machine translation. The performance of current POS taggers for Amharic is not as good as that of the contemporary POS taggers available for English and other European languages. The aim of this work is to improve POS tagging performance for the Amharic language, which has never exceeded 91%. The use of morphological knowledge, an extension of the existing annotated data, feature extraction, parameter tuning by applying grid search, and the choice of tagging algorithms have been examined, yielding a significant performance difference from previous works. We have used three different datasets for the POS experiments.
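The grid-search component is easy to picture; below is a hedged sketch on a tiny invented English corpus (the actual work uses Amharic datasets and richer morphological features). It tunes a single regularisation parameter of a simple feature-based tagger with scikit-learn.

```python
# Toy POS-tagging grid search with scikit-learn; the sentences, features, and
# parameter grid are illustrative, not the paper's Amharic setup.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

def token_features(word, prev_word):
    # crude stand-ins for the morphological features the paper examines
    return {"word": word.lower(), "suffix2": word[-2:],
            "prev": prev_word.lower(), "is_title": word.istitle()}

sentences = [[("The", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")],
             [("A", "DET"), ("dog", "NOUN"), ("barks", "VERB")]]

X, y = [], []
for sentence in sentences:
    prev = "<s>"
    for word, tag in sentence:
        X.append(token_features(word, prev))
        y.append(tag)
        prev = word

model = Pipeline([("vectorizer", DictVectorizer()),
                  ("classifier", LogisticRegression(max_iter=200))])
search = GridSearchCV(model, {"classifier__C": [0.1, 1.0, 10.0]}, cv=2)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```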
4. Learning to Multi-Task Learn for Better Neural Machine Translation [PDF] Back to contents
Poorya Zaremoodi, Gholamreza Haffari
Abstract: Scarcity of parallel sentence pairs is a major challenge for training high-quality neural machine translation (NMT) models in bilingually low-resource scenarios, as NMT is data-hungry. Multi-task learning is an elegant approach to injecting linguistics-related inductive biases into NMT, using auxiliary syntactic and semantic tasks, to improve generalisation. The challenge, however, is to devise effective training schedules, prescribing when to make use of the auxiliary tasks during the training process to fill the knowledge gaps of the main translation task, a setting referred to as biased-MTL. Current approaches to the training schedule are based on hand-engineered heuristics, whose effectiveness varies across MTL settings. We propose a novel framework for learning the training schedule, i.e. learning to multi-task learn, for the MTL setting of interest. We formulate the training schedule as a Markov decision process, which paves the way for employing policy learning methods to learn the scheduling policy. We effectively and efficiently learn the training-schedule policy within the imitation learning framework, using an oracle policy algorithm that dynamically sets the importance weights of auxiliary tasks based on their contributions to the generalisability of the main NMT task. Experiments on low-resource NMT settings show that the resulting automatically learned training schedulers are competitive with the best heuristics, and lead to up to +1.1 BLEU score improvements.
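As a rough intuition for learned scheduling (and only that: the paper uses imitation learning against an oracle policy, which is not reproduced here), the sketch below keeps a running credit estimate per task and samples the next training task through a softmax over those credits. Task names and the simulated validation gains are invented.

```python
# Hedged sketch: a softmax scheduler that favours auxiliary tasks whose recent
# steps appeared to help the main task. The "validation gain" is simulated noise.
import math
import random

TASKS = ["translation", "syntax", "semantics"]   # main task + two auxiliary tasks
credit = {task: 0.0 for task in TASKS}           # running contribution estimates

def pick_task():
    weights = [math.exp(credit[task]) for task in TASKS]
    total = sum(weights)
    return random.choices(TASKS, weights=[w / total for w in weights])[0]

def train_step(task):
    # Placeholder for one optimisation step on `task`; returns a simulated change
    # in the main task's validation metric (syntax helps a little, semantics hurts).
    bias = {"translation": 0.05, "syntax": 0.02, "semantics": -0.01}[task]
    return bias + random.gauss(0.0, 0.05)

for step in range(200):
    task = pick_task()
    gain = train_step(task)
    credit[task] = 0.9 * credit[task] + 0.1 * (10.0 * gain)  # exponential moving credit

print({task: round(value, 3) for task, value in credit.items()})
```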
5. A Scalable Chatbot Platform Leveraging Online Community Posts: A Proof-of-Concept Study [PDF] Back to contents
Sihyeon Jo, Sangwon Im, SangWook Han, Seung Hee Yang, Hee-Eun Kim, Seong-Woo Kim
Abstract: The development of natural language processing algorithms and the explosive growth of conversational data are encouraging research on human-computer conversation. Still, obtaining qualified conversational data at a large scale is difficult and expensive. In this paper, we verify the feasibility of constructing a data-driven chatbot with processed online community posts by using them as pseudo-conversational data. We argue that chatbots for various purposes can be built extensively through the pipeline exploiting the common structure of community posts. Our experiment demonstrates that chatbots created along the pipeline can yield proper responses.
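A proof-of-concept of the pseudo-conversational idea can be very small: pair each post title with a reply, then answer a user message with the reply attached to the most similar title. The pairs below are invented; a real system would be fed by the paper's post-processing pipeline.

```python
# Retrieval chatbot over (post title, reply) pairs used as pseudo-conversations.
# The three pairs are made-up examples, not real community data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pairs = [
    ("how do I reset my password", "Use the 'forgot password' link on the login page."),
    ("best budget laptop for students", "A mid-range ultrabook covers most student needs."),
    ("my phone battery drains fast", "Check which apps run in the background and disable them."),
]
titles = [title for title, _ in pairs]
vectorizer = TfidfVectorizer().fit(titles)
title_matrix = vectorizer.transform(titles)

def respond(user_message):
    similarities = cosine_similarity(vectorizer.transform([user_message]), title_matrix)[0]
    return pairs[similarities.argmax()][1]   # reply attached to the closest post title

print(respond("phone dies way too quickly"))
```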
6. Simulating Lexical Semantic Change from Sense-Annotated Data [PDF] Back to contents
Dominik Schlechtweg, Sabine Schulte im Walde
Abstract: We present a novel procedure to simulate lexical semantic change from synchronic sense-annotated data, and demonstrate its usefulness for assessing lexical semantic change detection models. The induced dataset represents a stronger correspondence to empirically observed lexical semantic change than previous synthetic datasets, because it exploits the intimate relationship between synchronic polysemy and diachronic change. We publish the data and provide the first large-scale evaluation gold standard for LSC detection models.
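The core simulation step can be pictured as resampling sense-annotated usages into two synthetic "time periods" whose sense distributions differ, which yields a word with a known degree of change. The word, sense labels, and placeholder sentences below are invented; the paper works with real sense-annotated corpora.

```python
# Hedged sketch: simulate lexical semantic change by shifting a word's sense
# distribution between two synthetic corpora built from sense-annotated usages.
import random

usages = ([("mouse", "animal", f"animal usage {i}") for i in range(50)]
          + [("mouse", "device", f"device usage {i}") for i in range(50)])

def sample_period(usages, sense_probs, n=40):
    by_sense = {}
    for _, sense, sentence in usages:
        by_sense.setdefault(sense, []).append(sentence)
    period = []
    for sense, probability in sense_probs.items():
        period += random.sample(by_sense[sense], int(probability * n))
    return period

corpus_t1 = sample_period(usages, {"animal": 0.9, "device": 0.1})  # earlier period
corpus_t2 = sample_period(usages, {"animal": 0.2, "device": 0.8})  # later period
gold_change = abs(0.9 - 0.2)   # a simple gold degree-of-change score for evaluation
print(len(corpus_t1), len(corpus_t2), gold_change)
```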
7. Debate Dynamics for Human-comprehensible Fact-checking on Knowledge Graphs [PDF] Back to contents
Marcel Hildebrandt, Jorge Andres Quintero Serna, Yunpu Ma, Martin Ringsquandl, Mitchell Joblin, Volker Tresp
Abstract: We propose a novel method for fact-checking on knowledge graphs based on debate dynamics. The underlying idea is to frame the task of triple classification as a debate game between two reinforcement learning agents which extract arguments -- paths in the knowledge graph -- with the goal of justifying the fact being true (thesis) or false (antithesis), respectively. Based on these arguments, a binary classifier, referred to as the judge, decides whether the fact is true or false. The two agents can be considered as sparse feature extractors that present interpretable evidence for either the thesis or the antithesis. In contrast to black-box methods, the arguments enable the user to gain an understanding of the decision of the judge. Moreover, our method allows for interactive reasoning on knowledge graphs, where users can raise additional arguments or evaluate the debate, taking common-sense reasoning and external information into account. Such interactive systems can increase the acceptance of various AI applications based on knowledge graphs and can further lead to higher efficiency, robustness, and fairness.
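The debate framing can be illustrated with a toy graph and no reinforcement learning at all: one side gathers paths that reach the queried tail entity, the other gathers paths that reach a different entity, and a trivial judge compares the evidence. Entities, relations, and the judge are placeholders, far simpler than the paper's learned agents and classifier.

```python
# Toy sketch of debate-style fact-checking on a knowledge graph. The graph is
# hand-made and the judge just counts arguments (it even ignores the relation).
edges = {  # head -> [(relation, tail), ...]
    "Paris": [("capital_of", "France"), ("located_in", "France")],
    "France": [("member_of", "EU")],
    "Berlin": [("capital_of", "Germany")],
}

def paths_from(node, max_len=2):
    if max_len == 0 or node not in edges:
        return [[]]
    paths = [[]]
    for relation, neighbour in edges[node]:
        paths += [[(node, relation, neighbour)] + rest
                  for rest in paths_from(neighbour, max_len - 1)]
    return paths

def debate(head, relation, tail):
    arguments = [path for path in paths_from(head) if path]
    thesis = [p for p in arguments if p[-1][2] == tail]       # evidence for the triple
    antithesis = [p for p in arguments if p[-1][2] != tail]   # evidence against it
    verdict = len(thesis) >= len(antithesis)                  # trivial judge
    return verdict, thesis, antithesis

print(debate("Paris", "capital_of", "France")[0])   # -> True on this toy graph
```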
8. Inductive Document Network Embedding with Topic-Word Attention [PDF] Back to contents
Robin Brochier, Adrien Guille, Julien Velcin
Abstract: Document network embedding aims at learning representations for a structured text corpus, i.e. when documents are linked to each other. Recent algorithms extend network embedding approaches by incorporating the text content associated with the nodes in their formulations. In most cases, it is hard to interpret the learned representations. Moreover, little importance is given to generalization to new documents that are not observed within the network. In this paper, we propose an interpretable and inductive document network embedding method. We introduce a novel mechanism, Topic-Word Attention (TWA), that generates document representations based on the interplay between word and topic representations. We train these word and topic vectors through our general model, Inductive Document Network Embedding (IDNE), by leveraging the connections in the document network. Quantitative evaluations show that our approach achieves state-of-the-art performance on various networks, and we qualitatively show that our model produces meaningful and interpretable representations of words, topics and documents.
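The interplay between word and topic vectors is easy to sketch with plain NumPy: score every word against every topic vector, normalise the scores into attention weights, and pool the resulting topic views into a document vector. The dimensions, random vectors, and mean pooling are illustrative choices, not necessarily the exact TWA formulation.

```python
# Hedged sketch of topic-word attention pooling for a document representation.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["neural", "network", "election", "vote", "protein"]
word_vectors = {word: rng.normal(size=16) for word in vocab}   # word embeddings
topic_vectors = rng.normal(size=(3, 16))                       # 3 topic vectors

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def embed_document(tokens):
    W = np.stack([word_vectors[token] for token in tokens])   # (n_words, dim)
    scores = W @ topic_vectors.T                              # word-topic affinities
    attention = softmax(scores, axis=0)                       # each topic attends over words
    topic_views = attention.T @ W                             # (n_topics, dim)
    return topic_views.mean(axis=0)                           # pooled document vector

print(embed_document(["neural", "network", "vote"]).shape)    # -> (16,)
```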
9. Linking Social Media Posts to News with Siamese Transformers [PDF] Back to contents
Jacob Danovitch
Abstract: Many computational social science projects examine online discourse surrounding a specific trending topic. These works often involve the acquisition of large-scale corpora relevant to the event in question in order to analyze aspects of the response to the event. Keyword searches present a precision-recall trade-off, and crowd-sourced annotations, while effective, are costly. This work aims to enable automatic and accurate ad-hoc retrieval of comments discussing a trending topic from a large corpus, using only a handful of seed news articles.
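A hedged sketch of the siamese retrieval setting: one shared encoder embeds both the social media post and each candidate news article, and cosine similarity ranks the articles. It assumes the `sentence-transformers` package; the pretrained checkpoint is only a stand-in for the paper's trained siamese transformer, and the texts are invented.

```python
# Siamese-style linking: a single shared encoder, cosine similarity for ranking.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # shared ("siamese") encoder

news_articles = [
    "Storm causes widespread power outages across the region.",
    "Local team wins championship after dramatic overtime.",
    "New vaccine trial reports promising early results.",
]
post = "no electricity for six hours because of that storm, anyone else??"

article_embeddings = encoder.encode(news_articles, convert_to_tensor=True)
post_embedding = encoder.encode(post, convert_to_tensor=True)
scores = util.cos_sim(post_embedding, article_embeddings)[0]   # post vs. each article
best = int(scores.argmax())
print(news_articles[best], float(scores[best]))
```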