Table of Contents
1. Glancing Transformer for Non-Autoregressive Neural Machine Translation [PDF] Abstract
2. COVID-SEE: Scientific Evidence Explorer for COVID-19 Related Research [PDF] Abstract
3. Very Deep Transformers for Neural Machine Translation [PDF] Abstract
4. NASE: Learning Knowledge Graph Embedding for Link Prediction via Neural Architecture Search [PDF] Abstract
5. Are Neural Open-Domain Dialog Systems Robust to Speech Recognition Errors in the Dialog History? An Empirical Study [PDF] Abstract
6. Stock Index Prediction with Multi-task Learning and Word Polarity Over Time [PDF] Abstract
7. Deploying Lifelong Open-Domain Dialogue Learning [PDF] Abstract
8. Just another quantum assembly language (Jaqal) [PDF] Abstract
9. Word2vec Skip-gram Dimensionality Selection via Sequential Normalized Maximum Likelihood [PDF] Abstract
10. PopMAG: Pop Music Accompaniment Generation [PDF] Abstract
11. Resolving Intent Ambiguities by Retrieving Discriminative Clarifying Questions [PDF] Abstract
Abstracts
1. Glancing Transformer for Non-Autoregressive Neural Machine Translation [PDF] Back to Contents
Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, Lei Li
Abstract: Non-autoregressive neural machine translation achieves remarkable inference acceleration compared to autoregressive models. However, current non-autoregressive models still fall behind their autoregressive counterparts in prediction accuracy. We attribute the accuracy gaps to two disadvantages of non-autoregressive models: a) learning simultaneous generation under the overly strong conditional independence assumption; b) lacking explicit target language modeling. In this paper, we propose Glancing Transformer (GLAT) to address the above disadvantages, which reduces the difficulty of learning simultaneous generation and at the same time introduces explicit target language modeling in the non-autoregressive setting. Experiments on several benchmarks demonstrate that our approach significantly improves the accuracy of non-autoregressive models without sacrificing any inference efficiency. In particular, GLAT achieves 30.91 BLEU on WMT 2014 German-English, narrowing the gap between autoregressive and non-autoregressive models to less than 0.5 BLEU points.
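The glancing training idea lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendering of one glancing-sampling training step as the abstract describes it: decode once with a fully masked target, measure how far the prediction is from the reference, reveal that proportion of reference tokens as decoder inputs, and train the model to predict the rest. The `model` signature, `mask_id`, and the 0.5 sampling ratio are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def glancing_step(model, src, tgt, mask_id, ratio=0.5):
    """One GLAT-style training step; assumes model(src, tgt_in) -> logits [B, T, V]."""
    # First pass: decode with a fully masked target to get an initial guess.
    masked = torch.full_like(tgt, mask_id)
    with torch.no_grad():
        pred = model(src, masked).argmax(-1)

    # Reveal more reference tokens when the first-pass prediction is worse
    # (proportional to the Hamming distance between prediction and reference).
    n_glance = (ratio * (pred != tgt).sum(-1).float()).long()

    glanced = masked.clone()
    for i, n in enumerate(n_glance):
        idx = torch.randperm(tgt.size(1))[:n]
        glanced[i, idx] = tgt[i, idx]

    # Second pass: train the model to fill in the still-masked positions.
    logits = model(src, glanced)
    loss_mask = glanced == mask_id
    return F.cross_entropy(logits[loss_mask], tgt[loss_mask])
```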
2. COVID-SEE: Scientific Evidence Explorer for COVID-19 Related Research [PDF] Back to Contents
Karin Verspoor, Simon Šuster, Yulia Otmakhova, Shevon Mendis, Zenan Zhai, Biaoyan Fang, Jey Han Lau, Timothy Baldwin, Antonio Jimeno Yepes, David Martinez
Abstract: We present COVID-SEE, a system for medical literature discovery based on the concept of information exploration, which builds on several distinct text analysis and natural language processing methods to structure and organise information in publications, and augments search by providing a visual overview supporting exploration of a collection to identify key articles of interest. We developed this system over COVID-19 literature to help medical professionals and researchers explore the literature evidence, and improve findability of relevant information. COVID-SEE is available at this http URL.
3. Very Deep Transformers for Neural Machine Translation [PDF] Back to Contents
Xiaodong Liu, Kevin Duh, Liyuan Liu, Jianfeng Gao
Abstract: We explore the application of very deep Transformer models for Neural Machine Translation (NMT). Using a simple yet effective initialization technique that stabilizes training, we show that it is feasible to build standard Transformer-based models with up to 60 encoder layers and 12 decoder layers. These deep models outperform their baseline 6-layer counterparts by as much as 2.5 BLEU, and achieve new state-of-the-art benchmark results on WMT14 English-French (43.8 BLEU) and WMT14 English-German (30.1 BLEU). The code and trained models will be publicly available at: this https URL.
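For scale, here is what the reported depth configuration looks like when instantiated with stock PyTorch. This sketch only shows the shape of the model; the paper's stabilizing initialization, which is what makes this depth trainable, is not reproduced here, and the hidden dimensions are assumed.

```python
import torch.nn as nn

# 60 encoder layers and 12 decoder layers, versus the standard 6/6 baseline.
# d_model and dim_feedforward are illustrative; the paper's values may differ.
deep_transformer = nn.Transformer(
    d_model=512,
    nhead=8,
    num_encoder_layers=60,
    num_decoder_layers=12,
    dim_feedforward=2048,
)
print(sum(p.numel() for p in deep_transformer.parameters()))  # parameter count
```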
4. NASE: Learning Knowledge Graph Embedding for Link Prediction via Neural Architecture Search [PDF] Back to Contents
Xiaoyu Kou, Bingfeng Luo, Huang Hu, Yan Zhang
Abstract: Link prediction is the task of predicting missing connections between entities in a knowledge graph (KG). While models of various forms have been proposed for the link prediction task, most of them are designed based on a few known relation patterns in several well-known datasets. Due to the diverse and complex nature of real-world KGs, it is inherently difficult to design a model that fits all datasets well. To address this issue, previous work has tried to use Automated Machine Learning (AutoML) to search for the best model for a given dataset. However, their search space is limited only to bilinear model families. In this paper, we propose a novel Neural Architecture Search (NAS) framework for the link prediction task. First, the embeddings of the input triplet are refined by the Representation Search Module. Then, the prediction score is searched within the Score Function Search Module. This framework entails a more general search space, which enables us to take advantage of several mainstream model families and thus potentially achieve better performance. We relax the search space to be continuous so that the architecture can be optimized efficiently using gradient-based search strategies. Experimental results on several benchmark datasets demonstrate the effectiveness of our method compared with several state-of-the-art approaches.
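The continuous relaxation can be illustrated with a small, DARTS-style sketch: candidate score functions are mixed with softmax-weighted architecture parameters, so the choice of scoring op becomes differentiable. The two candidates below (DistMult- and TransE-style scores) are illustrative stand-ins, not the paper's actual search space.

```python
import torch

def distmult(h, r, t):
    # Bilinear-family candidate score.
    return (h * r * t).sum(-1)

def transe(h, r, t):
    # Translation-family candidate score.
    return -(h + r - t).norm(p=1, dim=-1)

candidates = [distmult, transe]
alpha = torch.zeros(len(candidates), requires_grad=True)  # architecture params

def mixed_score(h, r, t):
    # Softmax over alpha makes the discrete choice of score function continuous,
    # so alpha can be optimized with gradients alongside the embeddings.
    w = torch.softmax(alpha, dim=0)
    return sum(wi * f(h, r, t) for wi, f in zip(w, candidates))

h, r, t = (torch.randn(4, 32) for _ in range(3))
print(mixed_score(h, r, t))  # after search, keep the op with the largest weight
```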
5. Are Neural Open-Domain Dialog Systems Robust to Speech Recognition Errors in the Dialog History? An Empirical Study [PDF] Back to Contents
Karthik Gopalakrishnan, Behnam Hedayatnia, Longshaokan Wang, Yang Liu, Dilek Hakkani-Tur
Abstract: Large end-to-end neural open-domain chatbots are becoming increasingly popular. However, research on building such chatbots has typically assumed that the user input is written text, and it is not clear whether these chatbots would seamlessly integrate with automatic speech recognition (ASR) models to serve the speech modality. We aim to bring attention to this important question by empirically studying the effects of various types of synthetic and actual ASR hypotheses in the dialog history on TransferTransfo, a state-of-the-art Generative Pre-trained Transformer (GPT) based neural open-domain dialog system from the NeurIPS ConvAI2 challenge. We observe that TransferTransfo trained on written data is very sensitive to such hypotheses introduced into the dialog history during inference time. As a baseline mitigation strategy, we introduce synthetic ASR hypotheses into the dialog history during training and observe marginal improvements, demonstrating the need for further research into techniques that make end-to-end open-domain chatbots fully speech-robust. To the best of our knowledge, this is the first study to evaluate the effects of synthetic and actual ASR hypotheses on a state-of-the-art neural open-domain dialog system, and we hope it promotes speech-robustness as an evaluation criterion in open-domain dialog.
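The baseline mitigation, injecting synthetic ASR hypotheses into the dialog history during training, can be sketched as a simple noising function. The word-level substitution/deletion scheme and the tiny confusion table below are illustrative assumptions, not the simulator used in the paper.

```python
import random

# Hypothetical homophone-style confusion pairs standing in for ASR errors.
CONFUSIONS = {"their": "there", "to": "two", "for": "four"}

def corrupt_utterance(text, sub_p=0.1, del_p=0.05, seed=None):
    """Return a copy of text with simulated ASR deletions and substitutions."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if rng.random() < del_p:
            continue  # simulated deletion
        out.append(CONFUSIONS.get(word, word) if rng.random() < sub_p else word)
    return " ".join(out)

history = ["do you want to go to their place?"]
noisy_history = [corrupt_utterance(u, seed=0) for u in history]
print(noisy_history)
```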
6. Stock Index Prediction with Multi-task Learning and Word Polarity Over Time [PDF] Back to Contents
Yue Zhou, Kerstin Voigt
Abstract: Sentiment-based stock prediction systems aim to explore sentiment or event signals from online corpora and attempt to relate the signals to stock price variations. Both feature-based and neural-network-based approaches have delivered promising results. However, the frequent minor fluctuations of stock prices restrict learning the sentiment of text from price patterns, and market sentiment learned from text can be biased if the text is irrelevant to the underlying market. In addition, when using discrete word features, the polarity of a certain term can change over time according to different events. To address these issues, we propose a two-stage system that consists of a sentiment extractor, which extracts the opinion on the market trend, and a summarizer, which predicts the direction of the index movement of the following week given the opinions of the news over the current week. We adopt BERT with multitask learning, which additionally predicts the worthiness of the news, and propose a metric called Polarity-Over-Time to extract word polarity across different event periods. A Weekly-Monday prediction framework and a new dataset, the 10-year Reuters financial news dataset, are also proposed.
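The abstract does not define Polarity-Over-Time precisely, but the idea of period-dependent word polarity admits a simple sketch: estimate a polarity score per (period, word) pair, so that a term like "rally" can carry different signs in different event periods. The normalized positive-minus-negative count below is an assumption for illustration.

```python
from collections import defaultdict

def polarity_over_time(docs):
    """docs: iterable of (period, words, label) tuples with label in {+1, -1}."""
    counts = defaultdict(lambda: [0, 0])    # (period, word) -> [pos, neg]
    for period, words, label in docs:
        for w in words:
            counts[(period, w)][0 if label > 0 else 1] += 1
    return {key: (pos - neg) / (pos + neg) for key, (pos, neg) in counts.items()}

docs = [
    ("2019-W01", ["rally", "beats"], +1),
    ("2019-W02", ["rally", "misses"], -1),  # "rally" flips polarity over time
]
print(polarity_over_time(docs))
```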
7. Deploying Lifelong Open-Domain Dialogue Learning [PDF] Back to Contents
Kurt Shuster, Jack Urbanek, Emily Dinan, Arthur Szlam, Jason Weston
Abstract: Much of NLP research has focused on crowdsourced static datasets and the supervised learning paradigm of training once and then evaluating test performance. As argued in de Vries et al. (2020), crowdsourced data has the issues of lack of naturalness and relevance to real-world use cases, while the static dataset paradigm does not allow for a model to learn from its experiences of using language (Silver et al., 2013). In contrast, one might hope for machine learning systems that become more useful as they interact with people. In this work, we build and deploy a role-playing game, whereby human players converse with learning agents situated in an open-domain fantasy world. We show that by training models on the conversations they have with humans in the game the models progressively improve, as measured by automatic metrics and online engagement scores. This learning is shown to be more efficient than crowdsourced data when applied to conversations with real users, as well as being far cheaper to collect.
8. Just another quantum assembly language (Jaqal) [PDF] Back to Contents
Benjamin C. A. Morrison, Andrew J. Landahl, Daniel S. Lobser, Kenneth M. Rudinger, Antonio E. Russo, Jay W. Van Der Wall, Peter Maunz
Abstract: The Quantum Scientific Computing Open User Testbed (QSCOUT) is a trapped-ion quantum computer testbed realized at Sandia National Laboratories on behalf of the Department of Energy's Office of Science and its Advanced Scientific Computing Research (ASCR) program. Here we describe Jaqal, for Just another quantum assembly language, the programming language we invented to specify programs executed on QSCOUT. Jaqal is useful beyond QSCOUT---it can support multiple hardware targets because it offloads gate names and their pulse-sequence definitions to external files. We describe the capabilities of the Jaqal language, our approach in designing it, and the reasons for its creation. To learn more about QSCOUT, Jaqal, or JaqalPaq, the metaprogramming Python package we developed for Jaqal, please visit this https URL, this https URL, or send an e-mail to qscout@sandia.gov.
9. Word2vec Skip-gram Dimensionality Selection via Sequential Normalized Maximum Likelihood [PDF] Back to Contents
Pham Thuc Hung, Kenji Yamanishi
Abstract: In this paper, we propose a novel information criteria-based approach to select the dimensionality of the word2vec Skip-gram (SG). From the perspective of probability theory, SG is considered an implicit probability distribution estimation under the assumption that there exists a true contextual distribution among words. Therefore, we apply information criteria with the aim of selecting the best dimensionality, so that the corresponding model can be as close as possible to the true distribution. We examine the following information criteria for the dimensionality selection problem: the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Sequential Normalized Maximum Likelihood (SNML) criterion. SNML is the total codelength required for the sequential encoding of a data sequence on the basis of the minimum description length. The proposed approach is applied to both the original SG model and the SG Negative Sampling model to clarify the idea of using information criteria. Additionally, as the original SNML suffers from computational disadvantages, we introduce novel heuristics for its efficient computation. Moreover, we empirically demonstrate that SNML outperforms both BIC and AIC. In comparison with other evaluation methods for word embedding, the dimensionality selected by SNML is significantly closer to the optimal dimensionality obtained by word analogy or word similarity tasks.
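As a concrete anchor for the model-selection setup, the sketch below scores candidate dimensionalities with the standard AIC and BIC formulas (AIC = 2k - 2 ln L, BIC = k ln n - 2 ln L) and picks the minimizer. The SNML codelength itself is more involved and is not reproduced here; the parameter count and log-likelihoods are hypothetical.

```python
import math

def select_dim(log_liks, n_params, n):
    """Pick the dimensionality minimizing AIC and BIC, given log-likelihoods."""
    aic = {d: 2 * n_params(d) - 2 * ll for d, ll in log_liks.items()}
    bic = {d: n_params(d) * math.log(n) - 2 * ll for d, ll in log_liks.items()}
    return min(aic, key=aic.get), min(bic, key=bic.get)

vocab = 10_000
n_params = lambda d: 2 * vocab * d    # Skip-gram: input + output embeddings
log_liks = {50: -1.52e6, 100: -1.45e6, 200: -1.43e6}   # hypothetical values
print(select_dim(log_liks, n_params, n=5_000_000))
```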
10. PopMAG: Pop Music Accompaniment Generation [PDF] Back to Contents
Yi Ren, Jinzheng He, Xu Tan, Tao Qin, Zhou Zhao, Tie-Yan Liu
Abstract: In pop music, accompaniments are usually played by multiple instruments (tracks) such as drum, bass, string and guitar, and can make a song more expressive and contagious when arranged together with its melody. Previous works usually generate multiple tracks separately, and the music notes from different tracks do not explicitly depend on each other, which hurts harmony modeling. To improve harmony, in this paper we propose a novel MUlti-track MIDI representation (MuMIDI), which enables simultaneous multi-track generation in a single sequence and explicitly models the dependency of the notes from different tracks. While this greatly improves harmony, unfortunately it enlarges the sequence length and brings the new challenge of long-term music modeling. We further introduce two new techniques to address this challenge: 1) we model multiple note attributes (e.g., pitch, duration, velocity) of a musical note in one step instead of multiple steps, which shortens the length of a MuMIDI sequence; 2) we introduce extra long context as memory to capture long-term dependency in music. We call our system for pop music accompaniment generation PopMAG. We evaluate PopMAG on multiple datasets (LMD, FreeMidi and CPMD, a private dataset of Chinese pop songs) with both subjective and objective metrics. The results demonstrate the effectiveness of PopMAG for multi-track harmony modeling and long-term context modeling. Specifically, PopMAG wins 42%/38%/40% of votes when compared with ground-truth musical pieces on the LMD, FreeMidi and CPMD datasets respectively, and largely outperforms other state-of-the-art music accompaniment generation models and multi-track MIDI representations in terms of subjective and objective metrics.
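The first technique, packing multiple note attributes into one sequence step, can be sketched as a summed multi-embedding: one position encodes a note's pitch, duration, velocity, and track at once instead of spending a step on each. The vocabulary sizes and dimension below are illustrative assumptions, not MuMIDI's actual token design.

```python
import torch
import torch.nn as nn

class NoteStepEmbedding(nn.Module):
    """Embed all attributes of one note in a single sequence step."""
    def __init__(self, d=256):
        super().__init__()
        self.pitch = nn.Embedding(128, d)    # MIDI pitch range
        self.duration = nn.Embedding(64, d)  # quantized duration bins
        self.velocity = nn.Embedding(32, d)  # quantized velocity bins
        self.track = nn.Embedding(5, d)      # e.g. drum/bass/string/guitar/melody

    def forward(self, pitch, duration, velocity, track):
        # Summing the attribute embeddings yields one vector per note,
        # shortening the sequence versus one token per attribute.
        return (self.pitch(pitch) + self.duration(duration)
                + self.velocity(velocity) + self.track(track))

emb = NoteStepEmbedding()
step = emb(torch.tensor([60]), torch.tensor([8]),
           torch.tensor([20]), torch.tensor([1]))
print(step.shape)  # torch.Size([1, 256])
```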
11. Resolving Intent Ambiguities by Retrieving Discriminative Clarifying Questions [PDF] Back to Contents
Kaustubh D. Dhole
Abstract: Task-oriented dialogue systems generally employ intent detection systems in order to map user queries to a set of pre-defined intents. However, user queries appearing in natural language can easily be ambiguous, and hence such a direct mapping might not be straightforward, harming intent detection and eventually the overall performance of a dialogue system. Moreover, acquiring domain-specific clarification questions is costly. In order to disambiguate queries which are ambiguous between two intents, we propose a novel method of generating discriminative questions using a simple rule-based system which can take advantage of any question generation system without requiring annotated data of clarification questions. Our approach aims at discrimination between two intents but can be easily extended to clarification over multiple intents. Seeking clarification from the user to classify user intents not only helps understand the user intent effectively, but also reduces the roboticity of the conversation and makes the interaction considerably more natural.
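One way to picture the rule-based generation of a discriminative question is to surface the phrases that differ between the two candidate intents. The template and intent descriptions below are hypothetical, intended only to show the shape of the idea.

```python
def clarifying_question(intent_a, intent_b):
    """Build a question from the words that distinguish two intent descriptions."""
    words_a = set(intent_a["desc"].split())
    words_b = set(intent_b["desc"].split())
    distinct_a = " ".join(sorted(words_a - words_b))
    distinct_b = " ".join(sorted(words_b - words_a))
    return f"Do you want to {distinct_a} or to {distinct_b}?"

book_flight = {"name": "book_flight", "desc": "book a flight"}
cancel_flight = {"name": "cancel_flight", "desc": "cancel a flight"}
print(clarifying_question(book_flight, cancel_flight))
# -> "Do you want to book or to cancel?"
```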