科学网


Tag: Machine Intelligence


Related blog posts

Information Discovery with Machine Intelligence for Language
timy 2019-7-7 22:01
Call for papers: Information Discovery with Machine Intelligence for Language

A special issue call for papers from Information Discovery and Delivery.

Machine Intelligence for Natural Language Processing has become a rapidly developing research area in recent years (Young et al. 2018). With the development of neural language modeling and transfer learning techniques such as ULMFiT (Howard and Ruder 2018), BERT (Devlin et al. 2018), and GPT-2, researchers have achieved state-of-the-art results on a variety of NLP tasks, at times even claiming to have outperformed human beings (Czapla, Howard, and Kardas 2018). We are interested in whether, and how, the exciting new technologies currently being developed in deep learning and Natural Language Processing can lead to a boom of applications in the field of information discovery, and to a consequent benefit to human beings.

Here are some examples of the types of questions we hope may be addressed by submissions to this special issue:

Labeling datasets can be very hard and/or expensive. Can we transfer a model trained on vast amounts of English text to another language that has only thousands, or even hundreds, of examples?

Should we exploit existing word-level embeddings, try sentence- and/or paragraph-level language modeling, or go in the opposite direction and employ character-level modeling?

In computer vision, data augmentation is a common practice: images are cropped or rotated to make "new" images to feed deep learning models, in the hope of avoiding overfitting to the training and test data. Can we do similar things with synthetic text data? Why or why not? We need experiments to show the results, and analytics to explain those results.

How can we lead a machine to understand the meaning of a text, instead of just making predictions based on frequency, probability, or pure luck? We cannot forget that real people generate most texts.
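By way of illustration only (not part of the call), the image-style augmentation question above has a crude textual analogue: generating "new" sentences by randomly dropping words and swapping adjacent pairs. The function below is a minimal sketch of that idea; its name and parameters are our own invention, not an established API or any particular paper's method.

```python
import random

def augment_text(sentence, n_variants=3, p_drop=0.1, seed=0):
    """Create synthetic variants of a sentence by randomly dropping
    words and swapping one adjacent pair -- a rough textual analogue
    of cropping and rotating images."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    words = sentence.split()
    variants = []
    for _ in range(n_variants):
        # Drop each word with probability p_drop; fall back to the
        # full sentence if everything was dropped.
        kept = [w for w in words if rng.random() >= p_drop] or list(words)
        # Swap one random adjacent pair to perturb word order.
        if len(kept) > 1:
            i = rng.randrange(len(kept) - 1)
            kept[i], kept[i + 1] = kept[i + 1], kept[i]
        variants.append(" ".join(kept))
    return variants

print(augment_text("the quick brown fox jumps over the lazy dog"))
```

A real study would of course test stronger techniques (back-translation, language-model sampling); the sketch only makes the open question concrete: unlike cropping an image, perturbing word order can change a sentence's meaning, so whether such synthetic text helps or hurts is exactly what experiments and analysis would need to show.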
Can we use data generated by people to model those users and find their potential information needs?

We invite authors to submit papers that address the questions above, as well as related questions not outlined in this proposal. Whether a particular paper addresses the concerns of the special issue will be left to the discretion of the guest editors, in consultation with the senior editor(s) where necessary.

Topics of interest include, but are not limited to:

Language Modeling for Information Retrieval
Transfer Learning for Text Classification
Word and Character Representations for Cross-Lingual Analysis
Information Extraction and Knowledge Graph Building
Discourse Analysis at Sentence Level and Beyond
Synthetic Text Data for Machine Learning Purposes
User Modeling and Information Recommendation based on Text Analysis
Semantic Analysis with Machine Learning
Other applications of CL/NLP for Information Discovery
Other related topics

Guest Editors

Dr. Shuyi Wang, Tianjin Normal University, nkwshuyi@gmail.com
Dr. Alexis Palmer, University of North Texas, alexis.palmer@unt.edu
Dr. Chengzhi Zhang, Nanjing University of Science and Technology, zhangcz@njust.edu.cn

Important Dates

First announcement/CfP: June 3, 2019
Second CfP: October 15, 2019
Final reminder: November 11, 2019
Submissions due: November 18, 2019
Papers sent to reviewers: November 25, 2019
Reviews due: December 20, 2019
Author notification: January 13, 2020
Final papers: February 7, 2020

Submissions must comply with the journal author guidelines, available at www.emeraldgrouppublishing.com/products/journals/author_guidelines.htm?id=idd. Submissions must be made through ScholarOne Manuscripts, the online submission and peer review system; registration and access are available at mc.manuscriptcentral.com/idd.

Information Discovery and Delivery covers information discovery and access for digital information researchers.
This includes educators, knowledge professionals in education and cultural organizations, knowledge managers in media, health care and government, as well as librarians. IDD is a member of, and subscribes to the principles of, the Committee on Publication Ethics.

References

Czapla, Piotr, Jeremy Howard, and Marcin Kardas. 2018. "Universal Language Model Fine-Tuning with Subword Tokenization for Polish." arXiv:1810.10222, October. http://arxiv.org/abs/1810.10222.

Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. "BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding." arXiv:1810.04805, October. http://arxiv.org/abs/1810.04805.

Howard, Jeremy, and Sebastian Ruder. 2018. "Universal Language Model Fine-Tuning for Text Classification." In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 328–39. Melbourne, Australia: Association for Computational Linguistics. https://aclweb.org/anthology/P18-1031.

Young, T., D. Hazarika, S. Poria, and E. Cambria. 2018. "Recent Trends in Deep Learning Based Natural Language Processing." IEEE Computational Intelligence Magazine 13 (3): 55–75. https://doi.org/10.1109/MCI.2018.2840738.
