Journal

Volume 21, Issue 1 (March 31, 2020)

6 articles

  • Introduction to the Special Issue: AI at the Crossroads of NLP and Neurosciences
    by Michael Zock
    J. CS. 2020, 21(1), 1-14.
  • Data-driven models and computational tools for neurolinguistics: a language technology perspective
    by Ekaterina Artemova, Amir Bakarov, Aleksey Artemov, Evgeny Burnaev, Maxim Sharaev
    J. CS. 2020, 21(1), 15-52.
    Abstract: In this paper, we focus on the connection between language technologies and research in neurolinguistics, and on the influence of the former on the latter. We review brain imaging-based neurolinguistic studies, focusing on natural language representations such as word embeddings and pre-trained language models. The mutual enrichment of neurolinguistics and language technologies is leading to the development of brain-aware natural language representations, a research area whose importance is underscored by medical applications. (A minimal sketch of extracting such representations follows the article list below.)
  • A Unified Hierarchy for AI and Natural Intelligence through Auto-Programming for General Purposes
    by Juyang Weng
    J. CS. 2020, 21(1), 53-102.
    Abstract: Despite the current great public interest in neural-network-based artificial intelligence, there exists a huge gap between artificial intelligence (AI) and natural intelligence (NI). Moreover, the lack of a unified hierarchy of intelligence means that intelligence is widely regarded piecemeal, a status quo that results in highly brittle AI systems. This paper proposes how an autonomous agent, natural or artificial, develops a unified intelligence hierarchy in the brain. The term "unified" covers not only both AI and NI, but also all practical sensory and motor modalities, including perception, representation, reasoning, learning, societal activities, and politics. This line of work has been supported by rigorous mathematical proofs and initial experimental verifications; however, this paper minimizes the mathematical material so that the new information can reach a wide audience. We should ask a new and general question: "How can a machine, natural or artificial, Autonomously Program For General Purposes (APFGP) from the real physical world?" We have given this question a theoretical, experimental, and mathematical solution: a clear but powerful learning engine, the Developmental Network (DN), enables APFGP. We hope that understanding APFGP for both AI and NI will not only fully automate the development of AI systems but also improve human development, individually and societally.
  • Assisting Authors to Convert Raw Products into Polished Prose
    by Takumi Ito, Tatsuki Kuribayashi, Hayato Kobayashi, Ana Brassard, Masato Hagiwara, Jun Suzuki, Kentaro Inui
    J. CS. 2020, 21(1), 103-140.
    Abstract: Being a notoriously complex problem, writing is generally decomposed into a series of subtasks: idea generation, expression, revision, and so on. Given some goal, the author generates a set of ideas (brainstorming) and integrates them into some skeleton (an outline or text plan). This leads to a first draft, which is then submitted for revision, possibly yielding changes at various levels (content, structure, form). Having produced a draft, authors usually revise, edit, and proofread their documents. We confine ourselves here to academic writing, focusing on sentence production. While there has been considerable work on this topic, most writing assistance has dealt with editing and proofreading, the goal being the correction of surface-level problems such as typography, spelling, or grammatical errors. We broaden the scope by also including cases where the entire sentence needs to be rewritten in order to properly express all of the planned information. Hence, Sentence-level Revision (SentRev) becomes part of our writing assistance task. Systems performing well on this task can be of considerable help to inexperienced authors by producing fluent, well-formed sentences based on the user's drafts. To evaluate our SentRev model, we built a new, freely available, crowdsourced evaluation dataset consisting of incomplete sentences produced by non-native writers, paired with final-version sentences extracted from published academic papers. We also used this dataset to establish baseline performance on SentRev. (A hypothetical sketch of revision framed as text-to-text generation follows the article list below.)
  • Computational Representation of Chinese Characters: Comparison Between Singular Value Decomposition and Variational Autoencoder
    by Yu-Hsiang Tseng, Shu-Kai Hsieh
    J. CS. 2020, 21(1), 141-160.
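    (An illustrative sketch of the SVD and VAE representations named in the title follows the article list below.)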
  • NLP's Clever Hans Moment Has Arrived
    by Benjamin Heinzerling
    J. CS. 2020, 21(1), 161-170.
    Abstract: Large pretrained language models have led to a flurry of new state-of-the-art results being reported in many areas of natural language processing. However, recent work has also shown that such models tend to solve language tasks by relying on superficial cues found in benchmark datasets, instead of acquiring the capabilities envisioned by the task designers. In this short opinion piece, I review a report by Niven & Kao (2019) of this so-called Clever Hans effect on an argument reasoning task and discuss possible solutions for its prevention. (A toy cue-detection diagnostic follows the article list below.)
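
As a rough companion to Artemova et al.'s discussion of word embeddings and pre-trained language models, the sketch below shows one common way to extract contextual representations from a pretrained model. It is a minimal illustration, not anything from the paper itself: the model choice (bert-base-uncased), the Hugging Face transformers API, and mean pooling are all assumptions made here for concreteness.

    # Minimal sketch: token- and sentence-level representations from a
    # pretrained language model. Model choice and mean pooling are
    # illustrative assumptions, not details taken from the paper.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()

    inputs = tokenizer("The brain encodes linguistic meaning.",
                       return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # One contextual vector per token: shape (1, num_tokens, 768) ...
    token_vectors = outputs.last_hidden_state
    # ... and a single sentence vector by mean pooling over tokens, the
    # kind of feature often regressed against brain imaging responses.
    sentence_vector = token_vectors.mean(dim=1)
    print(sentence_vector.shape)  # torch.Size([1, 768])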
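
The SentRev task in Ito et al., rewriting an incomplete draft sentence into polished prose, fits naturally into a text-to-text generation interface. The sketch below only illustrates that interface: the use of T5, the "revise:" prefix, and the example draft are hypothetical, and an off-the-shelf checkpoint would first need fine-tuning on draft/final-version pairs such as those in the paper's dataset.

    # Hypothetical sketch of sentence-level revision framed as text-to-text
    # generation. T5, the "revise:" prefix, and the draft are assumptions;
    # a raw t5-small checkpoint would need fine-tuning on draft/final
    # sentence pairs before its output resembled polished academic prose.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # An incomplete, disfluent draft of the kind in the evaluation data.
    draft = "revise: we propose method for improve translation quality"

    inputs = tokenizer(draft, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40, num_beams=4)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))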
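
No abstract survives for Tseng & Hsieh's article, so the sketch below is grounded only in its title: it contrasts the two representation methods named there, truncated singular value decomposition and a variational autoencoder, on flattened character bitmaps. The random stand-in data, the 24x24 resolution, and the 32-dimensional codes are all invented for illustration.

    # Sketch of the two representation methods named in the title, applied
    # to flattened character bitmaps. Data, image size, and latent
    # dimension are illustrative assumptions, not the paper's setup.
    import numpy as np
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)
    chars = rng.random((500, 24 * 24)).astype(np.float32)  # stand-in bitmaps

    # --- Method 1: truncated SVD gives a linear low-dimensional code. ---
    U, S, Vt = np.linalg.svd(chars - chars.mean(axis=0), full_matrices=False)
    svd_codes = U[:, :32] * S[:32]  # 32-dimensional linear embedding

    # --- Method 2: a variational autoencoder learns a nonlinear code. ---
    class VAE(nn.Module):
        def __init__(self, dim=24 * 24, latent=32):
            super().__init__()
            self.enc = nn.Linear(dim, 128)
            self.mu = nn.Linear(128, latent)
            self.logvar = nn.Linear(128, latent)
            self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, dim), nn.Sigmoid())

        def forward(self, x):
            h = torch.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
            return self.dec(z), mu, logvar

    vae = VAE()
    opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
    x = torch.from_numpy(chars)
    for _ in range(100):  # tiny training loop, for illustration only
        recon, mu, logvar = vae(x)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / len(x)
        loss = nn.functional.mse_loss(recon, x) + 1e-3 * kl
        opt.zero_grad()
        loss.backward()
        opt.step()

    vae_codes = vae.mu(torch.relu(vae.enc(x))).detach().numpy()
    print(svd_codes.shape, vae_codes.shape)  # both (500, 32)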
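
Heinzerling's point about superficial cues suggests a simple diagnostic: train a shallow model on a deliberately impoverished view of the input, and treat any above-chance accuracy as evidence of leaked cues. The toy data below is fabricated to make the effect obvious; in the real argument reasoning benchmark, Niven & Kao found that unigrams such as "not" played this role.

    # Toy Clever Hans diagnostic: a bag-of-words classifier sees only the
    # candidate answers, never the question or context, so any skill it
    # shows must come from surface cues. The mini-dataset is fabricated;
    # Niven & Kao found cues like "not" predictive in the real benchmark.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    answers = ["it is not safe", "it is safe",
               "they did not agree", "they agreed",
               "this is not true", "this is true"] * 20
    labels = [0, 1, 0, 1, 0, 1] * 20

    features = CountVectorizer().fit_transform(answers)
    scores = cross_val_score(LogisticRegression(), features, labels, cv=5)
    print(scores.mean())  # near 100%: "not" alone gives the label away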
