-
September Special Issue Preface
by Martha Lewis and Michael Moortgat
J. CS. 2021, 22(3), 0-;
-
Cats Climb Entails Mammals Move: Preserving Hyponymy in Compositional Distributional Semantics
by Gemma De las Cuevas, Andreas Klingler, Martha Lewis, and Tim Netzer
J. CS. 2021, 22(3), 311-353;
Abstract To give vector-based representations of meaning more structure, an approach proposed in Piedeleu et al. (2015); Sadrzadeh et al. (2018); Bankova et al. (2018) is to use positive semidefinite (psd) matrices. These allow us to model similarity of words as well as the hyponymy or is-a relationship. To compose words to form phrases and sentences, we may represent adjectives, verbs, and other functional words as multilinear, positivity preserving maps, following the compositional distributional approach introduced in Coecke et al. (2010) and extended to the realm of psd matrices in Piedeleu et al. (2015), but it is not clear how to learn representations of functional words when working with psd matrices. In this paper, we introduce a generic way of composing the psd matrices corresponding to words. We propose that psd matrices for verbs, adjectives, and other functional words be lifted to completely positive (CP) maps that match their grammatical type. This lifting is carried out by our composition rule called Compression, Compr. In contrast to previous composition rules like Fuzz and Phaser (Coecke and Meichanetzidis, 2020) (a.k.a. KMult and BMult (Lewis, 2019a)), Compr preserves hyponymy. Mathematically, Compr is itself a CP map, and is therefore linear and generally non-commutative. We give a number of proposals for the structure of Compr, based on spiders, cups, and caps, and generate a range of composition rules. We test these rules on sentence entailment datasets from Kartsaklis and Sadrzadeh (2016), and see some improvements over the performance of Fuzz and Phaser. We go on to estimate the parameters of a simplified form of Compr based on entailment information from the aforementioned datasets, and find that whilst this learnt operator does not consistently outperform previously proposed mechanisms, it is competitive and has the potential to improve with the use of a less simplified version.
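The central move in this abstract is lifting a functional word's psd matrix to a completely positive map that acts on the psd matrices of its arguments. As a minimal, hedged sketch of what such a lifting looks like in general (a generic CP map given by Kraus operators, not the paper's actual Compr construction, whose maps are built from the word's own psd representation), the following NumPy snippet checks that applying a CP map to a psd noun matrix yields another psd matrix:

```python
import numpy as np

def random_psd(dim, rank, rng):
    """Generate a random positive semidefinite matrix of the given rank."""
    v = rng.standard_normal((dim, rank))
    return v @ v.T

def cp_map_from_kraus(kraus_ops):
    """Return a completely positive map X -> sum_i K_i X K_i^T."""
    def apply(x):
        return sum(k @ x @ k.T for k in kraus_ops)
    return apply

rng = np.random.default_rng(0)
noun = random_psd(4, 2, rng)                    # psd matrix for a noun, e.g. "cats"
kraus = [rng.standard_normal((4, 4)) for _ in range(3)]
verb = cp_map_from_kraus(kraus)                 # verb lifted to a CP map on psd matrices

phrase = verb(noun)                             # psd matrix for the composed phrase
print(np.all(np.linalg.eigvalsh(phrase) >= -1e-9))  # stays psd: True
```

Positivity preservation is what keeps phrase representations inside the psd formalism; the additional guarantee the abstract highlights is that Compr also preserves the hyponymy ordering between those matrices.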
-
Solving Logical Puzzles in DisCoCirc
by Tiffany Duneau
J. CS. 2021, 22(3), 355-389;
Abstract Finding a full solution to logical puzzles, from parsing the text to arriving at the answer, forms an active area of research in artificial intelligence. In this paper, we address an initial subset of these puzzles that take the form of constraint satisfaction problems, providing a method for solving them by encoding the puzzle meaning as a relation informed by the individual sentences that make up the puzzle text. To build this relation from the text we make use of a diagrammatic, distributional compositional framework called DisCoCirc. We then show that the puzzle solution can be extracted from this encoding with minimal extra work, as the logical form of the puzzle is modelled and evaluated as the meaning encoding is computed.
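The strategy described here, building the puzzle meaning as a relation that each sentence of the text refines, can be illustrated with a plain constraint-satisfaction sketch. This is a hedged toy example in ordinary Python, not the diagrammatic DisCoCirc encoding itself; the puzzle and the per-sentence constraints are invented for illustration:

```python
from itertools import permutations

# Toy puzzle (not from the paper): Alice, Bob and Carol each own exactly one
# of a cat, a dog and a fish. Each sentence of the puzzle text contributes a
# constraint (a relation) on the joint assignment.
people = ("Alice", "Bob", "Carol")

sentence_constraints = [
    lambda a: a["Alice"] != "dog",     # "Alice does not own the dog."
    lambda a: a["Bob"] == "fish",      # "Bob owns the fish."
]

# The puzzle meaning is the intersection of the per-sentence relations.
assignments = [dict(zip(people, pets))
               for pets in permutations(("cat", "dog", "fish"))]
solutions = [a for a in assignments
             if all(c(a) for c in sentence_constraints)]
print(solutions)   # [{'Alice': 'cat', 'Bob': 'fish', 'Carol': 'dog'}]
```

In the paper the per-sentence constraints come from the DisCoCirc meaning of each sentence and are evaluated as the text is composed, so the solution falls out of the encoding rather than from a separate search step.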
-
Analysing Ambiguous Nouns and Verbs with Quantum Contextuality Tools
by Daphne Wang, Mehrnoosh Sadrzadeh, Samson Abramsky, and Víctor H. Cervantes
J. CS. 2021, 22(3), 391-420;
Abstract Psycholinguistic research uses eye-tracking to show that polysemous words are disambiguated differently from homonymous words, and that ambiguous verbs are disambiguated differently than ambiguous nouns. Research in Compositional Distributional Semantics uses cosine distances to show that verbs are disambiguated more efficiently in the context of their subjects and objects than when on their own. These two frameworks both focus on one ambiguous word at a time and neither considers ambiguous phrases with two (or more) ambiguous words. We borrow methods and measures from Quantum Information Theory, the framework of Contextuality-by-Default and degrees of contextual influences, and work with ambiguous subject-verb and verb-object phrases of English, where both the subject/object and the verb are ambiguous. We show that differences in the processing of ambiguous verbs versus ambiguous nouns, as well as between different levels of ambiguity in homonymous versus polysemous nouns and verbs, can be modelled using the averages of the degrees of their contextual influences.
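For the compositional-distributional baseline the abstract mentions, the disambiguating effect of a subject or object on an ambiguous verb is measured with cosine similarity. The snippet below is a hedged toy sketch with made-up low-dimensional vectors and a simplistic element-wise composition standing in for the tensor-based composition used in that literature; it only illustrates the measurement, not the Contextuality-by-Default analysis the paper actually performs:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-dimensional vectors; real models use hundreds of dimensions
# learned from corpora.
file_verb = np.array([0.7, 0.7, 0.1])   # ambiguous verb "file"
smooth    = np.array([1.0, 0.1, 0.0])   # "file" as in smoothing a surface down
submit    = np.array([0.1, 1.0, 0.0])   # "file" as in submitting a report
account   = np.array([0.1, 0.9, 0.2])   # object providing disambiguating context

# On its own, the ambiguous verb sits between both senses ...
print(cosine(file_verb, smooth), cosine(file_verb, submit))

# ... but composed with its object, the phrase vector moves clearly
# towards the intended sense.
phrase = file_verb * account
print(cosine(phrase, smooth), cosine(phrase, submit))
```

The paper's contribution starts where this sketch stops: when both the verb and its subject/object are ambiguous, single-word cosine comparisons no longer suffice, and the degrees of contextual influence take over.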
-
Talking Space: Inference from Spatial Linguistic Meanings
by Vincent Wang-Mascianica and Bob Coecke
J. CS. 2021, 22(3), 421-463;
Abstract This paper concerns the intersection of natural language and the physical space around us in which we live, that we observe and/or imagine things within. Many important features of language have spatial connotations, for example, many prepositions (like in, next to, after, on, etc.) are fundamentally spatial. Space is also a key factor of the meanings of many words/phrases/sentences/text, and space is a, if not the, key context for referencing (e.g. pointing) and embodiment. We propose a mechanism for how space and linguistic structure can be made to interact in a matching compositional fashion. Examples include Cartesian space, subway stations, chess pieces on a chess-board, and Penrose's staircase. The starting point for our construction is the DisCoCat model of compositional natural language meaning, which we relax to accommodate physical space. We address the issue of having multiple agents/objects in a space, including the case that each agent has different capabilities with respect to that space, e.g. the specific moves each chess piece can make, or the different velocities one may be able to reach. Once our model is in place, we show how inferences drawing from the structure of physical space can be made. We also show how our linguistic model of space can interact with other such models related to our senses and/or embodiment, such as the conceptual spaces of colour, taste and smell, resulting in a rich compositional model of meaning that is close to human experience and embodiment in the world.
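The abstract's point that different agents have different capabilities with respect to the same space (the specific moves each chess piece can make) can be captured, at its simplest, by modelling each capability as a relation on positions and doing spatial inference by composing relations. This is a hedged toy sketch in plain Python, not the paper's DisCoCat-based construction, and the helper names are invented:

```python
COLS = "abcdefgh"

def knight_moves(square):
    """Squares a knight can reach in one move from `square`, e.g. 'g1'."""
    col, row = COLS.index(square[0]), int(square[1])
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return {COLS[col + dc] + str(row + dr)
            for dc, dr in jumps
            if 0 <= col + dc < 8 and 1 <= row + dr <= 8}

def reachable(move_relation, squares):
    """Compose an agent's capability relation with a set of positions."""
    return set().union(*(move_relation(s) for s in squares))

# "The knight on g1 moves twice": compose the relation with itself.
print(reachable(knight_moves, reachable(knight_moves, {"g1"})))
```

Swapping `knight_moves` for another piece's move relation changes the inferences that can be drawn while leaving the composition mechanism untouched, which is the kind of separation between agents' capabilities and the shared space that the abstract describes.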