New Landscapes in Theoretical Computational Linguistics

Robert Levine, Ohio State University

Yusuke Kubota, University of Tsukuba

Workshop description

This workshop is planned to commemorate the Department of Linguistics' past half-century at Ohio State University, and in particular, to highlight the emphasis on formal methods and mathematical and logical rigor which has contributed to the Department's status as one of the world's premier graduate programs in this field. Much of the reputational excellence of the department (reflected in both domestic and international independent rankings) comes from the leadership role played by the members of the department in the theoretical development of data structures which have proven critical in understanding complex patterns in human languages. Many of these data structures had their origins in the mathematics of computation, and hence a workshop devoted to novel developments in theoretical computer science which can be insightfully applied to natural languages seems a particularly appropriate way to celebrate our department's contributions to foundational work in linguistics.


October 14-16, 2016


Ohio State University, Jennings Hall Room 040

Colocated event

OSU Ling colloquium talk by Richard Moot, 10/14 15:30- Jennings Hall 040

Title: Towards Wide-Coverage Semantics for Type-Logical Grammars

Abstract: As applications of natural language technology become more sophisticated, there is a growing need for applications which display some understanding of natural language texts. Type-logical grammars, because of their natural correspondence with logical semantics in the tradition of Montague, are an especially attractive framework for such systems. I will present work which, based on a corpus of type-logical analyses, uses machine learning techniques to assign parses/proofs and their corresponding meanings to unseen French sentences. Though the semantics is rudimentary in many ways, it still provides a basic account of many interesting phenomena such as clitics, extraction, coordination, control and presupposition.


10/14 Fri

12:00-12:30 Coffee, bagels

12:30-14:00 Jordan Needle, Carl Pollard, and Murat Yasavul: Constructive Hyperintensional Semantics PDF

14:00-14:15 [Short Break]

14:15-15:00 Ribeka Tanaka: Interpretation of dependent pronouns and dependent types

15:00-15:30 Coffee Break

15:30- Richard Moot (colloquium talk): Towards Wide-Coverage Semantics for Type-Logical Grammars

10/15 Sat

10:00-10:30 Coffee, bagels

10:30-12:00 Erhard Hinrichs: Syntactic and Semantic Parsing: Past Successes, Current Trends, and Future Challenges

12:00-14:00 Lunch

14:00-15:30 Koji Mineshima: Towards a proof-theoretic natural language semantics for wide-coverage grammars PDF, code

15:30-16:00 Coffee Break

16:00-17:30 Alastair Butler: Parsed Corpus Semantics

10/16 Sun

10:00-10:30 Coffee, bagels

10:30-12:00 Oleg Kiselyov: Gradually transforming syntax to semantics PDF, code

12:00-14:00 Lunch

14:00-15:30 Greg Kobele: Efficient parsing of ellipsis in discourse

15:30-16:00 Coffee Break

16:00-17:30 Richard Moot: Extending Lambda Grammars


Speakers

Alastair Butler
Erhard Hinrichs
Oleg Kiselyov
Greg Kobele
Koji Mineshima
Richard Moot
Jordan Needle
Ribeka Tanaka
Murat Yasavul

Talk abstracts

Alastair Butler Parsed Corpus Semantics

This talk will introduce Parsed Corpus Semantics, which uses parsed data (essentially syntactic trees) as a foundation for building semantic analyses, which can in turn serve as a basis for building syntactic trees. The point of departure is parsed data for a language that conforms to an annotation scheme; the annotation discussed will follow the scheme of the Penn Historical Parsed Corpus family. Transformations are then applied to the parsed data. First, tree transformations normalise the syntactic analysis, bringing the parsed data to a level where many language particulars are regularised, with the consequence of providing a common interface, supportive of further processing, across different language constructions and languages (illustrated with English and Japanese examples). From the normalised analysis, a richer form of transformation converts expressions from one formal language to another, making information about dependencies explicit so as to deliver a form of predicate-language semantic analysis. A further transformation involving tree growth facilitates the (re-)creation of natural-language-specific structure. The transformations discussed will lead to more abstract levels of analysis, with the aim of reaching levels of meaning representation, and will also involve changes to less abstract levels, returning to generated language in the form of parse trees that yield strings.

Erhard Hinrichs Syntactic and Semantic Parsing: Past Successes, Current Trends, and Future Challenges

In this presentation, I will take the results of a parsing workshop held at The Ohio State University in May 1982 as a starting point for retracing the state of the art in syntactic and semantic parsing of natural language over the past 30 years. In the first part of my presentation, I will focus on grammar-based and on data-driven approaches to parsing, as well as hybrids thereof, and I will comment on the relative strengths and weaknesses of these approaches. In the second part of the talk, I will report on recent joint research with Daniel de Kok, Corina Dima, and Jianqiang Ma on the use of word embeddings and of deep learning methods for dependency parsing of German and for the semantic interpretation of nominal compounds in German. In the final part of the talk, I will outline some directions for future research, and I will identify some challenges for syntactic and semantic parsing.

Oleg Kiselyov (joint work with Leo Tingchen Hsu) Gradually transforming syntax to semantics

The recently introduced AACG formalizes, restrains, and makes rigorous the transformational approach epitomized by QR: deriving a meaning (in the form of a logical formula or a logical form) by a series of transformations from a suitably abstract (tecto-) form of a sentence. AACG generalizes various 'monad'- or 'continuation-based' computational approaches, abstracting away irrelevant details (such as monads) while overcoming their rigidity and brittleness. Unlike QR, each transformation in AACG is rigorously and precisely defined, typed, and deterministic. The restraints of AACG and the sparsity of the choice points (in the order of applying the deterministic transformation steps) make it easier to derive negative predictions and to control over-generation.

We illustrate the AACG approach with quantifier ambiguity, scoping islands, and inverse linking.
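As a minimal sketch of the continuation-based treatment of quantifier scope that AACG generalizes (this is not AACG itself, and the model, sentence, and names below are hypothetical illustrations): a generalized quantifier takes its scope as a continuation, and the two readings of "every student read a book" differ only in the order in which the quantifiers are applied.

```python
# Toy continuation-based quantifier scope (illustrative model, not AACG):
# a generalized quantifier maps its scope (a continuation E -> bool) to bool.

DOM = ["s1", "s2", "b1", "b2"]        # hypothetical domain of individuals
STUDENT = {"s1", "s2"}
BOOK = {"b1", "b2"}
READ = {("s1", "b1"), ("s2", "b2")}   # each student read a different book

def every(restr):
    return lambda scope: all(scope(x) for x in DOM if x in restr)

def some(restr):
    return lambda scope: any(scope(x) for x in DOM if x in restr)

# Surface scope (every > some): true here, since each student read some book.
surface = every(STUDENT)(lambda x: some(BOOK)(lambda y: (x, y) in READ))

# Inverse scope (some > every): false, since no single book was read by all.
inverse = some(BOOK)(lambda y: every(STUDENT)(lambda x: (x, y) in READ))

print(surface, inverse)  # True False
```

The two readings come from the same lexical entries; only the order of applying the quantifiers to their continuations changes, which is the kind of choice point the deterministic transformation steps regiment.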

Greg Kobele Efficient parsing of ellipsis in discourse

I show how popular transformational analyses of ellipsis can be faithfully formalized so as to allow efficient parsing of elliptical sentences in discourse, in the following sense: the complexity of obtaining a (lambda term representing the) meaning for a discourse composed of sentences possibly involving cross-sentential ellipsis is a polynomial function of the number of its sentences; it is the sum of the (polynomial) complexities of obtaining the meanings of its component sentences individually. The bound is loose, and should improve once restrictions on possible antecedents are taken into account.

The account of ellipsis herein is of the same kind as recent proposals in categorial grammar (Barker; Kubota and Levine) and in dynamic syntax (Kempson et al), which treat ellipsis resolution as retrieving the meaning of an antecedent with certain syntactic properties, and which therefore allow certain aspects of structure sensitivity and insensitivity to coexist. The fundamental difference lies in whether antecedents are identified with syntactic constituents, and thus are (higher-order) objects directly generated by the grammar. I provide some reasons for rejecting this identification, and discuss the linguistic prospects of this general approach to ellipsis.

Koji Mineshima Towards a proof-theoretic natural language semantics for wide-coverage grammars

In this talk, I present ongoing work on developing a formal semantics and inference system for English and Japanese wide-coverage parsers based on a modern categorial grammar. The overall goal is to illustrate one way in which logical and statistical approaches can be combined to achieve a natural language understanding system with broad coverage and high precision. I will introduce the pipeline presented in Mineshima et al. (EMNLP 2015, 2016), evaluate the current system on two RTE datasets (FraCaS and SICK), present details of the continuation-based compositional semantics implemented in the system, and show how to combine the system with semantic underspecification based on Dependent Type Semantics (DTS; see the ESSLLI 2016 lecture course).

Richard Moot Extending Lambda Grammars

When lambda grammars were introduced, their main attraction was a particularly elegant treatment of quantifier scope and extraction. However, because of architectural choices of the formalism, lambda grammars have problems with coordination and a number of other phenomena. This raises the question: are there extensions of lambda grammars which solve these problems while keeping the formalism simple and elegant? As has often been remarked, simplicity is a complex notion. I will discuss different extensions of lambda grammars and compare them using different notions of simplicity.

Ribeka Tanaka Interpretation of dependent pronouns and dependent types

Anaphora resolution may involve reference to a dependency relation between objects. A typical example is the dependent interpretation of the pronoun "it" in the mini-discourse "Every child received a present. They each opened it." Under the reading where the first sentence receives a subject wide-scope interpretation, the second sentence can be understood to mean that each child opened the present he or she received. Here, the interpretation of the pronoun "it" depends on each child, who stands in the 'receiving' relation to a present, a relation introduced by the first sentence.

The standard way to account for dependent interpretation is to record dependency relations by sets of assignment functions (van den Berg 1996; Nouwen 2003; Brasoveanu 2008). This approach, however, has to make substantial changes to the central notion of context in a way that is specialized for the treatment of dependent interpretation. In this talk, we provide an alternative account from the perspective of dependent type theory. We handle dependency relations in terms of dependent function types, which are independently motivated objects provided in dependent type theory (Martin-Löf 1984). We will adopt dependent type semantics (Bekki 2014) as a semantic framework and illustrate how dependent function types encode dependency relations and naturally provide a resource for dependent interpretation.
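As a schematic illustration of the idea (notation only, simplified from the dependent type theoretic literature; the predicate names are for exposition), the first sentence of the mini-discourse, under the subject wide-scope reading, can be taken to provide a dependent function pairing each child with the present it received, and the dependent pronoun is then resolved by projection:

```latex
% First sentence (subject wide scope): a dependent function from each child
% to a pair of a present and a proof that the child received it.
f : \prod_{x:\mathsf{child}} \; \sum_{y:\mathsf{present}} \mathsf{receive}(x,y)

% Second sentence: "it" resolves to the first projection of f applied to
% each child, i.e. the present that that child received.
\prod_{x:\mathsf{child}} \; \mathsf{open}\bigl(x,\, \pi_1(f\,x)\bigr)
```

The dependent function type thus records the child-to-present dependency without any change to the underlying notion of context, which is the contrast with the assignment-function approach drawn above.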


We gratefully acknowledge the support of the following institutions/programs: