King’s College London
The course considers four central questions that an adequate theory of semantic representation must
answer. First, how can the theory express fine-grained distinctions of meaning? Second, how is semantic
entailment expressed in the theory? Third, how is the pervasive gradience (in particular the vagueness) of
semantic properties captured? Finally, how can language learners acquire the class of representations that
the theory makes available? I consider these questions with reference to three main approaches to
formal and computational semantics: model theory, proof theory, and distributional treatments of meaning
(Vector Space Models). I also explore ways of developing a probabilistic semantics for natural language.
These questions are addressed in the context of the guiding concern of computational semantics to develop
robust, wide-coverage systems for representing the semantic properties of natural languages, where these
systems can be effectively learned and their representations of meanings can be efficiently computed.
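As a minimal illustration of the distributional (Vector Space Model) approach mentioned above, word meanings can be represented as co-occurrence vectors, with semantic similarity measured by the cosine of the angle between them. The words, contexts, and counts below are invented for illustration and are not drawn from the course materials.

```python
import math

# Hypothetical co-occurrence counts over three contexts: "drink", "bark", "run".
vectors = {
    "dog":    [1.0, 9.0, 4.0],
    "puppy":  [2.0, 8.0, 5.0],
    "coffee": [9.0, 0.0, 1.0],
}

def cosine(u, v):
    """Cosine similarity: dot product of u and v divided by the product of their norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# On these toy counts, "dog" is distributionally closer to "puppy" than to "coffee".
sim_dog_puppy = cosine(vectors["dog"], vectors["puppy"])
sim_dog_coffee = cosine(vectors["dog"], vectors["coffee"])
```

Note that such similarity scores are inherently graded, which bears directly on the question of how gradience in semantic properties is to be captured.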
Chris Fox and Shalom Lappin (2005), Foundations of Intensional Semantics, Blackwell, Oxford.
Chris Fox and Shalom Lappin (2010), Expressiveness and Complexity in Underspecified Semantics,
Linguistic Analysis 36, Festschrift for Joachim Lambek, pp. 385–417.
Jan van Eijck and Shalom Lappin (2012), Probabilistic Semantics for Natural Language, unpublished
ms, CWI, Amsterdam and King’s College London.
Representing Meaning and Entailment
Fine-Grained Intensional Theories
Property Theory with Curry Typing
Expressive and Computational Power