This note is on cautious cut elimination for one of Veltman's might logics. Syntactically, the logic is presented as an extension of a sequent system for classical propositional logic (hence: {\sc cpl}). I show that this extension preserves the completeness and decidability of {\sc cpl}. The proof has cautious cut elimination as a corollary. I also give a rather general syntactic proof of cautious cut elimination: any `base' logic with a reflexive, monotone consequence relation in which cautious cut can be eliminated retains cautious cut elimination when extended to a might logic.
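For orientation, the structural properties mentioned above can be stated schematically as follows (a standard formulation of reflexivity, monotonicity and cautious cut; the note's own sequent rules may differ in detail):

```latex
\[
\textbf{Reflexivity:}\quad \Gamma, A \vdash A
\qquad
\textbf{Monotonicity:}\quad \frac{\Gamma \vdash B}{\Gamma, A \vdash B}
\qquad
\textbf{Cautious Cut:}\quad \frac{\Gamma \vdash A \qquad \Gamma, A \vdash B}{\Gamma \vdash B}
\]
```

Note that in cautious cut both premises share the same antecedent $\Gamma$; it is in this sense a restriction of the unrestricted cut rule.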
On the Fregean view of $NP$'s, quantified $NP$'s are represented as operator-variable structures while proper names are constants appearing in argument position. The Generalized Quantifier (GQ) approach characterizes quantified $NP$'s and names as elements of a unified syntactic category and semantic type. According to the Logicality Thesis (May \cite{May91}), the distinction between quantified $NP$'s, which undergo an operation of quantifier raising to yield operator-variable structures at Logical Form (LF), and non-quantified $NP$'s, which appear {\em in situ} at LF, corresponds to a difference in logicality status: the former are logical expressions while the latter are not. Using van Benthem's \cite{Benthem86,Benthem89} criterion for logicality, I extend the concept of logicality to GQ's. I argue that $NP$'s modified by exception phrases constitute a class of quantified $NP$'s which are heterogeneous with respect to logicality. However, all exception phrase $NP$'s exhibit the syntactic and semantic properties which motivate May to treat quantified $NP$'s as operators at LF. I present a semantic analysis of exception phrases as modifiers of GQ's, and I indicate how this account captures the central semantic properties of exception phrase $NP$'s. I explore the consequences of the logically heterogeneous character of exception phrase $NP$'s for proof theoretic accounts of quantifiers in natural language. The proposed analysis of exception phrase $NP$'s provides support for the GQ approach to the syntax and semantics of $NP$'s.
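Van Benthem's criterion for logicality is standardly cast as invariance under permutations of the domain. The following sketch illustrates the idea on a three-element universe, with a type $\langle 1,1\rangle$ determiner modelled as a relation between sets; the function names (`every`, `make_every_but`, `permutation_invariant`) are purely illustrative and not taken from the paper. The "every $A$ but $c$" determiner stands in for the exception-phrase $NP$'s discussed above: because it is anchored to a particular individual, it fails permutation invariance, while plain `every` passes.

```python
from itertools import permutations

E = {0, 1, 2}  # a small universe of individuals

def _powerset(s):
    s = list(s)
    for mask in range(1 << len(s)):
        yield {s[i] for i in range(len(s)) if (mask >> i) & 1}

def every(A, B):
    # "every A is B"
    return A <= B

def make_every_but(c):
    # "every A but c is B": hypothetical exception determiner
    # anchored to the individual c (not the paper's formal analysis)
    def det(A, B):
        return c in A and c not in B and (A - {c}) <= B
    return det

def permutation_invariant(det, universe):
    """A determiner is logical (van Benthem) iff its truth value is
    preserved under every permutation pi of the universe."""
    subsets = [frozenset(s) for s in _powerset(universe)]
    for pi in permutations(sorted(universe)):
        rename = dict(zip(sorted(universe), pi))
        for A in subsets:
            for B in subsets:
                fA = frozenset(rename[x] for x in A)
                fB = frozenset(rename[x] for x in B)
                if det(A, B) != det(fA, fB):
                    return False
    return True
```

On this encoding, `permutation_invariant(every, E)` holds, while `permutation_invariant(make_every_but(0), E)` does not, reflecting the logically heterogeneous character of exception-phrase $NP$'s.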
D. M. Gabbay
Department of Computing, Imperial College, London SW7 2BZ, UK.
E-mail: dg@doc.ic.ac.uk
We are concerned with showing how `labelled' Natural Deduction presentation systems based on an extension of the so-called Curry--Howard functional interpretation can help us understand and generalise most of the deduction calculi designed to deal with the logical notion of existential quantification. We present the labelling mechanism for `$\exists x^{\bf D}.{\bf F}(x)$' using what we call `$\varepsilon$-terms', which have the form of `$\varepsilon x.(f(x),a)$' in a dual form to the `$\Lambda x.f(x)$' terms of `$\forall x^{\bf D}.{\bf F}(x)$' in the sense that the `witness' (`$a$' in the term above) is chosen (and hidden) at the time of assertion/{\it introduction\/}. With the intention of demonstrating the generality of our framework, we provide a brief comparison with some other calculi of existentials, including the model-theoretic interpretations of Skolem and Herbrand, indicating the way in which some of the classical results on prenex normal forms can be extended to a wide range of logics. The essential ingredient is to look at the conceptual framework of labelled natural deduction as an attempted reconstruction of Frege's functional calculus where the devices of the {\it Begriffsschrift\/} (i.e.\ connectives and quantifiers) work separately from, yet in harmony with, those of the {\it Grundgesetze\/} (i.e.\ abstractors).\footnote{An earlier version of this paper was presented at the 1991 European Summer Meeting of the Association for Symbolic Logic, {\it Logic Colloquium '91\/}, sections 1--5 and 10 of the (IUHPS) {\it 9th International Congress of Logic, Methodology and Philosophy of Science\/}, Uppsala, Sweden, August 7--14, 1991. A one-page abstract has appeared in the {\it Abstracts\/} of the Congress, and has also appeared in the {\em JSL\/} \cite{deQueirozGabbay91:exists-abstract}.}
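Schematically, the duality described above can be displayed as follows (a simplified rendering for orientation only; the paper's official rules carry further side conditions):

```latex
\[
\frac{[x:{\bf D}]\;\vdash\;f(x):{\bf F}(x)}
     {\Lambda x.f(x):\forall x^{\bf D}.{\bf F}(x)}\;(\forall\textrm{-}intr)
\qquad
\frac{a:{\bf D}\qquad f(a):{\bf F}(a)}
     {\varepsilon x.(f(x),a):\exists x^{\bf D}.{\bf F}(x)}\;(\exists\textrm{-}intr)
\]
```

In the $\forall$-introduction the variable is abstracted and the instance is supplied only at elimination; in the $\exists$-introduction the witness $a$ is chosen, and hidden inside the $\varepsilon$-term, at the moment of assertion.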
This article discusses my work over the last few years on logical formalisms that have proved useful for various aspects of natural and programming languages, and for foundational formalisms. In this period, I have been involved in two extensive programs:
In this paper, we shall present a generalization of phrase structure grammar, in which all functional categories (such as verbs and adjectives) have type restrictions, that is, their argument types are specific domains. In ordinary phrase structure grammar, there is just one universal domain of individuals. The grammar does not make a distinction between verbs and adjectives in terms of domains of applicability. Consequently, it fails to distinguish between sentences like \eex{every line intersects every line}, which is well typed, and \eex{every line intersects every point}, which is ill typed.
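The contrast between the well-typed and ill-typed sentences above can be illustrated with a minimal application checker; this sketch is purely for orientation (the lexicon encoding and function names are ours, not the paper's type-theoretical formalism):

```python
# Hypothetical sketch: functional categories carry argument-type
# restrictions, i.e. their argument types are specific domains.

LEXICON = {
    # word: (argument domains, result type)
    "intersects": (("Line", "Line"), "Prop"),
    "line":  ((), "Line"),
    "point": ((), "Point"),
}

def well_typed(verb, *arg_types):
    """An application is well typed iff each argument's domain
    matches the corresponding domain restriction of the verb."""
    domains, _result = LEXICON[verb]
    return len(domains) == len(arg_types) and all(
        d == a for d, a in zip(domains, arg_types)
    )
```

On this encoding, `well_typed("intersects", "Line", "Line")` succeeds while `well_typed("intersects", "Line", "Point")` fails, mirroring the contrast between the two example sentences. An ordinary phrase structure grammar, with a single universal domain of individuals, cannot draw this distinction.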
Our generalization relates to ordinary phrase structure grammar in the same way as the higher-level constructive type theory of Martin-L\"{o}f (see Nordstr\"{o}m {\em et al.} 1990, part III, or Ranta 1994, chapter 8) relates to the simple type theory of Church (Church 1940). Simple type theory has been used in linguistics and related with phrase structure grammar, especially in the tradition based on the work of Montague (1974).
Our definition of the grammar will be more formal than Montague's in the sense that we shall use a formal metalanguage, that of constructive type theory, for defining both the object language and its interpretation. The grammar, both syntax and semantics, is thus readily implementable in the type-theoretical proof system ALF (see Magnusson and Nordstr\"{o}m 1994 for ALF). Inside type theory, a distinction can be made between the object language and the model, in other words, between syntactic and semantic types. It will turn out that the object language cannot be defined independently of the model, as it is in ordinary Tarski semantics. This is a direct consequence of introducing the typing restrictions.
The grammar presented here can be seen as a formal linguistic elaboration of the work presented in Ranta 1991 and 1994. The organization of generative grammar as categorial grammar followed by sugaring should now look more familiar to the linguist, as the grammatical formalism on which sugaring operates is no longer full type theory but a set of phrase structure trees.
Recent work within Categorial Grammar has seen the development of a number of {\it multimodal\/} systems, where different families of connectives coexist within a single categorial logic. Such systems can be viewed as making available differing modes of linguistic description within a single grammatical formalism. This paper addresses proposals for constructing multimodal systems due to Hepple \cite{mh-hep.93} and Moortgat \& Oehrle \cite{mh-moo.oeh.93}, which are in many ways similar but make apparently contradictory claims concerning the appropriate interrelation of different modes of description; these claims lead in turn to differences in the kinds of linguistic accounts that the two approaches make possible. Although we focus mostly on the view taken in Hepple \cite{mh-hep.93}, and its inspiration by earlier work involving structural modalities, the paper proceeds to a discussion of whether the two approaches are genuinely incompatible in the way they at first appear.
In this paper we compare grammatical inference in the context of {\em simple\/} and of {\em mixed\/} Lambek systems. Simple Lambek systems are obtained by taking the logic of residuation for a family of multiplicative connectives $/,\bullet,\bs$, together with a package of structural postulates characterizing the resource management properties of the $\bullet$ connective. Different choices for Associativity and Commutativity yield the familiar logics {\bf NL}, {\bf L}, {\bf NLP}, {\bf LP}. Semantically, a simple Lambek system is a unimodal logic: the connectives get a Kripke style interpretation in terms of a single ternary accessibility relation modeling the notion of linguistic composition for each individual system. The simple systems each have their virtues in linguistic analysis. But none of them in isolation provides a basis for a full theory of grammar. In the second part of the paper, we consider two types of {\em mixed\/} Lambek systems.
The first type of mixed system is obtained by combining a number of unimodal systems into one multimodal logic. The combined multimodal logic is set up in such a way that the individual resource management properties of the constituting logics are preserved. But the inferential capacity of the mixed logic is greater than the sum of its component parts through the addition of interaction postulates, together with the corresponding interpretive constraints on frames, regulating the {\em communication\/} between the component logics.
The second type of mixed system is obtained by generalizing the residuation scheme for binary connectives to families of $n$-ary connectives, and by putting together families of different arities in one logic. We focus on residuation for unary connectives, hence on mixed (2,3) frames, as these already exhibit the complexities in full. We prove a number of elementary logical results for unary families of residuated connectives and their combination with binary families. The existing proposals for unary `structural modalities' are situated within this general framework, and a number of new linguistic applications are given.
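For reference, the residuation scheme underlying both the binary family and its unary generalization can be displayed as follows (a standard formulation; the notation for the unary residuated pair varies across the literature):

```latex
\[
A \longrightarrow C/B
\quad\Longleftrightarrow\quad
A \bullet B \longrightarrow C
\quad\Longleftrightarrow\quad
B \longrightarrow A \backslash C
\]
\[
\Diamond A \longrightarrow B
\quad\Longleftrightarrow\quad
A \longrightarrow \Box B
\]
```

The structural postulates for $\bullet$ (Associativity, Commutativity) are then added on top of this invariant residuation core, yielding {\bf NL}, {\bf L}, {\bf NLP} and {\bf LP}.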
We consider the task of theorem proving in Lambek calculi and their generalisation to \scare{multimodal residuation calculi}. These form an integral part of categorial logic, a logic of signs stemming from categorial grammar, on the basis of which language processing is essentially theorem proving. What this application demands is not just efficient processing of one or another specific calculus, but methods that are generally applicable to categorial logics.
It is proposed that multimodal cases be treated by dealing with the highest common factor of all the connectives as linear (propositional) validity. The prosodic (sublinear) aspects are encoded in labels, in effect the term-structure of quantified linear logic. The correctness condition on proof nets (\scare{long trip condition}) can be implemented by SLD resolution in linear logic with unification on labels/terms {\em limited to one-way matching}. A suitable unification strategy is obtained for calculi of discontinuity by normalisation of the ground goal term followed by recursive descent and redex pattern matching on the head term.
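The restriction to one-way matching means that only variables on the pattern side are ever bound; the goal term stays ground. The following sketch illustrates the idea on a toy term representation (tuples for compound terms, uppercase strings for variables); the encoding is ours and is not the paper's actual labelling machinery.

```python
# Hypothetical sketch of one-way matching: the pattern may contain
# variables (strings starting with an uppercase letter); the goal
# term is ground. Compound terms are (functor, arg, ...) tuples.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(pattern, ground, subst=None):
    """One-way matching: bind pattern variables to subterms of the
    ground term; never bind or decompose anything inside `ground`."""
    if subst is None:
        subst = {}
    if is_var(pattern):
        if pattern in subst:                 # variable already bound:
            return subst if subst[pattern] == ground else None
        subst[pattern] = ground              # bind pattern variable
        return subst
    if isinstance(pattern, tuple) and isinstance(ground, tuple):
        if pattern[0] != ground[0] or len(pattern) != len(ground):
            return None                      # functor/arity clash
        for p, g in zip(pattern[1:], ground[1:]):
            subst = match(p, g, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == ground else None
```

For example, matching the pattern `("+", "X", "b")` against the ground term `("+", "a", "b")` binds `X` to `a`, while `("+", "X", "X")` against the same term fails, since `X` cannot be both `a` and `b`.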
This paper considers negative triggers (negation and negative quantifiers) and the interpretation of simple sentences containing more than one occurrence of those items (multiple negation sentences). In their most typical interpretations, those sentences have more negative expressions than negations in their semantic representation. It is first shown that this compositionality problem persists in current approaches. A principled algorithm for deriving the representation of sentences with multiple negative quantifiers in a DRT framework (Kamp and Reyle \cite{KaRe93}) is then introduced. The algorithm is controlled by an on-line check keeping the complexity of negation self-embedding below a threshold. This mechanism is seen as a competence limitation imposing (and licensing) the `abrogation of compositionality' (May \cite{May89}) observed in the so-called negative concord readings (Labov \cite{Labov72}, Zanuttini \cite{Zanuttini91}, Ladusaw \cite{Ladusaw92}). A solution to the compositionality problem is thus proposed, based on a control of the processing input motivated by a limitation of the processing mechanism itself.
The paper addresses the problem of reasoning with ambiguities. Semantic representations are presented that leave scope relations between quantifiers and/or other operators unspecified. Truth conditions are provided for these representations, and different consequence relations are judged on the basis of intuitive correctness. Finally, inference patterns are presented that operate directly on these underspecified structures, i.e.\ they do not rely on any translation into the set of their disambiguations.
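To see why several consequence relations are available here at all, an underspecified representation can be modelled extensionally as the set of its disambiguations, each denoting a set of worlds. The sketch below contrasts two candidate relations, a universal and an existential one; this is an illustration of the design space only, not the paper's own definitions.

```python
# Hypothetical illustration: a reading is a frozenset of worlds in
# which it is true; entailment between readings is set inclusion.

def follows_universally(premise_readings, conclusion_readings):
    """Every reading of the premise entails every reading of the conclusion."""
    return all(p <= c for p in premise_readings for c in conclusion_readings)

def follows_existentially(premise_readings, conclusion_readings):
    """Some reading of the premise entails some reading of the conclusion."""
    return any(p <= c for p in premise_readings for c in conclusion_readings)
```

An unambiguous premise true in worlds $\{1,2\}$ existentially, but not universally, entails an ambiguous conclusion with readings true in $\{1,2,3\}$ and in $\{2\}$; judging such divergences against intuition is precisely the kind of comparison the paper undertakes.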
Using the LDS$_{\rm NL}$ model of utterance interpretation being developed by Gabbay and Kempson (cf.\ \cite{GaKe92,Kempson:forthcominga,Kempson:forthcomingb}), this paper demonstrates how the dynamics of the proof process adopted explains configurational restrictions imposed on the interpretation of elliptical fragments. The blurring of traditional semantic and syntactic dichotomies in the LDS$_{\rm NL}$ proof-theoretic reconstruction of interpretation successfully provides a basis for predicting the array of variation displayed by different elliptical forms. The logic adopted is a composite system of a type logic nested within a database logic. Two resource-sensitive sub-types of Conditional Introduction form the basis for explaining the ellipsis data. The result is a demonstration of how the simple device of adding labels to an inference system can provide a useful tool not only at the meta-logic level of comparing alternative logic and grammar formalisms, but also at the level of explaining natural language data.