Paul Smolensky



Johns Hopkins University

Paul Smolensky is Krieger-Eisenhower Professor of Cognitive Science at Johns Hopkins University, where he is director of the NSF IGERT Program, Unifying the Science of Language. His research addresses unification of the continuous and the discrete facets of cognition: principally, the development of grammatical formalisms that are grounded in cognitive and neural computation. His publications include the books Mathematical perspectives on neural networks (with M. Mozer and D. Rumelhart), Optimality Theory: Constraint interaction in generative grammar (with A. Prince), Learnability in Optimality Theory (with B. Tesar), and The harmonic mind: From neural computation to optimality-theoretic grammar (with G. Legendre). He co-taught LSA Summer Linguistic Institute courses on “Connectionism and Harmony Theory in Linguistics” (1991, with A. Prince: the first presentation of Optimality Theory) and “Explaining Phonological Universals” (2005, with J. Pierrehumbert). He was awarded the 2005 David E. Rumelhart Prize for Outstanding Contributions to the Formal Analysis of Human Cognition.


  • Sapir Lecture by Paul Smolensky

    Tuesday, July 7, 2015, 6:00pm to 10:00pm

    Grammar with Gradient Symbol Structures

    Lecture by Paul Smolensky, Sapir Professor, at the Max Palevsky Cinema in Ida Noyes, at 6pm. Reception to follow at 7:30pm at the Cloister Club, also in Ida Noyes.

    How can grammatical theory contribute to, and benefit from, advances in psycholinguistics, computational linguistics, and cognitive neuroscience? I will propose a novel computational framework for grammar — its structure, use, and acquisition — providing formal characterizations of grammar at both an abstract, functional level and a neural-network level. The most recent developments in this framework involve linguistic representations in which partially active symbols occupy blends of positions within discrete structures such as trees. Application of such Gradient Symbolic Computation (GSC) to grammatical competence theory tests a general hypothesis: theoretical disputes about whether phenomenon X should be analyzed using structure A or structure B persist because the correct analysis is in fact a blend of those two structures. Application of GSC to language production and comprehension tests whether this type of computation can successfully unify discrete, structure-based theories of competence with gradient, activation-based theories of performance. The general architecture raises important questions about current corpus-based computational linguistic systems that appear to achieve remarkable linguistic performance using unstructured numerical vectors as internal representations (e.g., of semantics), and deep neural networks for learning those representations.
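    The idea of partially active symbols blended within a discrete structure can be illustrated with a minimal sketch in the style of tensor-product representations (the vector encoding underlying Smolensky's framework). All names, dimensions, and vectors below are illustrative assumptions, not the actual GSC implementation: symbols (fillers) and tree positions (roles) are random vectors, a discrete structure is a filler-role tensor product, and a gradient structure is a weighted blend of two discrete alternatives.

    ```python
    import numpy as np

    # Illustrative sketch only: fillers and roles as random vectors.
    rng = np.random.default_rng(0)
    dim = 8

    A = rng.standard_normal(dim)   # filler vector for symbol A
    B = rng.standard_normal(dim)   # filler vector for symbol B
    r = rng.standard_normal(dim)   # role vector for one tree position

    # Discrete structures: A or B bound to the position via outer product.
    struct_A = np.outer(A, r)
    struct_B = np.outer(B, r)

    # Gradient blend: A is 0.7 active, B is 0.3 active at the same position.
    blend = 0.7 * struct_A + 0.3 * struct_B

    # Unbinding with the (normalized) role recovers the blended filler,
    # exactly here because only one role is present.
    recovered = blend @ (r / np.dot(r, r))
    print(np.allclose(recovered, 0.7 * A + 0.3 * B))
    ```

    The recovered vector is itself neither A nor B but their weighted superposition, which is the formal sense in which a "blend of structures" can serve as the analysis of a disputed phenomenon.
    
    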