EMNLP 2008: Conference on Empirical Methods in Natural Language Processing — October 25-27, 2008 — Waikiki, Honolulu, Hawaii.

Welcome to EMNLP 2008

Invited speakers

  • We KnowItAll: lessons from a quarter century of Web extraction research

    Oren Etzioni, Computer Science and Engineering, University of Washington

    For the last quarter century (measured in person years), the KnowItAll project has investigated information extraction at Web scale. If successful, this effort will begin to address the long-standing "Knowledge Acquisition Bottleneck" in Artificial Intelligence, and will enable a new generation of search engines that extract and synthesize information from text to answer complex user queries. To date, we have generalized information extraction methods to process arbitrary Web text, to handle unanticipated concepts, and to leverage the redundancy inherent in the Web corpus, but many challenges remain. One of the most formidable challenges is moving from extracting isolated nuggets of information to capturing a coherent body of knowledge that can support automatic inference. My talk will describe the lessons we have learned and identify directions for future work.
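
    As a rough, purely illustrative sketch of the redundancy idea (not the KnowItAll system itself), the Python snippet below applies a single "such as" extraction pattern to a handful of sentences and counts how often each candidate fact recurs; the mini-corpus, pattern, and names are hypothetical.

        import re
        from collections import Counter

        # Hypothetical mini-corpus standing in for redundant Web text.
        SENTENCES = [
            "Cities such as Honolulu and Seattle attract many visitors.",
            "Popular cities such as Honolulu host large conferences.",
            "Fruits such as mango and papaya grow in Hawaii.",
        ]

        # One Hearst-style pattern: a class name, "such as", then a list of instances.
        PATTERN = re.compile(r"(\w+) such as ((?:\w+)(?:(?:, | and )\w+)*)")

        def extract(sentences):
            """Collect (class, instance) candidates and count how often each is seen."""
            counts = Counter()
            for sentence in sentences:
                for cls, instances in PATTERN.findall(sentence):
                    for inst in re.split(r", | and ", instances):
                        counts[(cls.lower(), inst.lower())] += 1
            return counts

        if __name__ == "__main__":
            for (cls, inst), n in extract(SENTENCES).most_common():
                # Facts extracted from more independent sentences get higher support.
                print(f"{inst} is-a {cls}  (support = {n})")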

  • Connecting language learning and language evolution via Bayesian statistics

    Tom Griffiths, Department of Psychology, University of California, Berkeley

    The methods of Bayesian statistics have become a common source of tools for natural language processing, with ideas like nonparametric Bayesian models and Markov chain Monte Carlo being increasingly widely used. In this talk I will discuss how the same ideas can contribute to a theoretical analysis of language evolution. Languages are learned from utterances produced by people who were once language learners themselves, an observation that has inspired an influential stream of research investigating the consequences of this process of "iterated learning" for the languages being passed from one learner to another. I will summarize some of the key ideas of modern Bayesian statistics in the context of language learning, showing how this approach can address questions such as whether we should learn from type or token frequencies and how to define statistical models of language that obey Zipf's law. I will then outline how these ideas can be used to analyze the consequences of iterated learning and show how they provide a potential explanation for some basic regularities in the structure and evolution of human languages. Various parts of this project are joint work with Sharon Goldwater, Mark Johnson, Mike Kalish, Steve Lewandowsky, and Florencia Reali.
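
    The iterated-learning idea can be made concrete with a toy simulation (a sketch with made-up parameters, not code from the talk): each generation observes utterances produced by its teacher, samples a hypothesis from its posterior, and then produces utterances for the next generation. For learners who sample from a Beta-Bernoulli posterior, the chain's long-run distribution over hypotheses is simply the prior.

        import random

        # Toy iterated-learning chain with Bayesian "sampler" learners.
        # A "language" is just the probability theta of producing word variant A
        # rather than variant B. Each learner has a Beta(ALPHA, BETA) prior over
        # theta, sees N_UTTERANCES utterances from its teacher, samples theta from
        # the posterior, and becomes the teacher for the next generation.

        ALPHA, BETA = 1.0, 5.0      # illustrative prior favoring low theta
        N_UTTERANCES = 10
        GENERATIONS = 10000

        def iterate(theta0=0.9):
            theta, history = theta0, []
            for _ in range(GENERATIONS):
                # Teacher produces N_UTTERANCES utterances; k of them use variant A.
                k = sum(random.random() < theta for _ in range(N_UTTERANCES))
                # Learner samples theta from its Beta posterior given those data.
                theta = random.betavariate(ALPHA + k, BETA + N_UTTERANCES - k)
                history.append(theta)
            return history

        if __name__ == "__main__":
            history = iterate()
            # With sampling learners the stationary distribution is the prior, so the
            # long-run average of theta approaches ALPHA / (ALPHA + BETA).
            print("long-run mean of theta:", sum(history[1000:]) / len(history[1000:]))
            print("prior mean:", ALPHA / (ALPHA + BETA))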

  • Are Linear Models Right for Language?

    Fernando Pereira, Google and University of Pennsylvania

    Over the last decade, linear models have become the standard machine learning approach for supervised classification, ranking, and structured prediction in natural language processing. They can handle very high-dimensional problem representations, they are easy to set up and use, and they extend naturally to complex structured problems. But there is something unsatisfying in this work. The geometric intuitions behind linear models were developed for low-dimensional, continuous problems, while natural language problems involve very high-dimensional, discrete representations with long-tailed distributions. Do the original intuitions carry over? In particular, do standard regularization methods make any sense for language problems? I will give recent experimental evidence that there is much to do in making linear model learning more suited to the statistics of language.
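
    As one concrete, purely illustrative form of that mismatch, the sketch below trains an L2-regularized logistic regression over sparse bag-of-words features with plain stochastic gradient descent. Word frequencies are long-tailed, so most features are rare, and a uniform penalty shrinks rare but informative features just as hard as frequent ones. The documents, labels, and hyperparameters are hypothetical.

        import math
        from collections import Counter

        # Toy linear classifier: L2-regularized logistic regression over sparse
        # bag-of-words features, trained by stochastic gradient descent.

        def train(docs, labels, epochs=50, lr=0.1, l2=0.01):
            """docs: list of token lists; labels: 0/1. Returns feature weights."""
            w = Counter()
            for _ in range(epochs):
                for tokens, y in zip(docs, labels):
                    x = Counter(tokens)
                    score = sum(w[t] * c for t, c in x.items())
                    p = 1.0 / (1.0 + math.exp(-score))
                    for t, c in x.items():
                        # Gradient of the log loss plus the L2 penalty; the penalty
                        # is applied lazily, only to features active in this example.
                        w[t] -= lr * ((p - y) * c + l2 * w[t])
            return w

        if __name__ == "__main__":
            docs = [["great", "movie"], ["great", "fun"], ["dull", "movie"], ["boring", "plot"]]
            labels = [1, 1, 0, 0]
            weights = train(docs, labels)
            for t, v in sorted(weights.items(), key=lambda kv: -kv[1]):
                print(f"{t:>8}  {v:+.3f}")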