The Depling conferences respond to the growing need for a linguistic conference dedicated to approaches to syntax, semantics, and the lexicon that treat dependency structures as the central linguistic notion.
A short introduction to dependency can be found here.
Depling 2017 will be held in Pisa, September 18-20, 2017. The conference is organized by the University of Pisa and the Institute for Computational Linguistics “A. Zampolli” of the Italian National Research Council (ILC-CNR).
Depling 2015 was held at the University of Uppsala from August 24 to 26, 2015. The conference was organized by the Computational Linguistics group at Uppsala University in collaboration with Akademikonferens.
Depling 2013 was held at Charles University in Prague, from August 27 to 30, 2013. It was organized by the Faculty of Mathematics and Physics of Charles University.
Depling 2011 was held at the Pompeu Fabra University in Barcelona, from September 5 to 7, 2011. It was organized by the TALN group of the technology department of the UPF.
In the past decade, dependencies (directed, labeled graph structures representing hierarchical relations between morphemes, words, and semantic units, with strong reference to the lexicon) have become the near-standard representation and annotation scheme in computational linguistics, parsing, generation, and other fields of natural language processing. Yet the linguistic significance of these structures often remains vague, and many people working in these domains feel a strong need for the development of common notational and formal grounds.
In general terms, the conference investigates:
- The use of dependency structures in the description of interesting syntactic and semantic phenomena, especially in a cross-linguistic perspective, including linguistic phenomena for which classical phrase-structure-based models have proven unsatisfactory.
- The modelling of lexical phenomena and their role in the dependency view of linguistics.
- The applications of dependency analyses to natural language processing, including machine translation, parsing, generation, information extraction, etc.
Topics include, but are not limited to:
- The use of dependency trees in syntactic analysis, description, formalization, parsing, generation, and corpus annotation of written and spoken texts.
- The use of semantic valency-based predicate and actancy graph structures and their link to classical logic.
- The elaboration of formal dictionaries for dependency-based syntax and semantics, including descriptions of collocations and paradigmatic links.
- Links to morphology and the linearization of dependency structures, using, for example, topological field theories.
- Dependency-like structures beyond the sentence as annotation scheme for discourse phenomena.
- The description and formalization of semantic and pragmatic phenomena related to information structure.
- History, epistemology, and the psycholinguistic relevance of dependency grammar, including its relation to generative approaches to language.
We are also interested in work on questions such as:
- What are the differences and similarities among theta roles, valencies, f-structures, TAG derivation trees, (subcategorization) frames, semantic role labelling, etc.?
- Which corpus annotations using head-daughter relations on words are formally and linguistically equivalent, and which are not?
- How can syntax-semantics interfaces between dependency structures be described?