Daniela Guglielmo - Lexicon-Grammar of the Italian Language: Method, Resources and New Trends

Basic information

Year, number of pages: 2014, 1 page

Language: English

Number of downloads: 1

Uploaded: 1 February 2018

Size: 454 KB

Institution:
-

Comment:
University of Salerno

Attachment: -



Summary

Lexicon-Grammar of the Italian Language: Method, Resources & New Trends
Abstract - Daniela Guglielmo (University of Salerno), daniguglielmo@gmail.com

Lexicon-Grammar is a method of formal description of languages, developed in the late 1960s by Maurice Gross at the LADL of Paris 7. It is based on Zellig Harris's theoretical insights (i.e. distribution, transformation, operator-argument structure) and on the empirical principles of methodological rigor, respect for data, comprehensive coverage of a language and reproducibility of experiments. The working hypothesis of the Lexicon-Grammar is that the lexicon cannot be separated from the syntax. This talk will give an overview of the core theoretical and methodological aspects of the Lexicon-Grammar of the Italian Language (LGI hereinafter), developed in the late 1970s by Annibale Elia, who inherited the Linguistics Institute of the University of Salerno from Tullio De Mauro.

It will describe the main resources of the Italian Language Module, i.e. a syntactic-semantic database including: 4000 predicative nouns in support-verb constructions, 2000 adjectives, 5000 ordinary verbs (classified by means of 1349 properties into 68 classes), 5000 multi-word verbs, 3000 multi-word adverbs, and 1000 verb-particle constructions. The linguistic data (in the form of matrix tables) are implemented in the two main computational resources, i.e. electronic dictionaries and local grammars, which are applied to large-scale written and spoken corpora in order to parse, annotate, translate and disambiguate in real time a wide range of morphological, syntactic and semantic phenomena. The software used at the University of Salerno is NooJ (www.nooj4nlp.net), which allows in-depth corpus-based investigations and several NLP tasks such as sentiment analysis, opinion extraction, information retrieval, automatic translation, paraphrase generation, semantic data mining, word sense disambiguation, topic extraction, automatic tagging systems and semantic role labelling.

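To make the matrix-table format concrete, here is a minimal Python sketch that encodes a few verb entries as rows of binary syntactic-semantic properties; the verbs, property names and class labels are invented placeholders for illustration, not the actual LGI tables or their 1349 properties.

# Minimal sketch of a lexicon-grammar matrix table: each verb entry is a row
# of binary distributional properties. Verbs, properties and class labels are
# invented placeholders, not the actual LGI data.
MATRIX_TABLE = {
    "amare":    {"class": "C_psych",    "N0_human": True, "N1_human": True,  "passive": True},
    "arrivare": {"class": "C_motion",   "N0_human": True, "N1_human": False, "passive": False},
    "dare":     {"class": "C_transfer", "N0_human": True, "N1_human": False, "passive": True},
}

def verbs_with_property(table, prop):
    """Return the verbs whose entry marks the given property as '+'."""
    return [verb for verb, row in table.items() if row.get(prop)]

if __name__ == "__main__":
    print(verbs_with_property(MATRIX_TABLE, "passive"))   # ['amare', 'dare']
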
In this talk, I will present in detail the project called Semantic Role Labeling (SRL), which identifies the syntactic arguments of verbs and automatically assigns semantic roles (e.g. AGENT, OBJECT, RECIPIENT and so on) to them. SRL is based on the mapping between semantic classes of verbs (e.g. psychological verbs, cognitive verbs, transfer verbs, spatial verbs) and their morphosyntactic profile classes (Vietri 2013, Elia 2013). The talk will then focus on the Spatial Semantic Role Labeling system, which identifies and annotates in corpora the semantic roles of <Figure> and <Ground>, as theorised by Talmy (2000), on the basis of the argument requirements of more than 700 spatial verb constructions (Elia, Guglielmo, et al. 2013).
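As a rough illustration of the class-to-frame mapping that SRL relies on, the following Python sketch associates hypothetical verb classes with role frames and pairs the roles positionally with pre-identified syntactic arguments; the class names, frames and toy lexicon are assumptions made for this example, not the project's actual resources.

# Toy mapping from semantic verb classes to role frames (illustrative only).
ROLE_FRAMES = {
    "transfer":      ["AGENT", "OBJECT", "RECIPIENT"],
    "psychological": ["EXPERIENCER", "STIMULUS"],
}

VERB_CLASS = {"dare": "transfer", "temere": "psychological"}  # toy lexicon

def label_roles(verb, arguments):
    """Pair the syntactic arguments of a verb with the roles of its class frame."""
    frame = ROLE_FRAMES[VERB_CLASS[verb]]
    return dict(zip(frame, arguments))

# 'Maria dà un libro a Luca' with pre-identified arguments:
print(label_roles("dare", ["Maria", "un libro", "a Luca"]))
# {'AGENT': 'Maria', 'OBJECT': 'un libro', 'RECIPIENT': 'a Luca'}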

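In the same spirit, here is a toy sketch of Figure/Ground annotation, assuming a simplified 'N0 V Prep N1' pattern and a hand-written regular expression in place of the electronic dictionaries and local grammars actually used; the entries below are illustrative stand-ins for the 700+ spatial verb constructions.

import re

# Simplified stand-in for a spatial verb construction entry: the verb selects a
# Figure as subject and a Ground introduced by a given preposition. Surface
# forms are listed directly; real processing would lemmatise via the dictionaries.
SPATIAL_CONSTRUCTIONS = {
    "entra": {"prep": "in"},
    "esce":  {"prep": "da"},
}

def annotate_figure_ground(sentence):
    """Tag <Figure> and <Ground> in a toy 'N0 V Prep N1' sentence."""
    for verb, entry in SPATIAL_CONSTRUCTIONS.items():
        pattern = rf"^(?P<figure>.+?)\s+{verb}\s+{entry['prep']}\s+(?P<ground>.+)$"
        match = re.match(pattern, sentence)
        if match:
            return (f"<Figure>{match['figure']}</Figure> {verb} "
                    f"{entry['prep']} <Ground>{match['ground']}</Ground>")
    return sentence

print(annotate_figure_ground("Maria entra in cucina"))
# <Figure>Maria</Figure> entra in <Ground>cucina</Ground>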
In addition to performing sophisticated automatic analyses, I will show how LGI can be regarded as a usage-based approach: it connects structural facts with usage facts (Guglielmo, Elia, Mancuso 2014) and it also contributes to some long-standing linguistic debates, such as the role of non-verbal predication, the small-clause analysis, lexical ambiguity, argument structure and the syntax-semantics relation. Finally, the talk will provide an introduction to recent interactions of LGI with other empirical methods, such as those of Experimental Linguistics.