  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Capturing semantics using a link analysis based concept extractor approach

Kulkarni, Swarnim January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / The web contains a massive amount of information and continues to grow every day. Extracting information that is relevant to a user is an uphill task. Search engines such as Google™ and Yahoo!™ have made the task much easier and have indeed made people much "smarter". However, most existing search engines still rely on traditional keyword-based searching techniques, i.e., returning documents that contain the keywords in the query. They do not take the associated semantics into consideration. To incorporate semantics into search, one could proceed in at least two ways. First, we could plunge into the world of the "Semantic Web", where information is represented in formal formats such as RDF and N3, which can effectively capture the semantics associated with documents. Second, we could try to explore a new semantic world within the existing structure of the World Wide Web (WWW). While the first approach can be very effective when semantic information is available in RDF/N3 formats, for many web pages such information is not readily available. This is why we consider the second approach in this work. We attempt to capture the semantics associated with a query by first extracting the concepts relevant to the query. For this purpose, we propose a novel Link Analysis based Concept Extractor (LACE) that extracts the concepts associated with the query by exploiting the metadata of a web page. Next, we propose a method to determine relationships between a query and its extracted concepts. Finally, we show how LACE can be used to compute a statistical measure of semantic similarity between concepts. At each step, we evaluate our approach by comparison with other existing techniques (on benchmark data sets, when available) and show that our results are competitive with existing state-of-the-art results or even outperform them.
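The abstract does not give LACE's actual formula for the statistical semantic-similarity measure it computes. As a hedged illustration of the general idea only, one common statistical measure relates two concepts by the overlap of the sets of pages that mention them (the Dice coefficient); the concept names and page-id sets below are hypothetical, not taken from LACE:

```python
def dice_similarity(pages_a, pages_b):
    """Dice coefficient over the sets of pages mentioning each concept:
    2*|A intersect B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    a, b = set(pages_a), set(pages_b)
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical page-id sets for two concepts extracted from a query
pages_semantics = {1, 2, 3, 5}
pages_ontology = {2, 3, 5, 8}
print(dice_similarity(pages_semantics, pages_ontology))  # 0.75
```

A real concept extractor would derive these page sets from web-page metadata rather than hard-coding them; the measure itself is one of several statistical options (PMI and Jaccard are common alternatives).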
2

Lexikální a sémantická specifika právního jazyka / Lexical And Semantic Specifics of Legal Language

Čížkovská, Anna Marie January 2011 (has links)
The purpose of this thesis is to describe legal language, its basic elements, and the relations into which they enter. The introductory chapter defines legal language in general as a discipline at the interface between linguistics and the theory of law. In addition to the basic legal elements and their relations to the general official language, the basic elements of legal language are described, from whose structure arise some of the fundamental requirements placed on legal language. The relation between the general language and legal language is symbolised through Euler circles. The conclusion of this chapter describes legal language in terms of functional style and presents its stylistic traits. The first two parts of the second chapter focus on the meaning of lexical elements, which are evaluated according to their degree of autonomy as autosemantic and synsemantic units, according to the motivating factor of the word, and according to whether they are composed of one or more lexical elements. Compliance with the basic requirements established by the Government's Legislative Rules is demonstrated with examples from primary legislation. The remaining subchapters examine adherence to the demands placed on legal texts of certainty, comprehensibility and explicitness in legal terminology in the use of...
3

Especificação, instanciação e experimentação de um arcabouço para criação automática de ligações hipertexto entre informações homogêneas / Specification, instantiation and experimentation of a framework intended to support the task of automatic creation of hypertext links between homogeneous repositories

Macedo, Alessandra Alaniz 02 July 2004 (has links)
Com a evolução da informática, diferentes meios de comunicação passaram a explorar a Web como um meio de divulgação de suas informações. Diferentes fontes de informações, diferentes estilos de escrita e a curiosidade nata do ser humano despertam o interesse de leitores por conhecer mais de um relato sobre um mesmo tema. Para que a leitura de diferentes relatos com conteúdo similar seja possível, leitores precisam procurar, ler e analisar informações fornecidas por diferentes fontes de informação. Essa atividade, além de exigir grande investimento de tempo, sobrecarrega cognitivamente usuários. Faz parte das pesquisas da área de Hipermídia investigar mecanismos que apóiem usuários no processo de identificação de informações em repositórios homogêneos, sejam eles disponibilizados na Web ou não. No contexto desta tese, repositórios com informações de conteúdo homogêneo são aqueles cujas informações tratam do mesmo assunto. Esta tese tem por objetivo investigar a especificação, a instanciação e a experimentação de um arcabouço para apoiar a tarefa de criação automática de ligações hipertexto entre repositórios homogêneos. O arcabouço proposto, denominado CARe (Criação Automática de Relacionamentos), é representado por um conjunto de classes que realizam a coleta de informações a serem relacionadas e que processam essas informações para a geração de índices. Esses índices são relacionados e utilizados na criação automática de ligações hipertexto entre a informação original. A definição do arcabouço se deu após uma fase de análise de domínio na qual foram identificados requisitos e construídos componentes de software. Nessa fase, vários protótipos também foram construídos de modo iterativo / With the evolution of the Internet, distinct communication media have focused on the Web as a channel for publishing information. An immediate consequence is an abundance of information sources and writing styles on the Web. This effect, combined with the inherent curiosity of human beings, has led Web users to look for more than a single article about the same subject. To gain access to separate accounts of the same subject, readers need to search, read and analyze information provided by different sources. Besides consuming a great amount of time, that activity imposes a cognitive overhead on users. Several hypermedia research efforts have investigated mechanisms for supporting users in the process of identifying information in homogeneous repositories, whether or not they are available on the Web. In this thesis, homogeneous repositories are those containing information that describes the same subject. This thesis aims at investigating the specification and construction of a framework intended to support the task of automatically creating hypertext links between homogeneous repositories. The proposed framework, called CARe (Automatic Creation of Relationships), is composed of a set of classes, methods and relationships that gather the information to be related and process it to generate indexes. Those indexes are related to one another and used in the automatic creation of hypertext links among distinct excerpts of the original information. The framework was defined based on a phase of domain analysis in which requirements were identified and software components were built. In that same phase, several prototypes were developed through iterative prototyping.
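The CARe pipeline described above (collect information to be related, build indexes, relate the indexes, create links) can be sketched at toy scale. This is an illustrative reconstruction of the idea, not the framework's actual classes; the repositories, tokenization, and `min_shared` threshold below are all hypothetical:

```python
from collections import defaultdict

def build_index(repo):
    """Map each term to the set of document ids (in one repository) containing it."""
    index = defaultdict(set)
    for doc_id, text in repo.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def create_links(repo_a, repo_b, min_shared=2):
    """Propose hypertext links between documents of two homogeneous
    repositories that share at least `min_shared` index terms
    (a crude relatedness threshold)."""
    idx_a, idx_b = build_index(repo_a), build_index(repo_b)
    shared = defaultdict(int)
    for term in set(idx_a) & set(idx_b):
        for da in idx_a[term]:
            for db in idx_b[term]:
                shared[(da, db)] += 1
    return [pair for pair, count in shared.items() if count >= min_shared]

# Two hypothetical repositories covering the same subject
news = {"n1": "storm hits coast today", "n2": "markets rally again"}
blog = {"b1": "the storm damaged the coast", "b2": "cooking pasta at home"}
print(create_links(news, blog))  # [('n1', 'b1')]
```

A production system would use proper term weighting (e.g., tf-idf) and anchor the links at specific excerpts rather than whole documents, as the abstract's mention of "distinct excerpts" suggests.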
4

Els sufixos verbalitzadors del català. Relacions semàntiques i diccionari / The verbalizing suffixes of Catalan. Semantic relations and dictionary

Bernal, Elisenda, 1971- 22 December 2000 (has links)
Material addicional: http://hdl.handle.net/10230/6326 / La tesi Els sufixos verbalitzadors del català. Relacions semàntiques i diccionari és un treball interdisciplinari en què intervenen aspectes de morfologia i semàntica lèxiques i lexicografia, que té un objectiu doble: d'una banda, analitzar els verbs sufixats de manera que quedin recollits els punts de divergència i de contacte entre els diversos verbs, i de l'altra, proposar una representació lexicogràfica per als sufixos verbalitzadors en forma de prototip de diccionari. Amb aquest objectiu, es parteix de la premissa que és important que els afixos en general, i els sufixos verbalitzadors en particular, passin a formar part de la macroestructura del diccionari, per tal de millorar les definicions lexicogràfiques de les entrades que els contenen. Així, s'analitzen com a objecte d'estudi les sèries derivatives verbals coradicals, és a dir: verbs que comparteixen la mateixa base, però que es construeixen per mitjà de processos diferents, ja que el fet de confrontar els diversos elements que formen una sèrie derivativa havia de permetre precisar quines són les característiques de cada procés i/o cada sufix, i, en concret, quines són les diferències semàntiques i sintàctiques que hi ha entre cada element. El treball aconsegueix establir tres tipus de factors que expliquen que no es donin sèries sinonímiques completes en els verbs construïts per sufixació: pragmàtics, temàtics i distribucionals. La proposta de representació lexicogràfica dels sufixos analitzats es presenta dins d'un projecte futur de Diccionari d'afixos. Aquesta representació intenta millorar la dels diccionaris existents, sistematitzant les informacions de cada afix i incloent-hi les relacions semàntiques que s'estableixen entre les paraules construïdes del mateix tipus i les regles que les construeixen. / The dissertation Els sufixos verbalitzadors del català. Relacions semàntiques i diccionari (en.: The verbalizing suffixes of Catalan. Semantic relations and dictionary) is an interdisciplinary work involving aspects of lexical morphology, lexical semantics and lexicography. The work has a double goal: first, to analyze the suffixed verbs to determine the points of divergence and contact among them, and second, to propose a lexicographic representation of the verbalizing suffixes in a prototype dictionary. We start from the premise that it is important for affixes in general, and verbalizing suffixes in particular, to become part of the macrostructure of the dictionary, in order to improve the lexicographic definitions of the entries that contain them. Thus, coradical verbal derivative series are analyzed, i.e., verbs that share the same base but are built by different processes. Confronting the several elements that form a derivative series makes it possible to determine the characteristics of each process and/or each suffix and, in particular, the semantic and syntactic differences that exist between the elements. The work establishes three types of factors that explain why complete synonymic series do not occur among verbs built by suffixation: pragmatic, thematic and distributional. The proposed lexicographic representation of the analyzed suffixes is presented as part of a future Dictionary of affixes project. This representation attempts to improve on that of existing dictionaries by systematizing the information for each affix and including the semantic relations established between words built by the same rule.
5

Vocabulaire employé pour l'accès thématique aux documents d'archives patrimoniaux : étude linguistique exploratoire de termes de recherche, de description, d'indexation / Vocabulary used for thematic access to heritage archival documents: an exploratory linguistic study of search, description and indexing terms

Guitard, Laure 04 1900 (has links)
No description available.
