261

Thoughts don't have Colour, do they? : Finding Semantic Categories of Nouns and Adjectives in Text Through Automatic Language Processing / Generering av semantiska kategorier av substantiv och adjektiv genom automatisk textbearbetning

Fallgren, Per January 2017
Not all combinations of nouns and adjectives are possible, and some are clearly more frequent than others. With this in mind, this study aims to construct semantic representations of the two types of parts-of-speech based on how they occur with each other. By investigating these ideas via automatic natural language processing paradigms, the study aims to find evidence for a semantic mutuality between nouns and adjectives; this notion suggests that the semantics of a noun can be captured by its corresponding adjectives, and vice versa. Furthermore, a set of proposed categories of adjectives and nouns, based on the ideas of Gärdenfors (2014), is presented; these categories are hypothesized to fall in line with the produced representations. Four evaluation methods were used to analyze the result, ranging from subjective discussion of nearest neighbours in vector space to accuracy computed from manual annotation. The result provided some evidence for the hypothesis, which suggests that further research is of value.
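A minimal sketch of the distributional idea behind these representations follows. It is an illustrative toy, not the thesis's actual pipeline: the (adjective, noun) pairs are invented, and the code simply builds count vectors for nouns over the adjectives they co-occur with, then compares nouns by cosine similarity, the kind of nearest-neighbour inspection in vector space used by the first evaluation method.

```python
# Illustrative toy (not the thesis's pipeline): represent each noun by the
# counts of the adjectives it co-occurs with, then compare nouns by cosine
# similarity. The (adjective, noun) pairs below are invented.
from collections import defaultdict
import math

pairs = [
    ("red", "car"), ("fast", "car"), ("long", "car"),
    ("red", "apple"), ("sweet", "apple"),
    ("fast", "train"), ("long", "train"),
    ("sweet", "cake"), ("warm", "cake"),
]

adjectives = sorted({adj for adj, _ in pairs})
vectors = defaultdict(lambda: [0.0] * len(adjectives))
for adj, noun in pairs:
    vectors[noun][adjectives.index(adj)] += 1.0

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v))
    return dot / norm if norm else 0.0

# nearest neighbours of "car" in the adjective space
neighbours = sorted(
    ((cosine(vectors["car"], vectors[n]), n) for n in vectors if n != "car"),
    reverse=True,
)
print(neighbours)  # "train" comes out closest: it shares "fast" and "long"
```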
262

Knowledge-based Semantic Measures : From Theory to Applications / Mesures sémantiques à base de connaissance : de la théorie aux applicatifs

Harispe, Sébastien 25 April 2014
Les notions de proximité, de distance et de similarité sémantiques sont depuis longtemps jugées essentielles dans l'élaboration de nombreux processus cognitifs et revêtent donc un intérêt majeur pour les communautés intéressées au développement d'intelligences artificielles. Cette thèse s'intéresse aux différentes mesures sémantiques permettant de comparer des unités lexicales, des concepts ou des instances par l'analyse de corpus de textes ou de représentations de connaissance (e.g. ontologies). Encouragées par l'essor des technologies liées à l'Ingénierie des Connaissances et au Web sémantique, ces mesures suscitent de plus en plus d'intérêt à la fois dans le monde académique et industriel. Ce manuscrit débute par un vaste état de l'art qui met en regard des travaux publiés dans différentes communautés et souligne l'aspect interdisciplinaire et la diversité des recherches actuelles dans ce domaine. Cela nous a permis, sous l'apparente hétérogénéité des mesures existantes, de distinguer certaines propriétés communes et de présenter une classification générale des approches proposées. Par la suite, ces travaux se concentrent sur les mesures qui s'appuient sur une structuration de la connaissance sous forme de graphes sémantiques, e.g. graphes RDF(S). Nous montrons que ces mesures reposent sur un ensemble réduit de primitives abstraites, et que la plupart d'entre elles, bien que définies indépendamment dans la littérature, ne sont que des expressions particulières de mesures paramétriques génériques. Ce résultat nous a conduits à définir un cadre théorique unificateur pour les mesures sémantiques. Il permet notamment : (i) d'exprimer de nouvelles mesures, (ii) d'étudier les propriétés théoriques des mesures et (iii) d'orienter l'utilisateur dans le choix d'une mesure adaptée à sa problématique. Les premiers cas concrets d'utilisation de ce cadre démontrent son intérêt en soulignant notamment qu'il permet l'analyse théorique et empirique des mesures avec un degré de détail particulièrement fin, jamais atteint jusque-là. Plus généralement, ce cadre théorique permet de poser un regard neuf sur ce domaine et ouvre de nombreuses perspectives prometteuses pour l'analyse des mesures sémantiques. Le domaine des mesures sémantiques souffre d'un réel manque d'outils logiciels génériques et performants ce qui complique à la fois l'étude et l'utilisation de ces mesures. En réponse à ce manque, nous avons développé la Semantic Measures Library (SML), une librairie logicielle dédiée au calcul et à l'analyse des mesures sémantiques. Elle permet d'utiliser des centaines de mesures issues à la fois de la littérature et des fonctions paramétriques étudiées dans le cadre unificateur introduit. Celles-ci peuvent être analysées et comparées à l'aide des différentes fonctionnalités proposées par la librairie. 
La SML s'accompagne d'une large documentation, d'outils logiciels permettant son utilisation par des non informaticiens, d'une liste de diffusion, et de façon plus large, se propose de fédérer les différentes communautés du domaine afin de créer une synergie interdisciplinaire autour de la notion de mesures sémantiques : http://www.semantic-measures-library.org Cette étude a également conduit à différentes contributions algorithmiques et théoriques, dont (i) la définition d'une méthode innovante pour la comparaison d'instances définies dans un graphe sémantique – nous montrons son intérêt pour la mise en place de système de recommandation à base de contenu, (ii) une nouvelle approche pour comparer des concepts représentés dans des taxonomies chevauchantes, (iii) des optimisations algorithmiques pour le calcul de certaines mesures sémantiques, et (iv) une technique d'apprentissage semi-supervisée permettant de cibler les mesures sémantiques adaptées à un contexte applicatif particulier en prenant en compte l'incertitude associée au jeu de test utilisé. Travaux validés par plusieurs publications et communications nationales et internationales. / The notions of semantic proximity, distance, and similarity have long been considered essential for the elaboration of numerous cognitive processes, and are therefore of major importance for the communities involved in the development of artificial intelligence. This thesis studies the diversity of semantic measures which can be used to compare lexical entities, concepts and instances by analysing corpora of texts and knowledge representations (e.g., ontologies). Strengthened by the development of Knowledge Engineering and Semantic Web technologies, these measures are arousing increasing interest in both academic and industrial fields. This manuscript begins with an extensive state-of-the-art which presents numerous contributions proposed by several communities, and underlines the diversity and interdisciplinary nature of this domain. Thanks to this work, despite the apparent heterogeneity of semantic measures, we were able to distinguish common properties and therefore propose a general classification of existing approaches. Our work goes on to look more specifically at measures which take advantage of knowledge representations expressed by means of semantic graphs, e.g. RDF(S) graphs. We show that these measures rely on a reduced set of abstract primitives and that, even if they have generally been defined independently in the literature, most of them are only specific expressions of generic parametrised measures. This result leads us to the definition of a unifying theoretical framework for semantic measures, which can be used to: (i) design new measures, (ii) study theoretical properties of measures, (iii) guide end-users in the selection of measures adapted to their usage context. The relevance of this framework is demonstrated in its first practical applications which show, for instance, how it can be used to perform theoretical and empirical analyses of measures with a previously unattained level of detail.
Interestingly, this framework provides a new insight into semantic measures and opens interesting perspectives for their analysis. Having uncovered a flagrant lack of generic and efficient software solutions dedicated to (knowledge-based) semantic measures, a lack which clearly hampers both the use and analysis of semantic measures, we consequently developed the Semantic Measures Library (SML): a generic software library dedicated to the computation and analysis of semantic measures. The SML can be used to take advantage of hundreds of measures defined in the literature or those derived from the parametrised functions introduced by the proposed unifying framework. These measures can be analysed and compared using the functionalities provided by the library. The SML is accompanied by extensive documentation, community support and software solutions which enable non-developers to take full advantage of the library. In broader terms, this project proposes to federate the several communities involved in this domain in order to create an interdisciplinary synergy around the notion of semantic measures: http://www.semantic-measures-library.org This thesis also presents several algorithmic and theoretical contributions related to semantic measures: (i) an innovative method for the comparison of instances defined in a semantic graph – we underline in particular its benefits in the definition of content-based recommendation systems, (ii) a new approach to compare concepts defined in overlapping taxonomies, (iii) algorithmic optimisation for the computation of a specific type of semantic measure, and (iv) a semi-supervised learning technique which can be used to identify semantic measures adapted to a specific usage context, while simultaneously taking into account the uncertainty associated with the benchmark in use. These contributions have been validated by several international and national publications.
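To give a concrete flavour of the kind of knowledge-based measure such a framework generalises, the sketch below computes Lin's classic information-content similarity over a toy taxonomy. The concepts and probabilities are invented; this is not the SML API and does not reproduce the parametric framework itself, only one measure of the family it covers.

```python
# A minimal sketch (not the SML API) of one classic knowledge-based measure:
# Lin similarity over a toy taxonomy, sim(a, b) = 2 * IC(MICA) / (IC(a) + IC(b)),
# with IC(c) = -log p(c). Taxonomy and probabilities are invented.
import math

parent = {"dog": "mammal", "cat": "mammal", "mammal": "animal", "bird": "animal"}
# assumed concept probabilities (must be monotone going up the taxonomy)
p = {"dog": 0.1, "cat": 0.1, "bird": 0.2, "mammal": 0.3, "animal": 1.0}

def ancestors(c):
    out = {c}
    while c in parent:
        c = parent[c]
        out.add(c)
    return out

def ic(c):
    return -math.log(p[c])

def lin(a, b):
    common = ancestors(a) & ancestors(b)
    mica = max(common, key=ic)          # most informative common ancestor
    return 2 * ic(mica) / (ic(a) + ic(b))

print(lin("dog", "cat"))   # share "mammal" -> relatively similar (~0.52)
print(lin("dog", "bird"))  # only share "animal" (IC = 0) -> similarity 0.0
```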
263

Composição automática de serviços web semânticos : uma abordagem com times assíncronos e operadores genéticos / Automatic composition of semantic web services : an approach with asynchronous teams and genetic operators

Tizzo, Neil Paiva 20 August 2018
Advisors: Eleri Cardozo, Juan Manuel Adán Coello / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Issue date: 2012 / Resumo: A automação da composição de serviços Web é, na visão do autor, um dos problemas mais importantes da área de serviços Web. Além de outras características, destaca-se que somente a composição automática é capaz de lidar com ambientes mutáveis onde os serviços são permanentemente inseridos, removidos e modificados. Os métodos existentes para realizar a composição automática de serviços apresentam várias limitações. Alguns tratam de um número muito restrito de fluxos de controle e outros não consideram a marcação semântica dos serviços. Em adição, em muitos casos não há avaliações quantitativas do desempenho dos métodos. Desta forma, o objetivo desta tese é propor um método para realizar a composição automática de serviços Web semânticos que considera os cinco tipos básicos de fluxo de controle identificados pela Workflow Management Coalition, a saber: sequencial, separação paralela, sincronização, escolha-exclusiva e união simples; bem como para o fluxo de controle em laço, considerado um fluxo do tipo estrutural. As regras que descrevem a composição entre os serviços são híbridas, baseadas em semântica e em técnicas de recuperação de informação. Os serviços são descritos em OWL-S, uma ontologia descrita em OWL que permite descrever semanticamente os atributos IOPE (parâmetros de entrada, de saída, pré-requisitos e efeitos) de um serviço, mas somente os parâmetros de entrada e saída foram levados em consideração neste trabalho. Para validar a abordagem foi implementado um protótipo que utilizou times assíncronos (A-Teams) com agentes baseados em algoritmos genéticos para realizar a composição segundo os padrões de fluxo sequencial, paralelo e sincronização. A avaliação experimental do algoritmo de composição foi realizada utilizando uma coleção de serviços Web semânticos pública composta de mais de 1000 descrições de serviços. As avaliações de desempenho, em vários cenários típicos, medidas em relação ao tempo de resposta médio e à quantidade de vezes em que a função de avaliação (função fitness) é calculada são igualmente apresentadas. Para os casos mais simples de composição, o algoritmo conseguiu reduzir o tempo de resposta em relação a uma busca cega em aproximadamente 97%. Esta redução aumenta à medida que a complexidade da composição também aumenta. / Abstract: The automation of the composition of Web services is, in the view of the author, one of the most important problems in the area of Web services. Besides other characteristics, only automatic composition can deal with a changing environment where services are continually inserted, removed, and modified. Existing methods for automatic service composition have several limitations. Some deal with a very limited number of control flow patterns, while others do not consider the semantic markup of services. In addition, in many cases there is no quantitative evaluation of the method's performance.
Accordingly, the objective of this thesis is to propose a method to perform the automatic composition of semantic Web services considering the five basic types of control flow identified by the Workflow Management Coalition, namely: sequential, parallel split, synchronization, exclusive choice and simple merge; as well as the loop control flow, classified as a structural control flow pattern. The rules that describe the composition of the services are hybrid: based on semantics and on information retrieval techniques. Services are described in OWL-S, an ontology described in OWL that allows the semantic description of the IOPE attributes (input, output, prerequisite and effect) of a service, but only the input and output parameters were taken into consideration in this work. A prototype was implemented to validate the proposed rules. An asynchronous team (A-Team) algorithm with genetic agents was used to carry out the composition according to the sequential, parallel and synchronization control flows. The experimental evaluation of the composition algorithm employed a public collection of semantic Web services comprising more than 1000 service descriptions. The performance evaluation showed that, for simple composition cases, the algorithm reduced the average response time by approximately 97% when compared to blind search. This reduction increases as the composition complexity increases. / Doctorate / Computer Engineering / Doctor in Electrical Engineering
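A toy sketch can make the underlying matching idea concrete. The services, concept names, and scoring below are invented and do not reproduce the thesis's A-Team or its genetic operators: a candidate sequential chain is scored by how well each service's inputs are covered by the request plus the outputs of the services placed before it, which is the kind of fitness signal a genetic agent could optimise.

```python
# Toy sketch (invented services and concepts, not the thesis's algorithm):
# score a candidate sequential composition by input/output coverage.
request_inputs = {"ZipCode"}
goal_outputs = {"WeatherForecast"}

services = {
    "GeoCoder":   {"in": {"ZipCode"},     "out": {"Coordinates"}},
    "Forecaster": {"in": {"Coordinates"}, "out": {"WeatherForecast"}},
}

def fitness(chain):
    """Fraction of required inputs/outputs satisfied along the chain."""
    available, satisfied, required = set(request_inputs), 0, 0
    for name in chain:
        svc = services[name]
        required += len(svc["in"])
        satisfied += len(svc["in"] & available)
        available |= svc["out"]
    required += len(goal_outputs)
    satisfied += len(goal_outputs & available)
    return satisfied / required

print(fitness(["GeoCoder", "Forecaster"]))  # 1.0 -> valid composition
print(fitness(["Forecaster", "GeoCoder"]))  # < 1.0 -> inputs not yet available
```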
264

Semantic Matching for Stream Reasoning

Dragisic, Zlatan January 2011
Autonomous systems need to do a great deal of reasoning during execution in order to provide timely reactions to changes in their environment. The data needed for this reasoning process is often provided by a number of sensors. One approach to this kind of reasoning is the evaluation of temporal logical formulas through progression. To evaluate these formulas it is necessary to provide relevant data for each symbol in a formula. Mapping relevant data to symbols in a formula could be done manually; however, as systems become more complex, it is harder for a designer to explicitly state and maintain this mapping. Therefore, automatic support for mapping data from sensors to symbols would make systems more flexible and easier to maintain. DyKnow is a knowledge processing middleware which provides support for processing data at different levels of abstraction. The output from the processing components in DyKnow is in the form of streams of information. In the case of DyKnow, reasoning over incrementally available data is done by progressing metric temporal logical formulas. A logical formula contains a number of symbols whose values over time must be collected and synchronized in order to determine the truth value of the formula. Mapping symbols in a formula to relevant streams is done manually in DyKnow. The purpose of this matching is, for each variable, to find one or more streams whose content matches the intended meaning of the variable. This thesis analyses and provides a solution to the process of semantic matching. The analysis is mostly focused on how existing semantic technologies such as ontologies can be used in this process. The thesis also analyses how this process can be used for matching symbols in a formula to the content of streams on distributed and heterogeneous platforms. Finally, the thesis presents an implementation in the Robot Operating System (ROS). The implementation is tested in two case studies which cover a scenario where there is only a single platform in a system and a scenario where there are multiple distributed heterogeneous platforms in a system. The conclusions are that semantic matching represents an important step towards fully automated semantic-based stream reasoning. Our solution also shows that semantic technologies are suitable for establishing machine-readable domain models. The use of these technologies made the semantic matching domain- and platform-independent, as all domain- and platform-specific knowledge is specified in ontologies. Moreover, semantic technologies provide support for the integration of data from heterogeneous sources, which makes it possible for platforms to use streams from distributed sources.
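The matching step itself can be pictured with a small sketch. The mini-ontology, stream names, and concept labels below are invented and are not DyKnow's actual vocabulary: a symbol in a formula, annotated with a concept, is matched to every stream whose declared content is that concept or one of its subclasses.

```python
# A minimal sketch (toy ontology, invented stream names) of semantic matching:
# find every stream whose declared content is the symbol's concept or a subclass.
subclass_of = {                       # child -> parent
    "Altitude": "Position",
    "GPSPosition": "Position",
    "Speed": "Velocity",
}

streams = {                           # stream name -> declared content concept
    "/uav1/gps": "GPSPosition",
    "/uav1/baro": "Altitude",
    "/uav1/imu": "Velocity",
}

def is_a(concept, target):
    while concept is not None:
        if concept == target:
            return True
        concept = subclass_of.get(concept)
    return False

def match(symbol_concept):
    return [s for s, c in streams.items() if is_a(c, symbol_concept)]

# a formula symbol annotated with the concept "Position"
print(match("Position"))  # ['/uav1/gps', '/uav1/baro']
```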
265

Automatic semantic image annotation and retrieval

Wong, Chun Fan 01 January 2010 (has links)
No description available.
266

Exploring conceptual knowledge and name relearning in semantic dementia

Mayberry, Emily Jane January 2011
This thesis investigated the role of the anterior temporal lobes (ATLs) in conceptual knowledge and name relearning by studying people with semantic dementia (SD). People with SD have atrophy focussed on the ATLs and they exhibit a pan-modal semantic impairment (e.g., Hodges, Patterson, Oxbury, & Funnell, 1992). Recent evidence suggests that modality-invariant concept representations are built up in the ATLs and that these modality-invariant representations are crucial for abstracting away from the surface features of items in order to generalise conceptual information based on their core semantic similarity (e.g., Lambon Ralph & Patterson, 2008). In order to test this, two of the studies described in this thesis (Chapters 2 and 3) assessed semantic generalisation in people with SD. These studies showed that people with SD are less able to generalise conceptual information on the basis of the deeper semantic structure of concepts but instead are increasingly influenced by the superficial similarity of the items. These studies support the hypothesis that the modality-invariant representations formed in the ATLs are crucial for semantic-based generalisation. Previous SD relearning studies have reported relatively good learning but a lack of generalisation to untrained items, tasks, and/or contexts (i.e., under-generalisation). This has been interpreted based on the Complementary Learning Systems (CLS) (McClelland, McNaughton, & O'Reilly, 1995) to suggest that the neocortical semantic system no longer makes a meaningful contribution to relearning but instead relearning is primarily dependent upon the sparse representational medial temporal lobe (MTL) learning system. The studies described in two of the thesis chapters (Chapters 4 and 5) investigated the role of the underlying systems further and found that the neocortical semantic system does still contribute to relearning in SD (although its contribution is disordered and based on the degraded concept representations in the ATL) but there is a shift in the division of labour such that the MTL system takes over more of the work. Finally, in order to clarify the outcomes of relearning in SD, Chapter 6 reviewed all of the previous SD relearning studies and confirmed that people with SD are able to relearn the specific information that they study but that this relearning is rigid. The review and a subsequent re-analysis of the data from Chapters 4 and 5 also showed that relearning in SD can have negative side-effects as well as positive effects.
267

Semantics in speech production

Soni, Maya January 2011
The semantic system contributes to the process of speech production in two major ways. The basic information is contained within semantic representations, and the semantic control system manipulates that knowledge as required by task and context. This thesis explored the evidence for interactivity between semantic and phonological stages of speech production, and examined the role of semantic control within speech production. The data chapters focussed on patients with semantic aphasia or SA, who all have frontal and/or temporoparietal lesions and are thought to have a specific impairment of semantic control. In a novel development, grammatical class and cueing effects in this patient group were compared with healthy participants under tempo naming conditions, a paradigm which is thought to impair normal semantic control by imposing dual task conditions. A basic picture naming paradigm was used throughout, with the addition of different grammatical classes, correct and misleading phonemic cues, and repetition and semantic priming: all these manipulations could be expected to place differing loads on a semantic control system with either permanent or experimentally induced impairment. It was found that stimuli requiring less controlled processing such as high imageability objects, pictures with simultaneous correct cues or repetition primed pictures were named significantly more accurately than items which needed more controlled processing, such as low imageability actions, pictures with misleading phonemic cues and unprimed pictures. The cueing evidence offered support to interactive models of speech production where phonological activation is able to influence semantic selection. The impairment in tasks such as the inhibition of task-irrelevant material seen in SA patients and tempo participants, and the overlap between cortical areas cited in studies looking at both semantic and wider executive control mechanisms suggest that semantic control may be part of a more generalised executive system.
268

Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments

Hetmank, Lars 05 October 2016
The last couple of years have seen a fascinating evolution. While the early Web predominantly focused on human consumption of Web content, the widespread dissemination of social software and Web 2.0 technologies enabled new forms of collaborative content creation and problem solving. These new forms often utilize the principles of collective intelligence, a phenomenon that emerges from a group of people who either cooperate or compete with each other to create a result that is better or more intelligent than any individual result (Leimeister, 2010; Malone, Laubacher, & Dellarocas, 2010). Crowdsourcing has recently gained attention as one of the mechanisms that taps into the power of web-enabled collective intelligence (Howe, 2008). Brabham (2013) defines it as “an online, distributed problem-solving and production model that leverages the collective intelligence of online communities to serve specific organizational goals” (p. xix). Well-known examples of crowdsourcing platforms are Wikipedia, Amazon Mechanical Turk, and InnoCentive. Since the emergence of the term crowdsourcing in 2006, one popular misconception is that crowdsourcing relies largely on an amateur crowd rather than a pool of professionally skilled workers (Brabham, 2013). While this might be true for tasks with low cognitive demands, such as tagging a picture or rating a product, it is often not true for complex problem-solving and creative tasks, such as developing a new computer algorithm or creating an impressive product design. This raises the question of how to efficiently allocate an enterprise crowdsourcing task to appropriate members of the crowd. The sheer number of crowdsourcing tasks available at crowdsourcing intermediaries makes it especially challenging for workers to identify a task that matches their skills, experiences, and knowledge (Schall, 2012, p. 2). An explanation of why the identification of appropriate expert knowledge plays a major role in crowdsourcing is partly given by Condorcet’s jury theorem (Sunstein, 2008, p. 25). The theorem states that if the average participant in a binary decision process is more likely to be correct than incorrect, then the probability that the aggregate arrives at the right answer increases as the number of participants grows. If one assumes that a suitable participant for a task is more likely to give a correct answer or solution than an unsuitable one, efficient task recommendation becomes crucial for improving the aggregated results of crowdsourcing processes. Although some assumptions of the theorem, such as independent votes, binary decisions, and homogeneous groups, are often unrealistic in practice, it illustrates the importance of an optimized task allocation and group formation that consider the task requirements and workers’ characteristics. Ontologies are widely applied to support semantic search and recommendation mechanisms (Middleton, De Roure, & Shadbolt, 2009). However, little research has investigated the potential and the design of an ontology for the domain of enterprise crowdsourcing. The author of this thesis argues in favor of enhancing the automation and interoperability of an enterprise crowdsourcing environment with the introduction of a semantic vocabulary in the form of an expressive but easy-to-use ontology. The deployment of a semantic vocabulary for enterprise crowdsourcing is likely to provide several technical and economic benefits for an enterprise. These benefits were the main drivers in the efforts made during the research project of this thesis:
1. Task allocation: With the utilization of the semantics, requesters are able to form smaller, task-specific crowds that perform tasks at lower costs and in less time than larger crowds. A standardized and controlled vocabulary allows requesters to communicate specific details about a crowdsourcing activity within a web page along with other existing displayed information. This has advantages for both contributors and requesters. On the one hand, contributors can easily and precisely search for tasks that correspond to their interests, experiences, skills, knowledge, and availability. On the other hand, crowdsourcing systems and intermediaries can proactively recommend crowdsourcing tasks to potential contributors (e.g., based on their social network profiles).
2. Quality control: Capturing and storing crowdsourcing data increases the overall transparency of the entire crowdsourcing activity and thus allows for more sophisticated quality control. Requesters are able to check the consistency and receive appropriate support to verify and validate crowdsourcing data according to defined data types and value ranges. Before involving potential workers in a crowdsourcing task, requesters can also judge their trustworthiness based on previously accomplished tasks and hence improve the recruitment process.
3. Task definition: A standardized set of semantic entities supports the configuration of a crowdsourcing task. Requesters can evaluate historical crowdsourcing data to get suggestions for identical or similar crowdsourcing tasks, for example, which incentive or evaluation mechanism to use. They may also reduce the time needed to configure a crowdsourcing task by reusing well-established task specifications of a particular type.
4. Data integration and exchange: Applying a semantic vocabulary as a standard format for describing enterprise crowdsourcing activities allows not only crowdsourcing systems inside but also crowdsourcing intermediaries outside the company to extract crowdsourcing data from other business applications, such as project management, enterprise resource planning, or social software, and use it for further processing without retyping and copying the data. Additionally, enterprise or web search engines may exploit the structured data and provide enhanced search, browsing, and navigation capabilities, for example, clustering similar crowdsourcing tasks according to the required qualifications or the offered incentives.
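The jury-theorem effect invoked in this abstract is easy to check numerically. The sketch below is illustrative only and assumes independent voters with a fixed competence p, an assumption the text itself flags as unrealistic in practice: it computes the probability that a simple majority of n voters is correct.

```python
# Numerical illustration of Condorcet's jury theorem: with independent voters
# each correct with probability p > 0.5, the chance that a simple majority is
# correct grows towards 1 as the group size n increases.
from math import comb

def p_majority_correct(n, p):
    """Probability that more than half of n independent voters are correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(p_majority_correct(n, 0.6), 4))
# 1 -> 0.6, 11 -> ~0.75, 101 -> ~0.98, 1001 -> ~1.0: the aggregate beats the individual
```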
269

A semântica das relações anafóricas entre eventos / The semantics of anaphoric relations between events

Basso, Renato Miguel 14 August 2018
Advisor: Edson Françozo / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Estudos da Linguagem / Issue date: 2009 / Resumo: Nesta tese, avaliamos a afirmação de Davidson (1967) de que a anáfora de eventos e a anáfora de objetos ordinários lançam mão dos mesmos recursos linguísticos. Davidson usa a evidência da anáfora não apenas para postular eventos na ontologia, mas também como um argumento a favor de considerá-los como objetos ordinários (como particulares). No entanto, ao investigarmos os mecanismos linguísticos mobilizados na anáfora de eventos, encontramos grandes diferenças em comparação com o que encontramos na anáfora de objetos (em geral, linguisticamente veiculados através de nomes ou de descrições), levando-nos a colocar a afirmação de Davidson sob suspeita. Na primeira parte da tese, apresentamos e defendemos uma versão da teoria de eventos postulada por Davidson que os trata como objetos ordinários (particulares). Analisamos também as teorias que tomam eventos como propriedades de momentos de tempo e teorias que tomam eventos como entidades proposicionais. Cada uma dessas teorias tem seus méritos e problemas, mas o intuito é nos mantermos o mais próximo à formulação de Davidson para avaliarmos suas afirmações quanto à anáfora de eventos. Ainda na primeira parte, investigamos as relações entre dêixis e anáfora, um tema que envolve quaisquer discussões sobre termos usados anaforicamente. Na segunda parte da tese, nosso olhar volta-se para a anáfora de eventos cujos antecedentes são expressões sentenciais (i.e., que não são DPs). Diante de tais antecedentes, os termos anafóricos preferenciais são demonstrativos, e investigamos o pronome demonstrativo 'isso' e descrições demonstrativas da forma 'esse/essa/aquele/aquela N'. Apresentamos o estado-da-arte dos estudos sobre demonstrativos, salientando que eles podem ser tratados como termos referenciais ou como termos quantificacionais. Dado que o debate é bastante complexo e ainda incipiente, apresentamos duas análises de retomadas de eventos com demonstrativos: uma que os toma como termos referenciais, e outra que os toma como termos quantificacionais. Contudo, apesar dessa diferença nas análises, o resultado a que chegamos é semelhante, e mostra que os mecanismos por trás da anáfora de eventos são mais próximos da anáfora de entidades proposicionais do que da anáfora de objetos ordinários, contrariando a tese davidsoniana. Na terceira parte, analisamos a retomada de eventos em que o antecedente é uma estrutura nominal (i.e., DPs) e cujos termos anafóricos preferenciais são descrições definidas e demonstrativas. Nesta parte, investigamos a semântica das nominalizações e sua relação com eventos veiculados por verbos de ação flexionados. Assumimos, como é comum na literatura sobre eventos, que uma sentença com verbo de ação flexionado e sua contraparte com nominalização têm a mesma forma lógica. Mostramos que tal assunção leva a resultados indesejados quando consideramos a anáfora de eventos, que se situam, também nessa configuração, mais próxima da anáfora de entidades proposicionais. Na conclusão, apontamos que adotar a noção de evento como um objeto a partir do fenômeno da anáfora não se sustenta, já que a anáfora de eventos se assemelha em muito à anáfora de entidades abstratas, como as proposições, e não à anáfora de objetos.
Tal conclusão tem consequências para as teorias semânticas contemporâneas que ingenuamente equiparam eventos a objetos. / Abstract: In this thesis, we evaluated Davidson's (1967) statement according to which event anaphora and (ordinary) object anaphora use the same linguistic resources. Davidson uses the evidence of anaphora not only to postulate events in the ontology but also as an argument for considering them as ordinary objects (as individuals). However, as we investigated the linguistic mechanisms mobilized in event anaphora and the ones mobilized in (ordinary) object anaphora, we found significant differences, a conclusion which compromises Davidson's assumptions about the metaphysics of events. In the first part of this thesis, we present and defend a version of the theory of events postulated by Davidson that treats them as individuals in the same way as other objects. We look briefly at other theories that take events as properties of moments of time and theories that take events as propositional entities. Each of these theories has its merits and problems, but our intention is to follow Davidson's formulation closely, in order to evaluate his claims about event anaphora. We investigate the differences between deixis and anaphora, a theme that involves any discussion of the terms used in anaphora. In the second part, our attention goes to event anaphora when the antecedents are sentential expressions (i.e., that are not DPs). With this kind of antecedent, the preferred anaphoric terms are demonstratives, and we investigate the demonstrative pronoun 'isso' and demonstrative descriptions ('esse / essa / aquele / aquela N'). We present the state of the art of the studies of demonstratives, noting that they can be treated as referring or as quantificational expressions. Since the debate is very complex and still in its early stages, we present two analyses of event anaphora with sentential antecedents: one that takes the anaphoric terms as referential, and one that takes them as quantificational. However, despite this difference in the analyses, the result we reached is similar, and it shows that the mechanisms behind the anaphora of events are closer to the anaphora of propositional entities than to the anaphora of ordinary objects, against the Davidsonian thesis. In the third part, we analyze the anaphora of events in which the antecedent is a nominal structure (i.e., DPs); with this kind of antecedent the preferred anaphoric terms are definite and demonstrative descriptions. We also investigate the semantics of nominalizations and their relationship to events conveyed by inflected action verbs. We assume, as is common practice in the literature on events, that sentences with an inflected action verb and their nominalized counterparts have the same logical form. We show that this assumption leads to undesired results when we consider event anaphora. In the conclusion, we point out that adopting the concept of event as an object cannot be sustained from the point of view of anaphoric phenomena, since event anaphora resembles the anaphora of abstract entities such as propositions, and not object anaphora. This conclusion has implications for contemporary semantic theories which naively equate events and objects. / Universidade Estadual de Campinas / Linguistics / Doctor in Linguistics
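For readers unfamiliar with the Davidsonian analysis at issue, the standard textbook illustration (not an example taken from the thesis) shows why events are treated as individuals: an existentially quantified event argument e lets adverbial modifiers attach as separate conjuncts, and an anaphor or a nominalization such as 'the buttering' can then be taken to pick up that same e.

```latex
% "Jones buttered the toast in the bathroom with a knife at midnight"
\exists e\,[\mathrm{Butter}(\mathrm{Jones},\mathrm{toast},e)
  \wedge \mathrm{In}(\mathrm{bathroom},e)
  \wedge \mathrm{With}(\mathrm{knife},e)
  \wedge \mathrm{At}(\mathrm{midnight},e)]
```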
270

Uma arquitetura para sistemas de busca semântica para recuperação de informações em repositórios de biodiversidade / An architecture for semantic search systems for retrieving information in repositories of biodiversity

Flor Karina Mamani Amanqui 16 May 2014 (has links)
A diversidade biológica é essencial para a sustentabilidade da vida na Terra e motiva numerosos esforços para coleta de dados sobre espécies, dando origem a uma grande quantidade de informação. Esses dados são geralmente armazenados em bancos de dados relacionais. Pesquisadores usam esses bancos de dados para extrair conhecimento e compartilhar novas descobertas. No entanto, atualmente a busca tradicional (baseada em palavras-chave) já não é adequada para ser usada em grandes quantidades de dados heterogêneos, como os de biodiversidade. Ela tem baixa precisão e revocação para esse tipo de dado. Este trabalho apresenta uma nova arquitetura para abordar esse problema aplicando técnicas de buscas semânticas em dados sobre biodiversidade e usando formatos e ferramentas da Web Semântica para representar esses dados. A busca semântica tem como objetivo melhorar a acurácia dos resultados de buscas com o uso de ontologias para entender os objetivos dos usuários e o significado contextual dos termos utilizados. Este trabalho também apresenta os resultados de testes usando um conjunto de dados representativos sobre biodiversidade do Instituto Nacional de Pesquisas da Amazônia (INPA) e do Museu Paraense Emílio Goeldi (MPEG). Ontologias permitem que conhecimento seja organizado em espaços conceituais de acordo com seu significado. / Biological diversity is of essential value to life sustainability on Earth and motivates many efforts to collect data about species. That gives rise to a large amount of information. Biodiversity data, in most cases, is stored in relational databases. Researchers use this data to extract knowledge and share their new discoveries about living things. However, nowadays the traditional search approach (based basically on keyword matching) is not appropriate for large amounts of heterogeneous biodiversity data. Search by keyword has low precision and recall on this kind of data. This work presents a new architecture to tackle this problem using a semantic search system for biodiversity data and Semantic Web formats and tools to represent this data. Semantic search aims to improve search accuracy by using ontologies to understand user objectives and the contextual meaning of terms used in the search to generate more relevant results. This work also presents test results using a set of representative biodiversity data from the National Research Institute for the Amazon (INPA) and the Emílio Goeldi Museum in Pará (MPEG). Ontologies allow knowledge to be organized into conceptual spaces in accordance with its meaning.
For semantic search to work, a key point is to create mappings between the data (in this case, INPA's and MPEG's biodiversity data) and the ontologies describing it, in this case: the species taxonomy (a taxonomy is an ontology where each class can have just one parent) and OntoBio, INPA's biodiversity ontology. These mappings were created after we extracted the taxonomic classification from the Catalogue of Life (CoL) website and created a new version of OntoBio. A prototype of the architecture was built and tested using INPA's and MPEG's use cases and data. The results showed that the semantic search approach had better precision (28% improvement) and recall (25% improvement) when compared to keyword-based search. They also showed that it was possible to easily connect the mapped data to other Linked Open Data sources, such as the Amazon Forest Linked Data from the National Institute for Space Research (INPE).
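To make the contrast with keyword search concrete, the sketch below runs a taxonomy-aware SPARQL query with rdflib. The tiny graph, class names, and the collectedIn property are invented for illustration and are not OntoBio's actual vocabulary or the thesis's queries: the point is only that a query following rdfs:subClassOf links retrieves occurrence records for species even when the search term is the genus.

```python
# Illustrative sketch (invented example data): a taxonomy-aware SPARQL query
# finds occurrence records for any species under a genus via rdfs:subClassOf.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix :     <http://example.org/bio#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
:Inia            a rdfs:Class .
:IniaGeoffrensis rdfs:subClassOf :Inia .
:obs42 a :IniaGeoffrensis ; :collectedIn :Amazonas .
""", format="turtle")

results = g.query("""
PREFIX :     <http://example.org/bio#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?obs WHERE {
  ?species rdfs:subClassOf* :Inia .   # the genus and everything below it
  ?obs a ?species ; :collectedIn :Amazonas .
}
""")
for row in results:
    print(row.obs)  # finds :obs42 even though it never mentions "Inia" directly
```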
