  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Um novo processo para refatoração de bancos de dados. / A new process to database refactoring.

Domingues, Márcia Beatriz Pereira 15 May 2014 (has links)
The design and maintenance of databases is an important challenge, given the frequent requirement changes requested by users. To keep up with these changes, the database schema must undergo structural alterations that often harm performance and query design, such as unnecessary relationships, primary or foreign keys tightly coupled to the domain, obsolete attributes, and inadequate attribute types.
The literature on Agile Methods for software development proposes the use of refactorings to evolve the database schema when requirements change. A refactoring is a simple change that improves the design without altering the semantics of the data model or adding new functionality. This thesis presents a new process for applying refactorings to the database schema, defined as a set of tasks that execute the refactorings in a controlled and safe way, allowing the DBA to know the impact of each refactoring on database performance. BPMN notation is used to represent and execute the process tasks. As a case study, a relational database used by a Web-based information system for precision agriculture was employed; this system needs to run large queries to plot charts of georeferenced information.
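The core step the abstract describes, applying one refactoring at a time while checking that query answers are preserved and recording the performance impact, can be sketched as follows. This is an illustrative sketch only, not the thesis's actual process; the table, data, and the "Introduce Index" refactoring are invented for the example.

```python
import sqlite3
import time

def timed(conn, sql):
    """Run a query, returning (elapsed_seconds, rows)."""
    t0 = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    return time.perf_counter() - t0, rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plot (id INTEGER PRIMARY KEY, lat REAL, lon REAL, yield_kg REAL)")
conn.executemany("INSERT INTO plot (lat, lon, yield_kg) VALUES (?, ?, ?)",
                 [(i * 0.001, i * 0.002, float(i % 500)) for i in range(10_000)])

query = "SELECT lat, lon FROM plot WHERE yield_kg > 400"
before_t, before_rows = timed(conn, query)

# One refactoring step: introducing an index changes the physical design
# without altering the semantics of the data model.
conn.execute("CREATE INDEX idx_plot_yield ON plot (yield_kg)")
after_t, after_rows = timed(conn, query)

# A controlled process demands identical answers before and after...
assert sorted(before_rows) == sorted(after_rows)
# ...and records the performance impact of this particular refactoring.
print(f"before: {before_t:.6f}s  after: {after_t:.6f}s")
```

In a full process each such step would be one BPMN task, with the timing pair logged per refactoring.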
362

Integração de bancos de dados heterogêneos utilizando grades computacionais. / Heterogeneous databases integration using grid computing.

Kakugawa, Fernando Ryoji 18 November 2010 (has links)
Databases are usually designed to support a specific application domain, which makes data access limited and turns database integration and data sharing into an arduous task. Several research efforts aim to integrate data, ranging from software built for one particular application to more radical solutions such as redesigning all the databases involved, showing that open questions remain and that the area is far from definitive solutions. This work presents concepts and strategies for the integration of heterogeneous databases and implements them as DIGE, a tool for building database systems that integrate different heterogeneous relational databases using grid computing.
The resulting system shares access while leaving the data stored at its place of origin, so users access data held at other institutions as if it were stored locally. The application programmer can access and manipulate the data conventionally, using SQL, without worrying about the location or schema of each database, and the system administrator can easily add or remove databases without requesting changes to the final application.
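The location transparency described above can be sketched with a toy mediator that fans a SQL query out to several member databases and merges the results. This is illustrative only: the real DIGE tool runs on a computational grid and handles schema heterogeneity, which this sketch does not; the table and data are invented.

```python
import sqlite3

class Mediator:
    """Minimal federation sketch: one logical table spread over several
    member databases; queries are fanned out and results merged."""
    def __init__(self, members):
        self.members = members  # list of open connections, one per site

    def query(self, sql, params=()):
        rows = []
        for conn in self.members:  # fan the same query out to every site
            rows.extend(conn.execute(sql, params).fetchall())
        return rows

site_a = sqlite3.connect(":memory:")
site_b = sqlite3.connect(":memory:")
for conn, names in ((site_a, ["ana", "bruno"]), (site_b, ["carla"])):
    conn.execute("CREATE TABLE student (name TEXT)")
    conn.executemany("INSERT INTO student VALUES (?)", [(n,) for n in names])

m = Mediator([site_a, site_b])
# The caller sees one logical 'student' table, regardless of where rows live.
print(sorted(r[0] for r in m.query("SELECT name FROM student")))
```

Adding or removing a site is just editing the member list, which mirrors how the administrator's task is described in the abstract.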
363

Optimization Strategies for Data Warehouse Maintenance in Distributed Environments

Liu, Bin 30 April 2002 (has links)
Data warehousing is becoming an increasingly important technology for information integration and data analysis. Given the dynamic nature of modern distributed environments, both source data updates and schema changes are likely to occur autonomously and even concurrently in different data sources. Current approaches to maintaining a data warehouse in such dynamic environments schedule maintenance processes sequentially, in isolation, with each process handling a single source update. This limits performance in a distributed environment, where the maintenance of each source update incurs network delay as well as I/O costs for each maintenance query. In this thesis work, we propose two optimization strategies that greatly improve data warehouse maintenance performance for a set of source updates in such dynamic environments; both support source data updates and schema changes. The first strategy, the parallel data warehouse maintainer, schedules multiple maintenance processes concurrently. Based on the DWMS_Transaction model, we formalize the constraints that exist in maintaining data and schema changes concurrently and propose several parallel maintenance process schedulers. The second strategy, the batch data warehouse maintainer, groups multiple source updates and maintains them within one maintenance process. We propose a technique for compacting the initial sequence of updates and then generating delta changes for each source, together with an algorithm to adapt and maintain the data warehouse extent using these delta changes; a further optimization applies shared queries within the maintenance process. We have designed and implemented both strategies and incorporated them into the existing DyDa/TxnWrap system.
We have conducted extensive experiments on both the parallel and the batch processing of a set of source updates to study the performance achievable under various system settings. Our findings include that parallel maintenance gains around 40-50% over sequential processing in environments with single-CPU machines and little network delay, i.e., without requiring any additional hardware resources, while batch processing achieves a 400-500% improvement over sequential maintenance, at the cost of less frequent refreshes of the data warehouse content.
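The batch maintainer's first step, compacting a sequence of source updates so that each affected key is maintained once instead of once per update, can be sketched as below. This is a simplified model with `(op, key, value)` tuples; the thesis's DWMS_Transaction model also covers schema changes, which are omitted here.

```python
def compact(updates):
    """Compact a sequence of (op, key, value) source updates into one
    delta change per key, so the warehouse is maintained once per key."""
    delta = {}
    for op, key, value in updates:
        if op == "insert":
            delta[key] = ("insert", value)
        elif op == "delete":
            if delta.get(key, ("",))[0] == "insert":
                del delta[key]  # insert then delete cancels out entirely
            else:
                delta[key] = ("delete", None)
        elif op == "update":
            prev = delta.get(key)
            if prev and prev[0] == "insert":
                delta[key] = ("insert", value)  # row is still new to the warehouse
            else:
                delta[key] = ("update", value)
    return delta

# Four raw updates collapse into a single delta change.
print(compact([("insert", "a", 1), ("update", "a", 2),
               ("insert", "b", 3), ("delete", "b", None)]))
```

Each remaining delta then costs one maintenance query instead of one per raw update, which is where the batch strategy saves network and I/O overhead.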
364

Formalisation, acquisition et mise en œuvre de connaissances pour l’intégration virtuelle de bases de données géographiques : les spécifications au cœur du processus d’intégration / Formalisation, acquisition and implementation of specifications knowledge for geographic databases integration

Abadie, Nathalie 20 November 2012 (has links)
This PhD thesis deals with topographic database integration, which consists of making the correspondence relationships between heterogeneous databases explicit so that they can be used jointly. Automating this integration process requires detecting and resolving the various kinds of heterogeneity that can exist between the topographic databases to be integrated, which in turn requires knowledge about the content of each database. The goal of this thesis is therefore to formalise, acquire and use the knowledge needed to carry out a virtual integration process for vector geographic databases.
The first step in integrating topographic databases is to match their conceptual schemas. To do so, we rely on a specific knowledge source: the specifications of the topographic databases, which describe the rules used to capture the data. These are first used to build a domain ontology of topography, learned from the texts of IGN's database specifications. In a first schema-matching approach, this ontology serves as background knowledge in an application based on terminological and structural matching techniques. In a second approach, inspired by semantics-based matching techniques, the ontology supports the representation, in the OWL 2 language, of the selection and geometric-representation rules for geographic entities found in the specifications, and their exploitation by a reasoning system.
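A terminological matcher of the kind used in the first approach can be sketched as follows. The synonym sets stand in for the background ontology, and the class names and threshold are invented for illustration; real topographic schemas and IGN specifications are far richer.

```python
from difflib import SequenceMatcher

# Toy background "ontology": synonym sets linking terms across vocabularies.
SYNONYMS = [{"road", "street", "highway"}, {"building", "edifice"}]

def normalise(term):
    """Map a schema term to a canonical representative of its synonym set."""
    term = term.lower()
    for group in SYNONYMS:
        if term in group:
            return min(group)  # any fixed representative works
    return term

def match(schema_a, schema_b, threshold=0.8):
    """Pair each class of schema_a with its best lexical match in schema_b."""
    pairs = []
    for a in schema_a:
        best = max(schema_b,
                   key=lambda b: SequenceMatcher(None, normalise(a), normalise(b)).ratio())
        score = SequenceMatcher(None, normalise(a), normalise(best)).ratio()
        if score >= threshold:
            pairs.append((a, best, round(score, 2)))
    return pairs

# 'Road' and 'Street' align via the ontology; 'Building' finds no counterpart.
print(match(["Road", "Building"], ["Street", "WaterArea"]))
```

Structural matching would then refine these candidate pairs using attribute and relationship evidence, which this lexical sketch leaves out.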
365

Targeted feedback collection for data source selection with uncertainty

Cortés Ríos, Julio César January 2018 (has links)
The aim of this dissertation is to contribute to research on pay-as-you-go data integration through an approach for targeted feedback collection (TFC), which aims to improve the cost-effectiveness of feedback collection, especially when there is uncertainty associated with characteristics of the integration artefacts. In particular, the dissertation focuses on the data source selection task in data integration. It shows how the impact of uncertainty about the evaluation of the characteristics of the candidate data sources, also known as data criteria, can be reduced in a cost-effective manner, thereby improving the solutions to the data source selection problem. It also shows how alternative approaches such as active learning and simple heuristics have drawbacks that shed light on the pursuit of better solutions. The dissertation describes the resulting TFC strategy and reports on its evaluation against alternative techniques. The evaluation scenarios vary from synthetic data sources with a single criterion and reliable feedback to real data sources with multiple criteria and unreliable feedback (such as can be obtained through crowdsourcing). The results confirm that the proposed TFC approach is cost-effective and leads to improved solutions for data source selection by seeking feedback that reduces uncertainty about the data criteria of the candidate data sources.
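One simple way to target feedback at uncertainty, in the spirit of TFC though not the dissertation's actual algorithm, is to request the next label for the source whose criterion estimate has the widest confidence interval. The sketch below uses the Wilson score interval over binary feedback; the source names and counts are invented.

```python
import math

def wilson_interval(pos, n, z=1.96):
    """Wilson score interval for a criterion estimated from n binary labels."""
    if n == 0:
        return 0.0, 1.0  # no feedback yet: maximal uncertainty
    p = pos / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def next_source_to_label(sources):
    """Targeted collection: ask about the source whose interval is widest."""
    def width(s):
        lo, hi = wilson_interval(s["pos"], s["n"])
        return hi - lo
    return max(sources, key=width)["name"]

sources = [{"name": "A", "pos": 8, "n": 10},
           {"name": "B", "pos": 1, "n": 2},
           {"name": "C", "pos": 50, "n": 60}]
print(next_source_to_label(sources))  # the source with the fewest labels so far
```

Each answered request shrinks one interval, so feedback is spent where it reduces uncertainty most, rather than spread uniformly over all candidates.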
366

Multisensory integration, predictive coding and the Bayesian brain : reintegrating the body image and body schema distinction into cognitive science

Watson, Ashleigh Louise January 2017 (has links)
The classic distinction between the body schema and the body image has received renewed interest in cognitive psychology. This is partly due to the attempts by the leading psychologist Charles Spence and his co-authors to synthesise a mounting body of research into the multisensory nature and functional properties of the neural structures in primate cortex that are sensitive and responsive to cross-modal stimuli generated from the body and from objects located close to it, and partly due to the famous rubber hand illusion, which purported to show how the perception and understanding of what counts as one's body, i.e. our body image, can be manipulated to include foreign, body-part-like objects such as a rubber hand. This approach was intended to settle age-old questions about how the body schema, the sub-personal sensorimotor system that shapes, facilitates and regulates motor control, is implemented in the brain, and to address historic confusions about how the body schema should be understood as an explanatory concept, as well as the problems surrounding the body schema and image distinction arising from the persistent conflation of the two concepts. However, after offering several proposals as to how the body schema should be used to organise and interpret the empirical data, the distinction fell out of favour with Spence and his colleagues on the grounds of the very problems they intended to resolve. Their proposed solution is an alternative theoretical framework that, I shall argue, never materialised. Instead, the various definitions they disseminate simply serve to perpetuate the same problems and confusions about the body schema. Thus the current literature on the body image and schema in cognitive psychology is in dire need of a conceptual framework that would help us situate and interpret the important empirical data.
I propose that we revisit the philosophical debates inspired by the philosopher Shaun Gallagher as part of his project to provide a conceptual analysis of the body schema and image distinction and to vindicate its status as an important explanatory device for the ambitions of embodied cognition. Gallagher's analysis opens up important questions about how the sub-personal multisensory processes of the body schema not only facilitate moment-by-moment motor behaviours but also shape and optimise motor control across developmental timelines, as well as about the importance of an agent's embodied configuration and particular eco-niche for shaping and facilitating its motor behaviours. The second important argument of the thesis is that the response to Gallagher's analysis has served to suppress the line of research he inspired, because the questions his analysis raises have been overshadowed by more general disputes between Gallagher and his opponents about the shape an analysis of the body schema from the perspective of embodied cognition should take. As such, potentially promising lines of research on the body schema have since dried up. To make progress on the issues laid out in the first and second stages of the thesis, the third stage explores the seminal Bayesian approach to cross-modal cue optimisation as it applies to object perception (Banks & Ernst, 2002) and the recent extension of this paradigm to the multimodal sensorimotor processes that underpin motor behaviour in action-oriented cognitive science (e.g., Friston, 2010).
The conclusion of the thesis is that the move from an embodied to an action-oriented analysis of the body schema, and of the conceptual distinction of which it is part, provides the right kind of theoretical resources to pursue fruitful avenues of research that address the questions set out by Gallagher's analysis whilst avoiding (some of) the pitfalls that beset the embodied approach. In the final chapter I use this model of the body schema to illustrate how it can provide the basis for working back up towards a comprehensive theory of the body image and schema distinction, which I then bring to bear on current, as-yet-unaddressed, issues in developmental psychology.
367

Evolução de esquemas em bancos de dados orientados a objetos utilizando versões / Schema evolution in object oriented data bases using versions

Fornari, Miguel Rodrigues January 1993 (has links)
This work presents a schema evolution mechanism for object-oriented databases. Conceptual schema modifications may be needed at any moment in the life cycle of a system, for example to incorporate new specifications and user requests, to reuse classes developed for other systems, or to correct modelling errors. Such a mechanism must allow the widest possible range of operations while preserving a high degree of logical data independence, so as to minimise changes to the application programs that use the schema. The data model considered is similar to those of other object-oriented systems such as Orion and O2: it allows class definitions, simple and user-constructed attributes, methods as a means of encapsulating objects, and multiple inheritance of attributes and methods by subclasses. To keep the history of modifications, instances, classes and methods are versionable. The versions of an object are organised in a directed acyclic graph; the current version is either the most recent (the default) or one chosen by the user. Schema contexts are introduced to keep the use of versions of different objects coherent, ensuring that a method version adequate for an instance version is selected. The proposal is based on schema invariants: basic conditions that must always hold for the database to be considered valid and consistent.
Structural and behavioural invariants are defined and checked by the system. The designer has a complete set of schema-change operations, whose semantics, options and effects are described in detail, and auxiliary mechanisms are sketched to increase the transparency of schema changes. As an application of the generic mechanism, another one is developed for the STAR environment: its data model and its version and methodology managers are explained, taking the object schema as a conceptual schema and the methodology manager's tasks as methods, again with invariants used to validate the modifications performed. The mechanism proved extremely flexible, capable of maintaining not only the development history of an application but also alternatives of a system being built on an object-oriented database, satisfactorily meeting the basic requirements initially defined.
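The version structure the mechanism relies on, an acyclic graph of versions per schema object with the newest version as the default unless the user pins another, can be sketched as below. Class and attribute names are invented; the real mechanism also versions instances and methods and enforces invariants, which this sketch omits.

```python
class Versioned:
    """Sketch of a versioned schema object: versions form an acyclic graph;
    the newest version is the default unless the user pins one."""
    def __init__(self, name, definition):
        self.name = name
        self.versions = [definition]  # index doubles as the version id
        self.parents = {0: []}        # DAG edges: version id -> parent ids
        self.pinned = None            # user-chosen current version, if any

    def derive(self, definition, parents):
        """Create a new version derived from one or more existing ones."""
        vid = len(self.versions)
        self.versions.append(definition)
        self.parents[vid] = list(parents)
        return vid

    def current(self):
        """Return the pinned version if set, otherwise the newest one."""
        vid = self.pinned if self.pinned is not None else len(self.versions) - 1
        return self.versions[vid]

person = Versioned("Person", {"name": "str"})
v1 = person.derive({"name": "str", "age": "int"}, parents=[0])
person.current()  # newest version is the default current version
```

A schema context would then record which version of each object (and of each method) is meant to be used together, which is what keeps applications coherent across evolutions.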
368

Body schema plasticity after tool-use / Plasticité du schéma corporel suite à l’utilisation d’outils

Cardinali, Lucilla 25 November 2011 (has links)
We all have a body: our own body, and just one body. Through it we move, we interact with the world and with other people, we perceive; in short, we live. It is a unique and essential object. Yet while we have only one physical body, we have several representations of it in the brain. There is little agreement in the literature about the exact number of body representations, but wide agreement that there is more than one.
Multi-component models of body representation rest on the notion, supported by scientific evidence, that different activities demand and rely on specifically adapted representations. In my thesis, I studied one particular body representation whose main function is to provide knowledge of the body for action planning and execution: the Body Schema. Using a powerful behavioural technique, kinematics, I was able to show for the first time that the Body Schema is an extremely plastic representation, quickly and efficiently updated once the body configuration changes, and able to incorporate tools when they are used to perform an action. With a series of kinematic studies, I then characterised this plasticity, contributing to unveiling the sensory information that is necessary and the aspects of tools that are incorporated into the body representation. As a result of my thesis, I suggest that a clearer operational definition of the Body Schema, as an action-devoted repertoire of effector representations, is possible, particularly thanks to its plastic features.
369

Dánské předložky. Sémantická analýza / Danish prepositions. Semantic analysis

Bednářová, Marie January 2012 (has links)
In this thesis, we examine the possibilities of a semantic analysis of the spatial configurations represented by Danish prepositions. In the theoretical part, we explore the means that cognitive linguistics uses to describe meaning in general and spatial configuration in particular, and we briefly present the results of several analyses of English spatial expressions. In the practical part, we apply the terms and methods of cognitive linguistics to material gathered from Danish dictionaries, grammar handbooks and the Danish Corpus, in order to analyse the spatial senses represented by the Danish preposition i. The result is a structured overview of the preposition's distinct spatial senses, accompanied by a proposal for the preposition's semantic network. KEY WORDS: cognitive semantics, spatial scene, schema, spatial configuration, prepositions, Danish
