  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Profinite étale cobordism

Quick, Gereon. Unknown Date (has links) (PDF)
Münster (Westfalen), University, Diss., 2005.
142

Functional network analyses and dynamical modeling of proprioceptive updating of the body schema

Vaisman, Lev 12 March 2016 (has links)
Proprioception is the ability to perceive the position and speed of body parts; it is important for the construction of the body schema in the brain, and proper updating of the body schema is necessary for appropriate voluntary movement. However, the mechanisms mediating such updating are not well understood. To study these mechanisms, electroencephalography (EEG) and evoked-potential studies were employed when the body part was at rest, and kinematic studies were performed when it was in motion. An experimental approach to elicit proprioceptive P300 evoked potentials was developed, providing evidence that the processing of novel passive movements is similar to the processing of novel visual and auditory stimuli. The latencies of the proprioceptive P300 potentials were found to be greater than those elicited by auditory stimuli, but not different from those elicited by visual stimuli. The features of the functional networks that generated the P300s were analyzed for each modality. Cross-correlation networks showed both common features, e.g. connections between frontal and parietal areas, and stimulus-specific features, e.g. increased connectivity at temporal electrodes in the visual and auditory networks but not in the proprioceptive ones. The magnitude-of-coherency networks showed a reduction in alpha-band connectivity for most electrode groupings across all stimulus modalities, but did not demonstrate modality-specific features. The kinematic study compared the performance of 19 models previously proposed in the literature for movements at the shoulder and elbow joints in terms of their ability to reconstruct the speed profiles of wrist pointing movements; lognormal and beta function models were found to be most suitable for wrist speed profile modeling. In addition, an investigation of blinking rates during the P300 recordings revealed significantly lower rates in left-handed participants compared to right-handed ones. Future work will include expanding the experimental and analytical methodologies to different kinds of proprioceptive stimuli (displacements and speeds) and experimental paradigms (error-related negativity potentials), comparing the models of speed profiles produced by the feet to those of the wrists, and replicating the observations on blinking rates in a larger-scale study.
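
The speed-profile modeling result lends itself to a brief illustration. Below is a minimal sketch, in Python with SciPy, of fitting a lognormal model to a bell-shaped wrist-speed curve; the function `lognormal_speed`, the synthetic data, and all parameter values are invented for illustration and are not taken from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_speed(t, A, mu, sigma):
    """Lognormal speed profile -- one of the model families the study
    found most suitable for wrist speed profiles."""
    return (A / (t * sigma * np.sqrt(2 * np.pi))) * np.exp(
        -((np.log(t) - mu) ** 2) / (2 * sigma ** 2))

# Synthetic wrist-speed samples standing in for motion-capture data.
t = np.linspace(0.05, 1.5, 150)
clean = lognormal_speed(t, A=0.12, mu=np.log(0.4), sigma=0.35)
noisy = clean + np.random.default_rng(0).normal(0.0, 0.002, t.size)

# Nonlinear least-squares fit; reasonable initial guesses matter here.
popt, _ = curve_fit(lognormal_speed, t, noisy, p0=[0.1, np.log(0.5), 0.4])
print("fitted A, mu, sigma:", np.round(popt, 3))
```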
143

The Principles of Self-Organization of Memories in Neural Networks for Generating and Performing Cognitive Strategies

Herpich, Juliane 07 December 2018 (has links)
No description available.
144

A context-based name resolution approach for semantic schema integration

BELIAN, Rosalie Barreto 31 January 2008 (has links)
One of the goals of the Semantic Web is to provide a wide diversity of services from different domains on the Web. Most of these services are collaborative, with tasks based on decision-making processes. These decisions, in turn, are better grounded when they take into account as much information as possible related to the tasks being executed. This scenario encourages the development of techniques and tools oriented toward information integration, seeking solutions for the heterogeneity of data sources. The mediation-based architecture used in the development of information integration systems aims to isolate the user from the distributed data sources through an intermediate software layer called the mediator. The mediator in an information integration system uses a global schema for executing user queries, which are reformulated into sub-queries according to the local schemas of the data sources. In this case, a schema integration process generates the global schema (mediation schema) as the result of integrating the individual schemas of the data sources. The major problem in schema integration is the heterogeneity of the local data sources, so semantic resolution is paramount: purely structural and syntactic methods for schema integration are of little use unless the real meaning of the schema elements is identified first. A schema integration process produces an integrated global schema and a set of inter-schema mappings, and usually comprises some basic phases: pre-integration, schema comparison, mapping, schema unification, and mediation schema generation. In schema integration, name resolution is the process that determines which real-world entity a given schema element refers to, taking into account a set of available semantic information. The semantic information needed for name resolution is generally obtained from generic and/or domain-specific vocabularies. Element names can have different meanings depending on the semantic context to which they are related. Thus, using contextual information in addition to domain information can bring greater precision to the interpretation of elements, allowing their meaning to change according to a given context. This work proposes a context-based name resolution approach for schema integration. One of its strengths is the modeling and use of the contextual information required for name resolution at different stages of the schema integration process. The contextual information is modeled using an ontology, which favors the use of inference mechanisms and the sharing and reuse of information. Furthermore, this work proposes a simple and extensible schema integration process, so that its development could concentrate mainly on the requirements related to name resolution.
This process was developed for a mediation-based information integration system that adopts the GAV approach and XML as the common model for data interchange and for integrating data sources on the Web.
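
To make the context-based name-resolution idea concrete, here is a minimal sketch in Python. The toy lookup table and example names are invented; the thesis itself models context with a full ontology and inference mechanisms, not a dictionary.

```python
# A toy context ontology: (element name, context term) -> concept.
ONTOLOGY = {
    ("title", "book"):    "title of a publication",
    ("title", "person"):  "honorific of a person",
    ("name", "author"):   "personal name",
    ("name", "journal"):  "journal title",
}

def resolve_name(element, context_terms):
    """Return the first concept whose context term appears among the
    element's contextual information (e.g. its ancestors in the schema)."""
    for term in context_terms:
        concept = ONTOLOGY.get((element, term))
        if concept:
            return concept
    return None  # unresolved: fall back to syntactic matching

# The same element name resolves differently under different contexts.
print(resolve_name("title", ["book", "library"]))  # title of a publication
print(resolve_name("title", ["person", "staff"]))  # honorific of a person
```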
145

Versionstransparens i evolutionära relationsdatabaser

Grehag, Björn January 2003 (has links)
Many database systems are constantly subject to change, and these changes affect the database schema; this has led to the development of support for such changes. Schema versioning is the most comprehensive form of support for changes to database schemas: the DBMS can manage several versions of the database schema. One problem with schema versioning is that the user must know which version a given query should be posed against for the answer to be correct. This work investigates for which types of changes to a relational database this problem can be avoided by achieving version transparency, meaning that version management is invisible to the user. The investigation was carried out against a model that uses only information from the queries posed to the database to find the correct version to pose each query against. The results show that version transparency can be achieved for all types of changes except changes to the datatypes of columns.
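
As an illustration of the version-transparency idea, the following minimal sketch routes a query to a schema version using only the columns the query references, consistent with the query-only model studied. The table history and routing rule are invented for illustration, not taken from the thesis.

```python
# Column sets for each version of an evolving table (hypothetical history).
VERSIONS = {
    1: {"id", "name", "phone"},
    2: {"id", "name", "phone", "email"},       # column added
    3: {"id", "full_name", "phone", "email"},  # column renamed
}

def route_query(referenced_columns):
    """Pick the newest schema version containing every column the query
    references -- the version choice stays invisible to the user."""
    for version in sorted(VERSIONS, reverse=True):
        if referenced_columns <= VERSIONS[version]:
            return version
    raise LookupError("no version satisfies the query")

print(route_query({"id", "name", "email"}))  # -> 2
print(route_query({"id", "full_name"}))      # -> 3
```

Note that a column datatype change leaves the column names untouched, so name-based routing like this cannot tell the versions apart, in line with the finding that datatype changes are the one case where version transparency fails.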
146

XML-baserade dataöverföringar i flera steg

Jildenhed, Mattias January 2004 (has links)
The need for data exchange between different systems is constantly growing, so more and more systems are built with the ability to exchange and transfer data via XML. Since different systems store data in different ways, the structural and content-related differences must be handled before data can be transferred. The purpose of this work is to investigate how multi-step data transfers between XML documents are affected by structural or content-related differences, and to show in which cases data cannot be transferred correctly. The study uses an experimental method, with experiments carried out in an application developed for the purpose. The results show that problems can arise when the source and target structures store an element or attribute with different datatypes, while the other structural differences generate few problems. The content-related differences mean that data cannot be identified when an attribute is missing from the source or target structure.
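
The datatype-mismatch failure mode the study identifies is easy to demonstrate. Below is a minimal sketch using Python's standard `xml.etree.ElementTree`; the element names and the target-schema assumption are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Source document stores the quantity as free text.
source = ET.fromstring("<order><qty>three</qty></order>")
target_expects_int = True  # assume the target schema declares qty as an integer

value = source.findtext("qty")
if target_expects_int:
    try:
        value = int(value)  # fails: 'three' has no integer representation
    except ValueError:
        print(f"cannot transfer qty={value!r}: datatype mismatch")
```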
147

The Influence of Negative Information on Trust in Virtual Teams

Lee, Tiffany T. 28 October 2015 (has links)
Organizational work is characterized by both positive and negative work behaviors from employees. The same may be said of work done in virtual teams, where computer-mediated communication among team members can be particularly uncivil and inflammatory (Wilson, Straus, & McEvily, 2006). Accordingly, trust has been theorized to be more difficult to develop in these types of teams than in traditional face-to-face teams. Using a computer simulation of a collaborative team task, this study examined how individuals in virtual teams integrate conflicting pieces of positive and negative information about a teammate into one overall rating of trust. Data from 240 individuals were analyzed to examine the influence of these behaviors on levels of trust toward a target teammate. First, evidence of the dynamic nature of trust, i.e., trust quickly developing and declining, was observed in a virtual team. Second, a negativity effect was found, whereby a negative behavior was given more weight in ratings of trust than a positive behavior. Next, a hierarchically restrictive schema was offered as a plausible explanation for the negativity effect, since an initially observed behavior creates asymmetrical expectations about subsequent behavior. Lastly, a significant negativity effect was not found when the two behaviors were performed, one each, by a pair of unrelated persons or by a pair of related persons with entitativity.
148

Data Perspectives of Workflow Schema Evolution: Cases of Task Deletion and Insertion

Arunagiri, Aravindhan January 2013 (has links) (PDF)
Dynamic changes in the business environment require business processes to be kept up-to-date, and the workflow management systems supporting these processes need to adapt to such changes rapidly. Workflow management systems, however, lack the ability to dynamically propagate process changes to their process model schemas (workflow templates). The literature on workflow schema evolution emphasizes the impact of changes in control flow, with very little attention to other aspects of a workflow schema. This thesis studies the data aspect (data flow and data model) of a workflow schema during its evolution. Workflow schema changes can lead to inconsistencies between the underlying database model and the workflow. A rather straightforward approach would be to abandon the existing database model and start afresh; however, this introduces data persistence issues, and there can be significant system downtime involved in migrating data from the old database model to the new one. In this research we develop an approach to address this problem. Business changes demand various types of control-flow changes to the business process model (workflow schema), including task insertion, deletion, swapping, movement, replacement, extraction, in-lining, and parallelizing. Many control-flow changes can be made using combinations of simple task insertions and deletions, while some, like embedding a task in a loop or conditional branch and parallelizing tasks, also require adding or removing control dependencies between tasks. Since many control-flow change patterns involve task insertion and deletion at their core, this thesis studies their impact on the underlying data model and proposes algorithms to dynamically handle the resulting changes in the underlying relational database schema. First, we identify the basic change patterns that can be implemented using atomic task insertions and deletions. Then we characterize these basic patterns in terms of the data flow anomalies (missing, redundant, and conflicting data) they can generate. Data schema compliance (DSC) criteria are developed to identify the data changes (i) that make the underlying database schema inconsistent with the modified workflow and (ii) that generate the aforementioned data anomalies. The DSC criteria characterize the change patterns in terms of their ability to work with the current relational data model, and state the properties required of the modified workflow to be consistent with the underlying database model; the data of any workflow instance conforming to the DSC criteria can be directly accommodated in the database model. The data anomalies (of task insertion and deletion) identified using DSC are handled dynamically using corresponding data adaptation algorithms, which use the functional dependency constraints in the relational database model to resolve these anomalies. Data changes handled this way conform to DSC and can be directly accommodated in the underlying database schema. Hence, with this approach the workflow can be modified (using task insertion and deletion) and the corresponding data changes implemented on-the-fly using the data adaptation algorithms. The same data model is thus evolved without being abandoned even after modification of the workflow schema, preserving the persistence of old data in the existing database schema.
Detailed implementation procedures for deploying the data adaptation algorithms are presented with illustrative examples.
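
The missing/redundant anomaly classes can be illustrated with a small sketch. It assumes a toy workflow where each task declares the data items it produces and consumes; this dict representation is invented for illustration and is not the thesis's formalism.

```python
# Each task lists the data items it produces and consumes (toy workflow).
WORKFLOW = {
    "enter_order":  {"produces": {"order_id"}, "consumes": set()},
    "check_credit": {"produces": {"credit_ok"}, "consumes": {"order_id"}},
    "ship":         {"produces": set(), "consumes": {"order_id", "credit_ok"}},
}

def data_anomalies(workflow, deleted_task):
    """Report missing data (still consumed downstream but no longer
    produced) and redundant data (produced but never consumed) after
    deleting a task -- two of the anomaly classes characterized above."""
    remaining = {t: d for t, d in workflow.items() if t != deleted_task}
    produced = set().union(*(d["produces"] for d in remaining.values()))
    consumed = set().union(*(d["consumes"] for d in remaining.values()))
    return {"missing": consumed - produced, "redundant": produced - consumed}

print(data_anomalies(WORKFLOW, "check_credit"))
# {'missing': {'credit_ok'}, 'redundant': set()}
```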
149

Tabular Representation of Schema Mappings: Semantics and Algorithms

Rahman, Md. Anisur January 2011 (has links)
Our thesis investigates a mechanism for representing schema mappings in tabular form and assesses the utility of the new representation. A schema mapping is a high-level specification that describes the relationship between two database schemas; schema mappings constitute essential building blocks of data integration, data exchange, and peer-to-peer data sharing systems. Global-and-local-as-view (GLAV) is one approach for specifying schema mappings. Tableaux are used for expressing queries and functional dependencies on a single database in tabular form. In this thesis, we first introduce a tabular representation of GLAV mappings. We find that this tabular representation helps solve many mapping-related algorithmic and semantic problems; a well-known example is finding the minimal instance of the target schema for a given instance of the source schema and a set of mappings between the source and target schemas. Second, we show that our proposed tabular mapping can be used as an operator on an instance of the source schema to produce an instance of the target schema that is `minimal' and `most general' in nature. There exists a tableaux-based mechanism for deciding the equivalence of two queries; third, we extend that mechanism to deduce the equivalence of two schema mappings using their corresponding tabular representations. Sometimes there are redundant conjuncts in a schema mapping, which make data exchange, data integration, and data sharing operations more time-consuming; fourth, we present an algorithm that utilizes the tabular representations to reduce the number of constraints in a schema mapping. At present, either schema-level mappings or data-level mappings are used for data sharing; fifth, we introduce and give the semantics of bi-level mappings, which combine schema-level and data-level mappings, and show that bi-level mappings are more effective for data sharing systems. Finally, we implemented our algorithms and developed a software prototype to evaluate our proposed strategies.
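
The `minimal' and `most general' target instance can be sketched with a chase-like step that copies shared attributes and fills the rest with fresh labeled nulls. This is a simplified Python illustration of the general idea, not the thesis's tableau formalism.

```python
import itertools

_null = itertools.count()  # generator of fresh labeled nulls

def apply_mapping(source_tuples, source_attrs, target_attrs):
    """One chase-like step: for each source tuple, emit a target tuple,
    copying shared attributes and filling the rest with fresh labeled
    nulls -- yielding a minimal, most general target instance."""
    target = []
    for row in source_tuples:
        src = dict(zip(source_attrs, row))
        target.append(tuple(
            src.get(a, f"N{next(_null)}") for a in target_attrs))
    return target

# Source schema Emp(name, dept); target schema Person(name, dept, office).
emps = [("alice", "sales"), ("bob", "hr")]
print(apply_mapping(emps, ["name", "dept"], ["name", "dept", "office"]))
# [('alice', 'sales', 'N0'), ('bob', 'hr', 'N1')]
```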
150

A Practical Approach to Merging Multidimensional Data Models

Mireku Kwakye, Michael January 2011 (has links)
Schema merging is the process of incorporating data models into an integrated, consistent schema from which query solutions satisfying all incorporated models can be derived. The efficiency of such a process relies on the effective semantic representation of the chosen data models, as well as the mapping relationships between the elements of the source data models. Consider a scenario where, as a result of company mergers or acquisitions, a number of related but possibly disparate data marts need to be integrated into a global data warehouse. The ability to retrieve data across these disparate but related data marts poses an important challenge. Intuitively, forming an all-inclusive data warehouse involves the tedious tasks of identifying related fact and dimension table attributes, as well as designing a schema merge algorithm for the integration. Additionally, evaluating the combined set of correct answers to queries likely to be independently posed to such data marts becomes difficult to achieve. Model management refers to a high-level, abstract programming language designed to efficiently manipulate schemas and mappings. In particular, model management operations such as match, compose mappings, apply functions, and merge offer a way to handle the above-mentioned data integration problem within the domain of data warehousing. In this research, we introduce a methodology, based on model management, for integrating star-schema source data marts into a single consolidated data warehouse. Our methodology develops three main streamlined steps to facilitate the generation of a global data warehouse: we adopt techniques for deriving attribute correspondences and for schema mapping discovery, and we formulate and design a merge algorithm based on multidimensional star schemas, which is the core contribution of this research. Our approach focuses on delivering a polynomial-time solution needed for the expected volume of data and its associated large-scale query processing. The experimental evaluation shows that an integrated schema, alongside instance data, can be derived based on the type of mappings adopted in the mapping discovery step. The adoption of global-and-local-as-view (GLAV) mapping models delivered a maximally-contained or exact representation of all fact and dimensional instance data tuples needed in query processing on the integrated data warehouse. Additionally, different forms of conflicts, such as semantic conflicts for related or unrelated dimension entities and descriptive conflicts for differing attribute data types, were encountered and resolved in the developed solution. Finally, this research has highlighted some critical and inherent issues regarding functional dependencies in mapping models, integrity constraints at the source data marts, and multi-valued dimension attributes; these issues were encountered during the integration of the source data marts, when evaluating queries processed on the merged data warehouse against those on the independent data marts.
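
A drastically simplified sketch of the merge step: given attribute correspondences between two fact tables, rename one side to canonical names and union the rows. The dict-based toy tables are invented for illustration; the thesis's actual merge algorithm operates on full multidimensional star schemas with dimension tables and mapping models.

```python
def merge_facts(facts_a, facts_b, correspondences):
    """Merge two fact tables (lists of dicts): rename attributes of B to
    their A-side correspondents, then union the rows, padding attributes
    absent from one source with None."""
    renamed_b = [{correspondences.get(k, k): v for k, v in row.items()}
                 for row in facts_b]
    all_attrs = {k for row in facts_a + renamed_b for k in row}
    return [{a: row.get(a) for a in sorted(all_attrs)}
            for row in facts_a + renamed_b]

sales_a = [{"store": "S1", "amount": 100}]
sales_b = [{"shop": "S9", "amount": 40, "promo": "X"}]
for row in merge_facts(sales_a, sales_b, {"shop": "store"}):
    print(row)
# {'amount': 100, 'promo': None, 'store': 'S1'}
# {'amount': 40, 'promo': 'X', 'store': 'S9'}
```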
