About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
21

Semantics-based change-merging of abstract data types

Chadha, Vineet. January 2002 (has links)
Thesis (M.S.)--Mississippi State University. Department of Computer Science. / Title from title screen. Includes bibliographical references.
22

Designing multi-sensory displays for abstract data

Nesbitt, Keith January 2003 (has links)
Doctor of Philosophy / The rapid increase in available information has led to many attempts to automatically locate patterns in large, abstract, multi-attributed information spaces. These techniques, often called data mining, have met with varying degrees of success. An alternative to automatic pattern detection is to keep the user in the exploration loop by developing displays for perceptual data mining. This approach allows a domain expert to search the data for useful relationships, and can be effective when automated rules are hard to define. However, designing models of the abstract data and defining appropriate displays are critical tasks in building a useful system. Designing displays of abstract data is especially difficult when multi-sensory interaction is considered. New technology, such as Virtual Environments, enables such multi-sensory interaction: interfaces can be designed that immerse the user in a 3D space and provide visual, auditory and haptic (tactile) feedback. It has been a goal of Virtual Environments to use multi-sensory interaction to increase the human-to-computer bandwidth, which may help the user to understand large information spaces and find patterns in them. While the motivation is simple enough, actually designing appropriate mappings between the abstract information and the human sensory channels is quite difficult. Designing intuitive multi-sensory displays of abstract data is complex and must carefully consider human perceptual capabilities; yet we interact with the real world every day in a multi-sensory way, and metaphors can describe mappings between the natural world and an abstract information space. This thesis develops a division of the multi-sensory design space called the MS-Taxonomy, a concept map of the design space based on temporal, spatial and direct metaphors. The detailed concepts within the taxonomy allow low-level design issues to be discussed, while the same concepts abstract to higher levels, allowing general design issues to be compared and discussed across the different senses. The MS-Taxonomy thus categorises the multi-sensory design options. However, designing effective multi-sensory displays requires more than a thorough understanding of design options: it is also useful to have guidelines to follow and a process that describes the design steps. This thesis uses the structure of the MS-Taxonomy to develop the MS-Guidelines and the MS-Process. The MS-Guidelines capture design recommendations and the problems associated with different design choices, and the MS-Process integrates the MS-Guidelines into a methodology for developing and evaluating multi-sensory displays. A detailed case study validates the MS-Taxonomy, the MS-Guidelines and the MS-Process. The case study explores the design of multi-sensory displays in a domain where users wish to explore abstract data for patterns: Technical Analysis, the interpretation of patterns in stock market data. Following the MS-Process and using the MS-Guidelines, new multi-sensory displays are designed for pattern detection in stock market data. The outcomes include novel haptic-visual and auditory-visual designs that are prototyped and evaluated.
24

Designing multi-sensory displays for abstract data

Nesbitt, Keith V. January 2003 (has links)
Thesis (Ph. D.)--University of Sydney, 2003. / Title from title screen (viewed April 6, 2009). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Information Technologies, Faculty of Science. Includes bibliographical references. Also available in print form.
25

Alinhamento múltiplo de proteínas via algoritmo genético baseado em tipos abstratos de dados. / Multiple protein alignment by a genetic algorithm based on abstract data types.

Santos, Danielle Furtado dos 07 November 2008 (has links)
This work presents a new model for the multiple protein alignment problem using a genetic algorithm based on abstract data types (GAADT). The model uses a structure called a chromosome, which is a set of genes, each in turn composed of elementary units called bases. Each chromosome represents a possible alignment of the input sequences, and its fitness is calculated from how the bases are arranged in the alignment. The model differs from other multiple-alignment paradigms by aligning the input sequences as a whole, evaluating the columns (genes) that make up the alignment (chromosome), instead of progressively or hierarchically aligning pairs of sequences. Notable characteristics of the model are its organized data structure, well-defined genetic operations over the modelled types, and convergence to solutions close to those found by other tools, despite using less domain knowledge than existing algorithms. The model was validated by comparing its alignments against reference alignments for a subset of protein families. / Fundação de Amparo a Pesquisa do Estado de Alagoas
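As a rough illustration of the data model this abstract describes (a sketch under assumed definitions, not the thesis's actual GAADT implementation), the chromosome/gene/base hierarchy might look as follows in C, with a toy sum-of-pairs column score standing in for the unspecified fitness function:

```c
#include <stddef.h>

/* A base is the elementary unit: one residue (or a gap) in a column. */
typedef struct { char residue; } Base;

/* A gene is one alignment column: one base per input sequence. */
typedef struct {
    Base  *bases;    /* length = number of sequences */
    size_t n_bases;
} Gene;

/* A chromosome is a whole candidate alignment: a sequence of columns. */
typedef struct {
    Gene  *genes;    /* length = alignment length */
    size_t n_genes;
    double fitness;  /* computed from how bases are arranged per column */
} Chromosome;

/* Toy column score: reward identical residues aligned in one column
 * (the thesis's real fitness function is not given in the abstract). */
static double score_gene(const Gene *g) {
    double s = 0.0;
    for (size_t i = 0; i < g->n_bases; i++)
        for (size_t j = i + 1; j < g->n_bases; j++)
            if (g->bases[i].residue != '-' &&
                g->bases[i].residue == g->bases[j].residue)
                s += 1.0;
    return s;
}

/* Whole-alignment fitness: every column (gene) is evaluated, matching
 * the "align the sequences as a whole" approach described above. */
double chromosome_fitness(Chromosome *c) {
    double s = 0.0;
    for (size_t i = 0; i < c->n_genes; i++)
        s += score_gene(&c->genes[i]);
    return c->fitness = s;
}
```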
26

Compilador ASN.1 e codificador/decodificador para BER / ASN.1 compiler and encoder/decoder for BER

Restovic Valderrama, Maria Inés 09 March 1992 (has links)
Advisor: Manuel de Jesus Mendes / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / This work presents a tool called the ASN.1 Compiler, whose purpose is to provide a concrete representation for the abstract syntax ASN.1, so that the PDU specifications of application protocols, generally written in ASN.1, can be translated to the C language and used computationally. One of the main functions of the presentation layer of a communication protocol is to encode the values of these PDUs according to the rules defined by the BER standard. The compiler's second task is therefore to generate the specific encoding and decoding routines for each compiled PDU, using a set of functions provided by two auxiliary libraries that perform these conversions. / Master's in Electrical Engineering (Automation)
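For context, BER serializes every value as a tag-length-value (TLV) triple. Below is a minimal C sketch of the standard BER length-octet rule that any generated codec of this kind must implement; the function names are illustrative, not taken from the dissertation:

```c
#include <stddef.h>
#include <stdint.h>

/* Write BER length octets into out (assumed large enough): short form
 * (one octet) when len < 128, long form (0x80 | byte count, followed
 * by the length in big-endian bytes) otherwise. Returns bytes written. */
size_t ber_write_length(uint8_t *out, size_t len) {
    if (len < 0x80) {                    /* short form */
        out[0] = (uint8_t)len;
        return 1;
    }
    uint8_t tmp[sizeof(size_t)];
    size_t n = 0;
    while (len > 0) {                    /* collect length bytes */
        tmp[n++] = (uint8_t)(len & 0xFF);
        len >>= 8;
    }
    out[0] = (uint8_t)(0x80 | n);        /* long-form header octet */
    for (size_t i = 0; i < n; i++)       /* emit big-endian */
        out[1 + i] = tmp[n - 1 - i];
    return 1 + n;
}

/* Encode one primitive TLV, e.g. tag 0x02 for an ASN.1 INTEGER. */
size_t ber_encode_tlv(uint8_t *out, uint8_t tag,
                      const uint8_t *val, size_t len) {
    size_t n = 0;
    out[n++] = tag;
    n += ber_write_length(out + n, len);
    for (size_t i = 0; i < len; i++)
        out[n + i] = val[i];
    return n + len;
}
```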
27

Investigating tools and techniques for improving software performance on multiprocessor computer systems

Tristram, Waide Barrington January 2012 (has links)
The availability of modern commodity multicore processors and multiprocessor computer systems has resulted in the widespread adoption of parallel computers in a variety of environments, ranging from the home to workstation and server environments in particular. Unfortunately, parallel programming is harder and requires more expertise than the traditional sequential programming model. The variety of tools and parallel programming models available to the programmer further complicates the issue. The primary goal of this research was to identify and describe a selection of parallel programming tools and techniques to aid novice parallel programmers in developing efficient parallel C/C++ programs for the Linux platform. This was achieved by highlighting and describing the key concepts and hardware factors that affect parallel programming, providing a brief survey of commonly available software development tools, parallel programming models and libraries, and presenting structured approaches to software performance tuning and parallel programming. Finally, the performance of several parallel programming models and libraries was investigated, along with the programming effort required to implement solutions using the respective models. A quantitative research methodology was applied to the investigation of the performance and programming effort associated with the selected parallel programming models and libraries, which included automatic parallelisation by the compiler, Boost Threads, Cilk Plus, OpenMP, POSIX threads (Pthreads), and Threading Building Blocks (TBB). Additionally, the performance of the GNU C/C++ and Intel C/C++ compilers was examined. The results revealed that the choice of parallel programming model or library depends on the type of problem being solved, and that there is no overall best choice for all classes of problem. However, the results also indicate that parallel programming models with higher levels of abstraction require less programming effort and provide performance similar to that of explicit threading models. The principal conclusion was that problem analysis and parallel design are important factors in the selection of the parallel programming model and tools, but that models with higher levels of abstraction, such as OpenMP and Threading Building Blocks, are favoured.
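As a minimal illustration of the abstraction gap the thesis measures (a generic example, not code from the study), an OpenMP parallel reduction needs a single directive where an explicit Pthreads version would need manual thread creation, work partitioning, and joining:

```c
#include <stdio.h>
#include <omp.h>

/* Sum an array in parallel: OpenMP expresses the parallelisation and
 * the reduction in one directive. Compile with: gcc -fopenmp sum.c */
int main(void) {
    enum { N = 1000000 };
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 0.5;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (threads available: %d)\n",
           sum, omp_get_max_threads());
    return 0;
}
```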
28

Ähnlichkeitsmessung von ausgewählten Datentypen in Datenbanksystemen zur Berechnung des Grades der Anonymisierung / Measuring the similarity of selected data types in database systems to compute the degree of anonymisation

Heinrich, Jan-Philipp, Neise, Carsten, Müller, Andreas January 2018 (has links)
We introduce and test a mathematical model for computing deviations between selected data types in relational database systems. The basis of this model is a similarity measure for each data type. We first survey the data types relevant to this work, then define an algebra over them which forms the foundation for computing the degree of anonymisation θ. The model is intended to measure the degree of anonymisation, above all of personal data, between test and production data. Such a measurement is useful in the wake of the EU GDPR's introduction in May 2018, and is meant to help identify personal data with a high degree of similarity.
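A minimal sketch, under assumed definitions, of one per-type similarity measure and a naive aggregation into an anonymisation degree θ; the paper's actual algebra is not given in the abstract:

```c
#include <math.h>
#include <stddef.h>

/* Similarity of two numeric values, scaled to [0,1]: 1 means identical,
 * 0 means maximally different within the column's value range. */
double sim_numeric(double test, double prod, double range) {
    if (range <= 0.0) return test == prod ? 1.0 : 0.0;
    double d = fabs(test - prod) / range;
    return d > 1.0 ? 0.0 : 1.0 - d;
}

/* Naive aggregation into an anonymisation degree theta: high similarity
 * between test and production values means the data is barely
 * anonymised, so theta is taken here as 1 minus the mean similarity. */
double theta(const double *sims, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) s += sims[i];
    return n ? 1.0 - s / (double)n : 0.0;
}
```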
29

Méthodologie d'évaluation pour les types de données répliqués / Evaluation methodology for replicated data types

Ahmed-Nacer, Mehdi 05 May 2015 (has links)
To provide high availability from anywhere, at any time, with low latency, data-sharing systems rely on optimistic replication. In this model there are several copies of the shared object, called replicas, stored on different sites. Replicas can be modified freely and at any time: modifications are applied locally and later propagated to the other sites, so all replicas eventually apply all updates, possibly in different orders. Optimistic replication algorithms are responsible for managing the concurrent modifications and ensuring the consistency of the shared object. In this thesis, we present an evaluation methodology for optimistic replication algorithms, in the context of collaborative editing. We designed a tool that implements this methodology, integrating a corpus-generation mechanism and a collaborative-editing simulator. With this tool, we ran several experiments on two kinds of corpus: synchronous and asynchronous. For synchronous collaborative editing, we evaluate the performance of the different replication algorithms on criteria such as execution time, memory occupation, and message size, and then propose some improvements. For asynchronous collaborative editing, more conflicts appear when two replicas synchronize, and the system may block the merge of the modifications until a user resolves the conflicts. To reduce the number of conflicts and the users' effort, we propose an evaluation metric and evaluate the different algorithms on it. We analyze the quality of the merges to understand user behavior and the collaboration cases that create conflicts, and then propose algorithms for resolving the most important conflicts, thereby reducing the users' effort. Finally, we propose a new hybrid architecture based on two types of optimistic replication algorithms. Unlike current architectures, the proposed one is simple, limits resource usage on client devices, and does not require consensus between data centers.
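As a minimal illustration of the optimistic-replication pattern being evaluated (the simplest replicated data type, not one of the algorithms studied in the thesis), a last-writer-wins register in C: each replica applies writes locally and converges after merging remote updates in any order:

```c
#include <stdint.h>
#include <string.h>

/* Last-writer-wins register. Each update is stamped, and concurrent
 * updates are resolved deterministically, so all replicas that see
 * the same set of updates converge to the same value. */
typedef struct {
    char     value[64];
    uint64_t timestamp;   /* e.g. a Lamport clock */
    uint32_t replica_id;  /* tie-breaker for equal timestamps */
} LWWRegister;

/* Apply a local write immediately (optimistic), then propagate it. */
void lww_local_write(LWWRegister *r, const char *v, uint64_t ts) {
    strncpy(r->value, v, sizeof r->value - 1);
    r->value[sizeof r->value - 1] = '\0';
    r->timestamp = ts;
}

/* Merge a remote update: keep the write with the larger timestamp,
 * breaking ties by replica id, so merge order does not matter. */
void lww_merge(LWWRegister *r, const LWWRegister *remote) {
    if (remote->timestamp > r->timestamp ||
        (remote->timestamp == r->timestamp &&
         remote->replica_id > r->replica_id)) {
        *r = *remote;
    }
}
```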
30

Uma arquitetura para análise de fluxo de dados estruturados aplicada ao sistema brasileiro de TV digital. / An architecture for structured data stream analysis applied to the Brazilian digital TV system.

Villa Real, Lucas Correia 27 July 2009 (has links)
Many computing systems transmit information in continuous streams of structured, and sometimes hierarchically organized, data. One characteristic of this transmission model is its high information density, which requires the receiver to immediately process the units extracted from the communication channel. The transmission volume often also prevents the receiver from permanently storing the received information, which makes analyzing the content of these data streams especially challenging. This work presents a novel architecture for structured data stream analysis, applied to the logical hierarchy defined by the Brazilian Digital TV System for the transmission of television programs, and validated by means of a fully functional reference implementation.
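For context, the Brazilian digital TV system carries this logical hierarchy in MPEG-2 transport stream packets of 188 bytes. Below is a minimal C sketch (illustrative, not the thesis's implementation) of the per-packet header parse that a receiver in such an analysis architecture must perform on the fly, since the stream cannot be stored in full:

```c
#include <stdint.h>

/* Parsed fields of a 188-byte MPEG-2 transport stream packet header,
 * the transport unit used by the Brazilian digital TV system. */
typedef struct {
    uint16_t pid;            /* which elementary stream/table this is */
    uint8_t  continuity;     /* 4-bit counter: detects lost packets   */
    int      payload_start;  /* 1 if a new section/PES begins here    */
} TsHeader;

/* Parse the 4-byte TS header; returns 0 on a bad sync byte. */
int ts_parse_header(const uint8_t pkt[188], TsHeader *h) {
    if (pkt[0] != 0x47) return 0;          /* sync byte check */
    h->payload_start = (pkt[1] >> 6) & 0x01;
    h->pid           = (uint16_t)(((pkt[1] & 0x1F) << 8) | pkt[2]);
    h->continuity    = pkt[3] & 0x0F;
    return 1;
}
```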
