1

Apriori Sets And Sequences: Mining Association Rules from Time Sequence Attributes

Pray, Keith A 06 May 2004
We introduce an algorithm for mining expressive temporal relationships from complex data. Our algorithm, AprioriSetsAndSequences (ASAS), extends the Apriori algorithm to data sets in which a single data instance may consist of a combination of attribute values that are nominal sequences, time series, sets, and traditional relational values. Data sets of this type occur naturally in many domains, including health care, financial analysis, complex system diagnostics, and domains in which multiple sensors are used. AprioriSetsAndSequences identifies predefined events of interest in the sequential data attributes. It then mines for association rules that make explicit all frequent temporal relationships among the occurrences of those events, as well as relationships between those events and other data attributes. Our algorithm inherently handles different levels of time granularity in the same data set. We have implemented AprioriSetsAndSequences within the Weka environment and have applied it to computer performance, stock market, and clinical sleep disorder data. We show that AprioriSetsAndSequences produces rules that express significant temporal relationships describing patterns of behavior observed in the data set.
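As a rough illustration of the Apriori-style candidate-generation and support-counting loop that AprioriSetsAndSequences builds on, the following Python sketch mines frequent itemsets from set-valued transactions. The data and names are invented for illustration, and the sketch omits what is specific to ASAS: the temporal relations (before, during, overlap) among event occurrences.

from itertools import combinations

def apriori(transactions, min_support):
    # Classic Apriori frequent-itemset mining (illustrative only).
    # ASAS extends this loop so that items may also be events of interest
    # whose temporal relationships are tracked alongside set membership.
    n = len(transactions)
    items = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    k_sets = {s for s in items
              if sum(s <= t for t in transactions) / n >= min_support}
    k = 1
    while k_sets:
        for s in k_sets:
            frequent[s] = sum(s <= t for t in transactions) / n
        # join step: unite k-sets into (k+1)-candidates, then prune any
        # candidate that has an infrequent k-subset
        candidates = {a | b for a in k_sets for b in k_sets if len(a | b) == k + 1}
        candidates = {c for c in candidates
                      if all(frozenset(sub) in k_sets for sub in combinations(c, k))}
        k_sets = {c for c in candidates
                  if sum(c <= t for t in transactions) / n >= min_support}
        k += 1
    return frequent

# toy transactions mixing nominal values and event tags
data = [frozenset(t) for t in
        [{"cpu_spike", "disk_busy"}, {"cpu_spike", "disk_busy", "swap"},
         {"cpu_spike"}, {"disk_busy", "swap"}]]
print(apriori(data, min_support=0.5))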
2

Efficient duration modelling in the hierarchical hidden semi-Markov models and their applications

Duong, Thi V. T. January 2008
Modeling patterns in temporal data has arisen as an important problem in engineering and science. This has led to the popularity of several dynamic models, in particular the renowned hidden Markov model (HMM) [Rabiner, 1989]. Despite its widespread success in many cases, the standard HMM often fails to model more complex data whose elements are correlated hierarchically or over a long period. Such problems are, however, frequently encountered in practice. Existing efforts to overcome this weakness often address only one of these two aspects, mainly due to computational intractability. Motivated by this modeling challenge in many real-world problems, in particular video surveillance and segmentation, this thesis aims to develop tractable probabilistic models that can jointly model duration and hierarchical information in a unified framework. We believe that jointly exploiting statistical strength from both properties leads to more accurate and robust models for the task at hand. To tackle the modeling aspect, we base our work on an intersection between dynamic graphical models and the statistics of lifetime modeling. Realizing that the key bottleneck in existing work lies in the choice of the distribution for a state, we have successfully integrated the discrete Coxian distribution [Cox, 1955], a special class of phase-type distributions, into the HMM to form a novel and powerful stochastic model termed the Coxian Hidden Semi-Markov Model (CxHSMM). We show that this model can still be expressed as a dynamic Bayesian network, and that inference and learning can be derived analytically. Most importantly, it has four features superior to existing semi-Markov modelling: the parameter space is compact, computation is fast (almost the same as for the HMM), closed-form estimation can be derived, and the Coxian is flexible enough to approximate a large class of distributions. Next, we exploit hierarchical decomposition in the data by borrowing an analogy from the hierarchical hidden Markov model [Fine et al., 1998, Bui et al., 2004] and introduce a new type of shallow structured graphical model that combines duration and hierarchical modelling in a unified framework, termed the Coxian Switching Hidden Semi-Markov Model (CxSHSMM). The top layer is a Markov sequence of switching variables, while the bottom layer is a sequence of concatenated CxHSMMs whose parameters are determined by the switching variable at the top. Again, we provide a thorough analysis along with inference and learning machinery. We also show that semi-Markov models with arbitrary depth structure can easily be developed. In all cases we further address two practical issues: missing observations due to unstable tracking, and the use of partially labelled data to improve training accuracy. Motivated by real-world problems, our application contribution is a framework that recognizes complex activities of daily living (ADLs) and detects anomalies, to provide better intelligent caring services for the elderly. Coarser activities, each with its own duration distribution, are represented using the CxHSMM; complex activities are made of a sequence of coarser activities and represented at the top level in the CxSHSMM. Intensive experiments are conducted to evaluate our solutions against existing methods. In many cases, the superiority of the joint modeling and the Coxian parameterization over traditional methods is confirmed. The robustness of our proposed models is further demonstrated in a series of more challenging experiments, in which tracking is often lost and activities overlap considerably. Our final contribution is an application of the switching Coxian model to segment education-oriented videos into coherent topical units. Our results again demonstrate that such segmentation processes can benefit greatly from the joint modeling of duration and hierarchy.
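For intuition, the discrete Coxian at the core of the CxHSMM is a chain of transient phases with an absorbing end state, so a state's duration distribution follows from powers of the transient transition matrix. A minimal sketch with made-up parameters (not those of the thesis):

import numpy as np

M = 3
stay    = np.array([0.6, 0.5, 0.4])   # P(remain in phase i for another step)
advance = np.array([0.3, 0.3, 0.0])   # P(move on to phase i+1)
absorb  = 1.0 - stay - advance        # P(the duration ends at this step)

T = np.zeros((M, M))                  # transient-to-transient transitions
for i in range(M):
    T[i, i] = stay[i]
    if i + 1 < M:
        T[i, i + 1] = advance[i]

alpha = np.zeros(M)
alpha[0] = 1.0                        # durations start in phase 1
# P(D = d) = alpha . T^(d-1) . absorb, a phase-type (Coxian) pmf
pmf = [alpha @ np.linalg.matrix_power(T, d - 1) @ absorb for d in range(1, 40)]
print(sum(pmf))                       # close to 1 once enough terms are kept

The compactness claimed above is visible here: a handful of per-phase parameters yields a duration family far richer than the geometric distribution implicit in a standard HMM state.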
3

Mitigation of Risks of Mapping Complex Data Sources on the Example of Solvency II Project

Abrahamyan, Nazeli January 2015
The purpose of this diploma thesis is to describe the basic principles of Business Intelligence and its role in business reporting, with a focus on providing relevant information to stakeholders, and consequently to identify the major risk factors in the complex data-mapping process of a Solvency II regulatory-reporting project carried out for an insurance company. The identification of risks is based on a detailed analysis of the mapping process and its weak points. The main contribution of the thesis is the set of proposed methods by which the risks can be mitigated or entirely eliminated. The thesis focuses on a specific project that delivers a complete Business Intelligence solution compliant with the Solvency II European insurance regulatory reform; flawless data mapping and risk mitigation are therefore key to meeting the Solvency II reporting requirements at high quality. The introductory section is devoted to the definition of Business Intelligence and its purpose in organizations. It also defines the key aspects of successful business reporting and its role in effective decision-making, and briefly presents the BI architecture and the components of the specific Solvency II BI solution. The second main section defines the Solvency II project and describes the specifics of its implementation for an unnamed organization. Based on the experience of this project, the mapping process for complex data sources is then described in detail. The result is a list of risks arising at each stage of the mapping activities and, completing the main objective of the thesis, suggested ways to mitigate each risk. The identified risks and the recommendations for their mitigation might lead to more effective project management, increase the quality of project outcomes, and improve the satisfaction of the company's clients.
4

An Algorithm for Generalized Principal Curves with Adaptive Topology in Complex Data Sets

Balzuweit, Gerd, Der, Ralf, Herrmann, Michael, Welk, Martin 12 July 2019
Generalized principal curves are capable of representing complex data structures, as they may have branching points or may consist of disconnected parts. For their construction using an unsupervised learning algorithm, the templates need to be structurally adaptive. The present algorithm meets this goal by combining a competitive Hebbian learning scheme with a self-organizing map algorithm. Whereas the Hebbian scheme captures the main topological features of the data, in the map the neighborhood widths are automatically adjusted in order to suppress the noisy dimensions. It is noteworthy that the procedure, which is natural in prestructured Kohonen nets, could be carried over to a neural gas algorithm, which does not use an initial connectivity. The principal curve is then given by an averaging procedure over the critical fluctuations of the map, exploiting noise-induced phase transitions in the neural gas.
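The topology-learning core of the construction, neural-gas codebook updates combined with competitive Hebbian edge creation, can be sketched in a few lines of Python. Parameter values and data are illustrative, and the final averaging over the map's fluctuations that yields the principal curve itself is omitted.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))            # toy data cloud
K = 20
units = rng.normal(size=(K, 2))           # codebook vectors
edges = set()                             # connectivity learned on the fly
lam, eps = 5.0, 0.1

for x in rng.permutation(X):
    order = np.argsort(np.linalg.norm(units - x, axis=1))
    ranks = np.empty(K)
    ranks[order] = np.arange(K)
    # neural-gas update: every unit moves, with a rank-decayed step size
    units += (eps * np.exp(-ranks / lam))[:, None] * (x - units)
    # competitive Hebbian learning: wire the two units closest to the sample
    edges.add(tuple(sorted((int(order[0]), int(order[1])))))

print(len(edges), "edges approximate the topology of the data")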
5

The Similarity-aware Relational Division Database Operator

Gonzaga, André dos Santos 01 September 2017
In Relational Algebra, the Division operator (÷) is an intuitive tool for writing queries with the concept of for all, and it is thus constantly required in real applications. However, as we demonstrate in this MSc work, the division does not support many of the needs common to modern applications, particularly those that involve complex data analysis, such as processing images, audio, genetic data, large graphs, fingerprints, and many other non-traditional data types. The main issue is the existence of intrinsic comparisons of attribute values in the operator, which, by definition, are always performed by identity (=), despite the fact that complex data must be compared by similarity. Recent works focus on supporting similarity comparison in relational operators, but none of them treats the division. This MSc work proposes the new Similarity-aware Division (÷) operator. Our novel operator is naturally well suited to answering queries that involve an idea of candidate elements and exigencies to be evaluated over complex data from high-impact real applications. For example, it is potentially useful to support agriculture, genetic analyses, and digital library search, and even to help control the quality of manufactured products and identify new clients in industry. We validate our proposal by studying the first two of these applications.
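For intuition, exact relational division and a similarity-aware variant can be contrasted in a few lines of Python; the distance function and threshold below are illustrative stand-ins for the similarity predicates the work formalizes.

def division(R, S):
    # exact division: keep a if (a, b) is in R for EVERY b in S
    return {a for a, _ in R if all((a, b) in R for b in S)}

def sim_division(R, S, dist, eps):
    # similarity-aware: b-values need only match within distance eps
    return {a for a, _ in R
            if all(any(dist(b2, b) <= eps for a2, b2 in R if a2 == a)
                   for b in S)}

R = {("p1", 1.00), ("p1", 2.10), ("p2", 1.00)}
S = {1.0, 2.0}
print(division(R, S))                                        # set(): 2.10 != 2.0
print(sim_division(R, S, lambda x, y: abs(x - y), eps=0.2))  # {'p1'}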
6

On conditional random fields: applications, feature selection, parameter estimation and hierarchical modelling

Tran, The Truyen January 2008
There has been a growing interest in stochastic modelling and learning with complex data, whose elements are structured and interdependent. One of the most successful methods for modeling data dependencies is graphical models, which combine graph theory and probability theory. This thesis focuses on a special type of graphical model known as Conditional Random Fields (CRFs) (Lafferty et al., 2001), in which the output state spaces, when conditioned on some observational input data, are represented by undirected graphical models. The contributions of the thesis involve both (a) broadening the current applicability of CRFs in the real world and (b) deepening the understanding of their theoretical aspects. On the application side, we empirically investigate CRFs in two real-world settings. The first application is the novel domain of Vietnamese accent restoration, in which we need to restore the accents of an accent-less Vietnamese sentence. Experiments on half a million sentences of news articles show that the CRF-based approach is highly accurate. In the second application, we develop a new CRF-based movie recommendation system called Preference Network (PN). The PN jointly integrates various sources of domain knowledge into a large and densely connected Markov network, and we obtain competitive results against well-established methods in the recommendation field. On the theory side, the thesis addresses three important theoretical issues of CRFs: feature selection, parameter estimation, and modelling recursive sequential data. These issues are all addressed under a general setting of partial supervision, in which training labels are not fully available. For feature selection, we introduce a novel learning algorithm called AdaBoost.CRF that incrementally selects features out of a large feature pool as learning proceeds. AdaBoost.CRF is an extension of the standard boosting methodology to structured and partially observed data. We demonstrate that AdaBoost.CRF is able to eliminate irrelevant features and, as a result, returns a very compact feature set without significant loss of accuracy. Parameter estimation of CRFs is generally intractable in arbitrary network structures. This thesis contributes to this area by proposing a learning method called AdaBoost.MRF (which stands for AdaBoosted Markov Random Forests). As learning proceeds, AdaBoost.MRF incrementally builds a tree ensemble (a forest) that covers the original network, selecting the best spanning tree one at a time. As a result, we can approximately learn many rich classes of CRFs in linear time. The third theoretical contribution concerns modelling recursive sequential data, in which each level of resolution is a Markov sequence and each state in that sequence is itself a Markov sequence at a finer grain. One of the key contributions of this thesis is the Hierarchical Conditional Random Field (HCRF), an extension of the currently popular sequential CRF and the recent semi-Markov CRF (Sarawagi and Cohen, 2004). Unlike previous CRF work, the HCRF does not assume any fixed graphical structure; rather, it treats structure as an uncertain aspect and can estimate it automatically from the data. The HCRF is motivated by the Hierarchical Hidden Markov Model (HHMM) (Fine et al., 1998). Importantly, the thesis shows that the HHMM is, with slight modification, a special case of the HCRF, and that the semi-Markov CRF is essentially a flat version of the HCRF. Central to our contribution in the HCRF is a polynomial-time algorithm for learning and inference based on the Asymmetric Inside Outside (AIO) family developed in (Bui et al., 2004). Another important contribution is to extend the AIO family to address learning with missing data and inference under partially observed labels. We also derive methods to deal with practical concerns associated with the AIO family, including numerical overflow and cubic-time complexity. Finally, we demonstrate good performance of the HCRF against rivals on two applications: indoor video surveillance and noun-phrase chunking.
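For reference, the sequential (linear-chain) CRF that the HCRF generalizes scores a label sequence against a normalizer computed by the forward recursion. A minimal sketch with random log-potentials (a real model derives them from features of the input):

import numpy as np

def log_partition(emit, trans):
    # emit: (T, K) per-position label scores; trans: (K, K) transition scores.
    # Returns log Z via the forward recursion, using log-sum-exp for stability.
    alpha = emit[0]
    for t in range(1, len(emit)):
        m = (alpha[:, None] + trans).max(axis=0)
        alpha = m + np.log(np.exp(alpha[:, None] + trans - m).sum(axis=0)) + emit[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

def sequence_score(emit, trans, labels):
    pos = np.arange(len(labels))
    return emit[pos, labels].sum() + trans[labels[:-1], labels[1:]].sum()

rng = np.random.default_rng(1)
emit, trans = rng.normal(size=(5, 3)), rng.normal(size=(3, 3))
y = np.array([0, 2, 1, 1, 0])
# conditional log-likelihood log p(y | x) = score(y) - log Z
print(sequence_score(emit, trans, y) - log_partition(emit, trans))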
7

Assessing Dimensionality in Complex Data Structures: A Performance Comparison of DETECT and NOHARM Procedures

January 2011
The purpose of this study was to investigate the effect of complex structure on dimensionality assessment in compensatory and noncompensatory multidimensional item response (MIRT) models of assessment data, using dimensionality assessment procedures based on conditional covariances (DETECT) and a factor-analytic approach (NOHARM). The DETECT-based methods typically outperformed the NOHARM-based methods in both two-dimensional (2D) and three-dimensional (3D) compensatory MIRT conditions. The DETECT-based methods yielded a high proportion correct, especially when correlations were .60 or smaller, the data exhibited 30% or less complexity, and the sample size was larger. As the complexity increased and the sample size decreased, performance typically diminished. As the complexity increased, it also became more difficult to label the resulting sets of items from DETECT in terms of the dimensions. DETECT was consistent in the classification of simple items, but less consistent in the classification of complex items. Of the three NOHARM-based methods, χ²G/D and ALR generally outperformed RMSR. χ²G/D was more accurate when N = 500 and complexity levels were 30% or lower. As the number of items increased, ALR performance improved at a correlation of .60 and 30% or less complexity. When the data followed a noncompensatory MIRT model, the NOHARM-based methods, specifically χ²G/D and ALR, were the most accurate of all five methods. The marginal proportions for labeling sets of items as dimension-like were typically low, suggesting that the methods generally failed to label two (three) sets of items as dimension-like in 2D (3D) noncompensatory situations. The DETECT-based methods were more consistent in classifying simple items across complexity levels, sample sizes, and correlations. However, as complexity and correlation levels increased, the classification rates for all methods decreased. In most conditions, the DETECT-based methods classified complex items as consistently as, or more consistently than, the NOHARM-based methods. In particular, as complexity, the number of items, and the true dimensionality increased, the DETECT-based methods were notably more consistent than any NOHARM-based method. Despite DETECT's consistency, when data follow a noncompensatory MIRT model the NOHARM-based methods should be preferred over the DETECT-based methods for assessing dimensionality, due to DETECT's poor performance in identifying the true dimensionality. / Dissertation/Thesis / Ph.D. Educational Psychology 2011
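For intuition, the kernel of the DETECT procedure is the covariance of an item pair conditioned on examinees' scores over the remaining items. A hedged Python sketch on toy unidimensional data follows; DETECT's full statistic, which signs and aggregates these values over candidate partitions of the items, is omitted.

import numpy as np

def conditional_cov(X, i, j):
    # covariance of items i and j, stratified by the rest score
    rest = X.sum(axis=1) - X[:, i] - X[:, j]
    covs, weights = [], []
    for s in np.unique(rest):
        grp = X[rest == s]
        if len(grp) > 1:
            covs.append(np.cov(grp[:, i], grp[:, j])[0, 1])
            weights.append(len(grp))
    return np.average(covs, weights=weights)

rng = np.random.default_rng(2)
theta = rng.normal(size=500)                  # a single latent trait
p = 1 / (1 + np.exp(-theta[:, None]))         # Rasch-like response probabilities
X = (rng.random((500, 10)) < p).astype(int)   # binary response matrix
print(conditional_cov(X, 0, 1))               # near zero for unidimensional data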
8

Static analysis on numeric and structural properties of array contents

Liu, Jiangchao 20 February 2018
We study the static analysis of both numeric and structural properties of array contents in the framework of abstract interpretation. Arrays are ubiquitous in most software systems, and software defects related to misuses of arrays are hard to avoid in practice, so considerable effort has been devoted to ensuring the correctness of programs manipulating arrays. Current verification of these programs by static analysis focuses on properties of numeric contents. However, some low-level programs (like embedded systems or real-time operating systems) often store structural data (e.g., lists) in arrays, without using dynamic allocation. In this manuscript, we present a series of techniques to verify both numeric and structural properties of array contents. Our first technique describes properties of numerical stores with optional values (i.e., where some variables may have no value) or sets of values (i.e., where some variables may store a possibly empty set of values). Our approach lifts numerical abstract domains based on common linear inequalities into abstract domains describing stores with optional values and sets of values. This abstraction can be used to analyze languages with some form of optional scalar type. It can also be applied to the construction of abstract domains describing complex memory properties that introduce symbolic variables, e.g., in order to summarize unbounded memory blocks such as arrays. Our second technique is an abstract domain which uses semantic properties to split array cells into groups: cells with similar properties are packed into groups and abstracted together. Unlike conventional array-partitioning analyses, which split arrays into contiguous partitions in order to infer properties of sets of array cells, our analysis can group together non-contiguous cells when they have similar properties. Our abstract domain can infer complex array invariants in a fully automatic way. The third technique combines different shape domains. This combination locally ties summaries in both abstract domains and is called a coalesced abstraction. Coalescing allows us to define efficient and precise static analysis algorithms in the combined domain. We use it to combine our array abstraction (the second technique) with a shape abstraction that captures linked structures with separation-logic-based inductive predicates. The product domain can verify both safety and functional properties of programs manipulating arrays that store dynamically linked structures, such as lists. Storing dynamic structures in arrays is a programming pattern commonly used in low-level systems, so as to avoid relying on dynamic allocation. The verification of such programs is very challenging, as it requires reasoning both about the array structure with numeric indexes and about the linked structures stored in the array. Combining the three techniques we propose, we build an automatic static analysis for the verification of programs manipulating arrays storing linked structures.
We report on the successful verification of several operating system kernel components and drivers.
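The key idea of the array abstraction, summarizing non-contiguous groups of cells that share a semantic property with a single abstract value, can be sketched with a toy interval domain; every detail here is illustrative rather than the analysis actually implemented.

def interval_join(a, b):
    # least upper bound of two intervals in the interval abstract domain
    return (min(a[0], b[0]), max(a[1], b[1]))

def abstract_array(cells, predicate):
    # split cells into two groups by a semantic property, not by position,
    # and summarize each group with one interval
    groups = {True: None, False: None}
    for idx, v in enumerate(cells):
        key = predicate(idx, v)
        itv = (v, v)
        groups[key] = itv if groups[key] is None else interval_join(groups[key], itv)
    return groups

# e.g. a free list stored in an array: -1 marks free slots, other cells hold data
arr = [7, -1, 3, -1, 9, 5, -1]
print(abstract_array(arr, lambda i, v: v >= 0))
# {True: (3, 9), False: (-1, -1)} -- the 'used' group is non-contiguous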
9

Design and evaluation of interaction techniques for exploring complex data in large display spaces

Saïdi, Houssem Eddine 16 October 2018
Today's data is becoming increasingly complex due to strong growth in volume and dimensionality: it thus becomes crucial to explore interactive visualization environments that go beyond the traditional desktop, in order to provide a larger display area and more efficient interaction techniques for exploring the data. The main environments fitting this description are: large displays, i.e. an assembly of displays amounting to a single space; multi-display environments (MDEs), i.e. a combination of heterogeneous displays (monitors, smartphones/tablets/wearables, interactive tabletops...) spatially distributed in the environment; and immersive environments, i.e. systems where everything can be used as a display surface, without imposing any bound between displays and immersing the user within the environment. The objective of our work is to design and evaluate original, efficient interaction techniques well suited to each of these three environments. A first contribution of our work is Split-focus: a visualization and interaction interface that exploits multi-display environments for visualizing multidimensional data through an overview + multi-detail interface, where the overview is displayed on a large screen and multiple detailed views (1, 2, or 4) are displayed on a tactile tablet. Although several interaction techniques offer more than one simultaneous detailed view, the optimal number of detailed views had not been studied. In this type of interface, the number of detailed views greatly influences interaction: a single detailed view offers a large display space but only allows sequential exploration of the overview, while several detailed views reduce the display space of each view but allow parallel exploration of the overview. This work explores the benefit of splitting the detailed view of an overview + detail interface for manipulating large graphs, through an experimental study of the Split-focus technique, examining its usefulness as well as the number of detailed views that can be used efficiently. Second, we designed a novel touch-enabled device, TDome, to facilitate interactions in multi-display environments. The device is composed of a dome-like base and provides up to 6 degrees of freedom, a touchscreen, and a camera that can sense the environment. [...]
