  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Interactive data mining and visualization on multi-dimensional data.

January 1999 (has links)
by Chu, Hong Ki. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 75-79). / Abstracts in English and Chinese. / Acknowledgments --- p.ii / Abstract --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Problem Definitions --- p.3 / Chapter 1.2 --- Experimental Setup --- p.5 / Chapter 1.3 --- Outline of the thesis --- p.6 / Chapter 2 --- Survey on Previous Research --- p.8 / Chapter 2.1 --- Association rules --- p.8 / Chapter 2.2 --- Clustering --- p.10 / Chapter 2.3 --- Motivation --- p.12 / Chapter 3 --- IDAN on discovering quantitative association rules --- p.16 / Chapter 3.1 --- Briefing --- p.17 / Chapter 3.2 --- A-Tree --- p.18 / Chapter 3.3 --- Insertion Algorithm --- p.25 / Chapter 3.4 --- Visualizing Association Rules --- p.28 / Chapter 4 --- IDAN on discovering patterns of clustering --- p.34 / Chapter 4.1 --- Briefing --- p.34 / Chapter 4.2 --- A-Tree --- p.36 / Chapter 4.3 --- Dimensionality Curse --- p.37 / Chapter 4.3.1 --- Discrete Fourier Transform --- p.38 / Chapter 4.3.2 --- Discrete Wavelet Transform --- p.40 / Chapter 4.3.3 --- Singular Value Decomposition --- p.42 / Chapter 4.4 --- IDAN - Algorithm --- p.45 / Chapter 4.5 --- Visualizing clustering patterns --- p.49 / Chapter 4.6 --- Comparison --- p.51 / Chapter 5 --- Performance Studies --- p.55 / Chapter 5.1 --- Association Rules --- p.55 / Chapter 5.2 --- Clustering --- p.58 / Chapter 6 --- Survey on data visualization techniques --- p.63 / Chapter 6.1 --- Geometric Projection Techniques --- p.64 / Chapter 6.1.1 --- Scatter-plot Matrix --- p.64 / Chapter 6.1.2 --- Parallel Coordinates --- p.65 / Chapter 6.2 --- Icon-based Techniques --- p.67 / Chapter 6.2.1 --- Chernoff Face --- p.67 / Chapter 6.2.2 --- Stick Figures --- p.68 / Chapter 6.3 --- Pixel-oriented Techniques --- p.70 / Chapter 6.4 --- Hierarchical Techniques --- p.72 / Chapter 7 --- Conclusion --- p.73 / Bibliography --- p.74
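The table of contents above surveys the Discrete Wavelet Transform (Chapter 4.3.2) as one remedy for the dimensionality curse. As an illustration only (this is not the thesis's IDAN code; all names here are invented for the sketch), one level of the Haar DWT can be written in a few lines: keeping just the coarse coefficients halves the dimensionality of a data vector while the detail coefficients record what was discarded.

```python
def haar_step(x):
    """One Haar DWT level: pairwise averages (coarse) and differences (detail).

    Assumes len(x) is even. Dropping `detail` gives a half-length
    reduced representation of x.
    """
    assert len(x) % 2 == 0, "length must be even"
    coarse = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return coarse, detail

def haar_inverse(coarse, detail):
    """Invert one Haar level: perfect reconstruction from coarse + detail."""
    x = []
    for c, d in zip(coarse, detail):
        x.extend([c + d, c - d])
    return x
```

Applying `haar_step` repeatedly to the coarse part yields a multi-level decomposition, which is the usual route to an aggressive dimensionality reduction.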
2

Topological analysis of level sets and its use in data visualization

Sohn, Bong-Soo 28 August 2008 (has links)
Not available / text
3

Fast data-parallel rendering of digital volume images.

January 1995 (has links)
by Song Zou. / Year shown on spine: 1997. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 69-[72]). / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Related works --- p.7 / Chapter 2.1 --- Spatial domain methods --- p.8 / Chapter 2.2 --- Transformation based methods --- p.9 / Chapter 2.3 --- Parallel Implementation --- p.10 / Chapter 3 --- Parallel computation model --- p.12 / Chapter 3.1 --- Introduction --- p.12 / Chapter 3.2 --- Classifications of Parallel Computers --- p.13 / Chapter 3.3 --- The SIMD machine architectures --- p.15 / Chapter 3.4 --- The communication within the parallel processors --- p.16 / Chapter 3.5 --- The parallel display mechanisms --- p.17 / Chapter 4 --- Data preparation --- p.20 / Chapter 4.1 --- Introduction --- p.20 / Chapter 4.2 --- Original data layout in the processor array --- p.21 / Chapter 4.3 --- Shading --- p.21 / Chapter 4.4 --- Classification --- p.23 / Chapter 5 --- Fast data parallel rotation and resampling algorithms --- p.25 / Chapter 5.1 --- Introduction --- p.25 / Chapter 5.2 --- Affine Transformation --- p.26 / Chapter 5.3 --- Related works --- p.28 / Chapter 5.3.1 --- Resampling in ray tracing --- p.28 / Chapter 5.3.2 --- Direct Rotation --- p.28 / Chapter 5.3.3 --- General resampling approaches --- p.29 / Chapter 5.3.4 --- Rotation by shear --- p.29 / Chapter 5.4 --- The minimum mismatch rotation --- p.31 / Chapter 5.5 --- Load balancing --- p.33 / Chapter 5.6 --- Resampling algorithm --- p.35 / Chapter 5.6.1 --- Nearest neighbor --- p.36 / Chapter 5.6.2 --- Linear Interpolation --- p.36 / Chapter 5.6.3 --- Aitken's Algorithm --- p.38 / Chapter 5.6.4 --- Polynomial resampling in 3D --- p.40 / Chapter 5.7 --- A comparison between the resampling algorithms --- p.40 / Chapter 5.7.1 --- The quality --- p.42 / Chapter 5.7.2 --- Implementation and cost --- p.44 / Chapter 6 --- Data reordering using binary swap --- p.47 / Chapter 6.1 --- The sorting algorithm --- p.48 / Chapter 6.2 --- The communication cost --- p.51 / Chapter 7 --- Ray composition --- p.53 / Chapter 7.1 --- Introduction --- p.53 / Chapter 7.2 --- Ray Composition by Monte Carlo Method --- p.54 / Chapter 7.3 --- The Associative Color Model --- p.56 / Chapter 7.4 --- Parallel Implementation --- p.60 / Chapter 7.5 --- Discussion and further improvement --- p.63 / Chapter 8 --- Conclusion and further work --- p.67 / Bibliography --- p.69
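Chapter 7 of this thesis treats "Ray composition", the step that merges the samples along each viewing ray into a final pixel. The thesis's own Monte Carlo method and associative color model are not reproduced here; as a point of reference only, the standard front-to-back alpha-compositing rule that such methods build on can be sketched as follows (single channel, names invented for the sketch).

```python
def composite_ray(samples):
    """Front-to-back alpha compositing along one ray.

    samples: list of (color, alpha) pairs ordered front to back.
    Returns the accumulated (color, alpha) for the pixel, terminating
    early once the ray is effectively opaque.
    """
    acc_c, acc_a = 0.0, 0.0
    for c, a in samples:
        # Each sample's contribution is attenuated by the opacity in front of it.
        acc_c += (1.0 - acc_a) * a * c
        acc_a += (1.0 - acc_a) * a
        if acc_a >= 0.999:  # nothing behind this point is visible; stop early
            break
    return acc_c, acc_a
```

Because the operation is associative, sub-rays can be composited independently on different processors and then merged, which is what makes schemes such as the binary-swap reordering of Chapter 6 possible.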
4

Interactive visualization tools for spatial data & metadata

Antle, Alissa N. 11 1900 (has links)
In recent years, the focus of cartographic research has shifted from the cartographic communication paradigm to the scientific visualization paradigm. With this, there has been a resurgence of cognitive research that is invaluable in guiding the design and evaluation of effective cartographic visualization tools. The design of new tools that allow effective visual exploration of spatial data and data quality information in a resource management setting is critical if decision-makers and policy setters are to make accurate and confident decisions that will have a positive long-term impact on the environment. The research presented in this dissertation integrates the results of previous research in spatial cognition, visualization of spatial information and on-line map use in order to explore the design, development and experimental testing of four interactive visualization tools that can be used to simultaneously explore spatial data and data quality. Two are traditional online tools (side-by-side and sequenced maps) and two are newly developed tools (an interactive "merger" bivariate map and a hybrid of the merger map and the hypermap). The key research question is: Are interactive visualization tools, such as interactive bivariate maps and hypermaps, more effective for communicating spatial information than less interactive tools such as sequenced maps? A methodology was developed in which subjects used the visualization tools to explore a forest species composition and associated data quality map in order to perform a range of map-use tasks. Tasks focused on an imaginary land-use conflict for a small region of mixed boreal forest in Northern Alberta. Subject responses in terms of performance (accuracy and confidence) and preference are recorded and analyzed. Results show that theory-based, well-designed interactive tools facilitate improved performance across all tasks, but there is an optimal matching between specific tasks and tools. 
The results are generalized into practical guidelines for software developers. The use of confidence as a measure of map-use effectiveness is verified. In this experimental setting, individual differences (in terms of preference, ability, gender etc.) did not significantly affect performance.
5

Reconstruction for visualisation of discrete data fields using wavelet signal processing

Cena, Bernard Maria January 2000 (has links)
The reconstruction of a function and its derivative from a set of measured samples is a fundamental operation in visualisation. Multiresolution techniques, such as wavelet signal processing, are instrumental in improving the performance and algorithm design for data analysis, filtering and processing. This dissertation explores the possibilities of combining traditional multiresolution analysis and processing features of wavelets with the design of appropriate filters for reconstruction of sampled data. On the one hand, a multiresolution system allows data feature detection, analysis and filtering. Wavelets have already been proven successful in these tasks. On the other hand, a choice of discrete filter which converges to a continuous basis function under iteration permits efficient and accurate function representation by providing a “bridge” from the discrete to the continuous. A function representation method capable of both multiresolution analysis and accurate reconstruction of the underlying measured function would make a valuable tool for scientific visualisation. The aim of this dissertation is not to try to outperform existing filters designed specifically for reconstruction of sampled functions. The goal is to design a wavelet filter family which, while retaining properties necessary to perform multiresolution analysis, possesses features to enable the wavelets to be used as efficient and accurate “building blocks” for function representation. The application to visualisation is used as a means of practical demonstration of the results. Wavelet and visualisation filter design is analysed in the first part of this dissertation and a list of wavelet filter design criteria for visualisation is collated. Candidate wavelet filters are constructed based on a parameter space search of the BC-spline family and direct solution of equations describing filter properties. 
Further, a biorthogonal wavelet filter family is constructed based on point and average interpolating subdivision and using the lifting scheme. The main feature of these filters is their ability to reconstruct arbitrary degree piecewise polynomial functions and their derivatives using measured samples as direct input into a wavelet transform. The lifting scheme provides an intuitive, interval-adapted, time-domain filter and transform construction method. A generalised factorisation for arbitrary primal and dual order point and average interpolating filters is a result of the lifting construction. The proposed visualisation filter family is analysed quantitatively and qualitatively in the final part of the dissertation. Results from wavelet theory are used in the analysis which allow comparisons among wavelet filter families and between wavelets and filters designed specifically for reconstruction for visualisation. Lastly, the performance of the constructed wavelet filters is demonstrated in the visualisation context. One-dimensional signals are used to illustrate reconstruction performance of the wavelet filter family from noiseless and noisy samples in comparison to other wavelet filters and dedicated visualisation filters. The proposed wavelet filters converge to basis functions capable of reproducing functions that can be represented locally by arbitrary order piecewise polynomials. They are interpolating, smooth and provide asymptotically optimal reconstruction in the case when samples are used directly as wavelet coefficients. The reconstruction performance of the proposed wavelet filter family approaches that of continuous spatial domain filters designed specifically for reconstruction for visualisation. This is achieved in addition to retaining multiresolution analysis and processing properties of wavelets.
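The lifting scheme described above builds a wavelet transform from two time-domain passes: a predict step that replaces odd samples with their deviation from a prediction out of the even neighbours, and an update step that smooths the evens so the coarse signal preserves averages. The sketch below (not the dissertation's filter family; the linear predictor and boundary replication are assumptions of this illustration) shows the simplest interpolating case.

```python
def lifting_forward(x):
    """One lifting level of a simple interpolating wavelet (even-length input).

    Predict: detail = odd sample minus the average of its even neighbours,
    so detail coefficients vanish where the signal is locally linear.
    Update: coarse = even sample plus a small correction from the details,
    so the coarse signal preserves the running average.
    Boundaries are handled by replicating the edge coefficient.
    """
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    detail = [odd[i] - 0.5 * (even[i] + even[min(i + 1, len(even) - 1)])
              for i in range(n)]
    coarse = [even[i] + 0.25 * (detail[max(i - 1, 0)] + detail[min(i, n - 1)])
              for i in range(len(even))]
    return coarse, detail

def lifting_inverse(coarse, detail):
    """Undo the update then the predict step: perfect reconstruction."""
    n = len(detail)
    even = [coarse[i] - 0.25 * (detail[max(i - 1, 0)] + detail[min(i, n - 1)])
            for i in range(len(coarse))]
    odd = [detail[i] + 0.5 * (even[i] + even[min(i + 1, len(even) - 1)])
           for i in range(n)]
    x = []
    for i, e in enumerate(even):
        x.append(e)
        if i < n:
            x.append(odd[i])
    return x
```

Because the inverse simply reverses each lifting pass with the sign flipped, perfect reconstruction holds by construction, which is one reason the lifting formulation is attractive for interval-adapted filter design.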
7

Applying blended conceptual spaces to variable choice and aesthetics in data visualisation

Featherstone, Coral 09 1900 (has links)
Computational creativity is an active area of research within the artificial intelligence domain that investigates what aspects of computing can be considered as an analogue to the human creative process. Computers can be programmed to emulate the type of things that the human mind can do. Artificial creativity is worthy of study for two reasons. Firstly, it can help in understanding human creativity and secondly it can help with the design of computer programs that appear to be creative. Although the implementation of creativity in computer algorithms is an active field, much of the research fails to specify which of the known theories of creativity it is aligning with. The combination of computational creativity with computer generated visualisations has the potential to produce visualisations that are context sensitive with respect to the data and could solve some of the current automation problems that computers experience. In addition, theories of creativity could theoretically be used to compute unusual data combinations, or to introduce graphical elements that draw attention to the patterns in the data. More could also be learned about the creativity involved as humans go about the task of generating a visualisation. The purpose of this dissertation was to develop a computer program that can automate the generation of a visualisation, for a suitably chosen visualisation type over a small domain of knowledge, using a subset of the computational creativity criteria, in order to explore the effects of the introduction of conceptual blending techniques. The problem is that existing computer programs that generate visualisations lack the creativity, intuition, background information, and visual perception that enable a human to decide what aspects of the visualisation will expose patterns that are useful to the consumer of the visualisation. 
The main research question that guided this dissertation was, “How can criteria derived from theories of creativity be used in the generation of visualisations?”. In order to answer this question an analysis was done to determine which creativity theories and artificial intelligence techniques could potentially be used to implement the theories in the context of those relevant to computer generated visualisations. Measurable attributes and criteria that were sufficient for an algorithm that claims to model creativity were explored. The parts of the visualisation pipeline were identified and the aspects of visualisation generation that humans are better at than computers were explored. Themes that emerged in both the computational creativity and the visualisation literature were highlighted. Finally a prototype was built that started to investigate the use of computational creativity methods in the ‘variable choice’ and ‘aesthetics’ stages of the data visualisation pipeline. / School of Computing / M. Sc. (Computing)
8

Analyzing software repository data to synthesize and visualize relationships between development artifacts

Unknown Date (has links)
As computing technology continues to advance, it has become increasingly difficult to find businesses that do not rely, at least in part, upon the collection and analysis of data for the purpose of project management and process improvement. The cost of software tends to increase over time due to its complexity and the cost of employing humans to develop, maintain, and evolve it. To help control the costs, organizations often seek to improve the process by which software systems are developed and evolved. Improvements can be realized by discovering previously unknown or hidden relationships between the artifacts generated as a result of developing a software system. The objective of the work described in this thesis is to provide a visualization tool that helps managers and engineers better plan for future projects by discovering new knowledge gained by synthesizing and visualizing data mined from software repository records from previous projects. / by James J. Mulcahy. / Thesis (M.S.C.S.)--Florida Atlantic University, 2011. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2011. Mode of access: World Wide Web.
9

Visor++ : a software visualisation tool for task-parallel object-orientated programs

Widjaja, Hendra. January 1998 (has links) (PDF)
Bibliography: leaves 173-184. This thesis describes Visor++, a tool for visualising programs written in CC++, a task-parallel, object-orientated language derived from C++. Visor++ provides a framework for visualising task-parallel object-orientated programs in the absence of language support for visualisation, i.e. for programs such as CC++ which are written in languages which are not "visualisation-conscious". The development of techniques using a wide selection of language features is described and their effectiveness demonstrated by experimentation.
10

Patient Record Summarization Through Joint Phenotype Learning and Interactive Visualization

Levy-Fix, Gal January 2020 (has links)
Complex patients are becoming more and more of a challenge to the health care system given the amount of care they require and the amount of documentation needed to keep track of their state of health and treatment. Record keeping using the EHR makes this easier but mounting amounts of patient data also mean that clinicians are faced with information overload. Information overload has been shown to have deleterious effects on care, with increased safety concerns due to missed information. Patient record summarization has been a promising mitigator for information overload. Subsequently, a lot of research has been dedicated to record summarization since the introduction of EHRs. In this dissertation we examine whether unsupervised inference methods can derive patient problem-oriented summaries that are robust to different patients. By grounding our experiments with HIV patients we leverage the data of a group of patients that are similar in that they share one common disease (HIV) but also exhibit complex histories of diverse comorbidities. Using a user-centered, iterative design process, we design an interactive, longitudinal patient record summarization tool that leverages automated inferences about the patient's problems. We find that unsupervised, joint learning of problems using correlated topic models, adapted to handle the multiple data types (structured and unstructured) of the EHR, is successful in identifying the salient problems of complex patients. Utilizing interactive visualization that exposes inference results to users enables them to make sense of a patient's problems over time and to answer questions about a patient more accurately and faster than using the EHR alone.
