  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Vícedimensionální přechodové funkce pro vizualizaci skalárních objemových dat / Multidimensional transfer functions for scalar volumetric data visualization

Mach, Pavel January 2015 (has links)
Direct volume rendering is an algorithm for displaying three-dimensional scalar data, such as images from Computed Tomography. The algorithm uses the concept of transfer functions to assign optical properties to data values. We studied two-dimensional transfer functions, which take a secondary dataset as input in addition to the primary values. In particular, we studied how to compute this secondary dataset with respect to the shape of the primary image function, by analysing the eigenvalues of the Hessian matrix at each image point. We proposed one formula, and implemented several others, for computing the probability that an image point belongs to a blood vessel.
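The Hessian eigenvalue analysis described in the abstract can be sketched as follows. This 2-D Frangi-style vesselness filter is an illustrative reconstruction, not the thesis implementation; the constants `beta` and `c` are assumed values, and real CT data would first be smoothed with a Gaussian at the vessel scale.

```python
import numpy as np

def hessian_eigenvalues(image):
    """Eigenvalues of the 2x2 Hessian at every pixel of a 2-D image.

    Second derivatives are estimated with finite differences.
    """
    gy, gx = np.gradient(image.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Closed-form eigenvalues of [[gxx, gxy], [gxy, gyy]].
    trace = gxx + gyy
    root = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2)
    l1 = 0.5 * (trace + root)
    l2 = 0.5 * (trace - root)
    # Order so that |l1| <= |l2| at each pixel.
    swap = np.abs(l1) > np.abs(l2)
    l1[swap], l2[swap] = l2[swap], l1[swap]
    return l1, l2

def vesselness(image, beta=0.5, c=15.0):
    """Frangi-style tubularness score for bright, line-like structures.

    High where one eigenvalue is strongly negative (the cross-section of
    a bright ridge) and the other is near zero (along the ridge).
    """
    l1, l2 = hessian_eigenvalues(image)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blob-vs-line ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)           # second-order structure strength
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                          # bright ridges have l2 < 0
    return v
```

On a synthetic image containing a single bright horizontal line, the score peaks on the line and vanishes on the flat background.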
22

Vizualizace kvality dat v Business Intelligence / Visualization of Data Quality in Business Intelligence

Pohořelý, Radovan January 2009 (has links)
This thesis deals with the area of Business Intelligence, particularly data quality. The goal is to provide an overview of data quality issues and of ways the data can be presented with better and more engaging informative value. A further goal was to propose a solution for visualizing the state of the system, particularly its data quality, at a concrete enterprise. The output of this thesis should serve as a guideline for implementing the proposed solution.
23

AN ITERATIVE METHOD OF SENTIMENT ANALYSIS FOR RELIABLE USER EVALUATION

Jingyi Hui (7023500) 16 August 2019 (has links)
Benefiting from booming social networks, reading posts from other users over the internet has become one of the most common ways for people to take in information. One may also have noticed that we tend to focus on users who provide well-founded analysis rather than those who merely vent their emotions. This thesis aims at finding a simple and efficient way to recognize reliable information sources among countless internet users by examining the sentiments of their past posts.

To achieve this goal, the research utilized a dataset of tweets about Apple's stock price retrieved from Twitter. Key features studied include the post date, the user name, the number of followers of that user, and the sentiment of the tweet. Before making further use of the dataset, tweets from users who do not have sufficient posts are filtered out. To compare user sentiments with the derivative of Apple's stock price, we use the Pearson correlation between them to describe how well each user performs. We then iteratively increase the weight of reliable users and lower the weight of untrustworthy users, so that the correlation between the overall sentiment and the derivative of the stock price finally converges. The final correlations for individual users are their performance scores. Owing to the noise in real-world data, manual segmentation via data visualization is also proposed as a denoising step to improve performance. Besides our method, other metrics, such as the number of followers of each user, can also be considered as a user trust index. Experiments are conducted to show that our method outperforms the others. With simple input, this method can be applied to a wide range of topics including elections, the economy, and the job market.
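The iterative reweighting loop the abstract describes might look like the following sketch. The update rule, learning rate, and function names are assumptions for illustration, not the thesis code.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    ac, bc = a - a.mean(), b - b.mean()
    denom = np.sqrt((ac ** 2).sum() * (bc ** 2).sum())
    return float((ac * bc).sum() / denom) if denom else 0.0

def iterate_user_weights(sentiments, price_derivative, rounds=20, lr=0.5):
    """Score users by how well their sentiment tracks the price derivative,
    then iteratively raise the weight of reliable users and lower the
    weight of untrustworthy ones.

    sentiments: (n_users, n_days) array of daily sentiment per user.
    price_derivative: (n_days,) day-over-day price changes.
    Returns (per-user correlation scores, final normalised weights).
    """
    scores = np.array([pearson(s, price_derivative) for s in sentiments])
    weights = np.ones(len(scores))
    for _ in range(rounds):
        # Additive update, clipped at zero so weights stay non-negative.
        weights = np.clip(weights + lr * scores, 0.0, None)
        total = weights.sum()
        if total == 0:
            break
        weights /= total
    return scores, weights
```

A user whose sentiment series exactly matches the price derivative ends up with a correlation score of 1 and absorbs the weight of a consistently contrarian user.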
24

Performance driven design systems in practice

Joyce, Sam January 2016 (has links)
This thesis is concerned with the application of computation in the context of professional architectural practice, and specifically towards defining complex buildings that are highly integrated with respect to design and engineering performance. The thesis represents applied research undertaken whilst in practice at Foster + Partners. It reviews the current state of the art of computational design techniques for quickly but flexibly modelling and analysing building options. The application of parametric design tools to active design projects is discussed with respect to real examples, as well as methods to link the geometric definitions to structural engineering analysis to provide performance data in near real time. The practical interoperability between design software and engineering tools is also examined. The role of performance data in design decision making is analysed by comparing manual workflows with methods assisted by computation. This extends to optimisation methods which, by making use of design automation, actively make design decisions to return optimised results. The challenges and drawbacks of using these methods effectively in real design situations are discussed, especially their limitations with respect to incomplete problem definitions and design exploration resulting in modified performance requirements. To counter these issues, a performance-driven design workflow is proposed: a mixed-initiative approach in which designer-centric understanding and decisions are computer-assisted. Flexible meta-design descriptions that encapsulate the variability of the design space under consideration are explored and compared with existing optimisation approaches. Computation is used to produce and visualise the performance data from the large design spaces generated by parametric design descriptions and associated engineering analysis.
Novel methods are introduced that define a design and performance space, using cluster computing to speed up the generation of large numbers of options. Data visualisation is applied to design problems, showing how in real situations it can aid design orientation and decision making using the large amount of data produced. Strategies to enable these workflows are discussed and implemented, focusing on re-appropriating existing web design paradigms with a modular approach that concentrates on scalable data creation and information display.
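The option-generation step described above can be sketched as a parallel map over a parametric design space. The parameter names, ranges, and the toy "performance" score below are illustrative assumptions, not the practice's actual pipeline.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def evaluate_option(params):
    """Stand-in for one geometry-generation + engineering-analysis run.

    A real pipeline would rebuild the parametric model and call a
    structural solver here; the score below is a toy placeholder.
    """
    span, depth, spacing = params
    score = span / (depth * spacing)   # illustrative, not a real metric
    return params, score

def explore_design_space():
    """Generate and score every option in a small parametric grid."""
    spans = [20.0, 30.0, 40.0]         # assumed parameter ranges (metres)
    depths = [1.0, 1.5, 2.0]
    spacings = [3.0, 4.5, 6.0]
    options = list(product(spans, depths, spacings))
    # Fan the runs out across workers; heavy solver runs would go to a
    # process pool or a cluster scheduler rather than threads.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(evaluate_option, options))
    return sorted(results, key=lambda r: r[1])
```

Sorting the scored options gives the designer a ranked design space to explore rather than a single "optimal" answer, in line with the mixed-initiative workflow the thesis proposes.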
25

Charting Contagions: Data Visualization of Disease in Late 19th-Century San Francisco Chinatown

Pashby, Michele 01 January 2019 (has links)
In the late 1800s in San Francisco, Chinese immigrants faced racism and were blamed for the city’s public health crisis. To the rest of San Francisco, disease originated from Chinese people. However, through data visualization we can see that this was not the case. This paper maps cases of disease against the city’s sanitation system and shows how the lack of adequate infrastructure contributed to high rates of disease. Data visualization is an increasingly important tool that historians need to utilize to uncover new insights.
26

Statistical flow data applied to visual analytics

Nguyen, Phong Hai January 2011 (has links)
Statistical flow data, such as commuting, migration, trade, and money flows, has attracted much interest from policy makers, city planners, researchers, and ordinary citizens alike. Numerous statistical data visualisations have appeared; however, there is a shortage of applications for visualising flow data. Moreover, among these rare applications, some are standalone and intended only for expert use, some do not support interactive functionality, and some can only provide an overview of the data. Therefore, in this thesis, I develop a web-enabled, highly interactive statistical flow data visualisation application with analysis support that addresses all of those challenges.

My application is implemented on top of GAV Flash, a powerful interactive visualisation component framework, so it is inherently web-enabled with basic interactive features. The application takes a visual analytics approach that combines data analysis and interactive visualisation to solve the cluttering issue, the problem of overlapping flows on the display. A variety of analysis means are provided to analyse flow data efficiently, including analysing both flow directions simultaneously, visualising time-series flow data, finding the most attractive regions, and figuring out the reasons behind derived patterns. The application also supports sharing knowledge between colleagues through a story-telling mechanism that allows users to create and share their findings as a visualisation story. Last but not least, the application enables users to embed the story-based visualisation into an ordinary web page, giving the public a golden chance to gain insight into official statistical flow data.
27

Daugiamačių duomenų vizualizavimo metodų tyrimas / The investigation of multidimensional data visualization methods

Šarikova, Renata 11 June 2004 (has links)
In the master's thesis "The investigation of multidimensional data visualization methods", a wide review of multidimensional data visualization methods is presented. The author limited the research to two such methods: the parallel coordinates visualization method and the Andrews curves visualization method. Both methods were implemented in software, i.e. a computer program was written to compare them, using MS Excel and MATLAB as the tools. The performance of the methods is analysed using widely used datasets: the Iris, HBK, and Wood multidimensional data. Data generated in MS Excel and statistical data taken from real life were also used. The investigations show that visualization by the parallel coordinates method has some advantages over the Andrews curves visualization method.
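An Andrews curve maps a data point x = (x1, ..., xd) to the function f_x(t) = x1/√2 + x2 sin t + x3 cos t + x4 sin 2t + x5 cos 2t + ..., so that nearby points yield curves that stay close over t ∈ [−π, π]. A minimal sketch of this mapping (illustrative, not the MS Excel/MATLAB programs from the thesis):

```python
import numpy as np

def andrews_curve(x, t):
    """Evaluate the Andrews curve of a data point x at angles t.

    f_x(t) = x1/sqrt(2) + x2*sin(t) + x3*cos(t) + x4*sin(2t) + ...
    Plotting one such curve per data point over t in [-pi, pi] gives
    the Andrews-curves visualization of a multidimensional dataset.
    """
    x = np.asarray(x, dtype=float)
    t = np.asarray(t, dtype=float)
    result = np.full_like(t, x[0] / np.sqrt(2.0))
    for i, coeff in enumerate(x[1:], start=1):
        k = (i + 1) // 2                       # harmonic number: 1, 1, 2, 2, ...
        trig = np.sin if i % 2 == 1 else np.cos
        result += coeff * trig(k * t)
    return result
```

Each coordinate after the first contributes one sine or cosine harmonic, which is why the method, unlike parallel coordinates, mixes all dimensions into every point of the drawn curve.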
28

Empirically Evaluated Improvements to Genotypic Spatial Distance Measurement Approaches for the Genetic Algorithm

Collier, Robert 04 May 2012 (has links)
The ability to visualize a solution space can be very beneficial, and it is generally accepted that the objective of visualization is to aid researchers in gathering insight. However, insight cannot be gathered effectively if the source data is misrepresented. This dissertation begins by demonstrating that the adaptive landscape visualization in widespread usage frequently misrepresents the neighbourhood structure of genotypic space and, consequently, will mislead users about the manner in which solution space is traversed by the genetic algorithm. Bernhard Riemann, the father of topology, explicitly noted that a measurement of the distance between entities should represent the manner in which one can be brought towards the other. Thus, the commonly used Hamming distance, for example, is not representative of traversals of genotypic space by the genetic algorithm – a representative measure must include consideration for both mutation and recombination. This dissertation separately explores the properties that mutational and recombinational distances should have, and ultimately establishes a measure that is representative of the traversals made by both operators simultaneously. It follows that these measures can be used to enhance the adaptive landscape, by minimizing the discrepancy between the interpoint distances in genotypic space and the interpoint distances in the two-dimensional representation from which the landscape is extruded. This research also establishes a methodology for evaluating measures defining neighbourhood structures that are purportedly representative of traversals of genotypic space, by comparing them against an empirically generated norm. Through this approach it is conclusively demonstrated that the Hamming distance between genotypes is less representative than the proposed measures, and should not be used to define the neighbourhood structure from which visualizations would be constructed.
While the proposed measures do not distort the data or otherwise mislead the user, they do require a significant computational expense. Fortunately, the choice to use these measures is always made at the discretion of the user, with additional costs incurred when accuracy and representativity are of paramount importance. These measures will ultimately find further application in population diversity measurement, cluster analysis, and any other task where the representativity of the neighbourhood structure of the genotypic space is vital.
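For reference, the Hamming distance the dissertation argues against is simply a count of differing loci. A minimal sketch, with a comment stating the limitation the abstract identifies:

```python
def hamming_distance(a, b):
    """Number of loci at which two equal-length genotypes differ.

    This is the measure the dissertation argues is unrepresentative:
    counting point differences models only repeated single-point
    mutation, and ignores how recombination can move between genotypes
    that are many point mutations apart.
    """
    if len(a) != len(b):
        raise ValueError("genotypes must have equal length")
    return sum(x != y for x, y in zip(a, b))
```

Any replacement measure, as argued above, must instead reflect the actual probability of one genotype being produced from another under the mutation and crossover operators in use.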
29

Visualization for frequent pattern mining

Carmichael, Christopher Lee 03 April 2013 (has links)
Data mining algorithms analyze and mine databases for discovering implicit, previously unknown and potentially useful knowledge. Frequent pattern mining algorithms discover sets of database items that often occur together. Many of the frequent pattern mining algorithms represent the discovered knowledge in the form of a long textual list containing these sets of frequently co-occurring database items. As the amount of discovered knowledge can be large, it may not be easy for most users to examine and understand such a long textual list of knowledge. In my M.Sc. thesis, I represent both the original database and the discovered knowledge in pictorial form. Specifically, I design a new interactive visualization system for viewing the original transaction data (which are then fed into the frequent pattern mining engine) and for revealing the interesting knowledge discovered from the transaction data in the form of mined patterns.
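As background for the abstract above, a minimal Apriori-style sketch of frequent pattern mining — the classic level-wise algorithm, not necessarily the mining engine the thesis visualizes:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return all itemsets appearing in at least `min_support` transactions.

    Classic Apriori level-wise search: candidate (k+1)-itemsets are
    built only from frequent k-itemsets, since no superset of an
    infrequent itemset can itself be frequent.
    """
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    frequent = {}
    current = [frozenset([i]) for i in items
               if support(frozenset([i])) >= min_support]
    k = 1
    while current:
        for s in current:
            frequent[s] = support(s)
        # Join step: unions of frequent k-itemsets that form (k+1)-itemsets.
        candidates = {a | b for a, b in combinations(current, 2)
                      if len(a | b) == k + 1}
        current = [c for c in candidates if support(c) >= min_support]
        k += 1
    return frequent
```

The dictionary this returns (itemset to support count) is exactly the kind of long textual list the thesis proposes to replace with an interactive pictorial representation.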
30

Supporting production system development through Obeya concept

Shahbazi, Sasha, Javadi, Siavash January 2013 (has links)
The manufacturing industry, as an important part of the European and Swedish economies, faces new challenges from daily growing global competition. One enabler for overcoming these challenges is a rapid transformation to a value-based focus. Investment in innovation tools for production system development is a crucial part of that focus, helping companies rapidly adapt their production systems to new changes. Those changes can be categorized as incremental or radical. In this research we studied the Obeya concept as a supporting tool for production system development under both of those approaches. The concept originated in the Toyota production system and denotes a big meeting space that facilitates communication and data visualization for a project team. Four lean companies were studied to find the role of such spaces in production development. Results indicate a great opportunity for improving those spaces and their application to radical changes in production development projects / EXPRES
