  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

The quantification and visualisation of human flourishing.

Henley, Lisa January 2015 (has links)
Economic indicators such as GDP have been the main measures of human progress since the first half of the last century. There is concern that continuing to measure our progress and/or wellbeing with indicators that encourage consumption on a planet with limited resources may not be ideal. Existing alternative measures of human progress take a top-down approach, where the creators decide what the measure will contain. This work defines a 'bottom-up' methodology, an example of measuring human progress that doesn't require manual data reduction. The technique allows visual overlay of other 'factors' that users may feel are particularly important. I designed and wrote a genetic algorithm which, in conjunction with regression analysis, was used to select the 'most important' variables from a large range of variables loosely associated with the topic. This approach could be applied in many areas where an analyst must choose from a large amount of data. Next, I designed and wrote a genetic algorithm to explore the evolution of a spectral clustering solution over time. Additionally, I designed and wrote a genetic algorithm with a multi-faceted fitness function, which I used to select the most appropriate clustering procedure from a range of hierarchical agglomerative methods. Evolving the algorithm over time was not successful in this instance, but the approach holds a lot of promise as an alternative to 'scoring' new data based on an original solution, and as a method for using procedural options other than those an analyst might normally select. The final solution allowed the number of clusters to evolve over time with a fixed clustering method and variable selection. Profiling with various external data sources gave consistent and interesting interpretations of the clusters.
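The variable-selection step this abstract describes — a genetic algorithm scored by regression fit — can be sketched roughly as follows. This is a generic illustration, not the author's implementation: the fitness function (R² minus a subset-size penalty), the operator choices, and all parameters are assumptions.

```python
import random

def lstsq_r2(X, y):
    """R^2 of an ordinary least-squares fit, solving the normal equations
    (X^T X) b = X^T y by Gaussian elimination. X includes an intercept column."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    c = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    for col in range(p):                        # forward elimination with partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    b = [0.0] * p
    for col in range(p - 1, -1, -1):            # back substitution
        b[col] = (c[col] - sum(A[col][k] * b[k] for k in range(col + 1, p))) / A[col][col]
    pred = [sum(X[i][k] * b[k] for k in range(p)) for i in range(n)]
    ybar = sum(y) / n
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def fitness(mask, data, y, penalty=0.01):
    """Fitness of a binary inclusion mask: regression fit minus a size penalty."""
    cols = [j for j, bit in enumerate(mask) if bit]
    if not cols:
        return -1.0
    X = [[1.0] + [row[j] for j in cols] for row in data]
    try:
        return lstsq_r2(X, y) - penalty * len(cols)
    except ZeroDivisionError:                   # singular design matrix
        return -1.0

def ga_select(data, y, pop_size=30, generations=40, seed=0):
    """Evolve binary variable-inclusion masks; return the fittest mask found."""
    rng = random.Random(seed)
    n_vars = len(data[0])
    pop = [[rng.randint(0, 1) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = sorted(pop, key=lambda m: fitness(m, data, y), reverse=True)[:pop_size // 2]
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)      # mate among the current best
            cut = rng.randrange(1, n_vars)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.1:              # occasional bit-flip mutation
                i = rng.randrange(n_vars)
                child[i] = 1 - child[i]
            pop.append(child)
    return max(pop, key=lambda m: fitness(m, data, y))
```

With a strong signal the evolved mask typically recovers the truly informative columns while the penalty prunes redundant ones.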
12

Porovnání nástrojů pro Data Discovery / Data Discovery Tools Comparison

Kopecký, Martin January 2012 (has links)
This diploma thesis focuses on Data Discovery tools, which have been growing in importance in the Business Intelligence (BI) field during the last few years. An increasing number of companies of all sizes tend to include them in their BI environments. The main goal of this thesis is to compare QlikView, Tableau and PowerPivot using a defined set of criteria. The comparison is based on the development of a human resources report modelled on a real-life banking-sector business case. The main goal is supported by a number of minor goals, namely: analysis of existing comparisons, definition of a new set of criteria, basic description of the compared platforms, and documentation of the case study. The text can be divided into two major parts. The theoretical part describes basic BI architecture, discusses in-memory databases and data visualisation in the context of a BI solution, and analyses existing comparisons of Data Discovery tools and BI platforms in general. Eight different comparisons are analysed in total, including reports by consulting companies and diploma theses. The applied part of the thesis builds upon this analysis and defines comparison criteria divided into five groups: Data import, transformation and storage; Data analysis and presentation; Operations criteria; User friendliness and support; and Business criteria. The subsequent chapter describes the selected platforms, their brief history, component architecture, available editions, and licensing. The case study chapter documents the development of the report in each of the platforms and pinpoints their pros and cons. The final chapter applies the defined set of criteria to compare the selected Data Discovery platforms, fulfilling the main goal of the thesis. The results are presented both numerically, using the weighted sum model, and verbally.
The contribution of the thesis lies in the transparent confrontation of three Data Discovery tools, in the definition of a new set of comparison criteria, and in the documentation of the practical testing. The thesis offers an indirect answer to the question: "Which analytical tool should we use to supplement our existing BI solution?"
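The weighted sum model the abstract mentions is straightforward to reproduce in outline. The criterion groups below follow the thesis's five groups, but the weights and per-tool scores are invented purely for illustration — they are not the thesis's actual figures or results.

```python
def weighted_sum(scores, weights):
    """Aggregate per-criterion scores into one value; weights must sum to 1.
    A higher aggregate means a better-ranked tool."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical weights and 0-10 scores, for illustration only.
weights = {"data_import": 0.25, "analysis": 0.30, "operations": 0.15,
           "usability": 0.20, "business": 0.10}
tools = {
    "QlikView":   {"data_import": 8, "analysis": 7, "operations": 6, "usability": 6, "business": 7},
    "Tableau":    {"data_import": 7, "analysis": 9, "operations": 7, "usability": 8, "business": 6},
    "PowerPivot": {"data_import": 6, "analysis": 6, "operations": 8, "usability": 7, "business": 9},
}
ranking = sorted(tools, key=lambda t: weighted_sum(tools[t], weights), reverse=True)
```

The weights encode the evaluator's priorities, so the ranking can flip under a different weighting — one reason such comparisons publish the criteria and weights alongside the scores.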
13

Point of View : The Impact of Background Conditions on Distinguishability of Visualised Data in Detailed Virtual Environments

Larsson, Clara January 2021 (has links)
Data visualisation in a virtual environment (VE) opens up new ways of presenting data and makes it possible for the observer to explore data in an immersive way. However, it also comes with a number of challenges. One of these challenges is data distinguishability. The data needs to be distinguishable against the background, but in a VE where the user can move around and observe the data from different perspectives, the backdrop will be constantly changing. This thesis studies this challenge and contributes knowledge to current research on data visualisation in VEs. The research question, "When in a detailed virtual environment, what impact does the varying background have on the distinguishability of visualised data?", is answered using a digital self-completion questionnaire and four hypotheses. The data could not clearly determine whether one of the two colourmaps tested (YellowRed and Rainbow) was overall more effective than the other. However, the Rainbow colourmap did have marginally better results and was chosen by more participants as their preferred colourmap. The results did show that a larger number of participants disagreed that the light background made the data easier to distinguish in comparison to a dark backdrop. The results also showed that more participants found it easier to see the data when viewed from above than from below. The two colourmaps were not equally effective at showing both the VE and the data: the results indicate that the YellowRed colourmap was better at showing the details of the VE but worse at distinguishing the data, whilst the Rainbow colourmap showed the reverse, being better at distinguishing the data but less effective at showing the background. The thesis concludes that it has fulfilled its goal of establishing a starting point for further studies — studies that, according to the author, are sorely needed.
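Distinguishability of data against a background is often quantified computationally as a luminance contrast ratio. The study above measured it with a questionnaire; as a complementary check, a sketch using the WCAG 2.1 definitions of relative luminance and contrast ratio might look like this:

```python
def relative_luminance(rgb):
    """WCAG 2.1 relative luminance of an sRGB colour, channels given in 0-255."""
    def linearise(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (no contrast) up to 21:1 (black on white)."""
    l_hi, l_lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l_hi + 0.05) / (l_lo + 0.05)
```

In a VE with a constantly changing backdrop, such a ratio would have to be sampled per frame against the pixels actually behind the data points, which is precisely what makes the problem hard.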
14

Bridging the gap between human and computer vision in machine learning, adversarial and manifold learning for high-dimensional data

Jungeum Kim (12957389) 01 July 2022 (has links)
In this dissertation, we study three important problems in modern deep learning: adversarial robustness, visualization, and partially monotonic function modeling.

In the first part, we study the trade-off between robustness and standard accuracy in deep neural network (DNN) classifiers. We introduce sensible adversarial learning and demonstrate the synergistic effect between the pursuits of natural accuracy and robustness. Specifically, we define a sensible adversary which is useful for learning a robust model while keeping high natural accuracy. We theoretically establish that the Bayes classifier is the most robust multi-class classifier with the 0-1 loss under sensible adversarial learning. We propose a novel and efficient algorithm that trains a robust model using implicit loss truncation. Our experiments demonstrate that our method is effective in promoting robustness against various attacks while keeping high natural accuracy.

In the second part, we study nonlinear dimensional reduction under the manifold assumption, often called manifold learning. Despite recent advances in manifold learning, current state-of-the-art techniques focus on preserving only local or global structure information of the data. Moreover, they are transductive: the dimensional reduction results cannot be generalized to unseen data. We propose iGLoMAP, a novel inductive manifold learning method for dimensional reduction and high-dimensional data visualization. iGLoMAP preserves both local and global structure information in the same algorithm by preserving geodesic distances between data points. We establish the consistency property of our geodesic distance estimators. iGLoMAP can provide the lower-dimensional embedding for an unseen, novel point without any additional optimization. We successfully apply iGLoMAP to simulated and real-data settings, with competitive results against state-of-the-art methods.

In the third part, we study partially monotonic DNNs. We model such a function by using the fundamental theorem for line integrals, where the gradient is parametrized by a DNN. For the validity of the model formulation, we develop a symmetric penalty for gradient modeling. Unlike existing methods, our method allows partially monotonic modeling for general DNN architectures and monotonic constraints on multiple variables. We empirically show the necessity of the symmetric penalty on a simulated dataset.
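The geodesic-distance preservation described for iGLoMAP is commonly approximated by shortest paths on a k-nearest-neighbour graph: edge lengths are Euclidean, and graph distances converge to manifold geodesics as sampling densifies. The sketch below illustrates that generic idea, not iGLoMAP's actual estimator:

```python
import heapq, math

def knn_graph(points, k):
    """Adjacency maps of a symmetric k-nearest-neighbour graph,
    edges weighted by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    n = len(points)
    adj = [{} for _ in range(n)]
    for i in range(n):
        # skip index 0 of the sort: that is the point itself (distance 0)
        nearest = sorted(range(n), key=lambda j: dist(points[i], points[j]))[1:k + 1]
        for j in nearest:
            d = dist(points[i], points[j])
            adj[i][j] = d
            adj[j][i] = d          # symmetrise the graph
    return adj

def geodesic_distances(adj, source):
    """Dijkstra shortest-path distances from `source` to every node; these
    graph distances approximate geodesic distances on the manifold."""
    dist = [math.inf] * len(adj)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue               # stale heap entry
        for v, w in adj[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

An embedding method then seeks low-dimensional coordinates whose pairwise Euclidean distances match these graph distances; making that mapping a trained function of the input is what renders the approach inductive.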
15

En jämförelse av prestanda och skalbarhet för grafgenerering i datavisualiserande Javascript-bibliotek : Ett jämförande experiment på Chart.js, ApexCharts, Billboard, och ToastUI / A comparison of performance and scalability of chart generation for Javascript data visualisation libraries : A comparative experiment on Chart.js, ApexCharts, Billboard, and ToastUI

Magnusson Millqvist, Hamlet, Bolin, Niklas January 2022 (has links)
On the web, data visualisation through charts and diagrams can help present data in a more readable way. This is often done through the use of JavaScript libraries. We experimented with five JavaScript data visualisation libraries to determine their respective performance and how each one scaled with increased data size. Our results will hopefully help with the selection of such libraries. The results show a significant difference in response times between all libraries for most data sizes, with only a few exceptions. Different exponential growth rates were also identified for all libraries, and performance often varied greatly depending on chart type. Response time is not the only variable in performance measurement. Future research could cover other aspects, such as memory consumption and rendering requirements. There were also cases where the libraries did not render at larger data sizes despite showing no errors, and further investigation of this should be done.
16

Data visualisation to improve battery discharge process

Gustafsson, Ebba January 2024 (has links)
Data visualisation can provide a user-friendly way to observe and understand data. It can make it easier to make well-informed decisions and to communicate findings in data. This study aims to research how to effectively structure and visualise a complex data set in order to improve a battery discharge process. By implementing a dashboard and visualising a data set from electrical battery discharging, the following objectives are considered: analyse and identify which parameters and variables are most important in the battery discharging process; analyse how data transformation and cleaning can support data visualisations; define an interface that visualises the trends, anomalies and correlations of the data set; and evaluate how the visual representation is perceived by users in the battery recycling process. The study followed the User Centered Design method, which consists of five phases that are iterated. During the phases Identify needs and Specify context of use, six stakeholder interviews were held; these were analysed through an affinity diagram and personas. In the phase Specify requirements, requirements were established by conducting a user journey mapping. The data and insights from the previous phases were turned into ideas and solutions in the phase Produce design solutions. In total, three low-fidelity prototypes and one high-fidelity prototype were created, as well as one implemented solution. In the last phase, Evaluate design, the design solutions were tested through interviews, usability tests and a survey. The result of the study strengthens the theory that data visualisations can be used to provide insights. The findings show that visualisation could to some extent help detect abnormalities, patterns and correlations between variables, which could be useful in improving a process.
17

Use of Machine Learning Algorithms to Propose a New Methodology to Conduct, Critique and Validate Urban Scale Building Energy Modeling

January 2017 (has links)
City administrators and real-estate developers have been setting rather aggressive energy efficiency targets. This, in turn, has led building science research groups across the globe to focus on urban-scale building performance studies and the level of abstraction associated with such simulations. The increasing maturity of stakeholders towards energy efficiency and creating comfortable working environments has led researchers to develop methodologies and tools for addressing policy-driven interventions, whether urban-level energy systems, buildings' operational optimization, or retrofit guidelines. Typically, these large-scale simulations are carried out by grouping buildings based on their design similarities, i.e. standardization of the buildings. Such an approach does not necessarily yield inputs that make decision-making effective. To address this, a novel approach is proposed in the present study. The principal objective of this study is to propose, define and evaluate a methodology that utilizes machine learning algorithms to define representative building archetypes for Stock-level Building Energy Modeling (SBEM), based on an operational parameter database. The study uses Phoenix-climate CBECS-2012 survey microdata for analysis and validation. Using the database, parameter correlations are studied to understand the relation between input parameters and energy performance. Contrary to precedent, the study establishes that energy performance is better explained by non-linear models. The non-linear behavior is captured by advanced learning algorithms, and based on these algorithms the buildings under study are grouped into meaningful clusters.
The cluster medoids (the actual buildings nearest the statistical centre of each cluster, and thus able to represent it) are established statistically to identify the level of abstraction that is acceptable for whole-building energy simulations and, subsequently, for retrofit decision-making. Further, the methodology is validated by conducting Monte Carlo simulations on 13 key input simulation parameters. The sensitivity analysis of these 13 parameters is utilized to identify the optimum retrofits. From the sample analysis, the envelope parameters are found to be the most sensitive with respect to the energy use intensity (EUI) of the building, and retrofit packages should therefore be directed to maximize the reduction in energy usage. / Dissertation/Thesis / Masters Thesis Architecture 2017
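The medoid idea above is simple to state in code: unlike a centroid, the medoid is always an actual member of the cluster, which is what lets it stand in as a representative building. A minimal sketch (generic, not the study's implementation):

```python
import math

def medoid(cluster):
    """Return the member of `cluster` that minimises the summed Euclidean
    distance to all other members. Because the result is an actual
    observation, it can serve directly as the cluster's representative."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(cluster, key=lambda p: sum(dist(p, q) for q in cluster))
```

For building archetypes this matters: a centroid of operational parameters may describe no real building at all, whereas the medoid is a simulatable one.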
18

Design and evaluation of an educational tool for understanding functionality in flight simulators : Visualising ARINC 610C

Söderström, Arvid, Thorheim, Johanna January 2017 (has links)
The use of simulation in aircraft development and pilot training is essential, as it saves time and money. The ARINC 610C standard describes simulator functionality and was developed to streamline the use of flight simulators. However, the text-based standard lacks overview, and its function descriptions are hard to understand for the simulator developers who are its main users. In this report, an educational software tool is conceptualised to increase the usability of ARINC 610C. The usability goals and requirements were established through multiple interviews and two observation studies. Consequently, six concepts were produced and evaluated in a workshop with domain experts. Properties from the evaluated concepts were combined to form one concluding concept. A prototype was finally developed and evaluated in usability tests with the potential user group. The results from the heuristic evaluation, the usability tests, and a mean System Usability Scale score of 79.5 suggest that the prototyped system, developed for visualising ARINC 610C, is a viable solution.
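The mean score of 79.5 reported above comes from the standard System Usability Scale computation, which maps ten 1–5 Likert responses to a 0–100 score:

```python
def sus_score(responses):
    """System Usability Scale: ten 1-5 Likert responses -> a 0-100 score.
    Odd-numbered items are positively worded and contribute (score - 1);
    even-numbered items are negatively worded and contribute (5 - score)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i is 0-based: even i = odd item
                for i, r in enumerate(responses))
    return total * 2.5
```

A score of 79.5 sits comfortably above the commonly cited average of 68, consistent with the report's conclusion that the prototype is viable.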
19

Emotion recognition from speech using prosodic features

Väyrynen, E. (Eero) 29 April 2014 (has links)
Abstract: Emotion recognition, a key step of affective computing, is the process of decoding an embedded emotional message from human communication signals, e.g. visual, audio, and/or other physiological cues. It is well known that speech is the main channel for human communication and is thus vital in the signalling of emotion and of the semantic cues needed for the correct interpretation of context. In the verbal channel, the emotional content is largely conveyed as continuous paralinguistic information signals, of which prosody is the most important component. The lack of evaluation of affect and emotional states in human-machine interaction, however, currently limits the potential behaviour and user experience of technological devices. In this thesis, speech prosody and related acoustic features of speech are used for the recognition of emotion from spoken Finnish. More specifically, methods for emotion recognition from speech relying on long-term global prosodic parameters are developed. An information fusion method is developed for short-segment emotion recognition using local prosodic features and vocal source features. A framework for emotional speech data visualisation using prosodic features is also presented. Emotion recognition in Finnish comparable to the human reference is demonstrated using a small set of basic emotional categories (neutral, sad, happy, and angry). The recognition rate for Finnish was found to be comparable with those reported for Western language groups. Improved emotion recognition is shown for short segments using the fusion techniques. Visualisation of emotional data congruent with dimensional models of emotion is demonstrated using supervised nonlinear manifold modelling techniques. The low-dimensional visualisation of emotion is shown to retain the topological structure of the emotional categories, as well as the emotional intensity of the speech samples.
The thesis provides pattern recognition methods and technology for the recognition of emotion from speech using long speech samples, as well as short stressed words. The framework for the visualisation and classification of emotional speech data developed here can also be used to represent speech data from other semantic viewpoints by using alternative semantic labellings if available.
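Long-term global prosodic parameters of the kind this thesis relies on are typically summary statistics computed over a fundamental-frequency (F0) contour. The sketch below shows a hypothetical minimal feature set — the thesis's actual features are not enumerated in the abstract, and real systems add energy, duration, and voice-quality measures:

```python
import math

def prosodic_features(f0):
    """Global prosodic statistics from a per-frame F0 contour (Hz).
    Unvoiced frames are encoded as 0 and excluded; deltas across voicing
    gaps are a deliberate simplification in this sketch."""
    voiced = [f for f in f0 if f > 0]
    n = len(voiced)
    mean = sum(voiced) / n
    std = math.sqrt(sum((f - mean) ** 2 for f in voiced) / n)
    deltas = [abs(b - a) for a, b in zip(voiced, voiced[1:])]
    return {
        "f0_mean": mean,
        "f0_std": std,                                   # pitch variability
        "f0_range": max(voiced) - min(voiced),
        "f0_mean_abs_slope": sum(deltas) / len(deltas) if deltas else 0.0,
        "voicing_ratio": n / len(f0),                    # voiced-frame fraction
    }
```

A classifier over such fixed-length vectors is what makes recognition from long samples tractable; the short-segment case instead fuses local prosodic and vocal source features, as the abstract notes.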
20

Red Lines & Hockey Sticks : A discourse analysis of the IPCC’s visual culture and climate science (mis)communication

Dawson, Thomas January 2021 (has links)
Within the climate science research community there exists an overwhelming consensus on the question of climate change. The scientific literature supports the broad conclusion that the Earth’s climate is changing, that this change is driven by human (anthropogenic) factors, and that the environmental consequences could be severe. While a strong consensus exists in the climate science community, this is not reflected among the wider public or policymakers, where sceptical attitudes towards anthropogenic climate change are much more prevalent. This discrepancy in the perception of the urgency of the problem of climate change is an alarming trend and likely the result of a failure of science communication, which is the topic of this thesis. This paper analyses the visual culture of climate change, with specific focus on the data visualisations contained within the IPCC assessment reports. The visual aspects of the reports were chosen because of the prioritisation images often receive within scientific communication and for their quality as immutable mobiles that can transition between different media more easily than text. The IPCC is the central institutional authority in the climate science visual discourse, and its assessment reports, therefore, are the site of this discourse analysis. The analysis tracks the development of and variations in the IPCC’s visual culture, and investigates in detail the use of colour and the visual form of the “Hockey Stick” graph. This work is undertaken to better understand the state of the art of climate science data visualisation, in an effort to suggest the best way forward to bridge the knowledge gap between the scientific community and the public on this important issue.
The thesis concludes that a greater emphasis on the information aesthetics of its data visualisations could benefit the IPCC’s pedagogical reach, but also that it may be argued that it is not the IPCC’s role in climate change discourse to produce the most visually persuasive images; rather, the IPCC exists as a tone-setting institution that lends authority to entities better geared towards wider communication, such as journalism and activism.
