81

Climate Change and Mountaintop Removal Mining: A MaxEnt Assessment of the Potential Dual Threat to West Virginia Fishes

Hendrick, Lindsey R F 01 January 2018
Accounts of species’ range shifts in response to climate change, most often as latitudinal shifts towards the poles or upslope shifts to higher elevations, are rapidly accumulating. These range shifts are often attributed to species ‘tracking’ their thermal niches as temperatures in their native ranges increase. Our objective was to estimate the degree to which climate change-driven shifts in water temperature may increase the exposure of West Virginia’s native freshwater fishes to mountaintop removal surface coal mining. Mid-century shifts in habitat suitability for nine non-game West Virginia fishes were projected via Maximum Entropy (MaxEnt) species distribution modeling, using a combination of physical habitat data, historical climate conditions, and future climate projections. Model projections for a high-emissions scenario (Representative Concentration Pathway 8.5) predict that habitat suitability will increase in high-elevation streams for eight of the nine species, with increases in habitat suitability ranging from 46% to 418%. We conclude that many West Virginia fishes will be at risk of increased exposure to mountaintop removal surface coal mining if climate change continues at a rapid pace.
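The modeling step described above can be sketched in miniature. MaxEnt itself is usually run through dedicated SDM packages, but its presence/background core is closely related to a logistic model fit on occupied versus available sites, and that approximation is what the sketch below uses. All data here are synthetic; "water temperature" and "elevation" are illustrative stand-ins for the thesis' actual predictor set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic site data: each row is (water temperature C, elevation m).
presence = rng.normal([16.0, 600.0], [1.5, 120.0], size=(200, 2))    # occupied streams
background = rng.normal([20.0, 350.0], [3.0, 200.0], size=(1000, 2)) # available habitat

X = np.vstack([presence, background])
y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

# Standardize the predictors, then add a bias column.
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = np.hstack([(X - mu) / sd, np.ones((len(X), 1))])

# Presence/background logistic regression, fit by gradient descent --
# a lightweight stand-in for MaxEnt's habitat suitability surface.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w)))
    w -= 0.1 * Xs.T @ (p - y) / len(y)

def suitability(temp_c, elev_m):
    """Relative habitat suitability in (0, 1) for a single site."""
    z = (np.array([temp_c, elev_m]) - mu) / sd
    return float(1.0 / (1.0 + np.exp(-(z @ w[:2] + w[2]))))

# Sanity check: the fitted surface should score occupied sites higher.
p_fit = 1.0 / (1.0 + np.exp(-(Xs @ w)))
pres_mean, bg_mean = p_fit[:200].mean(), p_fit[200:].mean()

# "Projection": re-evaluate the same high-elevation stream under a
# warmer (RCP-style) mid-century climate.
now = suitability(15.0, 700.0)
midcentury = suitability(17.0, 700.0)
```

A real workflow would swap the synthetic arrays for gridded environmental layers and project the fitted surface across every stream reach under both climate scenarios.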
82

Bringing interpretability and visualization with artificial neural networks

Gritsenko, Andrey 01 August 2017
Extreme Learning Machine (ELM) is a training algorithm for Single-Layer Feed-forward Neural Networks (SLFNs). What sets ELM apart from other training algorithms in theory is the existence of an explicitly given solution, made possible because the randomly initialized input weights are never updated. In practice, ELMs achieve performance similar to that of other state-of-the-art training techniques while taking much less time to train a model; experiments show speedups of up to five orders of magnitude compared to the standard error back-propagation algorithm. ELM is a relatively recent technique that has proved its efficiency in classic regression and classification tasks, including multi-class cases. In this thesis, extensions of ELMs to problems that are atypical for Artificial Neural Networks (ANNs) are presented. The first extension, described in the third chapter, allows ELMs to produce probabilistic outputs for multi-class classification problems. The standard way of solving such problems is based on a 'majority vote' over the classifier's raw outputs, an approach that can raise issues when the penalty for misclassification differs between classes; in that case, probabilistic outputs are more useful. Within the scope of this extension, two methods are proposed, along with an alternative way of interpreting probabilistic outputs. The ELM method also proves useful for non-linear dimensionality reduction and visualization, based on repeated re-training and re-evaluation of the model. The fourth chapter introduces adaptations of ELM-based visualization for classification and regression tasks. A set of experiments demonstrates that these adaptations provide better visualization results, which can then be used to perform classification or regression on previously unseen samples. Shape registration of 3D models with non-isometric distortion is an open problem in 3D computer graphics and computational geometry. The fifth chapter discusses a novel approach to this problem that introduces a similarity metric for spectral descriptors. This approach has been implemented in two methods: the first uses a Siamese Neural Network to embed the original spectral descriptors into a lower-dimensional metric space in which the Euclidean distance provides a good measure of similarity; the second uses Extreme Learning Machines to learn a similarity metric directly on the original spectral descriptors. Across a set of experiments, the consistency of the proposed approach for solving the deformable registration problem is demonstrated.
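The closed-form training that gives ELM its speed can be shown in a few lines: hidden weights are drawn at random and frozen, so only the output weights need to be solved for, via a pseudoinverse. The toy dataset below is invented for illustration and is not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, T, n_hidden=64):
    """Train an SLFN the ELM way: random, frozen input-to-hidden
    weights; output weights solved in closed form with the
    Moore-Penrose pseudoinverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never updated
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                 # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy 3-class problem with one-hot targets.
X = rng.normal(size=(300, 4))
labels = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 2] > 0.5).astype(int)
T = np.eye(3)[labels]

W, b, beta = elm_train(X, T)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
accuracy = (pred == labels).mean()
```

The single `pinv` call replaces the iterative weight updates of back-propagation, which is where the large speedup comes from. Turning the raw per-class scores into calibrated probabilities is exactly the gap the thesis' third chapter addresses.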
83

CAVISAP : Context-Aware Visualization of Air Pollution with IoT Platforms

Nurgazy, Meruyert January 2019
Air pollution is a severe issue in many big cities due to population growth and the rapid development of the economy and industry. This leads to a growing need to monitor urban air quality, both to avoid personal exposure and to make well-informed decisions about managing the environment. In recent decades, the Internet of Things (IoT) has increasingly been applied to environmental challenges, including air quality monitoring and visualization. In this thesis, we present CAVisAP, a context-aware system for outdoor air pollution visualization with IoT platforms. The system provides context-aware visualization of three air pollutants: nitrogen dioxide (NO2), ozone (O3), and particulate matter (PM2.5) in Melbourne, Australia and Skellefteå, Sweden. In addition to primary context such as location and time, CAVisAP takes into account users’ pollutant sensitivity levels and colour vision impairments to provide personalized pollution maps and pollution-based route planning. Experiments are conducted to validate the system, and the results are discussed.
84

Heavy Vehicle Classification Analysis Using Length-Based Vehicle Count and Speed Data

Yuksel, Eren 27 June 2018
There is an increasing demand for Intelligent Transportation Systems (ITS) applications to make highways safer and more sustainable. Collecting and analyzing traffic stream data are among the most important tasks in transportation engineering for enhancing our understanding of traffic congestion and mobility, and classifying vehicles from traffic data is essential for traffic management. Of particular interest are heavy vehicles, which hamper traffic mobility due to their lack of maneuverability and slower speeds; their impact on the traffic stream results in congestion and reduced road efficiency. In this paper, length-based vehicle count and speed data were analyzed and interpreted using one week's data from Interstate 5 (I-5) in the Portland, Oregon (OR) region of the United States (US). I-5 was chosen for its prominent role in promoting North-South freight movement between Canada and Mexico and its proximity to the Port of Portland. The objective of this analysis was to find better visualization techniques for length-based traffic count and speed data. In total, 13,901,793 out of 56,146,138 20-second records were analyzed. Vehicles were classified into two categories: those 20 feet or less were considered passenger vehicles, and those above 20 feet were considered heavy vehicles. The data consisted of approximately 25% heavy vehicles. Results showed the merit of applying more disaggregated data (5-minute polar and radar plots), as opposed to hourly and 15-minute plots, to capture sudden changes in average speed, heavy vehicle volume, and heavy vehicle percentage.
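The two core operations above, the 20-foot length threshold and aggregation of 20-second records into coarser bins, can be sketched as follows. The record layout and values are invented for illustration; only the 20-foot rule comes from the study.

```python
from collections import defaultdict

# Hypothetical 20-second detector records: (timestamp s, length ft, speed mph).
records = [
    (0, 15.4, 61.0), (20, 53.0, 55.5), (40, 18.9, 63.2),
    (300, 72.5, 52.0), (320, 14.1, 64.8), (340, 40.0, 57.3),
]

def classify(length_ft):
    """Length-based scheme from the study: <= 20 ft is a passenger
    vehicle, anything longer is a heavy vehicle."""
    return "passenger" if length_ft <= 20.0 else "heavy"

# Aggregate into 5-minute (300 s) bins.
bins = defaultdict(lambda: {"passenger": 0, "heavy": 0, "speed_sum": 0.0, "n": 0})
for t, length, speed in records:
    cell = bins[int(t // 300)]
    cell[classify(length)] += 1
    cell["speed_sum"] += speed
    cell["n"] += 1

# Per-bin volume, heavy-vehicle share, and mean speed -- the three
# quantities the study plots.
summary = {
    slot: {
        "volume": cell["n"],
        "heavy_pct": 100.0 * cell["heavy"] / cell["n"],
        "mean_speed_mph": cell["speed_sum"] / cell["n"],
    }
    for slot, cell in bins.items()
}
```

Changing the `300` to `900` or `3600` reproduces the 15-minute and hourly aggregations the study compares against; the finer the bin, the better sudden changes in speed and heavy-vehicle share survive the averaging.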
85

Jittery Gauges: Combating the Polarizing Effect of Political Data Visualizations Through Uncertainty

Hardy, Bethany Blaire 01 December 2017
Since the late 1800s, public data visualizations displaying election forecasts and results—such as the red and blue map of the United States—have presented an irreparably divided country. However, on November 8, 2016, the New York Times published a data visualization on its live presidential forecast page that broke over a century of visual expectations, inspiring many to tweet reactions to what popular media dubbed the "jittery gauges." Not surprisingly, the tweets about this unique and difficult-to-interpret display were mostly negative. This paper argues, though, that the negative feedback indicates that the gauges, while imperfect, represent an important step away from visualizations that support the growing perception of party polarization. The key factor present in the gauges is the data design principle of uncertainty, or possibility. If major news outlets were more thoughtful about introducing uncertain elements into visualizations of American politics, perhaps the nation could begin to imagine a political landscape that moves beyond red vs. blue, me vs. you.
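The design principle at issue, letting the display itself move within the forecast's uncertainty rather than showing a single fixed needle, can be sketched as follows. The uniform jitter interval and the 0-100 gauge scale are illustrative choices, not the Times' actual method.

```python
import random

def jittered_needle(point_forecast, margin_of_error, n_frames=20, seed=7):
    """Sample successive needle positions around a point forecast so
    the animation itself conveys uncertainty, in the spirit of the
    'jittery gauges'. Positions are clamped to a 0-100 gauge scale."""
    rng = random.Random(seed)
    frames = []
    for _ in range(n_frames):
        value = point_forecast + rng.uniform(-margin_of_error, margin_of_error)
        frames.append(min(100.0, max(0.0, value)))
    return frames

# A gauge showing an 80% "chance of winning" with a +/-10 point margin:
# the needle wanders between 70 and 90 instead of sitting still.
frames = jittered_needle(80.0, 10.0)
```

A static needle at 80 reads as certainty; a needle that visibly ranges from 70 to 90 communicates that outcomes across that whole band are consistent with the forecast, which is exactly the perceptual point the paper credits the gauges with making.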
86

Advanced Building Energy Data Visualization

Udd, Krister January 2002
Advanced building energy data visualization is a way to detect performance problems in commercial buildings. Sensors placed in a building collect data such as air temperature and electrical power, which can then be processed in data visualization software. This software generates visual diagrams so that the building manager or operator can see, for example, whether power consumption is too high. A first step in assessing a building's energy consumption, before sensors are installed, can be to use a benchmarking tool. A number of benchmarking tools are available for free on the Internet. Each tool takes a slightly different approach, but they all show how a building's energy consumption compares with that of other similar buildings. In this study, a new web design was developed for the benchmarking tool CalARCH. CalARCH was developed at the Berkeley Lab in Berkeley, California, USA. It uses data collected only from buildings in California and is intended solely for comparing California buildings with other similar buildings in the state. Five different versions of the web site were made, and a web survey was conducted to determine which version would be best for CalARCH. The results showed that Version 5 and Version 3 were the best, and a new version was made based on these two. This study was carried out at the Lawrence Berkeley Laboratory.
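The core of any such benchmarking tool is ranking a building's energy use intensity (EUI) against a peer group. The sketch below shows that comparison with made-up numbers; the peer values and the kBtu/sqft unit are illustrative assumptions, not CalARCH data.

```python
def eui_percentile(building_eui, peer_euis):
    """Percent of peer buildings using at least as much energy per
    square foot -- a lower EUI therefore earns a higher score."""
    at_or_above = sum(1 for e in peer_euis if e >= building_eui)
    return 100.0 * at_or_above / len(peer_euis)

# Illustrative peer group: annual EUI in kBtu/sqft for similar
# California buildings (invented values).
peers = [45.0, 52.0, 60.0, 71.0, 80.0, 95.0, 110.0, 130.0]

score = eui_percentile(64.0, peers)  # our building: 64 kBtu/sqft
```

A score of 62.5 here means the building uses less energy per square foot than 62.5% of its peers; a visualization layer would render this as a bar or dial against the peer distribution.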
87

Ανάπτυξη συστήματος εποπτικού ελέγχου και καταγραφής δεδομένων προερχομένων από ανανεώσιμες πηγές ενέργειας : απεικόνιση δεδομένων σε υπολογιστή / Development of a supervisory control and data-logging system for renewable energy sources: data visualization on a computer

Τσανταρλιώτης, Λεωνίδας 18 July 2013
This work develops a system that performs supervisory control and presents data originating from renewable energy source (RES) systems. Although primarily created for RES, the system can be applied to many other domains: it offers extensive configuration options, so it can be adapted to almost any application that requires supervisory control and data acquisition (SCADA).
88

Εφαρμογές και τεχνικές εξόρυξης και οπτικοποίησης γνώσης σε βιοϊατρικά δεδομένα / Applications and techniques for knowledge mining and visualization in biomedical data

Μερίδου, Δέσποινα 08 May 2013
Data visualization is the study of the visual representation of data, meaning "information that has been abstracted in some schematic form, including attributes or variables for the units of information." The ability to visualize the implications of data is as old as humanity itself, yet due to the vast and ever-increasing quantities and sources of data circulating in the global economy, the need for superior visualization is great and growing. Data visualization is efficient: vast quantities of data can be examined quickly and at scale. Furthermore, visualizations can help an analyst or a group gain deeper insight into the nature of a problem and discover new concepts and solutions. Data visualization is widely applied in the field of Bioinformatics; specifically, software tools are used to visualize sequences, genomes, alignments, phylogenies, macromolecular structures, systems biology, microscopy, and magnetic resonance imaging data. Recent and continuing progress in data availability and analysis methods has created new opportunities for researchers to improve the recording of diseases at the national or local level. HELPIDA (HELlenic ePIdemiological DAtabase) is the first attempt to register a large number of epidemiological studies from Greece, to combine them with geographical and statistical parameters, and to visualize the results in order to mine valuable information. The first version of HELPIDA was developed using ASP.NET and Visual C#. This thesis presents the second version of HELPIDA, which was designed with the Microsoft LightSwitch tool and enhanced with charts and data visualizations. With these visualization tools, we aim to establish HELPIDA as a valuable tool in the field of public health, and we hope it will be used by researchers and decision makers at the academic and political levels.
89

Sistema para indexação e visualização de depoimentos de história oral: o caso do Museu da Pessoa / System for indexing and visualizing oral history testimonials: the Museu da Pessoa case

Pedro Herzog 26 February 2014
This dissertation presents the structuring of a system for indexing and visualizing oral history testimonials recorded on video. Building on a theoretical survey of indexing, the system resulted in a high-fidelity functional prototype. Its content was obtained by indexing 12 testimonials collected by the Museu da Pessoa team during the Memórias da Vila Madalena project in São Paulo (August 2012). Oral history collections such as the Museu da Pessoa, the Museu da Imagem e do Som, and the Centro de Pesquisa e Documentação de História Contemporânea do Brasil (CPDOC) of the Fundação Getúlio Vargas hold thousands of hours of audio and video testimonials. These testimonials are generally long individual interviews covering many subjects, which complicates their analysis, synthesis, and consequently their retrieval. Transcripts allow textual searches for specific topics within the long interviews, so they are the main source consulted by oral history researchers, leaving the primary source (the video) for a possible second stage of research. The present proposal aims to improve retrieval of the primary sources by indexing video segments, creating points of immediate access to relevant excerpts of the interviews. In this approach, the indexing terms (tags or annotations) are associated not with the entire video but with in and out points (timecodes) that define specific excerpts; tags combined with timecodes create new challenges and possibilities for indexing and navigating video archives. The resulting system integrates concepts and techniques from seemingly disconnected areas: indexing methodologies, taxonomy construction, folksonomies, data visualization, and interaction design are combined into a unified process spanning from the collection and indexing of testimonials to their visualization and interaction.
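The central data-structure idea, tags bound to timecoded excerpts rather than to whole videos, can be sketched as a simple inverted index. The video identifiers and tags below are invented for illustration, not records from the Museu da Pessoa collection.

```python
from collections import defaultdict

# Inverted index: tag -> list of (video_id, start_s, end_s) excerpts.
index = defaultdict(list)

def tag_segment(index, video_id, tag, start_s, end_s):
    """Attach a tag to a specific excerpt, delimited by in/out
    timecodes in seconds, rather than to the whole video."""
    index[tag].append((video_id, start_s, end_s))

tag_segment(index, "vila_madalena_03", "childhood", 120, 245)
tag_segment(index, "vila_madalena_03", "carnival", 610, 702)
tag_segment(index, "vila_madalena_07", "carnival", 95, 180)

def find(index, tag):
    """Immediate access points: every excerpt carrying this tag,
    across all testimonials in the collection."""
    return sorted(index.get(tag, []))

hits = find(index, "carnival")
```

A query for "carnival" returns jump-to points inside two different testimonials, which is precisely what whole-video tagging cannot do; a visualization layer would render these hits as clickable segments on each interview's timeline.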
90

MusicVis : interactive visualization tool for exploring music rankings / MusicVis : ferramenta de visualização interativa para explorar rankings musicais

Guedes, Leandro Soares January 2017
Music rankings are mainly aimed at marketing purposes, but they also help users discover new music and compare songs, artists, albums, and so on. This work presents an interactive tool to visualize, find, and compare music rankings using different techniques, including the display of music attributes. The technique was conceived after a remote survey we conducted to collect data about how people choose music. Our visualization makes it easier to obtain information about artists and tracks, and to compare data gathered from the two major music rankings, Billboard and Spotify. The tool also supports interaction with personal data. Results from experiments conducted with potential users showed that the tool was considered interesting, with an attractive layout. Compared with traditional ways of viewing music rankings, users preferred our tool, though the difference from Billboard and Spotify was not large. However, when the tool's usability was evaluated, the results were better, mainly with respect to the filtering and comparison features. MusicVis was also considered easy to learn.