171

Research on multidimensional data visualization using feed-forward neural networks

Medvedev, Viktor 04 February 2008
The research area of this work is the analysis of multidimensional data and ways of improving the apprehension of such data. Data apprehension is a complicated problem, especially when the data describe a complex object or phenomenon characterized by many parameters. The research object of the dissertation is artificial neural networks for multidimensional data projection. Topics related to this object include: multidimensional data visualization; dimensionality-reduction algorithms; data-projection errors; the projection of new data; strategies for retraining the neural network that visualizes multidimensional data; optimization of the control parameters of the neural network for multidimensional data projection; and parallel computing. The key aim of the work is to develop and improve methods for efficiently minimizing the visualization errors of multidimensional data using artificial neural networks. The results of the research are applied to practical problems: human physiological data describing the human functional state have been investigated, revealing new possibilities for analyzing medical (physiological) data.
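The abstract does not reproduce the error measure being minimized. A standard choice in this literature, and a reasonable assumption for what "visualization errors" refers to here, is Sammon's stress, which a feed-forward projection network can be trained to minimize:

    % Sammon's stress (an assumed measure, not quoted from the dissertation):
    % d^{*}_{ij} are pairwise distances in the original high-dimensional
    % space, d_{ij} the corresponding distances after projection.
    E_S = \frac{1}{\sum_{i<j} d^{*}_{ij}}
          \sum_{i<j} \frac{\left(d^{*}_{ij} - d_{ij}\right)^{2}}{d^{*}_{ij}}

A projection that drives E_S toward zero preserves small pairwise distances particularly well, which is what makes the resulting two-dimensional maps readable.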
172

Research on the Possibilities of Publishing Database Content in Interactive Web Pages

Selickas, Tomas 31 August 2011
Websites often contain a large amount of important data that is not presented in an easily understood form. This lack of interactivity makes the information harder to understand and absorb, and it is directly linked to declining visitor traffic. The problem is especially relevant for websites that collect and publish large amounts of domain-specific data. This master's thesis aims to adapt the Exhibit tool for correct, full-featured publishing and visualization of data stored in an information system's database. An analysis of existing solutions showed that Exhibit is well suited to the problem but has limitations: it is designed to work with static information stored in files, and building its internal data structure takes a long time [Zhao et al., 2008]. The thesis therefore identifies and applies techniques that allow Exhibit to display frequently changing, continuously updated information, and improves the method for constructing Exhibit's internal data structure so that it is built faster.
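Exhibit (from MIT's SIMILE project) consumes a JSON feed with a top-level "items" array. A minimal sketch of the kind of adapter the thesis describes (regenerating that feed from a relational database on each request instead of from a static file) might look as follows; the table and column names are hypothetical:

    import json
    import sqlite3

    def export_exhibit_items(db_path):
        """Serialize database rows as an Exhibit-style JSON feed.

        Exhibit expects {"items": [...]}, each item carrying a "label"
        plus arbitrary properties. The "publications" table and its
        columns are made-up names for illustration.
        """
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT title, author, year FROM publications"
        ).fetchall()
        conn.close()
        items = [
            {"label": t, "author": a, "year": y, "type": "Publication"}
            for t, a, y in rows
        ]
        return json.dumps({"items": items}, ensure_ascii=False, indent=2)

Serving this from a small web endpoint keeps the visualization current with the database, which is the dynamic-data limitation the thesis works around.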
173

Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality

January 2017
Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive, high spatial and temporal measurement capabilities. While early lidar applications relied on radial velocity measurements alone, most practical applications in wind farm control and short-term wind prediction require knowledge of the vector wind field. Over the past few years, work on lidars has explored three primary methods of retrieving wind vectors: assuming a homogeneous wind field, computationally intensive variational methods, and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single- and dual-Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied to lidar scans from an offshore wind farm, and validated against data from a cup-and-vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented for visualizing data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors such as Doppler lidars). A methodology using modern game development platforms is presented and demonstrated with lidar-retrieved wind fields, as well as with a few earth science datasets for education and outreach activities. / Doctoral Dissertation, Mechanical Engineering, 2017
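For context, the geometric constraint behind these retrieval methods is standard lidar geometry (not quoted from the dissertation): a Doppler lidar measures only the projection of the wind vector onto its beam, so a single scan underdetermines the flow.

    % Radial velocity measured at azimuth \theta and elevation \phi,
    % for wind components (u, v, w) (standard relation, assumed here):
    v_r = u \sin\theta \cos\phi + v \cos\theta \cos\phi + w \sin\phi

With two lidars viewing the same point at different azimuths and near-horizontal beams (w approximately 0), the two radial velocities form a 2x2 linear system for (u, v); with a single lidar, closing the problem requires the homogeneity or variational assumptions mentioned above.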
174

Processes in digital journalism: from Big Data to data visualization

Mayanna Estevanim 16 September 2016 (has links)
Society is increasingly digitalized. Data of many kinds can be stored and correlated, at volumes, varieties, and speeds impossible for humans to track without the aid of computers. In this scenario, we speak of a data journalism that aims at understanding complex issues of social relevance and that aligns the profession with the new demands of contemporary informative understanding. The purpose of this dissertation is to examine data visualization in Brazilian journalism, starting from what journalistic data visualization is and then, based on its concept and practice, asking how it provides relevant advantages. By relevant we mean matters of public interest that involve greater critical depth and contextualization of content within Big Data. Initiatives that combine images with data and metadata occur in market practice, academic laboratories, and independent media.
Different human and non-human actors operate within this narrative system, in constructions that begin with machinic encodings, using databases that communicate with other layers until reaching a user interface. Professionals need new expertise, teamwork, and often basic knowledge of programming languages, statistics, and the tools used to build dynamic narratives that increasingly involve the reader; it is also important to think about content that can be delivered in different formats. The research adopted a multi-methodological strategy, grounded in the assumption of the centrality of communication, which cuts across all communicative and informative activities, analog or not. Such a view requires resilience in the theoretical-methodological approaches so that they can encompass and sustain reflection on this dynamic field of study. The propositions and interpretations are based on the Database Digital Journalism paradigm, drawing on the concepts of format (RAMOS, 2012; MACHADO, 2003), post-industrial journalism (COSTA, 2014), and narrative system and antenarrative (BERTOCCHI, 2013) as means of maturing the understanding of the proposed object.
175

A dynamic reputation model to support collaborative software maintenance

Lélis, Cláudio Augusto Silveira 30 August 2017
Funding: CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior). / The importance of software in organizations is growing. However, to maintain its value, software must be changed and updated. Software maintenance depends on allocating human resources to carry out the defined change activities. In a distributed scenario, where collaboration is critical to the smooth running of these activities, assigning developers to maintenance tasks becomes non-trivial. In this context, reputation becomes a key element, affecting the elements of collaboration: coordination, cooperation, and communication. Tracking the evolution of reputation is therefore important for promoting collaboration in maintenance activities. The theory of System Dynamics can be applied to monitor this evolution: from the data obtained, it is possible to understand the past, establish what is happening in the present, and project future reputation behavior.
This work therefore presents a model for calculating the reputation of software developers, supported by System Dynamics techniques, which allows simulating how reputation behaves over time. The model served as the basis for an infrastructure for dynamic reputation information, whose goal is to manage and track the reputation of geographically distributed developers so as to support their allocation to maintenance tasks. In addition, it provides visualization and collaboration elements in an environment integrated with software maintenance activities. A proof of concept and an experiment conducted with real data from a company are presented to assess the feasibility and suitability of the proposed model and of the other resources offered by the infrastructure.
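The abstract does not give the model's equations. As an illustration of the System Dynamics style it describes, here is a minimal stock-and-flow sketch in which a developer's reputation is a stock fed by evaluation scores and drained by time decay; all parameters and rates are hypothetical, not the thesis's model:

    def simulate_reputation(evaluations, decay_rate=0.05, gain=0.2, dt=1.0):
        """Euler-integrate a toy stock-and-flow reputation model.

        `reputation` is the stock; each evaluation in [0, 1] contributes
        an inflow scaled by `gain`, while `decay_rate` drains the stock
        over time. All parameters are made up for illustration.
        """
        reputation = 0.5  # assumed neutral starting stock
        history = [reputation]
        for score in evaluations:
            inflow = gain * score
            outflow = decay_rate * reputation
            reputation += (inflow - outflow) * dt
            history.append(reputation)
        return history

    # A developer who performs well, then goes inactive: reputation rises,
    # then decays toward zero, the time behavior such models expose.
    print([round(r, 3) for r in simulate_reputation([0.9, 0.8, 0.9] + [0.0] * 5)])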
176

Design and evaluation of an educational tool for understanding functionality in flight simulators: Visualising ARINC 610C

Söderström, Arvid, Thorheim, Johanna January 2017 (has links)
The use of simulation in aircraft development and pilot training is essential, as it saves time and money. The ARINC 610C standard describes simulator functionality and was developed to streamline the use of flight simulators. However, the text-based standard lacks an overview, and its function descriptions are hard to understand for the simulator developers who are its main users. In this report, an educational software tool is conceptualised to increase the usability of ARINC 610C. Usability goals and requirements were established through multiple interviews and two observation studies. Six concepts were then produced and evaluated in a workshop with domain experts, and properties from the evaluated concepts were combined into one concluding concept. A prototype was finally developed and evaluated in usability tests with the potential user group. The results from the heuristic evaluation and the usability tests, together with a mean System Usability Scale score of 79.5, suggest that the prototyped system for visualising ARINC 610C is a viable solution.
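For reference, the System Usability Scale score cited above comes from ten five-point items with a fixed scoring rule; a minimal sketch follows (the responses below are invented, not the study's data):

    def sus_score(responses):
        """Standard SUS scoring: odd-numbered items contribute (r - 1),
        even-numbered items (5 - r); the sum is scaled by 2.5 to 0-100."""
        assert len(responses) == 10
        total = sum(
            (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even = odd item
            for i, r in enumerate(responses)
        )
        return total * 2.5

    # One hypothetical participant's 1-5 Likert responses:
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))  # -> 82.5

A mean of 79.5 across participants sits above the commonly cited SUS average of 68, consistent with the authors' conclusion.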
177

Data acquisition system for optical frequency comb spectroscopy

Seton, Ragnar January 2017 (has links)
The Optical Frequency Comb Spectroscopy (OFCS) Group at the Department of Physics at Umeå University develops new techniques for extremely high-sensitivity trace gas detection, non-invasive temperature measurements, and other applications of OFCS. The group's setup used primarily for trace gas detection contains several components developed in-house, including a Fourier Transform Spectrometer (FTS) and an auto-balancing detector. This setup is the one used in this thesis work, and it includes a high-frequency data acquisition card (DAC) that records interferograms of more than 10^7 double-precision floating-point samples per sweep of the FTS's retarder. For acquisition and analysis to be possible in both directions of the retarder, the interferograms need to be analysed within a sub-second timeframe, which is not possible with the present software. The aim of this thesis work has thus been to develop a system with optimized analysis implementations in MATLAB; the latter was a prerequisite from the group to ensure maintainability, as all members are well acquainted with it.
Fulfilling its primary purpose, MATLAB performs vector and matrix computations quite efficiently, has mostly fully mutable datatypes, and, with recent just-in-time (JIT) compilation optimizations, vector-resizing performance has improved to what in many instances is perceived as equivalent to preallocated variables. This memory-management abstraction, however, also means that explicit control of when arguments are passed by value or by reference to a function is not officially supported. The resulting performance penalty naturally increases with the size of the data sets (N) passed as arguments and becomes quite noticeable even at moderate values of N when dealing with data visualization, a key function in the system. To circumvent these problems, explicit data references were implemented using some of the undocumented functions of MATLAB's libmx library, together with a custom data visualization function.
The main parts of the near-real-time interferogram analysis are resampling and a Fourier transformation, both of which had functionally complete but unoptimized implementations. The minimal requirement for their reimplementation was to improve efficiency while maintaining output precision.
On experimentally obtained data, the new system's (DAQS) resampling implementation increased sample throughput by a factor of 19, which in the setup used corresponds to 10^8 samples per second. Memory usage was decreased by 72%, or, in terms of the theoretical minimum, from a factor of 7.1 to 2.0. Due to structural changes in the sequence of execution, DAQS has no corresponding implementation of the reference FFT function, as the computations performed in it have been parallelized and/or are only executed on demand; their combined CPU time can, however, in a worst-case scenario reach 75% of that of the reference. The data visualization performance increase (compared to MATLAB's own, as the old system used LabVIEW) depends on the size in pixels of the surface it is visualized on and on N, decreasing with the former and increasing with the latter. In the baseline case of a default surface size of 434x342 pixels and N corresponding to one full sweep of the FTS's retarder, DAQS offers a 100x speed-up over the Windows 7 version of MATLAB R2014b's plot.
In addition to acquiring and analyzing interferograms, the primary objectives of the work included tools to configure the DAC and to control the FTS's retarder motor, both implemented in DAQS. Secondary to the above were the implementation of acquisition and analysis for both directions of the retarder, a HITRAN reference-spectrum generator, and functionality to improve the user experience (UX). The first, though computation time allows for it, has not been implemented due to a delay in the DAC driver. To provide a generic implementation of the second, the HITRAN database was converted from the text-based format it is distributed in to a MySQL database, and a wrapper class providing frequency-span selection and absorption-spectrum generation was developed together with a graphical front-end. Finally, the improved UX functionality mainly focused on providing easy-access documentation of the properties of the DAC.
In summation, though the primary objective of optimizing the data analysis functions was reached, the end product still requires a new driver for the DAC to provide the full functionality of the reference implementation, as the existing one is simply too slow. Many of DAQS's components can, however, be used as stand-alone classes and functions until a new driver is available. It is also worth mentioning that National Instruments (NI), the DAC vendor, has, according to their technical support, no plans to develop native MATLAB drivers, as MathWorks will not sell them licenses.
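The thesis implements its pipeline in MATLAB; as a language-neutral illustration of the resample-then-transform step it describes (interpolating the interferogram onto a uniform optical-path-difference grid, then FFT-ing it to recover the spectrum), here is a minimal NumPy sketch with made-up signal parameters:

    import numpy as np

    def interferogram_to_spectrum(opd_samples, signal):
        """Resample an interferogram onto a uniform optical-path-difference
        (OPD) grid, then FFT it to obtain the spectrum. A sketch of the
        resample-then-transform pipeline, not the thesis's MATLAB code."""
        uniform_opd = np.linspace(opd_samples.min(), opd_samples.max(),
                                  len(opd_samples))
        resampled = np.interp(uniform_opd, opd_samples, signal)
        spectrum = np.abs(np.fft.rfft(resampled))
        step = uniform_opd[1] - uniform_opd[0]
        freqs = np.fft.rfftfreq(len(resampled), d=step)  # spatial frequencies
        return freqs, spectrum

    # Hypothetical non-uniformly sampled cosine interferogram:
    opd = np.sort(np.random.default_rng(0).uniform(0.0, 1.0, 4096))
    sig = np.cos(2 * np.pi * 200.0 * opd)
    f, s = interferogram_to_spectrum(opd, sig)
    print(f[np.argmax(s)])  # peak near 200 cycles per unit OPD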
178

Using Demographic Networks in Economic Statistics

Písaříková, Petra January 2017 (has links)
Graphical representation is one of the first analytical tools that can be used to analyze data. Time, used as a measure across a wide range of tasks, is difficult to grasp, and mapping it is not easy. In demographics, tools such as the Lexis diagram are used for this purpose. The list of graphical tools can, however, be extended with diagrams that treat the time measure in different ways. Their use can be demonstrated not only on demographic data but also on non-demographic data, and the modern statistical program R can be used as well.
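The thesis works in R; as a language-neutral illustration of what a Lexis diagram encodes (age against calendar time, with each individual's lifeline rising at 45 degrees), here is a minimal matplotlib sketch with made-up birth cohorts:

    import matplotlib.pyplot as plt

    # Hypothetical birth years; each lifeline gains one year of age per
    # calendar year, producing the Lexis diagram's 45-degree lines.
    births = [2000, 2003, 2007, 2010]
    max_age = 15

    fig, ax = plt.subplots()
    for b in births:
        ax.plot([b, b + max_age], [0, max_age], lw=1)
    ax.set_xlabel("Calendar year")
    ax.set_ylabel("Age")
    ax.set_title("Lexis diagram: lifelines of four hypothetical cohorts")
    ax.grid(True)
    plt.show()

Events such as deaths or censuses appear as points on these lifelines, which is what makes the diagram useful for cohort-period-age analysis.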
179

Exploring Ways of Visualizing Functional Connectivity

Nylén, Jan January 2017 (has links)
Functional connectivity is a field within neuroscience where measurements of co-activation between brain regions are used to test hypotheses or to explore how the brain activates in a given situation or task. After analysis, the underlying data consist of an n-by-n adjacency matrix in which each cell holds a correlation value between two brain regions. Depending on the research question, the number of regions and matrices involved varies, and new visualizations are needed to portray them. In this thesis, the design of an interactive web-based visualization tool for functional connectivity was explored through an iterative design process. The design was based on existing guidelines, interviews, and best practices in data visualization, as well as an analysis of current visualization solutions used in functional connectivity. The final concept and prototype use a network plot for functional connectivity called the connectogram, together with a grouped bar graph, to provide an intuitive and accessible way of comparing functional connectivity data by interacting with and highlighting networks and specific network data through direct manipulation. Results of qualitative evaluations of a prototype, using data from a concurrent scientific project, are presented. The prototype was found to be useful, engaging, and easy to perceive, and it offered a quick way of exploring data sets.
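As a minimal illustration of the data structure described above (not the thesis's tool), the sketch below builds such an adjacency matrix from simulated regional time series and extracts the strongest connections, as a connectogram would draw them; all sizes and thresholds are made up:

    import numpy as np

    rng = np.random.default_rng(42)
    n_regions, n_timepoints = 8, 200

    # Simulated BOLD-like time series, one row per brain region.
    ts = rng.standard_normal((n_regions, n_timepoints))
    ts[1] += 0.8 * ts[0]  # inject co-activation between regions 0 and 1

    # n x n functional-connectivity matrix: pairwise Pearson correlations.
    fc = np.corrcoef(ts)

    # Edges above a (hypothetical) threshold, i.e. the connections to plot.
    threshold = 0.5
    edges = [(i, j, fc[i, j])
             for i in range(n_regions)
             for j in range(i + 1, n_regions)
             if abs(fc[i, j]) > threshold]
    print(edges)  # e.g. [(0, 1, 0.6...)]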
180

Comparison of selected commercial reporting BI platforms

Krečmerová, Petra January 2015 (has links)
This diploma thesis focuses on a comparison of selected commercial reporting BI platforms. Its goals are to create pilot dashboards in three Business Intelligence tools, compare the tools among themselves, and select the optimal one for creating dashboards. The theoretical part describes the fundamentals of Business Intelligence, its principles and components, divided into several layers: production systems, the transformation and data layers, data analysis, and the presentation layer. Performance dashboards are then described, and an in-depth analysis of the commercial tools market is performed. In the practical part, pilot dashboards are created from retail-chain data in three selected Business Intelligence tools: Power BI, QlikView, and Tableau. These are then compared, and based on a multi-criteria analysis the optimal tool for creating dashboards is selected and recommended. The analysis sets the evaluation criteria, derives their weights using Fuller's method, and produces the final evaluation with the Weighted Sum Approach. The thesis was prepared for a project of the Competency Center Retail Analytics and for the professional community, to be used as one of the sources in the selection of a BI reporting platform.
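To make the evaluation method concrete (the criteria, weights, and scores below are invented; the thesis's actual values are not reproduced here): Fuller's method derives criterion weights from pairwise-comparison win counts, and the Weighted Sum Approach then ranks the tools by their weight-scaled criterion totals:

    # Hypothetical win counts from Fuller's pairwise-comparison triangle;
    # weights are the win counts normalized to sum to 1.
    wins = {"price": 3, "ease_of_use": 2, "visual_quality": 1}
    total = sum(wins.values())
    weights = {c: w / total for c, w in wins.items()}

    # Hypothetical normalized scores (0-1) for each tool per criterion.
    scores = {
        "Power BI": {"price": 0.9, "ease_of_use": 0.7, "visual_quality": 0.6},
        "QlikView": {"price": 0.5, "ease_of_use": 0.6, "visual_quality": 0.8},
        "Tableau":  {"price": 0.4, "ease_of_use": 0.8, "visual_quality": 0.9},
    }

    # Weighted Sum Approach: rank tools by the weighted criterion sum.
    ranking = sorted(
        ((sum(weights[c] * s for c, s in tool_scores.items()), tool)
         for tool, tool_scores in scores.items()),
        reverse=True,
    )
    for score, tool in ranking:
        print(f"{tool}: {score:.3f}")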
