21

Implementing & Evaluating A Standardised Map Representation

Persson, Fredrik January 2021 (has links)
Maps are essential in the robots of today, not only to navigate but also to perform actions. This project's aim was to create a program able to take map files of older standards such as ROS Gridmaps or the current IEEE 2D map standard 1873-2015 [2] and convert them to the new IEEE P2751/D1 [4] standard. The new IEEE P2751/D1 is based on IEEE 1873-2015 but improves on it by allowing 3D map formats such as point clouds or voxel grids to be incorporated. This project's main goal was to have a program able to convert from some of these different map types into the new standard. The existence of such a program eases any future adaptation. This report also includes a detailed evaluation of the efficiency of the IEEE P2751/D1 standard format in comparison to other map formats.
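To make the conversion concrete, a minimal Python sketch of the kind of wrapper such a program might implement is given below. The `P2751Map` container and its field names are hypothetical, invented for illustration because the draft standard's schema is not reproduced here; the input follows the common ROS occupancy-grid convention (row-major int8 cells, -1 unknown, 0-100 occupancy probability).

```python
import numpy as np

class P2751Map:
    """Hypothetical container standing in for the IEEE P2751/D1 format;
    the field names are illustrative, not taken from the draft standard."""
    def __init__(self, frame_id, resolution, layers):
        self.frame_id = frame_id      # coordinate frame of the map
        self.resolution = resolution  # metres per cell (or voxel edge)
        self.layers = layers          # named 2D grids or 3D structures

def gridmap_to_p2751(grid, resolution, frame_id="map"):
    """Wrap a ROS-style occupancy grid (-1 = unknown, 0..100 = occupancy
    probability) as a single-layer P2751Map."""
    occupancy = np.asarray(grid, dtype=np.int8)
    return P2751Map(frame_id, resolution, {"occupancy_2d": occupancy})

# Example: a 3x3 grid with one occupied and one unknown cell.
grid = [[0, 0, 100],
        [0, -1, 0],
        [0, 0, 0]]
converted = gridmap_to_p2751(grid, resolution=0.05)
print(converted.layers["occupancy_2d"].shape)  # (3, 3)
```

A 3D layer such as a voxel grid would simply be another entry in `layers`, which is the flexibility the new standard is described as adding.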
22

Uniform interval normalization : Data representation of sparse and noisy data sets for machine learning

Sävhammar, Simon January 2020 (has links)
The uniform interval normalization technique is proposed as an approach to handle sparse and noisy data. The technique is evaluated by transforming and normalizing the MoodMapper and Safebase data sets, and the predictive capabilities are compared by forecasting each data set with an LSTM model. The results are compared to both the commonly used MinMax normalization technique and MinMax normalization with a time2vec layer. Uniform interval normalization was found to perform better on both the sparse MoodMapper data set and the denser Safebase data set. Future work consists of studying the performance of uniform interval normalization on other data sets and with other machine learning models.
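For context, the sketch below shows the MinMax baseline next to one plausible reading of uniform interval normalization; `uniform_interval_normalize` is a guess at the technique (equal-width intervals with values snapped to interval midpoints), since the abstract does not spell out its definition.

```python
import numpy as np

def minmax_normalize(x):
    """Standard MinMax scaling to [0, 1] -- the baseline compared against."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)

def uniform_interval_normalize(x, n_intervals=10):
    """Hypothetical reading of 'uniform interval normalization': partition
    the value range into equal-width intervals and map each value to its
    interval's midpoint, coarsening noise in sparse data. The thesis may
    define the technique differently."""
    scaled = minmax_normalize(x)
    idx = np.minimum((scaled * n_intervals).astype(int), n_intervals - 1)
    return (idx + 0.5) / n_intervals

readings = [1.0, 1.2, 7.9, 8.1, 3.3]
print(uniform_interval_normalize(readings))
# near-duplicate readings (1.0/1.2 and 7.9/8.1) collapse to the same value
```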
23

The Role of Cognitive Effort in Decision Performance Using Data Representations: A Cognitive Fit Perspective

Bacic, Dinko 05 June 2014 (has links)
No description available.
24

Storing information through complex dynamics in recurrent neural networks

Molter, Colin C 20 May 2005 (has links)
The neural net computer simulations presented here are based on a set of assumptions that have been expressed over the last twenty years in the fields of information processing, neurophysiology and cognitive science. First, neural networks and their dynamical behavior in terms of attractors are the natural way adopted by the brain to encode information. Any information item to be stored in the neural net should be coded in one of the dynamical attractors of the brain and retrieved by stimulating the net so as to trap its dynamics in the desired item's basin of attraction. The second view shared by neural net researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The last assumption is the presence of chaos and the benefit gained from it. Chaos, although very simply produced, inherently possesses an infinite number of cyclic regimes that can be exploited for coding information. Moreover, the network spontaneously and randomly wanders around these unstable regimes, rapidly proposing alternative responses to external stimuli and easily switching from one of these potential attractors to another in response to any incoming stimulus. In this thesis, it is shown experimentally that the more information is stored in robust cyclic attractors, the more chaos appears as a background regime, erratically itinerating among brief appearances of these attractors. Chaos appears to be not the cause but the consequence of the learning; however, it is a helpful consequence that widens the net's encoding capacity. To learn the information to be stored, an unsupervised Hebbian learning algorithm is introduced. By leaving unprescribed the semantics of the attractors to be associated with the input data, promising results have been obtained in terms of storage capacity.
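As a minimal illustration of Hebbian attractor storage and retrieval, here is a fixed-point (Hopfield-style) sketch; the thesis itself concerns cyclic and chaotic attractors in recurrent networks, which this toy example deliberately does not capture.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Local Hebbian rule: each weight is the correlation of its two units'
    activities over the stored patterns (outer-product form)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)  # no self-connections
    return w

def recall(w, probe, steps=20):
    """Iterate the dynamics; the state falls into the attractor whose basin
    of attraction contains the probe."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s

patterns = rng.choice([-1, 1], size=(3, 64))              # three stored items
w = hebbian_weights(patterns)
noisy = patterns[0] * rng.choice([1, 1, 1, -1], size=64)  # flip ~25% of bits
print(np.array_equal(recall(w, noisy), patterns[0]))      # usually True
```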
25

Processus épidémiques sur réseaux dynamiques / Epidemic Processes on Dynamic Networks

Machens, Anna 24 October 2013 (has links)
In this thesis we contribute insights into questions concerning dynamic epidemic processes on data-driven temporal networks. In particular, we investigate the influence of data representations on the outcome of epidemic processes, shedding light on how much detail is necessary in the data representation and how this depends on the spreading parameters. By introducing the matrix of contact-duration distributions, an improvement to the contact-matrix representation, we provide a data representation that could in the future be integrated into multi-scale epidemic models to improve the accuracy of epidemic predictions and the corresponding immunization strategies. We also point out some of the ways dynamic processes are influenced by the temporal properties of the data.
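As a toy sketch of the kind of process studied, the following runs an SIR epidemic directly on a time-ordered contact list, the most detailed data representation; coarser representations such as contact-matrix aggregates would replace the event list. The parameters and contact events are arbitrary illustrations, not taken from the thesis data.

```python
import random
random.seed(1)

def sir_on_contacts(contacts, n_nodes, beta, recovery_time, seed_node=0):
    """SIR process over time-ordered contact events (t, i, j): each contact
    transmits with probability beta; nodes recover a fixed time after
    infection."""
    state = {v: "S" for v in range(n_nodes)}
    state[seed_node] = "I"
    infected_at = {seed_node: 0}
    for t, i, j in sorted(contacts):
        for v, t0 in list(infected_at.items()):  # recoveries due by time t
            if t - t0 >= recovery_time:
                state[v] = "R"
                del infected_at[v]
        for a, b in ((i, j), (j, i)):
            if state[a] == "I" and state[b] == "S" and random.random() < beta:
                state[b] = "I"
                infected_at[b] = t
    return state

contacts = [(0, 0, 1), (1, 1, 2), (2, 2, 3), (3, 0, 3), (4, 3, 4)]
print(sir_on_contacts(contacts, n_nodes=5, beta=0.8, recovery_time=3))
```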
26

De dados à informação: visualizar dimensões do bem-estar humano / From Data to Information: Visualizing Dimensions of Human Well-Being

Moraes, Wallace Alves 16 May 2018 (has links)
Data visualization means the mapping and presentation of data in graphs through the manipulation of visual variables - height, width, frequency, color, position of the graphical form - to inform and communicate underlying information. This research studies how the information, visualization and dissemination of data and statistics on human well-being can raise awareness among civil society and public managers to promote improvements in quality of life and in public policies. Well-being is a multidimensional concept involving all dimensions of life - social gap, stress, early life, social exclusion, poverty, mobility. There are many methods to measure human well-being; it can be assessed through objective and subjective approaches. This research therefore studies human well-being - its origin, meaning, definitions and descriptions - together with the instruments used to measure its indicators. An inventory of indicators is created by indexing and analyzing reports of national and international organizations. The applied research consists of creating a data model of well-being indicators for the city of São Paulo, based on the UN Sustainable Development Goals. Data are collected from government institutions to build a visualization system of dynamic data graphics, available on the internet at <usp.br/mappingwellbeing/visualize>.
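As a toy illustration of encoding two indicators in one chart through distinct visual variables (bar height and color), with entirely invented district data:

```python
import matplotlib.pyplot as plt

# Hypothetical indicator values for three districts; the thesis's real data
# comes from government institutions under the UN SDG framework.
districts = ["District A", "District B", "District C"]
life_expectancy = [72.1, 78.4, 69.8]  # mapped to bar height
income_level = [0.35, 0.80, 0.20]     # mapped to color intensity (0..1)

fig, ax = plt.subplots()
ax.bar(districts, life_expectancy, color=plt.cm.viridis(income_level))
ax.set_ylabel("Life expectancy (years)")
ax.set_title("Two well-being indicators in one chart: height + color")
plt.show()
```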
27

Auditory Information Design

Barrass, Stephen January 1998 (has links)
The prospect of computer applications making "noises" is disconcerting to some. Yet the soundscape of the real world does not usually bother us. Perhaps we only notice a nuisance? This thesis presents an approach for designing sounds that are useful information rather than distracting "noise". The approach is called TaDa because the sounds are designed to be useful in a Task and true to the Data.

Previous researchers in auditory display have identified issues that need to be addressed for the field to progress. The TaDa approach is an integrated approach that addresses an array of these issues through a multifaceted system of methods drawn from HCI, visualisation, graphic design and sound design. A task analysis addresses the issue of usefulness. A data characterisation addresses perceptual faithfulness. A case-based method provides semantic linkage to the application domain. A rule-based method addresses psychoacoustic control. A perceptually linearised sound space allows transportable auditory specifications. Most of these methods have not been used to design auditory displays before, and each has been specially adapted for this design domain.

The TaDa methods have been built into computer-aided design tools that can assist the design of a more effective display, and may allow less experienced designers to make effective use of sounds. The case-based method is supported by a database of examples that can be searched by an information analysis of the design scenario. The rule-based method is supported by a direct-manipulation interface which shows the available sound gamut of an audio device as a 3D coloured object that can be sliced and picked with the mouse. These computer-aided tools are the first of their kind to be developed in auditory display.

The approach, methods and tools are demonstrated in scenarios from the domains of mining exploration, resource monitoring and climatology. These practical applications show that sounds can be useful in a wide variety of information processing activities which have not been explored before. The sounds provide information that is difficult to obtain visually, and improve the directness of interactions by providing additional affordances.
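A small sketch of what "perceptually linearised" can mean in the pitch dimension alone, using the standard mel scale; this is only an illustrative stand-in, since the thesis's sound space spans more psychoacoustic dimensions than pitch.

```python
import numpy as np

def hz_to_mel(f):
    """Standard mel-scale formula."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def data_to_pitch(values, f_lo=200.0, f_hi=2000.0):
    """Map data values to frequencies spaced evenly on the mel scale, so
    equal data steps sound like roughly equal pitch steps."""
    v = np.asarray(values, dtype=float)
    u = (v - v.min()) / (v.max() - v.min())  # normalise to [0, 1]
    mels = hz_to_mel(f_lo) + u * (hz_to_mel(f_hi) - hz_to_mel(f_lo))
    return mel_to_hz(mels)

print(np.round(data_to_pitch([1, 2, 3, 4, 5])))  # evenly spaced in mel, not Hz
```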
28

Large Data Clustering And Classification Schemes For Data Mining

Babu, T Ravindra 12 1900 (has links)
Data Mining deals with extracting valid, novel, easily understood by humans, potentially useful and general abstractions from large data. Data is large when the number of patterns, the number of features per pattern, or both are large. Largeness of data is characterized by a size that is beyond the capacity of a computer's main memory. Data Mining is an interdisciplinary field involving database systems, statistics, machine learning, visualization and computational aspects. The focus of data mining algorithms is scalability and efficiency. Large data clustering and classification is an important activity in Data Mining. Clustering algorithms are predominantly iterative, requiring multiple scans of the dataset, which is very expensive when the data is stored on disk.

In the current work we propose different schemes that have both theoretical validity and practical utility in dealing with such large data. The schemes broadly encompass data compaction, classification, prototype selection, use of domain knowledge and hybrid intelligent systems. The proposed approaches can be broadly classified as (a) compressing the data in a non-lossy manner and clustering as well as classifying the patterns in their compressed form directly through a novel algorithm, (b) compressing the data in a lossy fashion such that a very high degree of compression and abstraction is obtained in terms of 'distinct subsequences', and classifying the data in such compressed form to improve prediction accuracy, (c) obtaining simultaneous prototype and feature selection with the help of incremental clustering, a lossy compression scheme and a rough set approach, (d) demonstrating that prototype selection and data-dependent techniques can reduce the number of comparisons in a multiclass classification scenario using SVMs, and (e) showing that by making use of domain knowledge of the problem and the data under consideration we obtain very high classification accuracy in fewer iterations with AdaBoost.

The schemes have pragmatic utility. The prototype selection algorithm is incremental, requiring a single dataset scan, and has linear time and space requirements. We provide results obtained with a large, high-dimensional handwritten (hw) digit data set. The compression algorithm is based on simple concepts; we demonstrate that classification of the compressed data improves the computation time required by a factor of 5, with the prediction accuracy on both compressed and original data being exactly the same, at 92.47%. With the proposed lossy compression scheme and pruning methods, we demonstrate that even with a reduction of distinct subsequences by a factor of 6 (690 to 106), the prediction accuracy improves. Specifically, with the original data containing 690 distinct subsequences, the classification accuracy is 92.47%; with an appropriate choice of pruning parameters, the number of distinct subsequences reduces to 106 with a corresponding classification accuracy of 92.92%. The best classification accuracy of 93.3% is obtained with 452 distinct subsequences. With the scheme of simultaneous feature and prototype selection, we improved classification accuracy beyond that obtained with kNNC, viz., 93.58%, while significantly reducing the number of features and prototypes, achieving a compaction of 45.1%.

In the case of hybrid schemes based on SVMs, prototypes and a domain-knowledge-based tree (KB-Tree), we demonstrated a reduction in SVM training time by 50% and testing time by about 30% compared to the complete data, and an improvement of classification accuracy to 94.75%. In the case of AdaBoost the classification accuracy is 94.48%, which is better than those obtained with NNC and kNNC on the entire data; the training time is reduced because prototypes are used instead of the complete data. Another important aspect of the work is devising a KB-Tree (with a maximum depth of 4) that classifies 10-category data in just 4 comparisons. In addition to the hw data, we applied the schemes to Network Intrusion Detection data (the 10% dataset of KDDCUP99) and demonstrated that the proposed schemes provide lower overall cost than the reported values.
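As an illustration of single-scan, linear-time prototype selection in this spirit, here is a leader-style sketch; the threshold, data and distance measure are arbitrary, and the thesis's actual algorithms differ in detail.

```python
import numpy as np

def leader_prototypes(X, y, threshold):
    """Incremental, single-scan prototype selection: a pattern farther than
    `threshold` from every existing prototype of its class becomes a new
    prototype. Linear time and space in the dataset size."""
    protos, labels = [], []
    for xi, yi in zip(X, y):
        dists = [np.linalg.norm(xi - p)
                 for p, l in zip(protos, labels) if l == yi]
        if not dists or min(dists) > threshold:
            protos.append(xi)
            labels.append(yi)
    return np.array(protos), np.array(labels)

def nn_classify(x, protos, labels):
    """1-NN classification against the prototypes instead of the full data,
    cutting the number of comparisons."""
    return labels[np.argmin(np.linalg.norm(protos - x, axis=1))]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
P, L = leader_prototypes(X, y, threshold=1.5)
print(len(P), nn_classify(np.array([4.8, 5.2]), P, L))  # few prototypes, label 1
```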
29

Pattern Synthesis Techniques And Compact Data Representation Schemes For Efficient Nearest Neighbor Classification

Pulabaigari, Viswanath 01 1900 (has links) (PDF)
No description available.
30

Editor pasportizace VUT / Pasport Editor of BUT

Bierza, Daniel Unknown Date (has links)
In this work I present the issue of passportization and analyze the current status of the BUT buildings. I describe possible solutions for passportization at BUT in the future, focusing on an analysis of the "obr" format through reverse engineering, followed by an analysis of the acquired data. I describe how the passportization information is stored, and I design a graphic browser and a passportization editor.
