111

GeoSocial: a model for analyzing and grouping populations based on place-visit habits and place semantics

Altmayer, Richard Mateus 12 April 2018 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Information about user navigation behavior on the web is widely used to build behavioral profiles of users and to target advertisements by segment or category. In the same vein, habits based on the places an individual visits in daily life can also be analyzed. This work proposes GeoSocial, a model for grouping the individuals of a population for subsequent analysis of their place-visiting habits. The visiting patterns of the resulting groups represent behavioral characteristics of the population and can help identify market opportunities or assist government decision makers in proposing improvements or changes to a city's infrastructure. Information about the places of interest that users visit is captured as GPS coordinates by a purpose-built mobile application. The application tracks and stores the places an individual visits, lets the user view where and for how long they stay, and can connect them to a social network formed from the similarity between their habits and those of other individuals. The proposed model comprises: i. a user clustering module based on the Affinity Propagation technique; ii. an interactive visualization module for analyzing the groups with the Parallel Coordinates technique. GeoSocial is evaluated in different scenarios using artificially generated data. The evaluation demonstrates the model's potential to adapt to different analysis objectives.
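The abstract describes clustering users by the similarity of their place-visiting habits via Affinity Propagation. As a hedged illustration (not the thesis's actual code; the visit profiles, place categories, and the choice of cosine similarity are assumptions), the pairwise habit similarity that such a clustering step would consume can be sketched as:

```python
import math

# Hypothetical visit profiles: hours per week spent at each place category.
profiles = {
    "ana":   {"gym": 5, "cafe": 2, "office": 40},
    "bruno": {"gym": 6, "cafe": 1, "office": 38},
    "clara": {"park": 10, "museum": 4, "cafe": 3},
}

def cosine(a, b):
    """Cosine similarity between two sparse habit vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def similarity_matrix(profiles):
    """All pairwise similarities, the input Affinity Propagation expects."""
    names = sorted(profiles)
    return {(i, j): cosine(profiles[i], profiles[j])
            for i in names for j in names}

sim = similarity_matrix(profiles)
```

A clustering library's Affinity Propagation implementation would then take this matrix (or its log) as the precomputed similarity input.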
112

Data-driven framework for forecasting sedimentation at culverts

Xu, Haowen 01 May 2019 (has links)
The increasing intensity and frequency of precipitation in recent decades, combined with human interventions in watersheds, have drastically altered the natural regimes of water and sediment transport across the contiguous United States. Sediment-transport concerns include the sustainability of aquatic biology, the stability of river morphology, and the security and vulnerability of riverine structures. In the present context, the concerns relate to accelerated upland erosion (sediment production) and in-stream sediment-transport processes that eventually lead to sediment accumulation at culverts (structures that pass streams under roadways). Culvert sedimentation has become a widespread concern for transportation agencies in the United States, as it bears directly on maintaining normal culvert operations during extreme flows, when these waterway crossings are essential for the communities they serve. Despite its prevalence, current specifications for culvert design do not typically consider sediment transport and deposition. The overall study objective is to systematically identify the likelihood of culvert sedimentation as a function of stream and culvert geometry, along with landscape characteristics (process drivers of culvert sedimentation) in the culvert drainage area. The ideal approach for predicting sedimentation is to track sediment sources dislocated from the watershed, their overland movement, and their delivery into streams using physics-based modeling. However, there are considerable knowledge gaps in addressing culvert sedimentation as an end-to-end process, especially in connecting upland with in-stream processes and simulating sediment deposition at culverts in non-uniform, unsteady flows, while also accounting for vegetation growth in the vicinity of culverts.
It is, therefore, no surprise that existing research, textbooks, and guidelines do not typically provide adequate information on sediment control at culverts. This dissertation presents a generalizable data-driven framework that integrates various machine-learning and visual analytics techniques with GIS in a web-based geospatial platform to explore the complex environmental processes of culvert sedimentation. The framework offers systematic procedures for (1) classifying the culvert sedimentation degree using a time-series of aerial images; (2) identifying key process-drivers from a variety of environmental and culvert structural characteristics through feature selections and interactive visual interfaces; (3) supporting human interactions to perceive empirical relationships between drivers and the culvert sedimentation degree through multivariate Geovisualization and Self-Organizing Map (SOM); and (4) forecasting culvert sedimentation potential across Iowa using machine learning algorithms. Developed using modular design and atop national datasets, the framework is generalizable and extendable, and therefore can be applied to address similar river management issues, such as habitat deterioration and water pollution, at the Contiguous US scale. The platform developed through this Ph.D. study offers a web-based problem-solving environment for a) managing inventory and retrieving culvert structural information; b) integrating diverse culvert-related datasets (e.g., culvert inventory, hydrological and land use data, and observations on the degree of sedimentation in the vicinity of culverts) in a digital repository; c) supporting culvert field inspections and real-time data collection through mobile devices; and d) hosting the data-driven framework for exploring culvert sedimentation drivers and forecasting culvert sedimentation potential across Iowa. 
Insights provided through the data-driven framework can be applied to support decisions for culvert management and sedimentation mitigation, as well as to provide suggestions on parameter selections for the design of these structures.
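The framework forecasts sedimentation potential from environmental and structural drivers using machine learning. As a hedged sketch only (the dissertation's actual models, drivers, and data are not reproduced here; the feature set, the records, and a simple nearest-neighbour stand-in classifier are all assumptions), classifying a culvert's sedimentation degree from drainage-area descriptors might look like:

```python
import math

# Hypothetical training records:
# (drainage_area_km2, channel_slope, pct_cropland) -> sedimentation degree
training = [
    ((2.1, 0.010, 80), "high"),
    ((1.8, 0.012, 75), "high"),
    ((0.6, 0.040, 10), "low"),
    ((0.9, 0.035, 15), "low"),
]

def knn_predict(features, k=3):
    """Majority vote over the k nearest labelled culverts."""
    dists = sorted((math.dist(features, x), label) for x, label in training)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)
```

In practice the features would be min-max scaled first so that no single driver dominates the distance, and a stronger model (e.g. a tree ensemble) would replace the nearest-neighbour vote.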
113

Analysis and Visualisation of Edge Entanglement in Multiplex Networks

Renoust, Benjamin 18 December 2013 (has links) (PDF)
When it comes to the comprehension of complex phenomena, humans need to understand the interactions that lie within them. These interactions are often captured with complex networks. However, this plurality of interactions is often flattened by traditional network models. We propose a new way to look at these phenomena through the lens of multiplex networks, in which catalysts drive the interactions between substrates. To study the entanglement of a multiplex network is to study how its edges intertwine, in other words, how catalysts interact. Our entanglement analysis yields a full set of new objects that complements traditional network approaches: the entanglement homogeneity and intensity of the multiplex network, and the catalyst interaction network, with an entanglement index for each catalyst. These objects are well suited for embedding in a visual analytics framework to enable comprehension of a complex structure. We thus propose a visual setting with coordinated multiple views. We take advantage of mental mapping and visual linking to present information about a multiplex network simultaneously at three levels of abstraction. We complement brushing and linking with a leapfrog interaction that mimics the back-and-forth process involved in users' comprehension. The method is validated and enriched through multiple applications, including assessing group cohesion in document collections and identifying particular associations in social networks.
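The catalyst interaction network described above records how catalysts relate through the substrates they both act on. A minimal sketch, assuming hypothetical multiplex edges and a simple co-occurrence count standing in for the thesis's entanglement index:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical multiplex edges: (substrate_u, substrate_v, catalyst).
edges = [
    ("u1", "u2", "email"),
    ("u1", "u2", "phone"),
    ("u2", "u3", "email"),
    ("u2", "u3", "meeting"),
    ("u3", "u4", "phone"),
]

# Group the catalysts by the substrate pair they mediate.
by_pair = defaultdict(set)
for u, v, c in edges:
    by_pair[frozenset((u, v))].add(c)

# Two catalysts "interact" when they co-occur on the same substrate pair;
# the counts form the weighted catalyst interaction network.
interactions = defaultdict(int)
for cats in by_pair.values():
    for a, b in combinations(sorted(cats), 2):
        interactions[(a, b)] += 1
```

Here "email" interacts with both "phone" and "meeting" because it shares a substrate pair with each, while "meeting" and "phone" never co-occur.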
114

Visual analytics for maritime anomaly detection

Riveiro, María José January 2011 (has links)
The surveillance of large sea areas typically involves the analysis of huge quantities of heterogeneous data. To support the operator monitoring maritime traffic, identifying anomalous behavior or situations that might need further investigation may reduce the operator's cognitive load. While existing mining applications support the identification of anomalies, autonomous anomaly detection systems are rarely used for maritime surveillance. Anomaly detection is normally a complex task that can hardly be solved by purely visual or purely computational methods. This thesis suggests and investigates the adoption of visual analytics principles to support the detection of anomalous vessel behavior in maritime traffic data. This adoption involves studying the analytical reasoning process that needs to be supported, using combined automatic and visualization approaches to support that process, and evaluating the integration. The analysis of data gathered during interviews and participant observations at various maritime control centers, together with the inspection of video recordings of real anomalous incidents, led to a characterization of the analytical reasoning process that operators go through when monitoring traffic. These results are complemented with a literature review of anomaly detection techniques applied to sea traffic. A particular statistics-based technique is implemented, tested, and embedded in a proof-of-concept prototype that allows user involvement in the detection process. A quantitative evaluation carried out with the prototype reveals that participants who used the visualization of normal behavioral models outperformed the group without such aid.
The qualitative assessment shows that domain experts are positive toward automatic support and the visualization of normal behavioral models, since these aids may reduce reaction time as well as increase trust in and comprehensibility of the system. Based on the lessons learned, this thesis provides recommendations for designers and developers of maritime control and anomaly detection systems, as well as guidelines for carrying out evaluations of visual analytics environments. / Maria Riveiro is also affiliated with the Informatics Research Centre, Högskolan i Skövde / Information Fusion Research Program, Högskolan i Skövde
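The statistics-based detection embedded in the prototype is not reproduced here; as a hedged stand-in (the sea areas, model parameters, and the simple z-score rule are all assumptions), flagging an anomalous vessel speed against a learned model of normal behavior could look like:

```python
# Hypothetical normal-behaviour model: (mean, std) of vessel speed in knots,
# learned per sea area from historical traffic data.
normal_model = {"area_A": (12.0, 2.0), "area_B": (6.0, 1.5)}

def is_anomalous(area, speed, threshold=3.0):
    """Flag a speed observation whose z-score exceeds the threshold."""
    mean, std = normal_model[area]
    return abs(speed - mean) / std > threshold
```

In a visual analytics setting, the same normal model would also be rendered to the operator, so that a flagged observation can be judged against the visualized expectation rather than accepted blindly.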
115

Development of a geovisual analytics environment using parallel coordinates with applications to tropical cyclone trend analysis

Steed, Chad A. January 2008 (has links)
Thesis (Ph.D.)--Mississippi State University. Department of Computer Science and Engineering. / Title from title screen. Includes bibliographical references.
116

Sense, signal and software : a sensemaking analysis of meaning in early warning systems

Goosen, Ryno Johannes 12 1900 (has links)
Thesis (MPhil)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: This thesis considers the contribution that Karl Weick's notion of sensemaking can make to an improved understanding of weak signals, cues, warning analysis, and software within early warning systems. Weick's sensemaking provides a framework through which these concepts are discussed and analysed. The concepts of weak signals, early warning systems, and Visual Analytics are investigated from current business and formal intelligence viewpoints. Intelligence failure has been characteristic of events such as 9/11, the recent financial crisis triggered by the collapse of Lehman Brothers, and the so-called Arab Spring. Popular methodologies such as early warning analysis, weak-signal analysis, and environmental scanning, employed in both the business and government spheres, failed to provide adequate early warning of many of these events. These failures warrant renewed attention to what improvements can be made and how new technology can enhance early warning analysis. Chapter One is introductory: it states the research question and methodology and delimits the thesis. Chapter Two sets the scene by investigating current conceptions of the main constructs. Chapter Three explores Weick's theory of sensemaking and provides the analytical framework against which these concepts are analysed in Chapter Four. The emphasis is on the extent to which frames are integrated into the analysis phase of early warning systems and how frames may be incorporated into the theoretical foundation of Visual Analytics to enhance warning systems. The findings of this thesis suggest that Weick's conceptualisation of sensemaking brings conceptual clarity to weak-signal analysis: Weick's "seed" metaphor, representing the embellishment and elaboration of cues, epitomises the progressive nature of weak signals.
The importance of Weick's notion of belief-driven sensemaking, specifically the role of expectation in the elaboration of frames, as discussed and confirmed by researchers in different fields, is a core feature underlined in this thesis. The centrality of the act of noticing, and the effect that framing and re-framing have on it, is highlighted as primary in not only making sense of warning signals but identifying them in the first place. This ties in with the valuable contribution Weick's sensemaking makes to understanding the effect that a specification has on identifying transients and signals in the resulting visualization in Visual Analytics software.
117

Bridging Cyber and Physical Programming Classes: An Application of Semantic Visual Analytics for Programming Exams

January 2016 (has links)
abstract: With the advent of Massive Open Online Courses (MOOCs), educators have the opportunity to collect data from students and use it to derive insightful information about them. Specifically, for programming-based courses, the ability to identify the specific areas or topics that need more attention from students can be of immense help. But the majority of traditional, non-virtual classes lack the ability to uncover such information, which could serve as feedback on the effectiveness of teaching. In the majority of schools, paper exams and assignments provide the only form of assessment for measuring students' success in achieving the course objectives. The overall grade obtained in paper exams and assignments need not present a complete picture of a student's strengths and weaknesses. In part, this can be addressed by incorporating research-based technology into classrooms to obtain real-time updates on students' progress. But introducing technology to provide real-time, class-wide engagement involves a considerable investment, both academically and financially, which hinders its adoption and thereby the ideal, technology-enabled classroom. With increasing class sizes, it is becoming impossible for teachers to keep persistent track of each student's progress and to provide personalized feedback. Can we provide technology support without adding more burden to the existing pedagogical approach? Can we enable a semantic enrichment of exams that translates into insight about students' understanding of the topics taught in class? Can we provide feedback to students that goes beyond bare numbers and reveals the areas that need their focus? In this research I focus on bringing insightful analysis to paper exams with a less intrusive learning-analytics approach that taps into generic classrooms with minimal introduction of technology.
Specifically, the work focuses on automatically indexing programming exam questions with ontological semantics. The thesis also focuses on designing and evaluating a novel semantic visual analytics suite for in-depth course monitoring. By visualizing the semantic information to illustrate the areas that need a student's focus, and by enabling teachers to visualize class-level progress, the system provides richer feedback to both sides for improvement. / Dissertation/Thesis / Masters Thesis Computer Science 2016
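The indexing of exam questions with ontological semantics can be caricatured with keyword matching; the mini-ontology and matching rule below are assumptions for illustration, not the thesis's actual method, which uses richer semantics:

```python
import re

# Hypothetical mini-ontology: programming topics and their indicator keywords.
ontology = {
    "recursion": {"recursive", "recursion"},
    "loops": {"for", "while", "loop", "iterate"},
    "pointers": {"pointer", "dereference", "address"},
}

def index_question(text):
    """Tag an exam question with the ontology topics its wording indicates."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(t for t, kws in ontology.items() if kws & tokens)
```

Aggregating these per-question tags against per-question scores is what lets a visual analytics layer show which topics a student, or a whole class, is struggling with.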
118

Real-time Distributed Computation of Formal Concepts and Analytics

De Alburquerque Melo, Cassio 19 July 2013 (has links)
The advances in technology for the creation, storage, and dissemination of data have dramatically increased the need for tools that effectively provide users with means of identifying and understanding relevant information. Despite the computing opportunities that distributed frameworks such as Hadoop offer, the need for such means has only grown. Formal Concept Analysis (FCA) may play an important role in this context by bringing more intelligent means to the analysis process. FCA provides an intuitive understanding of the generalization and specialization relationships among objects and their attributes in a structure known as a concept lattice. The present thesis addresses the problem of mining and visualising concepts over a data stream. The proposed approach comprises several distributed components that compute concepts from a transaction base, filter and transform the data, store it, and provide analytic features for exploring the data visually. The novelty of our work consists of: (i) a distributed processing and analysis architecture for mining concepts in real time; (ii) the combination of FCA with visual analytics visualisation and exploration techniques, including association-rule analytics; (iii) new algorithms for condensing and filtering conceptual data; and (iv) a system that implements all the proposed techniques, called Cubix, and its use cases in biology, complex system design, and space applications.
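A formal concept pairs a maximal set of objects (the extent) with exactly the attributes they all share (the intent). A naive sketch over a toy context (the context and the brute-force enumeration are assumptions; real FCA implementations, and the thesis's distributed real-time version, use far more efficient algorithms):

```python
from itertools import combinations

# Hypothetical formal context: object -> set of attributes.
context = {
    "frog": {"aquatic", "terrestrial"},
    "fish": {"aquatic"},
    "dog":  {"terrestrial"},
}

def intent(objs):
    """Attributes shared by all given objects (all attributes if none given)."""
    sets = [context[o] for o in objs]
    if not sets:
        return {a for attrs in context.values() for a in attrs}
    return set.intersection(*sets)

def extent(attrs):
    """Objects possessing all given attributes."""
    return {o for o, oa in context.items() if attrs <= oa}

def concepts():
    """Brute force: close every object subset into a (extent, intent) concept."""
    found = set()
    objs = list(context)
    for r in range(len(objs) + 1):
        for combo in combinations(objs, r):
            b = intent(combo)  # the intent of any object set is closed
            found.add((frozenset(extent(b)), frozenset(b)))
    return found

cs = concepts()
```

For this context the enumeration yields four concepts, including the top concept (all objects, no shared attribute) and ({frog}, {aquatic, terrestrial}); ordering them by extent inclusion gives the concept lattice the abstract refers to.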
119

Spatial Multimedia Data Visualization

JAMONNAK, SUPHANUT 30 November 2021 (has links)
No description available.
120

Visual Analytics for Decision Making in Performance Evaluation

Jieqiong Zhao (8791535) 05 May 2020 (has links)
Performance analysis often considers numerous factors contributing to performance, and the relative importance of these factors is evolving based on dynamic conditions and requirements. Investigating large numbers of factors and understanding individual factors' predictability within the ultimate performance are challenging tasks. A visual analytics approach that integrates interactive analysis, novel visual representations, and predictive machine learning models can provide new capabilities to examine performance effectively and thoroughly. Currently, only limited research has been done on the possible applications of visual analytics for performance evaluation. In this dissertation, two specific types of performance analysis are presented: (1) organizational employee performance evaluation and (2) performance improvement of machine learning models with interactive feature selection. Both application scenarios leverage the human-in-the-loop approach to assist the identification of influential factors. For organizational employee performance evaluation, a novel visual analytics system, MetricsVis, is developed to support exploratory organizational performance analysis. MetricsVis incorporates hybrid evaluation metrics that integrate quantitative measurements of observed employee achievements and subjective feedback on the relative importance of these achievements to demonstrate employee performance at and between multiple levels regarding the organizational hierarchy. MetricsVis II extends the original system by including actual supervisor ratings and user-guided rankings to capture preferences from users through derived weights. Comparing user preferences with objective employee workload data enables users to relate user evaluation to historical observations and even discover potential bias. 
For interactive feature selection and model evaluation, a visual analytics system, FeatureExplorer, allows users to refine and diagnose a model iteratively by selecting features based on their domain knowledge, interchangeable features, feature importance, and the resulting model performance. FeatureExplorer enables users to identify stable, trustable, and credible predictive features that contribute significantly to a prediction model.
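MetricsVis's hybrid metrics combine observed achievements with subjective weights on their relative importance. A minimal sketch of such weight-driven ranking (the metrics, weights, and min-max normalisation are assumptions for illustration, not the system's actual scheme):

```python
# Hypothetical employee metrics and user-derived importance weights (sum to 1).
metrics = {
    "alice": {"cases_closed": 30, "satisfaction": 4.5, "training_hours": 10},
    "bob":   {"cases_closed": 45, "satisfaction": 3.8, "training_hours": 2},
}
weights = {"cases_closed": 0.6, "satisfaction": 0.3, "training_hours": 0.1}

def normalise(metrics):
    """Min-max scale each metric to [0, 1] across employees."""
    out = {e: {} for e in metrics}
    for m in weights:
        vals = [v[m] for v in metrics.values()]
        lo, hi = min(vals), max(vals)
        for e in metrics:
            out[e][m] = (metrics[e][m] - lo) / (hi - lo) if hi > lo else 1.0
    return out

def rank(metrics):
    """Order employees by their weighted, normalised composite score."""
    norm = normalise(metrics)
    scores = {e: sum(w * norm[e][m] for m, w in weights.items())
              for e in metrics}
    return sorted(scores, key=scores.get, reverse=True)
```

Interactively adjusting the weights and watching the ranking change is exactly the kind of preference exploration the system supports; comparing the derived ranking with supervisor ratings is what surfaces potential bias.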
