191 |
Monitoramento on-line em sistemas distribuídos : mecanismo hierárquico para coleta de dados / On-line monitoring of distributed systems: a hierarchical mechanism for data collection. Tesser, Rafael Keller, January 2011.
This work proposes a hierarchical model for collecting monitoring data from distributed systems. Its goal is to allow on-line analysis of the behavior of distributed systems and applications, and the means chosen to perform this analysis is visualization of the collected information. The dissertation first presents an overview of the monitoring of distributed systems, including aspects specific to Grid monitoring, and then analyzes a set of existing monitoring tools. Next, the proposed model is presented: it is composed of local collectors, a hierarchical structure of aggregators, and clients. The model uses push-based data transmission and provides a subscription mechanism for the collectors. A prototype of the proposed collection model was implemented and used to build a prototype on-line monitoring tool, in which the collected data is fed to DIMVisual, a data integration model whose output is formatted to serve as input for a visualization tool. For visualization, the prototype uses TRIVA, which receives the integrated data from DIMVisual; this tool was modified to generate a visualization that is updated in an on-line fashion. To evaluate the model, a set of experiments was performed with the prototype. One experiment measured the time spent sending data through different hierarchies, also varying the number and configuration of the collectors. Another experiment evaluated the capacity of the implemented client to process the received data and generate its visualization.
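The abstract gives no implementation details for the collector/aggregator hierarchy, so the sketch below is only a minimal Python illustration of the idea it describes: local collectors push samples to an aggregator, aggregators forward the data up the hierarchy, and clients subscribe to receive it as it arrives. All class and method names are hypothetical and are not taken from the prototype.

```python
# Minimal sketch of hierarchical, push-based monitoring collection.
# Names (Aggregator, LocalCollector, ...) are illustrative only.
from collections import defaultdict
from typing import Callable, Dict, List


class Aggregator:
    """Receives monitoring records and pushes them to subscribers and upward."""

    def __init__(self, parent: "Aggregator | None" = None):
        self.parent = parent
        self.subscribers: List[Callable[[List[dict]], None]] = []

    def subscribe(self, callback: Callable[[List[dict]], None]) -> None:
        # Clients (e.g., a visualization front end) register here.
        self.subscribers.append(callback)

    def push(self, records: List[dict]) -> None:
        # Push model: records are forwarded as soon as they arrive.
        for callback in self.subscribers:
            callback(records)
        if self.parent is not None:
            self.parent.push(records)


class LocalCollector:
    """Gathers raw samples on one node and pushes them to its aggregator."""

    def __init__(self, node: str, aggregator: Aggregator):
        self.node = node
        self.aggregator = aggregator

    def sample(self, metric: str, value: float, timestamp: float) -> None:
        self.aggregator.push(
            [{"node": self.node, "metric": metric, "value": value, "t": timestamp}]
        )


# Two-level hierarchy: a site-level aggregator feeds a root aggregator,
# and an on-line client subscribes at the root.
root = Aggregator()
site_a = Aggregator(parent=root)
collector = LocalCollector("node-01", site_a)

latest: Dict[str, float] = defaultdict(float)
root.subscribe(lambda recs: latest.update({r["metric"]: r["value"] for r in recs}))

collector.sample("cpu_load", 0.73, timestamp=0.0)
print(dict(latest))  # {'cpu_load': 0.73}
```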
|
192 |
Mineração e visualização de coleções de séries temporais / Mining and visualization of time series collections. Alencar, Aretha Barbosa, 10 December 2007.
Time series analysis poses many challenges to professionals in a wide range of domains. Several visualization solutions integrated with mining algorithms have been proposed for exploratory tasks on time series collections, but as data sets grow large these alternatives fail to convey a good association between similar time series. In this work, we introduce a tool for exploratory visualization and mining of large time series collections that adopts a visual representation based on dissimilarity measures between series. This representation is created with fast projection techniques, so that the time series can be viewed in two-dimensional spaces. Various types of visual attributes and connections in the resulting graph can be applied to support exploration, and data mining tasks such as classification can be used to support the search for patterns. The resulting visualizations have proved very useful for identifying groups of series with similar behavior, which are mapped to nearby neighborhoods in the two-dimensional space; visual clusters of elements, as well as outliers, are easily identifiable. Case studies on several data sets are presented to validate the tool. One of them explores streamflow data from hydroelectric power plants in Brazil, a strategic application for energy planning.
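The abstract mentions fast projection techniques driven by dissimilarities between series without naming them; the sketch below uses classical multidimensional scaling on a precomputed Euclidean distance matrix purely as a stand-in, which is an assumption rather than the technique used in the tool.

```python
# Sketch: project a collection of time series into 2D from pairwise
# dissimilarities so that similar series land near each other. MDS on
# Euclidean distances is only a stand-in for the fast projection
# techniques mentioned in the abstract.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Toy collection: 30 noisy sine series and 30 noisy ramp series, length 100.
t = np.linspace(0, 2 * np.pi, 100)
sines = np.sin(t) + 0.1 * rng.standard_normal((30, 100))
ramps = np.linspace(0, 1, 100) + 0.1 * rng.standard_normal((30, 100))
series = np.vstack([sines, ramps])

# Pairwise dissimilarity matrix between whole series.
dist = squareform(pdist(series, metric="euclidean"))

# 2D layout computed directly from the precomputed dissimilarities.
xy = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)

print(xy.shape)  # (60, 2): one point per series, ready to plot and color by group
```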
|
193 |
Filmes nanoestruturados aplicados em biossensores para detecção precoce de câncer de pâncreas / Nanostructured films applied in biosensors for early diagnosis of pancreatic cancer. Soares, Andrey Coatrini, 16 February 2017.
The need for analytical devices to detect cancer at early stages has motivated research into low-cost nanomaterials in which synergy is sought to achieve high sensitivity and selectivity in biosensors. In this work, 11-mercaptoundecanoic acid (11-MUA), the natural polymer chitosan, and the protein concanavalin A (Con A) were used as platforms to immobilize anti-CA 19-9 antibodies and build biosensors for detecting pancreatic cancer, employing the self-assembled monolayer (SAM) and layer-by-layer (LbL) film techniques. Characterization with spectroscopic and gravimetric techniques allowed us to select the architectures with chitosan/Con A 2:1 (sensor A), chitosan/Con A 1:1 (sensor B), and 11-MUA (sensor E) as the most favorable for immobilization of the anti-CA 19-9 antibody. Using impedance spectroscopy, the biosensors were capable of detecting low concentrations of the CA 19-9 biomarker, with limits of detection in the ranges 0.17-0.69 U/mL, 0.31-0.91 U/mL, and 0.56-0.91 U/mL for sensors A, B, and E, respectively. These limits are sufficient to detect pancreatic cancer at early stages. The selectivity of the biosensors was inferred from a series of control experiments with SW 620 and HT-29 cell samples, uric acid, ascorbic acid, glucose, mannose, serum, and p24 antigen, indicating the absence of non-specific interference. With information visualization techniques, these samples could be easily distinguished in a visual map and classified according to their CA 19-9 concentration. Furthermore, the selectivity could be quantified through the silhouette coefficient, with values of 0.853, 0.861, and 0.897 for sensors A, B, and E, respectively. This specificity was confirmed by PM-IRRAS measurements, by monitoring the amide I and II bands at 1566 cm⁻¹ and 1650 cm⁻¹, indicating a specific antibody-antigen interaction that could be modeled with a Langmuir-Freundlich isotherm. When the chitosan/Con A matrix was replaced by a SAM monolayer, or when a biomarker of higher molecular weight was employed, adsorption was explained by a combination of two Langmuir-Freundlich processes. In conclusion, these low-cost biosensors may be effective for diagnosis and prognosis, and may be implemented in the Brazilian national health system with technology transfer.
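For reference, the Langmuir-Freundlich (Sips) isotherm mentioned above has the standard form shown below; the parameter values fitted in the thesis are not given in the abstract, so only the general expression is reproduced, and the two-process case corresponds to a weighted sum of two such terms.

```latex
% Standard Langmuir-Freundlich (Sips) isotherm: fractional surface coverage
% \theta as a function of analyte concentration c, with affinity constant K
% and heterogeneity exponent n (n = 1 recovers the plain Langmuir isotherm).
\[
  \theta(c) = \frac{(K c)^{n}}{1 + (K c)^{n}}
\]
% Two-process adsorption (the SAM / high-molecular-weight case above) would be
% modeled as \theta(c) = w_1 \theta_1(c) + w_2 \theta_2(c) with distinct (K_i, n_i).
```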
|
194 |
Implementing Service Model Visualizations : Utilizing Hyperbolic Tree Structures for Visualizing Service Models in Telecommunication Networks. Lundgren, Andreas, January 2009.
This paper describes the design, implementation and evaluation of HyperSALmon, a Java™ open source prototype for visualizing service models in telecommunication networks. Efficient browsing and graphical monitoring of service models built with SALmon, a service modeling language and monitoring engine (Leijon et al., 2008), calls for an interactive GUI that implements a visualization of the service model; this is what HyperSALmon is intended to provide. The prototype has been designed in accordance with suggestions derived from a research report on visualization techniques (Sehlstedt, 2008) appropriate for displaying service model data. In addition to these suggestions, domain experts at Data Ductus Nord AB have expressed a need for further features; some of their suggestions are derived from research documents (Leijon et al., 2008; Wallin and Leijon, 2007, 2006), while others were stated orally in direct relation to the prototype implementation work. The main visualization proposal is to use tree structures. Both traditional tree structures and hyperbolic tree structures have therefore been utilized, with the main navigation taking place in the hyperbolic tree view. I also provide a discussion of the problems that arise when implementing a prototype for service model visualization with open source frameworks while meeting the requirements set by the service model network architecture, the domain experts and the suggestions in the research report (Sehlstedt, 2008, pages 51-52). Finally, I present the conclusions drawn from the prototype implementation, illustrating its potential strengths and weaknesses, and suggest possible improvements and further development.
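For readers unfamiliar with hyperbolic tree views, the toy Python sketch below lays a tree out in the Poincaré disk by recursive wedge subdivision, so that deep subtrees crowd toward the rim while the focus stays near the centre. It only illustrates the focus+context idea; it is not the layout algorithm used by HyperSALmon or by production hyperbolic browsers, and every name in it is invented.

```python
# Toy hyperbolic-style tree layout: each node gets an angular wedge and a
# Poincare-disk radius tanh(k * depth), so deep subtrees crowd toward the
# rim while the root stays near the centre. Illustration only.
import math
from typing import Dict, List, Tuple

Tree = Dict[str, List[str]]  # node name -> list of child names


def layout(tree: Tree, root: str, k: float = 0.45) -> Dict[str, Tuple[float, float]]:
    positions: Dict[str, Tuple[float, float]] = {}

    def place(node: str, depth: int, a0: float, a1: float) -> None:
        angle = 0.5 * (a0 + a1)
        r = math.tanh(k * depth)  # hyperbolic-style radial compression, always < 1
        positions[node] = (r * math.cos(angle), r * math.sin(angle))
        children = tree.get(node, [])
        if children:
            step = (a1 - a0) / len(children)
            for i, child in enumerate(children):
                place(child, depth + 1, a0 + i * step, a0 + (i + 1) * step)

    place(root, 0, 0.0, 2 * math.pi)
    return positions


# Example: a small service-model-like tree.
tree = {"service": ["db", "web"], "web": ["cache", "lb"], "db": ["replica"]}
for name, (x, y) in layout(tree, "service").items():
    print(f"{name:8s} -> ({x:+.3f}, {y:+.3f})")
```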
|
195 |
Perceptually Motivated Constraints on 3D Visualizations. Forsell, Camilla, January 2007.
This thesis addresses some important characteristics of human visual perception and their implications for three-dimensional (3D) information visualization. The effort can be divided into two parts. First, findings from vision science are explored and validated; as a starting point, perceptually motivated evidence about what constitutes an effective and efficient method for mapping data is compiled. Second, the knowledge obtained is used to create candidate visualizations and to demonstrate the predictive power of the findings.

Results indicate a general difficulty in conveying metric, i.e. quantitative, information in 3D visualizations. Structure as defined by Euclidean geometry is not perceived with accuracy, and information encoded by such distinctions is misunderstood or overlooked. On the other hand, qualitative properties as defined by affine geometry are salient and are perceived with accuracy (paper I). These findings also apply to two-dimensional (2D) visualizations when these need to be rapidly examined (paper II).

A novel method (3D surface glyphs) for abstract multivariate data sets was developed to investigate the possible merit of encoding information by qualitative distinctions (paper III). Evaluations showed that the information conveyed was successfully utilized and that these types of glyphs have great potential. The study also illustrated the predictive power of the earlier findings. These issues were further demonstrated by showing that 3D perspective displays are unaffected by distortions in the data when the patterns displayed are defined by affine properties (paper IV). In addition, a new metric for measuring the efficiency of visualizations is presented (paper III).

It is concluded that as long as visualizations are specified by qualitative properties, they can most probably be used effectively and efficiently. The need for user studies to determine if, when and how to choose a certain visualization technique for a given task is thereby significantly reduced.
|
197 |
Interactive Visualizations of Natural Language. Collins, Christopher, 06 August 2010.
While linguistic skill is a hallmark of humanity, the increasing volume of linguistic data each of us faces is causing individual and societal problems — ‘information overload’ is a commonly discussed condition. Tasks such as finding the most appropriate information online, understanding the contents of a personal email repository, and translating documents from another language are now commonplace. These tasks need not cause stress and feelings of overload: the human intellectual capacity is not the problem. Rather, the computational interfaces to linguistic data are problematic — there exists a Linguistic Visualization Divide in the current state of the art. Through five design studies, this dissertation combines sophisticated natural language processing algorithms with information visualization techniques grounded in evidence of human visuospatial capabilities.

The first design study, Uncertainty Lattices, augments real-time computer-mediated communication, such as cross-language instant messaging chat and automatic speech recognition. By providing explicit indications of algorithmic confidence, the visualization enables informed decisions about the quality of computational outputs.

Two design studies explore the space of content analysis. DocuBurst is an interactive visualization of document content, which spatially organizes words using an expert-created ontology. Broadening from single documents to document collections, Parallel Tag Clouds combine keyword extraction and coordinated visualizations to provide comparative overviews across subsets of a faceted text corpus.

Finally, two studies address visualization for natural language processing research. The Bubble Sets visualization draws secondary set relations around arbitrary collections of items, such as a linguistic parse tree. From this design study we propose a theory of spatial rights to consider when assigning visual encodings to data. Expanding considerations of spatial rights, we present a formalism to organize the variety of approaches to coordinated and linked visualization, and introduce VisLink, a new method to relate and explore multiple 2D visualizations in 3D space. Inter-visualization connections allow for cross-visualization queries and support high-level comparison between visualizations.

From the design studies we distill challenges common to visualizing language data, including maintaining legibility, supporting detailed reading, addressing data scale challenges, and managing problems arising from semantic ambiguity.
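Parallel Tag Clouds, as summarized above, depend on extracting keywords that distinguish one facet of a corpus from the rest. The Python sketch below scores terms with a simple smoothed frequency ratio per facet; this scoring function is a stand-in for illustration and not necessarily the measure used in the dissertation.

```python
# Sketch: score how characteristic each term is of one facet of a corpus
# relative to the other facets, as the input to a Parallel Tag Clouds-style
# comparative view. The smoothed frequency-ratio score is illustrative only.
from collections import Counter
from typing import Dict, List


def facet_keywords(facets: Dict[str, List[str]], top_n: int = 2) -> Dict[str, List[str]]:
    counts = {name: Counter(tokens) for name, tokens in facets.items()}
    totals = {name: sum(c.values()) for name, c in counts.items()}
    keywords: Dict[str, List[str]] = {}
    for name, c in counts.items():
        other = Counter()
        for other_name, oc in counts.items():
            if other_name != name:
                other.update(oc)
        other_total = max(sum(other.values()), 1)

        def score(term: str) -> float:
            p_in = c[term] / totals[name]
            p_out = (other[term] + 1) / (other_total + len(c))  # add-one smoothing
            return p_in / p_out

        keywords[name] = sorted(c, key=score, reverse=True)[:top_n]
    return keywords


corpus = {
    "facet_A": "grid monitoring latency grid collector latency".split(),
    "facet_B": "tagging metadata learner tagging metadata learner".split(),
}
print(facet_keywords(corpus))  # distinctive terms per facet
```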
|
198 |
Representing information using parametric visual effects on groupware avatars. Dielschneider, Shane, 05 February 2010.
Parametric visual effects such as texture generation and shape grammars can be controlled to produce visually perceptible variation. This variation can be rendered on avatars in groupware systems in real time to represent user information in online environments. This type of extra information has been shown to enrich recognition and characterization, but has previously been limited to iconic representations. Modern, highly graphical virtual worlds require more naturalistic and stylistically consistent techniques to represent information.

A number of different parametric texture generation techniques are considered and a set of texture characteristics is developed. The variations of these texture characteristics are examined in a study to determine how well users can recognize the visual changes in each. Another study determines how much screen space is required for users to recognize these visual changes in a subset of the texture characteristics.

Additionally, a shape generation system is developed to illustrate how shape grammars and L-systems can be used to represent information using a space ship metaphor.
These parametric visual effects are implemented in an example prototype system using space ships. The prototype is a complete, functioning groupware application developed in XNA that utilizes many parametric texture and shape effects.
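Since the abstract points to shape grammars and L-systems as the mechanism for shape generation, the Python sketch below expands a classic bracketed L-system in which a single data value controls the number of rewriting iterations, so larger values yield visibly more elaborate shapes. The rule set and the data-to-iterations mapping are illustrative assumptions, not the grammar used in the prototype.

```python
# Sketch: a bracketed L-system whose expansion depth is driven by a data
# value, so a user attribute maps to visible shape complexity.
RULES = {"F": "F[+F]F[-F]F"}  # classic branching rewrite rule
AXIOM = "F"


def expand(axiom: str, rules: dict, iterations: int) -> str:
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s


def iterations_for(value: float, lo: float = 0.0, hi: float = 1.0, max_iter: int = 4) -> int:
    """Map a normalized data value onto an expansion depth between 1 and max_iter."""
    value = min(max(value, lo), hi)
    return 1 + round((value - lo) / (hi - lo) * (max_iter - 1))


# A user attribute (say, activity level in [0, 1]) controls shape complexity.
for activity in (0.1, 0.5, 0.9):
    s = expand(AXIOM, RULES, iterations_for(activity))
    print(f"activity={activity}: {len(s)} symbols, e.g. {s[:30]}...")
```

The resulting symbol string would then be interpreted by a turtle-graphics or mesh-generation step to produce the on-screen shape.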
|
199 |
Collaborative tagging : folksonomy, metadata, visualization, e-learning, thesis. Bateman, Scott, 12 December 2007.
Collaborative tagging is a simple and effective method for organizing and sharing web resources using human-created metadata. It has arisen from the need for an efficient method of personal organization as the number of digital resources in our everyday lives increases. While tagging has become a proven organization scheme through its popularity and widespread use on the Web, little is known about its implications and how it may effectively be applied in different situations, because tagging has evolved through several iterations of use on social software websites rather than through a scientific or engineering design process. The research presented in this thesis, through investigations in the domain of e-learning, seeks to understand more about the scientific nature of collaborative tagging through a number of human subject studies. While broad in scope, touching on issues in human-computer interaction, knowledge representation, Web system architecture, e-learning, metadata, and information visualization, this thesis focuses on how collaborative tagging can supplement the growing metadata requirements of e-learning. I conclude by examining how the findings may be used in future research that draws on the emergent social networks of social software to automatically adapt to the needs of individual users.
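As a concrete reminder of the data structure underlying collaborative tagging, here is a minimal Python sketch of a folksonomy stored as (user, resource, tag) triples with the two lookups most tagging tools need; it is generic and not specific to any system studied in the thesis.

```python
# Minimal folksonomy sketch: (user, resource, tag) triples with tag-based
# retrieval and a per-resource tag cloud. Generic illustration only.
from collections import Counter, defaultdict


class TagStore:
    def __init__(self):
        self.triples = []                        # (user, resource, tag)
        self.by_tag = defaultdict(set)           # tag -> set of resources
        self.by_resource = defaultdict(Counter)  # resource -> tag frequencies

    def tag(self, user: str, resource: str, tag: str) -> None:
        self.triples.append((user, resource, tag))
        self.by_tag[tag].add(resource)
        self.by_resource[resource][tag] += 1

    def resources_for(self, tag: str) -> set:
        return self.by_tag[tag]

    def tag_cloud(self, resource: str) -> list:
        """Tags for a resource, most frequent first (the usual tag-cloud input)."""
        return self.by_resource[resource].most_common()


store = TagStore()
store.tag("alice", "lecture-3.pdf", "visualization")
store.tag("bob", "lecture-3.pdf", "visualization")
store.tag("bob", "lecture-3.pdf", "e-learning")
print(store.resources_for("visualization"))  # {'lecture-3.pdf'}
print(store.tag_cloud("lecture-3.pdf"))      # [('visualization', 2), ('e-learning', 1)]
```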
|
200 |
A data-assisted approach to supporting instructional interventions in technology enhanced learning environments. December 2012.
The design of intelligent learning environments requires significant up-front resources and expertise. These environments generally maintain complex and comprehensive knowledge bases describing pedagogical approaches, learner traits, and content models. This has limited the influence of these technologies in higher education, which instead largely uses learning content management systems in order to deliver non-classroom instruction to learners.
This dissertation puts forth a data-assisted approach to embedding intelligence within learning environments. In this approach, instructional experts are provided with summaries of the activities of learners who interact with technology enhanced learning tools. These experts, who may include instructors, instructional designers, educational technologists, and others, use this data to gain insight into the activities of their learners. These insights lead the experts to form instructional interventions that can be used to enhance the learning experience. The novel aspect of this approach is that the actions of the intelligent learning environment are no longer just those of the learners and software constructs, but also those of the educational experts who may be supporting the learning process.
The kinds of insights and interventions that come from applying the data-assisted approach vary with the domain being taught, the epistemology and pedagogical techniques being employed, and the particulars of the cohort being instructed. In this dissertation, three investigations using the data-assisted approach are described. The first demonstrates the effects of making novel sociogram-based visualizations of online asynchronous discourse available to instructors. Made aware of the discussion habits of both themselves and their learners, instructors are better able to measure the effect of their teaching practice. This enables them to change their activities in response to the social networks that form among their learners, allowing them to react to deficiencies in the learning environment. Through these visualizations it is demonstrated that instructors can effectively change their pedagogy based on data about their students' interactions.
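The abstract does not specify how the sociogram visualizations were constructed; the Python sketch below derives a reply network from discussion-forum posts with NetworkX, which is the generic construction such visualizations typically start from, not the thesis's actual pipeline, and the sample posts are invented.

```python
# Sketch: build a sociogram (reply network) from discussion-forum posts.
# Nodes are participants; a weighted edge counts replies between them.
import networkx as nx

# (post_id, author, id of the post being replied to, or None for a new thread)
posts = [
    (1, "instructor", None),
    (2, "alice", 1),
    (3, "bob", 1),
    (4, "instructor", 2),
    (5, "alice", 3),
]

authors = {post_id: author for post_id, author, _ in posts}
g = nx.DiGraph()
for _, author, parent in posts:
    g.add_node(author)
    if parent is not None:
        replied_to = authors[parent]
        w = g.get_edge_data(author, replied_to, default={"weight": 0})["weight"]
        g.add_edge(author, replied_to, weight=w + 1)

# Weighted degree is a rough proxy for how central each participant is.
print(sorted(g.degree(weight="weight"), key=lambda kv: kv[1], reverse=True))
```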
The second investigation described in this dissertation is the application of unsupervised machine learning to the viewing habits of learners using lecture capture facilities. By clustering learners into groups based on behaviour and correlating the groups with academic outcomes, a model of positive learning activity can be described. This is particularly useful for instructional designers who are evaluating the role of learning technologies in programs, as it contextualizes how technologies enable success in learners. Through this investigation it is demonstrated that learners' viewership data can be used to assist designers in building higher-level models of learning that can be used to evaluate the use of specific tools in blended learning situations.
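The clustering step is not detailed in the abstract; the sketch below uses k-means on simple per-learner viewing features and then compares the mean grade per cluster. The features, grades, and choice of k-means are synthetic stand-ins meant only to show the shape of such an analysis.

```python
# Sketch: cluster learners by lecture-viewing features and compare the
# average academic outcome per cluster. All data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Per-learner features: [hours watched, distinct lectures viewed, fraction re-watched]
features = np.vstack([
    rng.normal([20, 18, 0.40], [4, 3, 0.10], size=(40, 3)),  # steady viewers
    rng.normal([5, 6, 0.05], [2, 2, 0.05], size=(40, 3)),    # low / cramming use
])
grades = np.concatenate([rng.normal(78, 8, 40), rng.normal(65, 10, 40)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(features)
)

for k in range(2):
    members = labels == k
    print(f"cluster {k}: n={members.sum()}, mean grade={grades[members].mean():.1f}")
```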
Finally, the results of applying supervised machine learning to the indexing of lecture video are described. Usage data collected from software is increasingly being used by software engineers to make technologies that are more customizable and adaptable. In this dissertation, it is demonstrated that supervised machine learning can provide human-like indexing of lecture videos that is more accurate than current techniques. Further, these indices can be customized for groups of learners, increasing the level of personalization in the learning environment. This investigation demonstrates that the data-assisted approach can also be used by application developers who are building personalization features into intelligent learning environments.
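The indexing investigation is likewise summarized without algorithmic detail; the sketch below trains a plain logistic-regression classifier to flag slide-transition frames from two toy per-frame features, purely to illustrate what supervised indexing of lecture video can look like. The features, labels, and model choice are assumptions, not the method used in the dissertation.

```python
# Sketch: supervised indexing of a lecture video as binary classification of
# frames into "topic/slide boundary" vs. "ordinary frame". Illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Per-frame features: [pixel difference to previous frame, audio pause length in seconds]
boundary = np.column_stack([rng.normal(0.7, 0.10, 60), rng.normal(1.5, 0.4, 60)])
ordinary = np.column_stack([rng.normal(0.1, 0.05, 300), rng.normal(0.2, 0.1, 300)])
X = np.vstack([boundary, ordinary])
y = np.concatenate([np.ones(60), np.zeros(300)])

clf = LogisticRegression().fit(X, y)

# Index new frames: keep timestamps whose boundary probability is high.
frames = np.array([[0.65, 1.2], [0.08, 0.15], [0.72, 1.8]])
timestamps = np.array([120.0, 121.0, 305.0])  # seconds into the lecture
probs = clf.predict_proba(frames)[:, 1]
print(timestamps[probs > 0.5])  # likely [120. 305.]
```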
Through this work, it is shown that a data-assisted approach to supporting instructional interventions in technology enhanced learning environments is possible and can positively impact the teaching and learning process. By making the online activities of learners available to instructional experts, those experts can better understand and react to patterns of use as they develop, making for a more effective and personalized learning environment. This approach differs from traditional methods of building intelligent learning environments, which apply learning theories a priori during instructional design and do not leverage the in situ data collected about learners.
|