About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Estudo comparativo de descritores para recuperação de imagens por conteúdo na web / Comparative study of descriptors for content-based image retrieval on the web

Penatti, Otávio Augusto Bizetto, 1984- 13 August 2018 (has links)
Advisor: Ricardo da Silva Torres / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-13 (previous issue date: 2009) / Abstract: The growth in size of image collections and their worldwide availability have increased the demand for image retrieval systems. A promising approach to address this demand is to retrieve images based on image content (Content-Based Image Retrieval). This approach considers visual properties of the image, such as color, texture, and the shape of objects, for indexing and retrieval. The main component of a content-based image retrieval system is the image descriptor. The image descriptor is responsible for encoding image properties into feature vectors. Given two feature vectors, the descriptor compares them and computes a distance value. This value quantifies the difference between the images represented by their vectors.
In a content-based image retrieval system, these distance values are used to rank database images with respect to their distance to a given query image. This dissertation presents a comparative study of image descriptors that considers the Web as the environment of use, an environment with a huge number of images and highly heterogeneous content. The comparative study follows two approaches. The first considers the asymptotic complexity of the feature vector extraction algorithms and distance functions, the size of the feature vectors generated by the descriptors, and the environment in which each descriptor was originally validated. The second compares the descriptors in practical experiments on four different image databases. The evaluation considers the time required for feature extraction, the time for computing distance values, the storage requirements, and the effectiveness of each descriptor. Color, texture, and shape descriptors were compared. The experiments were performed with each kind of descriptor independently and, based on these results, a set of descriptors was evaluated on an image database containing more than 230 thousand heterogeneous images, reflecting the content found on the Web. The effectiveness of the descriptors on the heterogeneous database was evaluated through experiments with real users. This dissertation also presents a tool for running comparative experiments on image descriptors. / Master's degree / Information Systems / Master in Computer Science
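To make the descriptor pipeline described above concrete, the following is a minimal sketch, assuming a toy grayscale-histogram descriptor with an L1 distance; the actual descriptors compared in the dissertation (color, texture, and shape descriptors) are more elaborate, but they follow the same extract-compare-rank structure.

```python
import numpy as np

def extract_feature_vector(image, bins=64):
    """Toy descriptor: a normalized grayscale histogram of the image."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)  # normalize so images of different sizes are comparable

def distance(fv_a, fv_b):
    """Distance function of the descriptor: L1 distance between feature vectors."""
    return float(np.abs(fv_a - fv_b).sum())

def rank_database(query_image, database_images):
    """Order database images by ascending distance to the query image."""
    q = extract_feature_vector(query_image)
    dists = [(idx, distance(q, extract_feature_vector(img)))
             for idx, img in enumerate(database_images)]
    return sorted(dists, key=lambda pair: pair[1])

# Usage: rank three random "images" against a random query.
rng = np.random.default_rng(0)
db = [rng.integers(0, 256, size=(32, 32)) for _ in range(3)]
query = rng.integers(0, 256, size=(32, 32))
print(rank_database(query, db))
```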
112

EXPLORING ECOSYSTEMS IN INDIANA’S EDUCATION AND WORKFORCE DEVELOPMENT USING A DATA VISUALIZATION DASHBOARD

Yash S Gugale (8800853) 05 May 2020 (has links)
Large datasets related to Indiana's education and workforce development are used by various groups, such as stakeholders and decision makers in education and government, parents, teachers, and company employees, to find trends and patterns that guide decision-making through statistical analysis. However, most of this data is scattered, textual, and available in the form of Excel sheets, which makes it harder to look at the data from different perspectives, drill down and roll up the data, and find trends and patterns in it. Such a representation also does not take into account the inherent characteristics of the user, which affect how well the user understands, perceives, and interprets the data.

Information dashboards, used to view and navigate between visualizations of different datasets, provide coherent, central access to all data and make it easy to view different aspects of the system. The purpose of this research is to create a new data visualization dashboard for education and workforce data and to identify which design principles apply when designing such a dashboard for the target demographic in the education and workforce domain. This study also assesses how the introduction of such a dashboard affects the work processes and decision-making of stakeholders involved in education and workforce development in the state of Indiana.

User studies consisting of usability testing and semi-structured interviews with stakeholders in education and workforce development in the state of Indiana are conducted to test the effectiveness of the dashboard. Finally, this research proposes how a regional map-based dashboard can serve as an effective design for education and workforce data dashboards in other states and other domains as well.
113

CISTAR Cybersecurity Scorecard

Braiden M Frantz (8072417) 03 December 2019 (has links)
Highly intelligent and technically savvy people are employed to hack data systems throughout the world for prominence or monetary gain, and organizations must combat these criminals with people of equal or greater ability. There have been reports of heightened threats from cyber criminals focusing on the energy sector, with recent attacks upon natural gas pipelines and payment centers. The Center for Innovative and Strategic Transformation of Alkane Resources (CISTAR), working collaboratively with the Purdue Process Safety and Assurance Center (P2SAC), reached out to the Computer and Information Technology Department to assist with analysis of the current cybersecurity posture of the companies involved with the CISTAR initiative. This research identifies the overall defensive cyber posture of CISTAR companies and provides recommendations on how to bolster internal cyberspace defenses, based on the identification of gaps and shortfalls that informed the suggestions for improvement. Key findings include a correlation between reduced cybersecurity readiness and companies founded less than 10 years ago, the employment of cybersecurity professionals by all CISTAR companies, and the implementation of basic NIST cybersecurity procedures by all CISTAR companies.
114

An Exploratory Study on The Trust of Information in Social Media

Chih-Yuan Chou (8630730) 17 April 2020 (has links)
This study examined the level of trust of information on social media. Specifically, I investigated the factors of performance expectancy, together with information-seeking motives, that appear to influence the level of trust of information on various social network sites. The study drew on the following theoretical models to build a conceptual research framework for an exploratory study: the elaboration likelihood model (ELM), the uses and gratifications theory (UGT), the unified theory of acceptance and use of technology (UTAUT), the consumption value theory (CVT), and the stimulus-organism-response (SOR) model. The research investigated the extent to which information quality and source credibility influence the level of trust of information by visitors to social network sites. An inductive content analysis of 189 respondents' responses addressed the proposed research questions and was used to develop a comprehensive framework. The findings of this study contribute to the current research streams on information quality, fake news, and IT adoption as they relate to social media.
115

Inducing Conceptual User Models

Müller, Martin Eric 29 April 2002 (has links)
User Modeling and Machine Learning for User Modeling have both become important research topics and key techniques in recent adaptive systems. One of the most intriguing problems of the "information age" is how to filter relevant information from the huge amount of available data. This problem is tackled by using models of the user's interest in order to increase precision and discriminate interesting information from uninteresting data. However, any user modeling approach suffers from several major drawbacks. First, user models built by the system need to be inspectable and understandable by the user. Second, users in general are not willing to give feedback concerning their satisfaction with the delivered results, and without any evidence of the user's interest, it is hard to induce a hypothetical user model at all. Finally, most current systems do not draw a clear distinction between domain knowledge and the user model, which makes the adequacy of a user model hard to determine. This thesis presents the novel approach of conceptual user models. Conceptual user models are easy to inspect and understand and allow the system to explain its actions to the user. It is shown that inductive logic programming (ILP) can be applied to the task of inducing user models from feedback, and a method for using mutual feedback for sample enlargement is introduced. Results are evaluated independently of domain knowledge within a clear machine learning problem definition. The whole concept is realized in a meta web search engine called OySTER.
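The thesis induces relational user models with ILP; as a loose, propositional stand-in for the idea of inducing an inspectable interest concept from relevance feedback, consider the sketch below. The feedback items, terms, and covering step are invented for illustration and do not reflect the OySTER implementation.

```python
# Each feedback item: the set of terms describing a result the user rated, plus the rating.
positive = [{"python", "tutorial", "beginner"},
            {"python", "tutorial", "web"},
            {"python", "tutorial", "data"}]
negative = [{"java", "tutorial", "enterprise"},
            {"python", "reference", "api"}]

def induce_interest_concept(pos, neg):
    """Induce the most specific conjunction of terms covering all positive feedback,
    then check that it excludes the negative feedback (a toy propositional version
    of learning a user-interest concept from relevance feedback)."""
    concept = set.intersection(*pos)            # terms shared by every liked result
    covered_negatives = [n for n in neg if concept <= n]
    return concept, covered_negatives

concept, leaks = induce_interest_concept(positive, negative)
print("induced interest concept:", concept)     # {'python', 'tutorial'}
print("negatives still covered:", leaks)        # [] -> the concept discriminates the feedback

def matches_user_model(result_terms, concept=concept):
    """Filtering step: a new result is predicted interesting if it satisfies the concept."""
    return concept <= result_terms

print(matches_user_model({"python", "tutorial", "asyncio"}))  # True
print(matches_user_model({"java", "tutorial", "spring"}))     # False
```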
116

Search Interaction Optimization: A Human-Centered Design Approach

Speicher, Maximilian 20 September 2016 (has links)
Over the past 25 years, search engines have become one of the most important entry points to the World Wide Web, if not the most important one. This development has been primarily due to the continuously increasing number of available documents, which are highly unstructured. Moreover, the general trend is towards classifying search results into categories and presenting them in terms of semantic information that answers users' queries without having to leave the search engine. With the growing number of documents and technological enhancements, the needs of users as well as search engines are continuously evolving. Users want to be presented with increasingly sophisticated results and interfaces, while companies have to place advertisements and make revenue to be able to offer their services for free. To address the above needs, it is more and more important to provide highly usable and optimized search engine results pages (SERPs). Yet, existing approaches to usability evaluation are often costly or time-consuming and mostly rely on explicit feedback. They are either not efficient or not effective, and SERP interfaces are commonly optimized primarily from a company's point of view. Moreover, existing approaches to predicting search result relevance, which are mostly based on clicks, are not tailored to the evolving kinds of SERPs. For instance, they fail if queries are answered directly on a SERP and no clicks need to happen. Applying Human-Centered Design principles, we propose a solution to the above in terms of a holistic approach that intends to satisfy both searchers and developers. It provides novel means to counteract exclusively company-centric design and to make use of implicit user feedback for efficient and effective evaluation and optimization of usability and, in particular, relevance. We define personas and scenarios from which we infer unsolved problems and a set of well-defined requirements. Based on these requirements, we design and develop the Search Interaction Optimization toolkit. Using a bottom-up approach, we moreover define an eponymous, higher-level methodology. The Search Interaction Optimization toolkit comprises a total of six components. We start with INUIT [1], which is a novel minimal usability instrument specifically aiming at meaningful correlations with implicit user feedback in terms of client-side interactions. Hence, it serves as a basis for deriving usability scores directly from user behavior. INUIT has been designed based on reviews of established usability standards and guidelines as well as interviews with nine dedicated usability experts. Its feasibility and effectiveness have been investigated in a user study. Also, a confirmatory factor analysis shows that the instrument can describe real-world perceptions of usability reasonably well. Subsequently, we introduce WaPPU [2], which is a context-aware A/B testing tool based on INUIT. WaPPU implements the novel concept of Usability-based Split Testing and enables automatic usability evaluation of arbitrary SERP interfaces based on a quantitative score that is derived directly from user interactions. For this, usability models are automatically trained and applied based on machine learning techniques. Notably, the tool is not restricted to evaluating SERPs, but can be used with any web interface. Building on the above, we introduce S.O.S., the SERP Optimization Suite [3], which comprises WaPPU as well as a catalog of best practices [4].
Once it has been detected that an investigated SERP's usability is suboptimal based on scores delivered by WaPPU, corresponding optimizations are automatically proposed based on the catalog of best practices. This catalog has been compiled in a three-step process involving reviews of existing SERP interfaces and contributions by 20 dedicated usability experts. While the above components focus on the general usability of SERPs, presenting the most relevant results is specifically important for search engines. Hence, our toolkit contains TellMyRelevance! (TMR) [5], the first end-to-end pipeline for predicting search result relevance based on users' interactions beyond clicks. TMR is a fully automatic approach that collects the necessary information on the client, processes it on the server side and trains corresponding relevance models based on machine learning techniques. Predictions made by these models can then be fed back into the ranking process of the search engine, which improves result quality and hence also usability. StreamMyRelevance! (SMR) [6] takes the concept of TMR one step further by providing a streaming-based version. That is, SMR collects and processes interaction data and trains relevance models in near real-time. Based on a user study and a large-scale log analysis involving real-world search engines, we have evaluated the components of the Search Interaction Optimization toolkit as a whole, also to demonstrate the interplay of the different components. S.O.S., WaPPU and INUIT have been engaged in the evaluation and optimization of a real-world SERP interface. Results show that our tools are able to correctly identify even subtle differences in usability. Moreover, optimizations proposed by S.O.S. significantly improved the usability of the investigated and redesigned SERP. TMR and SMR have also been evaluated in a GB-scale interaction log analysis using data from real-world search engines. Our findings indicate that they are able to yield predictions that are better than those of competing state-of-the-art systems that consider clicks only. Also, a comparison of SMR to existing solutions shows its superiority in terms of efficiency, robustness and scalability. The thesis concludes with a discussion of the potential and limitations of the above contributions and provides an overview of potential future work.
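As an illustration of the kind of relevance modeling TMR and SMR perform, here is a minimal sketch that trains a model on interaction features beyond clicks; the feature set, labels, and learner are illustrative assumptions, not the pipeline's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # assumes scikit-learn is available

# Each row: interaction features for one (query, result) pair. The features are illustrative,
# not the exact set used by TMR/SMR: [dwell_time_s, scroll_depth, hovered, clicked]
X = np.array([
    [35.0, 0.9, 1, 1],
    [ 2.0, 0.1, 0, 0],
    [20.0, 0.7, 1, 0],   # relevant even without a click (query answered on the SERP)
    [ 1.5, 0.2, 1, 0],
    [40.0, 1.0, 1, 1],
    [ 3.0, 0.1, 0, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])   # binary relevance labels, e.g. from editorial judgments

model = LogisticRegression().fit(X, y)

# Predicted relevance probability for an unseen result that was read but not clicked:
print(model.predict_proba([[25.0, 0.8, 1, 0]])[0, 1])
```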
117

CyberWater: An open framework for data and model integration

Ranran Chen (18423792) 03 June 2024 (has links)
Workflow management systems (WMSs) are commonly used to organize and automate sequences of tasks as workflows in order to accelerate scientific discoveries. During complex workflow modeling, a local interactive workflow environment is desirable, as users usually rely on their rich local environments for fast prototyping and refinement before they consider using more powerful computing resources.

This dissertation presents the development of the CyberWater framework on top of a workflow management system. Against the backdrop of data-intensive and complex models, CyberWater exemplifies the transformation of intricate data into insightful and actionable knowledge. The dissertation introduces the architecture of CyberWater, focusing on its adaptation and enhancement of the VisTrails system, and highlights the significance of its control and data flow mechanisms and of new data formats for effective data processing within the framework.

The study presents an in-depth analysis of the design and implementation of the Generic Model Agent Toolkits. The discussion centers on template-based component mechanisms and integration with popular platforms, emphasizing the toolkits' ability to provide on-demand access to High-Performance Computing (HPC) resources for large-scale data handling. The development of asynchronously controlled workflows within CyberWater is also explored; this approach improves computational performance by exploiting pipeline-level parallelism and allows on-demand submission of HPC jobs, significantly improving the efficiency of data processing (a minimal sketch of this pipelining idea appears after the abstract).

This research also introduces a methodology for model-driven development and Python code integration within the CyberWater framework, along with applications of GPT models for automated data retrieval. It examines the use of GitHub Actions to automate data retrieval processes and discusses the transformation of raw data into a compatible format, enhancing the adaptability and reliability of the data retrieval component of the adaptive generic model agent toolkit.

For the development and maintenance of software within the CyberWater framework, tools such as GitHub are used for version control, and automated processes are outlined for software updates and error reporting; the CyberWater Server plays a central role in these processes as well as in user data collection.

In conclusion, this dissertation presents comprehensive work on the CyberWater framework's advancements, setting new standards in scientific workflow management and demonstrating how technological innovation can significantly elevate the process of scientific discovery.
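The following is a minimal sketch of pipeline-level parallelism with asynchronous control, assuming hypothetical retrieve_data and run_model steps; it is not CyberWater's actual API, only an illustration of how overlapping data retrieval with model execution (or an on-demand HPC submission) can hide latency.

```python
import asyncio

async def retrieve_data(chunk_id):
    # Hypothetical data-retrieval step (e.g., pulling forcing data for one model run).
    await asyncio.sleep(0.1)          # stands in for network / disk I/O
    return f"chunk-{chunk_id}"

async def run_model(chunk):
    # Hypothetical model-execution step; in a real setup this could submit an HPC job.
    await asyncio.sleep(0.2)          # stands in for compute time
    return f"result({chunk})"

async def pipeline(num_chunks=4):
    queue = asyncio.Queue(maxsize=2)  # bounded queue couples the two stages

    async def producer():
        for i in range(num_chunks):
            await queue.put(await retrieve_data(i))
        await queue.put(None)         # sentinel: no more chunks

    async def consumer():
        results = []
        while (chunk := await queue.get()) is not None:
            results.append(await run_model(chunk))
        return results

    # Retrieval of chunk i+1 overlaps with the model run on chunk i.
    _, results = await asyncio.gather(producer(), consumer())
    return results

print(asyncio.run(pipeline()))
```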
118

The development of a computer literacy curriculum for California charter schools

Mobarak, Barbara Ann 01 January 2004 (has links)
To develop leaders for the 21st century, schools must be able to prepare students to meet high academic, technical, and workforce challenges. Charter schools are increasingly attempting to meet these challenges by educating students through innovative means and by creating effective educational programs that are better suited to the needs of the student. This document provides a computer literacy curriculum that will facilitate student learning of computer literacy skills.
119

PLANT LEVEL IIOT BASED ENERGY MANAGEMENT FRAMEWORK

Liya Elizabeth Koshy (14700307) 31 May 2023 (has links)
The Energy Monitoring Framework, designed and developed by IAC, IUPUI, aims to provide a cloud-based solution that combines business analytics with sensors for real-time energy management at the plant level using wireless sensor network technology.

The project provides a platform where users can analyze the functioning of a plant using sensor data. The data also helps users explore energy usage trends and identify energy leaks due to malfunctions or other environmental factors in their plant. Additionally, users can check the status of the machinery in their plant and control the equipment remotely.

The main objectives of the project include the following:

- Set up a wireless network using sensors and smart implants with a base station/controller.
- Deploy and connect the smart implants and sensors to the equipment in the plant that needs to be analyzed or controlled to improve its energy efficiency.
- Set up a generalized interface to collect and process the sensor data values and store the data in a database.
- Design and develop a generic database compatible with various companies irrespective of their type and size.
- Design and develop a web application with a generalized structure, so that the system can be deployed at multiple companies with minimal customization. The web app should provide users with a platform to analyze the sensor data and issue commands to control the equipment.

The general structure of the project comprises the following components:

- A wireless sensor network with a base station.
- An edge PC that interfaces with the sensor network to collect the sensor data and send it out to the cloud server, and that also relays command signals back to the sensor network to control switches and actuators (a sketch of this edge-to-cloud step follows the abstract).
- A cloud that hosts a database and an API to collect and store information.
- A web application hosted in the cloud to provide an interactive platform for users to analyze the data.

The project was demonstrated in:

- the Lecture Hall (https://iac-lecture-hall.engr.iupui.edu/LectureHallFlask/),
- the Test Bed (https://iac-testbed.engr.iupui.edu/testbedflask/), and
- a company in Indiana.

These deployments used current, temperature, carbon dioxide, and pressure sensors to set up the sensor network, and the equipment was controlled using switch nodes compatible with the chosen sensor network protocol. The energy consumption of each piece of equipment was measured over a few days. The data was validated, and the system worked as expected, helping the user monitor, analyze, and control the connected equipment remotely.
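A minimal sketch of the edge-to-cloud step referenced above, assuming a hypothetical REST endpoint and payload format; the framework's real API paths, sensor drivers, and authentication are not shown here.

```python
import time
import requests  # assumes the 'requests' package is installed on the edge PC

# Hypothetical cloud endpoint; the actual API used by the framework is not public here.
API_URL = "https://example-iac-cloud.example.edu/api/readings"

def read_sensor():
    """Stand-in for a driver call that returns one reading from the sensor network."""
    return {"sensor_id": "current-01", "value_amps": 4.2, "timestamp": time.time()}

def push_reading(reading):
    """Edge-side step: forward one reading to the cloud database via the REST API."""
    response = requests.post(API_URL, json=reading, timeout=5)
    response.raise_for_status()  # surface HTTP errors instead of silently dropping data
    return response.status_code

if __name__ == "__main__":
    print(push_reading(read_sensor()))
```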
120

EXPLORING GRAPH NEURAL NETWORKS FOR CLUSTERING AND CLASSIFICATION

Fattah Muhammad Tahabi (14160375) 03 February 2023 (has links)
Graph Neural Networks (GNNs) have become highly popular and prominent deep learning techniques for analyzing graph-structured data because of their ability to solve complex real-world problems. Graphs provide an efficient way to represent abstract concepts, and modern research on GNNs overcomes a limitation of classical graph algorithms, which require prior knowledge of the graph structure before they can be applied. GNNs, an impressive framework for representation learning on graphs, have already produced many state-of-the-art techniques for node classification, link prediction, and graph classification tasks. GNNs can learn meaningful representations of graphs by incorporating topological structure, node attributes, and neighborhood aggregation to solve supervised, semi-supervised, and unsupervised graph-based problems. In this study, the usefulness of GNNs is analyzed primarily from two aspects: clustering and classification. We focus on these two techniques because they are the most popular strategies in data mining for making sense of collected data and for predictive analysis.
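As a concrete illustration of neighborhood aggregation for node classification, here is a minimal two-layer graph convolutional network written directly in PyTorch; the toy graph, feature dimensions, and layer sizes are assumptions for illustration, not the models evaluated in the study.

```python
import torch

# A minimal graph convolution: H' = ReLU(A_hat @ H @ W), with A_hat the symmetrically
# normalized adjacency matrix (self-loops added).
def normalized_adjacency(edge_index, num_nodes):
    A = torch.zeros(num_nodes, num_nodes)
    A[edge_index[0], edge_index[1]] = 1.0
    A = A + torch.eye(num_nodes)                      # add self-loops
    deg = A.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)

class TinyGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hidden_dim, bias=False)
        self.w2 = torch.nn.Linear(hidden_dim, num_classes, bias=False)

    def forward(self, x, a_hat):
        h = torch.relu(a_hat @ self.w1(x))            # first neighborhood aggregation
        return a_hat @ self.w2(h)                     # class logits per node

# Toy graph: 4 nodes, undirected edges (0-1, 1-2, 2-3), 3 input features, 2 classes.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
x = torch.randn(4, 3)
a_hat = normalized_adjacency(edge_index, num_nodes=4)
logits = TinyGCN(3, 8, 2)(x, a_hat)
print(logits.shape)  # torch.Size([4, 2]): one class score vector per node
```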
