61. Edge Computing Approach to Indoor Temperature Prediction Using Machine Learning (Hyemin Kim, 22 November 2021)
This paper presents a novel approach to real-time indoor temperature forecasting that meets energy consumption constraints in buildings by utilizing computing resources at the edge of the network, close to the data sources. The work is motivated by the irreversible effects of global warming, accelerated by greenhouse gas emissions from burning fossil fuels. Because human activities have a heavy impact on global energy use, it is of utmost importance to reduce the amount of energy consumed in every scenario where humans are involved. According to the US Environmental Protection Agency (EPA), one of the biggest greenhouse gas sources is commercial and residential buildings, which accounted for 13 percent of 2019 greenhouse gas emissions in the United States. In this context, it is assumed that information about the building environment, such as indoor temperature and humidity, together with predictions based on that information, can contribute to more accurate and efficient regulation of indoor heating and cooling systems. For indoor temperature specifically, distributed IoT devices in buildings can enable more accurate forecasting and ultimately help building administrators regulate the temperature in an energy-efficient way without degrading indoor environmental quality. While IoT technology shows potential as a complement to HVAC control systems, the majority of existing IoT systems integrate a remote cloud to transfer and process all data from IoT sensors. Instead, the proposed IoT system incorporates the concept of edge computing, utilizing small-scale computing power in close proximity to the sensors where the data are generated, to overcome the problems of the traditional cloud-centric IoT architecture. Because the microcontroller at the edge supports computation, the machine learning-based prediction of indoor temperature is performed on the microcontroller and transferred to the cloud for further processing. The machine learning algorithm used for prediction, an artificial neural network (ANN), is evaluated with error metrics and compared against simple baseline prediction models.
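The abstract does not give the network's architecture, so the following is only a minimal sketch of an edge-deployable ANN of the kind described; the window length, feature set, and forecast horizon are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

WINDOW = 12   # assumed: last 12 readings, e.g., one hour at 5-minute intervals
FEATURES = 2  # assumed: indoor temperature and humidity per reading

class TempForecaster(nn.Module):
    """Small feed-forward ANN sized to run on a microcontroller-class device."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW * FEATURES, 32),
            nn.ReLU(),
            nn.Linear(32, 16),
            nn.ReLU(),
            nn.Linear(16, 1),  # temperature at the forecast horizon
        )

    def forward(self, x):
        return self.net(x.flatten(start_dim=1))

model = TempForecaster()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch of sensor windows.
x = torch.randn(8, WINDOW, FEATURES)  # 8 windows of recent sensor readings
y = torch.randn(8, 1)                 # measured temperatures at the horizon
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

A network this small (about 1,300 parameters) keeps inference within the memory and compute budget of typical edge devices, which is the design pressure the abstract describes.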
62. Using Latent Discourse Indicators to Identify Goodness in Online Conversations (Ayush Jain, 16 January 2020)
In this work, we model latent discourse indicators to classify constructive and collaborative conversations online. Such conversations are considered good because they are rich in content and have a sense of direction toward resolving an issue, solving a problem, or gaining new insights and knowledge. These discourse indicators characterize the flow of information, sentiment, and community structure within discussions. We build a deep relational model that captures these complex discourse behaviors as latent variables and makes a global prediction about the overall conversation based on these higher-level behaviors. We use DRaiL, a declarative deep relational learning platform built on PyTorch, to formulate the relevant discourse behaviors as discrete latent variables scored by a deep model. These variables capture the nuances of online conversations and provide the information needed to predict the presence or absence of collaborative and constructive characteristics in an entire conversational thread. We show that jointly modeling such competing latent behaviors improves performance over traditional direct classification methods in which all raw features are simply combined to predict the final decision. The Yahoo News Annotated Comments Corpus, containing discussions from Yahoo news forums, serves as our dataset; final labels were annotated according to our precise and restricted definitions of positively labeled conversations. We formulated our annotation guidelines on a sample set of conversations and resolved annotation conflicts by revisiting those examples.
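DRaiL's declarative rule syntax is not reproduced here; as a rough, hypothetical illustration of the underlying idea, the PyTorch sketch below scores a handful of discrete latent discourse behaviors per comment and pools them into a single thread-level prediction. The number of behaviors, the dimensions, and the pooling scheme are invented for illustration and do not reflect the paper's model.

```python
import torch
import torch.nn as nn

class LatentDiscourseModel(nn.Module):
    """Toy joint model: K discrete latent discourse behaviors per comment,
    pooled into one thread-level constructive/not-constructive prediction."""
    def __init__(self, comment_dim=64, n_behaviors=4):
        super().__init__()
        # One scorer per hypothesized behavior (e.g., sentiment flow, topic focus).
        self.behavior_scorers = nn.ModuleList(
            [nn.Linear(comment_dim, 2) for _ in range(n_behaviors)]
        )
        self.thread_classifier = nn.Linear(n_behaviors, 2)

    def forward(self, comments):            # comments: (n_comments, comment_dim)
        behavior_scores = []
        for scorer in self.behavior_scorers:
            probs = torch.softmax(scorer(comments), dim=-1)  # per-comment latent
            behavior_scores.append(probs[:, 1].mean())       # pooled over thread
        thread_features = torch.stack(behavior_scores).unsqueeze(0)
        return self.thread_classifier(thread_features)

model = LatentDiscourseModel()
thread = torch.randn(10, 64)  # 10 comment embeddings from a hypothetical encoder
print(model(thread))          # logits for the whole conversation
```

Training such a model jointly, rather than concatenating raw features into one flat classifier, is the contrast the abstract draws with direct classification.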
63. Predictive Visual Analytics of Social Media Data for Supporting Real-time Situational Awareness (Luke Snyder, 01 May 2020)
Real-time social media data can provide useful information on evolving events and situations, and various domain users increasingly leverage such data to gain rapid situational awareness. Informed by discussions with first responders and government officials, we focus on two major barriers limiting the widespread adoption of social media for situational awareness: the lack of geotagged data and the deluge of irrelevant information during events. Geotags are naturally useful, as they indicate the location of origin and provide geographic context; however, only a small portion of social media data is geotagged, limiting its practical use for situational awareness. The deluge of irrelevant data poses equal difficulty, impeding the effective identification of semantically relevant information. Existing methods for short-text relevance classification fail to incorporate users' knowledge into the classification process, so classifiers cannot be interactively retrained for specific events or user-dependent needs in real time, limiting situational awareness. In this work, we first adapt, improve, and evaluate a state-of-the-art deep learning model for city-level geolocation prediction and integrate it with a visual analytics system tailored for real-time situational awareness. We then present a novel interactive learning framework in which users rapidly identify relevant data by iteratively correcting the relevance classification of tweets in real time. We integrate our framework with the extended Social Media Analytics and Reporting Toolkit (SMART) 2.0 system, allowing the use of our interactive learning framework within a visual analytics system adapted for real-time situational awareness.
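The abstract does not detail SMART 2.0's internals; a generic version of the interactive relevance-retraining loop it describes, sketched here with scikit-learn's online-learning API, might look as follows. The hashing features, classifier choice, and example tweets are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Hashing keeps the feature space fixed so the model can be updated online.
vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
classifier = SGDClassifier(loss="log_loss")  # logistic regression, online updates

classes = np.array([0, 1])  # 0 = irrelevant, 1 = relevant to the event
seed_texts = ["flooding downtown near the bridge", "check out my new phone"]
seed_labels = np.array([1, 0])
classifier.partial_fit(vectorizer.transform(seed_texts), seed_labels, classes=classes)

def user_corrects(tweet_text, corrected_label):
    """Fold a single user correction back into the model in real time."""
    X = vectorizer.transform([tweet_text])
    classifier.partial_fit(X, np.array([corrected_label]))

# A user flags a misclassified tweet as relevant; the model updates immediately,
# so subsequent tweets are scored with the corrected decision boundary.
user_corrects("water rising fast on 5th street", 1)
print(classifier.predict(vectorizer.transform(["road closed due to flood"])))
```

The key property is that each correction is incorporated without retraining from scratch, which is what makes per-event, per-user adaptation feasible in real time.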
64. Deep Learning Based User Models for Interactive Optimization of Watershed Designs (Andrew Paul Hoblitzell, 11 December 2019)
This dissertation combines stakeholder and analytical intelligence for consensus decision-making via an interactive optimization process. It outlines techniques for modeling the subjective criteria of human stakeholders in an environmental decision support system called WRESTORE, compares several user modeling techniques, and develops methods for incorporating such user models selectively into interactive optimization, combining multiple objective and subjective criteria.

The dissertation describes additional functionality for our watershed planning system WRESTORE (Watershed REstoration Using Spatio-Temporal Optimization of REsources, http://wrestore.iupui.edu), including techniques for performing the interactive optimization process in the presence of limited data. This work adds a user modeling component that builds a computational model of a stakeholder's preferences and integrates it into the decision support system.

Our system is one of many decision support systems and depends on stakeholder interaction. Its user modeling component relies on deep learning, which is challenging with limited data. Our work combines user models trained on limited data with application-specific techniques to address some of these challenges, and the dissertation describes steps for implementing accurate virtual stakeholder models from limited training data.

Another method for dealing with limited data, based on computing training data uncertainty, is also presented. The results show more stable convergence in fewer iterations with an uncertainty-based incremental sampling method than with stability-based or random sampling; the technique is described in additional detail.

The dissertation also discusses non-stationary reinforcement-based feature selection for the interactive optimization component of our system. The results indicate that the proposed feature selection approach effectively mitigates superfluous and adversarial dimensions, which, left untreated, degrade both computational performance and interactive optimization performance against analytically determined environmental fitness functions.

The contribution of this dissertation lays the foundation for a framework for multi-stakeholder consensus decision-making in the presence of limited data.
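As a rough illustration of uncertainty-based incremental sampling (not the dissertation's implementation), the sketch below uses disagreement within a small ensemble as the uncertainty signal and queries a synthetic oracle in place of a human stakeholder; all dimensions and functions are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_pool = rng.uniform(-1, 1, size=(500, 5))   # unlabeled candidate designs
X_train = rng.uniform(-1, 1, size=(10, 5))   # small labeled seed set
y_train = X_train.sum(axis=1) + rng.normal(0, 0.1, 10)  # stand-in preferences

for round_ in range(5):
    # Train a small ensemble; disagreement among members approximates the
    # model's uncertainty about each unlabeled design.
    ensemble = [
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=s)
        .fit(X_train, y_train)
        for s in range(5)
    ]
    preds = np.stack([m.predict(X_pool) for m in ensemble])
    uncertainty = preds.std(axis=0)

    # Query the most uncertain design; in WRESTORE this is where a stakeholder
    # rating would be requested instead of the synthetic oracle below.
    idx = int(np.argmax(uncertainty))
    X_train = np.vstack([X_train, X_pool[idx]])
    y_train = np.append(y_train, X_pool[idx].sum())
    X_pool = np.delete(X_pool, idx, axis=0)
    print(f"round {round_}: queried design with std {uncertainty[idx]:.3f}")
```

Sampling where the user model is least certain is what yields the faster, more stable convergence the abstract reports relative to stability-based or random sampling.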
65. Leipziger Beiträge zur Informatik (Klaus-Peter Fähnrich, 20 November 2014)
The book series "Leipziger Beiträge zur Informatik" (Leipzig Contributions to Computer Science) publishes reports from research projects, edited volumes on innovative and emerging research areas, habilitation theses and dissertations, as well as conference proceedings and outstanding student work. The value of the series, founded in 2003 by the "Leipziger Informatik Verbund" (LIV), an association and interest group of various computer science institutions, lies in reporting promptly and comprehensively on completed or ongoing scientific work and on newly emerging fields of research. The series places the innovative thematic variety of its edited volumes alongside the deep scientific rigor of habilitations and dissertations. In addition, it complements research-relevant areas with practice-oriented technical contributions and documentation.
66. Autoregressive Tensor Decomposition for NYC Taxi Data Analysis (Zongwei Li, 31 July 2020)
Cities have adopted evolving urban digitization strategies, most of which increasingly focus on data, especially in the field of public transportation. Transportation data have intrinsically spatial and temporal characteristics, since trips are described by when and where they occur. Because a trip is described by many attributes, transportation data can be represented as a tensor, a container that can hold data in N dimensions. Unlike a traditional data frame, which has only column variables, a tensor is a more natural structure for exploring spatio-temporal datasets, making their attributes easier to interpret. However, extracting useful and reliable information from attributes that are highly correlated with each other requires specialized techniques. This work presents a mixed model consisting of tensor decomposition combined with seasonal vector autoregression in time to find latent patterns in historical taxi data, classified by taxi type and by pick-up and drop-off times of services in NYC, so that it can help predict where and when taxis will be in demand. We validated the proposed approach experimentally on real NYC taxi data. The proposed method yields the best predictions among alternative models without geographical inference, and captures the daily patterns of taxi demand for business and entertainment needs.
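A minimal sketch of the general approach, assuming a CP (PARAFAC) decomposition, lag-24 seasonal differencing as a simple stand-in for the seasonal vector autoregression, and synthetic demand data; none of these specific choices are taken from the paper.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from statsmodels.tsa.api import VAR

# Hypothetical demand tensor: (hour of history, pickup zone, taxi type).
rng = np.random.default_rng(0)
demand = rng.poisson(lam=5.0, size=(24 * 30, 50, 3)).astype(float)

# CP decomposition factorizes the tensor into rank-R latent components.
rank = 4
weights, (time_f, zone_f, type_f) = parafac(tl.tensor(demand), rank=rank)

# Fit a VAR on the temporal factors; differencing at lag 24 removes the daily
# cycle, a simple stand-in for a fully seasonal VAR specification.
seasonal = time_f[24:] - time_f[:-24]
var_fit = VAR(seasonal).fit(maxlags=24)
future_seasonal = var_fit.forecast(seasonal[-var_fit.k_ar:], steps=24)
future_time_f = future_seasonal + time_f[-24:]  # undo the differencing

# Reassemble the forecast tensor from the predicted temporal factors.
forecast = np.einsum("tr,zr,kr,r->tzk", future_time_f, zone_f, type_f, weights)
print(forecast.shape)  # (24, 50, 3): next day's demand per zone and taxi type
```

Forecasting in the low-rank factor space and then expanding back to the full tensor is what lets the model predict demand jointly over place, time, and taxi type.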
67. Efficient and Secure Equality-based Two-party Computation (Javad Darivandpour, 27 July 2021)
Multiparty computation refers to a scenario in which multiple distinct yet connected parties aim to jointly compute a functionality. Over recent decades, with the rapid spread of the internet and digital technologies, multiparty computation has become an increasingly important topic. In addition to the integrity of computation in such scenarios, it is essential to ensure that the privacy of sensitive information is not violated. Thus, secure multiparty computation aims to provide sound approaches for the joint computation of desired functionalities in a secure manner: not only must the integrity of computation be guaranteed, but each party must also learn nothing about the other parties' private data. In other words, each party learns no more than what can be inferred from its own input and its prescribed output.

This thesis considers secure two-party computation over arithmetic circuits based on additive secret sharing. In particular, we focus on efficient and secure solutions for fundamental functionalities that depend on the equality of private comparands. The first direction we take is providing efficient protocols for two major problems of interest: we give novel and efficient solutions for private equality testing and for multiple variants of secure wildcard pattern matching over any arbitrary finite alphabet. These problems are of vital importance: private equality testing is a basic building block in many secure multiparty protocols, and secure pattern matching is frequently used in data-sensitive domains, including (but not limited to) private information retrieval and healthcare-related data analysis. The second direction we take toward a performance improvement in equality-based secure two-party computation is a generic, functionality-independent secure preprocessing step that reduces the overall computation and communication cost of any subsequent protocol. We achieve this by providing the first precise functionality formulation, and secure protocols, for replacing the original inputs with much smaller inputs such that the replacement neither changes the outcome of subsequent computations nor violates the privacy of sensitive inputs. Moreover, our input-size reduction opens the door to a new approach for efficiently solving private set intersection. The protocols in this thesis are typically secure in the semi-honest adversarial threat model.
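Real equality-testing protocols compute everything on shares; the toy below only illustrates the two ingredients the abstract names, additive secret sharing and equality testing, simulating both parties (plus a trusted helper for one multiplication) in a single process. It is emphatically not the thesis's protocol.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in the field Z_P

def share(value):
    """Additively secret-share a value: value = s0 + s1 (mod P)."""
    s0 = secrets.randbelow(P)
    return s0, (value - s0) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

# Each party holds one share of each input; a single share reveals nothing.
x0, x1 = share(42)
y0, y1 = share(42)

# Additive sharing is linear, so shares of d = x - y are computed locally.
d0 = (x0 - y0) % P
d1 = (x1 - y1) % P

# Equality idea: open r * d for a random nonzero r. If x == y the opening is 0;
# otherwise it is uniformly random and nonzero, leaking only (in)equality.
# Real protocols multiply on shares (e.g., via Beaver triples); here a trusted
# helper performs that single multiplication, purely for illustration.
r = 1 + secrets.randbelow(P - 1)
masked = (r * reconstruct(d0, d1)) % P
print("equal" if masked == 0 else "not equal")
```

The point of the masking step is that the opened value carries exactly one bit of information, the equality result, which is the functionality such protocols must realize without any trusted helper.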
68. Be More with Less: Scaling Deep-learning with Minimal Supervision (Yaqing Wang, 28 April 2022)
Large-scale deep learning models have reached previously unattainable performance on various tasks. However, the ever-growing resource consumption of neural networks generates a large carbon footprint, makes it difficult for academics to engage in research, and prevents emerging economies from enjoying the growing benefits of Artificial Intelligence (AI). To further scale AI and broaden its benefits, two major challenges need to be solved. First, even though large-scale deep learning models have achieved remarkable success, their performance is still unsatisfactory when fine-tuned on only a handful of examples, hindering widespread adoption in real-world applications where large-scale labeled data are difficult to obtain. Second, current machine learning models are still mainly designed for closed environments in which test datasets closely resemble training datasets. When the deployed data exhibit a distribution shift relative to the collected training data, we generally observe degraded model performance; building adaptable models is therefore another critical challenge. To address these challenges, this dissertation focuses on two topics: few-shot learning, which aims to learn tasks from limited labeled data, and domain adaptation, which addresses the discrepancy between training and test data. Part 1 presents our few-shot learning studies. The proposed few-shot solutions are built on large-scale language models and explored along an evolutionary path: improving supervision signals, incorporating unlabeled data, and improving few-shot learning ability through lightweight fine-tuning designs that reduce deployment costs. Part 2 introduces our domain adaptation studies. We develop a progressive series of domain adaptation approaches that transfer knowledge across domains efficiently to handle distribution shifts, including capturing common patterns across domains, adaptation with weak supervision, and adaptation to thousands of domains with limited labeled and unlabeled data.
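As one hypothetical illustration of lightweight fine-tuning in the spirit the abstract describes (not the dissertation's specific design), the sketch below freezes a stand-in backbone and trains only a small task head on a 16-example batch.

```python
import torch
import torch.nn as nn

# Stand-in "large" pretrained backbone; in practice, a pretrained language model.
backbone = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768))
for param in backbone.parameters():
    param.requires_grad = False  # frozen: no gradients, no optimizer state

# Only this small task head is trained, so each new task costs a few KB to store.
head = nn.Linear(768, 2)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# A 16-example "few-shot" batch of precomputed embeddings with binary labels.
x = torch.randn(16, 768)
y = torch.randint(0, 2, (16,))

for step in range(10):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(head(backbone(x)), y)
    loss.backward()
    optimizer.step()

trainable = sum(p.numel() for p in head.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"training {trainable:,} of {total:,} parameters")
```

Keeping the trainable-parameter count tiny reduces both the risk of overfitting a handful of examples and the per-task deployment cost.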
69. Building the Intelligent IoT-Edge: Balancing Security and Functionality using Deep Reinforcement Learning (Anand A Mudgerikar, 19 December 2021)
The exponential growth of the Internet of Things (IoT) and cyber-physical systems is producing complex environments in which various devices interact with each other and with users. In addition, rapid advances in Artificial Intelligence are enabling those devices to autonomously modify their behavior through techniques such as reinforcement learning (RL). There is thus a need for an intelligent monitoring system on the network edge with a global view of the environment that can autonomously predict optimal device actions. It is clear, however, that ensuring safety and security in such environments is critical. To this end, we develop a constrained RL framework for IoT environments that determines optimal device actions with respect to user-defined goals or required functionalities using deep Q-learning. We use anomaly-based intrusion detection on the network edge to dynamically generate security and safety policies that constrain the RL agent in the framework. We analyze the balance required between safety/security and functionality in IoT environments by manipulating the agent's exploration of safe and unsafe state spaces. We instantiate the framework for testing on application-layer control in smart home environments, and on network-layer control, including functionalities such as rate control and routing, in SDN-based environments.
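As a simplified, tabular stand-in for the deep Q-learning agent (the framework itself uses a deep network), the sketch below shows how dynamically generated safety policies can constrain both exploration and bootstrapping; the environment, policy table, and hyperparameters are all invented.

```python
import numpy as np

N_STATES, N_ACTIONS = 10, 4
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

# Hypothetical safety policy from the intrusion detector: False = action blocked.
safe = np.ones((N_STATES, N_ACTIONS), dtype=bool)
safe[3, 2] = safe[7, 0] = False  # e.g., "never unlock the door while away"

def step(state, action):
    """Toy dynamics standing in for the smart-home or SDN environment."""
    reward = 1.0 if action == state % N_ACTIONS else -0.1
    return int(rng.integers(N_STATES)), reward

alpha, gamma, epsilon = 0.1, 0.95, 0.1
state = 0
for _ in range(5000):
    allowed = np.flatnonzero(safe[state])        # constrain BEFORE selecting
    if rng.random() < epsilon:
        action = int(rng.choice(allowed))        # explore only safe actions
    else:
        action = int(allowed[np.argmax(Q[state, allowed])])
    next_state, reward = step(state, action)
    # Bootstrap only over actions that remain safe in the next state.
    next_allowed = np.flatnonzero(safe[next_state])
    target = reward + gamma * Q[next_state, next_allowed].max()
    Q[state, action] += alpha * (target - Q[state, action])
    state = next_state

print(Q.round(2))
```

Tightening or loosening the `safe` mask is one concrete way to trade functionality against safety, which is the balance the abstract analyzes.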
70. Occlusion Management in Conventional and Head-Mounted Display Visualization through the Relaxation of the Single Viewpoint/Timepoint Constraint (Meng-Lin Wu, 16 August 2019)
In conventional computer graphics and visualization, images are synthesized following the planar pinhole camera (PPC) model. The PPC approximates physical imaging devices such as cameras and the human eye, which sample the scene with linear rays that originate from a single viewpoint, i.e., the pinhole. In addition, the PPC takes a snapshot of the scene, sampling it at a single instant in time, or timepoint, for each image. Images synthesized under these single-viewpoint and single-timepoint constraints are familiar to the user, as they emulate images captured with cameras or perceived by the human visual system. However, visualization with the PPC model suffers from occlusion: a region of interest (ROI) may not be visible because other data obstruct it. The conventional solution to the occlusion problem is to rely on the user to change the view interactively to gain line of sight to the scene ROIs. This approach of sequential navigation has three shortcomings: (1) inefficiency, as navigation is wasted when circumventing an occluder does not reveal an ROI; (2) inefficacy, as a moving or transient ROI can hide or disappear before the user reaches it, or as scene understanding may require visualizing multiple distant ROIs in parallel; and (3) user confusion, as back-and-forth navigation for systematic scene exploration can hinder spatio-temporal awareness.

In this thesis we propose a novel paradigm for handling occlusions in visualization by generalizing an image to incorporate samples from multiple viewpoints and multiple timepoints. The generalization is implemented at the camera-model level, by removing the single-timepoint restriction and by removing the linear-ray restriction, allowing curved rays to be routed around occluders to reach distant ROIs. The paradigm offers the opportunity to greatly increase the information bandwidth of images, which we have explored in the context of both desktop and head-mounted display visualization, as needed in virtual and augmented reality applications. The challenges of multi-viewpoint, multi-timepoint visualization are (1) routing the non-linear rays to find all ROIs or to reach all known ROIs; (2) making the generalized image easy to parse by enforcing spatial and temporal continuity and non-redundancy; (3) rendering the generalized images quickly, as required by interactive applications; and (4) developing algorithms and user interfaces for the intuitive navigation of compound cameras with tens of degrees of freedom. We have addressed these challenges (1) by developing a multiperspective visualization framework based on a hierarchical camera model with PPC and non-PPC leaves; (2) by routing multiple inflection-point rays with direction coherence, which enforces visualization continuity, and without intersection, which enforces non-redundancy; (3) by designing our hierarchical camera model to provide closed-form projection, which enables porting generalized image rendering to the traditional and highly efficient projection-followed-by-rasterization pipeline implemented by graphics hardware; and (4) by devising naturalistic user interfaces based on tracked head-mounted displays that allow deploying and retracting the additional perspectives intuitively and without simulator sickness.
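As a highly simplified, invented illustration of routing a ray around an occluder via a single inflection point (the thesis's routing additionally enforces direction coherence and non-intersection across many rays), consider one eye, one region of interest, and one spherical occluder:

```python
import numpy as np

def closest_on_segment(a, b, p):
    """Closest point to p on segment ab, and its distance to p."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    q = a + t * ab
    return q, np.linalg.norm(q - p)

def route_ray(eye, roi, center, radius, margin=0.1):
    """Straight ray if unobstructed; otherwise bend it at one inflection
    point pushed just outside the occluding sphere."""
    q, dist = closest_on_segment(eye, roi, center)
    if dist > radius:
        return [eye, roi]                            # linear ray reaches the ROI
    away = q - center
    if np.linalg.norm(away) < 1e-9:                  # ray passes through center
        away = np.cross(roi - eye, [0.0, 0.0, 1.0])  # any perpendicular works
    away = away / np.linalg.norm(away)
    inflection = center + away * (radius + margin)
    return [eye, inflection, roi]

eye = np.array([0.0, 0.0, 0.0])
roi = np.array([10.0, 0.0, 0.0])
occluder_center, occluder_radius = np.array([5.0, 0.5, 0.0]), 2.0
print(route_ray(eye, roi, occluder_center, occluder_radius))
```

The two-segment path is the simplest instance of the curved, occluder-avoiding rays the generalized camera model supports; a production router must also keep neighboring rays coherent so the resulting image stays continuous and easy to parse.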