181 |
Heterogeneity and Density Aware Design of Computing Systems - Arora, Manish, 03 August 2018
<p> The number of programmable cores available in systems continues to increase with advances in device scaling, integration, and iterative improvements. Today, systems integrate not just more cores, but also a variety of different types of processing cores, resulting in dense heterogeneous systems. However, important questions remain about the design methodology for dense heterogeneous systems. This thesis seeks to address these questions. </p><p> One typical methodology for heterogeneous system design is to compose systems from parts of homogeneous systems. Another commonly used technique to enable density is replication. However, these design methodologies are “heterogeneous system oblivious” and “density oblivious”: the components of the system are neither aware of nor optimized for the heterogeneous system they become part of, nor are they aware of the existence of other replicated components. This thesis shows that “heterogeneous system oblivious” and “density oblivious” design methodologies result in inefficient systems, and proposes heterogeneity and density aware approaches to designing dense heterogeneous architectures.</p><p>
|
182 |
Supervised and Unsupervised Learning for Semantics Distillation in Multimedia Processing - Liu, Yu, 19 October 2018
<p> In linguistics, "semantics" stands for the intended meaning in natural language, such as in words, phrases, and sentences. In this dissertation, the concept of "semantics" is defined more generally: the intended meaning of information in all multimedia forms. These forms include text in the language domain, as well as stationary images and dynamic videos in the vision domain. Specifically, semantics in multimedia is the media content of cognitive information, knowledge, and ideas that can be represented in text, images, and video clips. A narrative story, for example, can be a semantic summary of a novel, or a semantic summary of the movie adapted from that novel. Thus, semantics is high-level abstract knowledge that is independent of multimedia form. </p><p> Indeed, the same semantics can be represented either redundantly or concisely, owing to the diverse expressive abilities of different media. The process by which a redundantly represented semantics evolves into a concisely represented one is called "semantic distillation", and this process can happen either between different multimedia forms or within the same form. </p><p> The booming growth of unorganized and unfiltered information brings an unwanted issue, information overload, for which techniques of semantic distillation are in high demand. Fortunately, as opportunities always arrive alongside challenges, machine learning and Artificial Intelligence (AI) today are far more advanced than in the past and provide us with powerful tools and techniques. A large variety of learning methods has made countless previously impossible tasks a reality. Thus, in this dissertation, we take advantage of machine learning techniques, both supervised and unsupervised, to solve semantic distillation problems. 
</p><p> Despite the promising future and powerful machine learning techniques, the heterogeneous forms of multimedia, involving many domains, still impose challenges on semantic distillation approaches. A major challenge is that the definition of "semantics" and the related processing techniques can be entirely different from one problem to another. Varying types of multimedia resources introduce varying domain-specific limitations and constraints, so that obtaining semantics also becomes domain-specific. Therefore, in this dissertation, with language (text) and vision as the two major domains, we approach four problems covering all combinations of the two: <b>• Language to Vision Domain:</b> In this study, <i>Presentation Storytelling</i> is formulated as the problem of suggesting the most appropriate images from online sources for storytelling, given a text query. We approach the problem with a two-step semantic processing method: the semantics of a simple query is first expanded into a diverse semantic graph, and then distilled from a large number of retrieved web photos down to a few representative ones. This two-step method is powered by a Conditional Random Field (CRF) model and learned in a supervised manner from human-labeled examples. <b>• Vision to Language Domain:</b> The second study, <i>Visual Storytelling</i>, formulates the problem of generating a coherent paragraph from a photo stream. Different from presentation storytelling, visual storytelling goes the opposite way: the semantics extracted from a handful of photos is distilled into text. We address this problem by revealing the semantic relationships in the visual domain and distilling them into the language domain with a newly designed Bidirectional Attention Recurrent Neural Network (BARNN) model. In particular, an attention model is embedded in the RNN so that coherence is preserved in the language domain, making the output a human-like story. The model is trained with deep supervised learning on public datasets. <b>• Dedicated Vision Domain:</b> To directly address the information overload issue in the vision domain, <i>Image Semantic Extraction</i> formulates the problem of selecting a subset from a multimedia user's photo album. In the literature, this problem has mostly been approached with unsupervised learning. In this dissertation, however, we develop a novel supervised learning method to attack the same problem. We treat visual semantics as a quantifiable, measurable variable and build an encoding-decoding pipeline with Long Short-Term Memory (LSTM) networks to model this quantization process. The intuition of the encoding-decoding pipeline is to imitate a human: read, think, and retell. That is, the pipeline first uses an LSTM encoder to scan all photos, "reading" the semantics they comprise; a concatenated LSTM decoder then selects the most representative ones, "thinking" out the gist; finally, a dedicated residual layer revisits the unselected photos, "verifying" that the captured semantics is complete enough. <b>• Dedicated Language Domain:</b> Distinct from the problems above, this part introduces a different genre of machine learning method, unsupervised learning. We address a semantic distillation problem in the language domain, <i>Text Semantic Extraction</i>, where the semantics carried by a letter sequence is extracted from printed images. (Abstract shortened by ProQuest.) </p><p>
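The "many redundant items distilled to a few representative ones" idea can be made concrete with a toy sketch. This is not the dissertation's CRF or LSTM pipeline; it is an invented greedy-coverage baseline over hypothetical embedding vectors, shown only to illustrate what a distillation step selects.

```python
import math

def distill(vectors, k):
    """Greedily pick k items whose vectors best cover the collection:
    each step adds the item that most increases the total similarity of
    every item to its nearest selected representative."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u)) or 1e-12
        nv = math.sqrt(sum(b * b for b in v)) or 1e-12
        return dot / (nu * nv)

    n = len(vectors)
    sim = [[cos(vectors[i], vectors[j]) for j in range(n)] for i in range(n)]
    selected, cover = [], [0.0] * n  # cover[i] = similarity to best pick so far
    for _ in range(min(k, n)):
        best, best_gain = None, -1.0
        for c in range(n):
            if c in selected:
                continue
            # Marginal improvement in coverage if candidate c is added.
            gain = sum(max(cover[i], sim[i][c]) - cover[i] for i in range(n))
            if gain > best_gain:
                best, best_gain = c, gain
        selected.append(best)
        cover = [max(cover[i], sim[i][best]) for i in range(n)]
    return selected
```

On a collection with two tight clusters of near-duplicate vectors, asking for two representatives yields one pick per cluster, which is the distillation behavior described above in miniature.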
|
183 |
Efficient Actor Recovery Paradigm for Wireless Sensor and Actor Networks - Mahjoub, Reem Khalid, 16 March 2018
<p> Wireless sensor networks (WSNs) are becoming widely used worldwide. Wireless Sensor and Actor Networks (WSANs) represent a special category of WSNs wherein actors and sensors collaborate to perform specific tasks, and they have become one of the most prominent emerging types of WSN. Sensor nodes, which have limited power resources, are responsible for sensing and transmitting events to actor nodes. Actors are high-performance nodes equipped with rich resources that can collect, process, and transmit data and perform various actions. WSANs have a unique architecture that distinguishes them from WSNs, and this architecture gives rise to numerous challenges; the relative importance of these factors usually depends on the application requirements. </p><p> The actor nodes are the spine of a WSAN, collaborating to perform specific tasks in an unattended and uneven environment. There is thus a possibility of a high failure rate in such unfriendly scenarios, due to factors such as power fatigue of devices, electronic circuit failure, software errors, physical impairment of the actor nodes, and inter-actor connectivity problems. It is essential to maintain inter-actor connectivity in order to ensure network connectivity. Thus, it is extremely important to detect the failure of a cut-vertex actor and the resulting network partition in order to preserve Quality-of-Service (QoS). Recovering the network from an actor node failure requires optimal re-localization and coordination techniques. </p><p> In this work, we propose an efficient actor recovery (EAR) paradigm to guarantee contention-free traffic-forwarding capacity. The EAR paradigm includes the Node Monitoring and Critical Node Detection (NMCND) algorithm, which monitors the activities of the nodes to determine the critical node and replaces the critical node with a backup node prior to complete node failure, helping to balance network performance. Packets are handled by the Network Integration and Message Forwarding (NIMF) algorithm, which determines the source of packet forwarding (either actor or sensor); this decision-making capability controls the packet-forwarding rate to sustain the network for a longer time. Furthermore, to handle the routing strategy, the Priority-Based Routing for Node Failure Avoidance (PRNFA) algorithm decides the priority of the packets to be forwarded based on the significance of the information they carry. To validate the effectiveness of the proposed EAR paradigm, we compare its performance with state-of-the-art localization algorithms. Our experimental results show superior performance with respect to network lifetime, residual energy, reliability, sensor/actor recovery time, and data recovery. </p><p>
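The cut-vertex actors mentioned above are exactly the articulation points of the inter-actor connectivity graph: nodes whose removal partitions the network. As a minimal illustration of what a monitor must detect (this is the classical DFS low-link computation, not the NMCND algorithm itself):

```python
# Find cut-vertex actors in an inter-actor connectivity graph.
# A cut vertex (articulation point) is a node whose failure disconnects
# the graph. Classical DFS low-link algorithm; illustrative only.
def cut_vertices(adj):
    """adj: {node: [neighbors]}. Returns the set of articulation points."""
    disc, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:                              # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # Non-root u is a cut vertex if some subtree cannot
                # reach above u without going through u.
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        if parent is None and children > 1:    # root with 2+ DFS subtrees
            cuts.add(u)

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return cuts
```

In a triangle of actors 0-1-2 with a leaf actor 3 attached to actor 2, only actor 2 is critical: its failure strands actor 3, while any other single failure leaves the network connected.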
|
184 |
Computation and Communication Optimization in Many-Core Heterogeneous Server-on-Chip - Reza, Md Farhadur, 12 May 2018
<p> To make full use of the parallelism of the many cores in network-on-chip (NoC) based server-on-chip systems, this dissertation addresses the problem of computation and communication optimization during task-resource co-allocation of large-scale applications onto heterogeneous NoCs. Both static and dynamic task mapping and resource configuration are performed while making the solution aware of the power, thermal, dark/dim silicon, and capacity constraints of the chip. Our objectives are to minimize energy consumption and hotspots, improving NoC performance in terms of latency and throughput while meeting the above-mentioned chip constraints. The task-resource allocation and configuration problems are formulated as linear programming (LP) optimizations for optimal solutions. Due to the high time complexity of LP solutions, fast heuristic approaches are proposed to obtain near-optimal mapping and configuration solutions in finite time for many-core systems. </p><p> • We first present hotspot minimization problems and solutions in NoC based many-core server-on-chip, considering both the computation and communication demands of the applications while meeting the chip constraints in terms of chip area budget, computational capacity of nodes, and communication capacity of links. </p><p> • We then address power and thermal limitations in the dark silicon era by proposing a run-time resource management and mapping strategy that minimizes both hotspots and overall chip energy in many-core NoCs. </p><p> • We then present power-thermal aware load-balanced mapping in heterogeneous CPU-GPU many-core NoCs, proposing a distributed resource management strategy that uses CPUs for system management and latency-sensitive tasks and GPUs for throughput-intensive tasks. </p><p> • We propose a neural network model to dynamically monitor, predict, and configure NoC resources. This work applies local and global neural network classifiers to configure the NoC based on application demands and chip constraints. </p><p> • Finally, given the integration of many cores in a single chip, we propose express channels to improve NoC performance in terms of latency and throughput, along with mapping methodologies for efficient task-resource co-allocation in express-channel-enabled many-core NoCs.</p><p>
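The contrast drawn above between exact LP formulations and fast heuristics can be sketched with a toy greedy mapper. The cost model, names, and parameters below are invented for illustration; they are not the dissertation's formulation, only the general shape of such a heuristic: place heavy tasks first, respect per-core capacity, and score each candidate core by communication distance to already-placed neighbors plus current load (a crude hotspot proxy).

```python
# Hypothetical greedy task-to-core mapping heuristic (illustrative only).
def greedy_map(tasks, cores, comm, capacity, dist):
    """tasks: {task: load}; cores: list of core ids;
    comm: {(t1, t2): traffic}; capacity: {core: max load};
    dist: {(c1, c2): hop distance}. Returns {task: core}."""
    placement, used = {}, {c: 0.0 for c in cores}
    # Place heaviest tasks first, a common bin-packing heuristic.
    for t in sorted(tasks, key=tasks.get, reverse=True):
        best, best_cost = None, float("inf")
        for c in cores:
            if used[c] + tasks[t] > capacity[c]:
                continue                      # respect core capacity
            comm_cost = 0.0
            for (a, b), traffic in comm.items():
                other = b if a == t else (a if b == t else None)
                if other in placement:        # hop-weighted traffic cost
                    comm_cost += traffic * dist[(c, placement[other])]
            cost = comm_cost + used[c]        # load term as hotspot proxy
            if cost < best_cost:
                best, best_cost = c, cost
        if best is None:
            raise ValueError(f"no core can fit task {t!r}")
        placement[t] = best
        used[best] += tasks[t]
    return placement
```

On a two-core example, heavily communicating tasks end up co-located once the first core fills, which is the latency/energy behavior such heuristics trade against the LP optimum.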
|
185 |
Algorithms for Graph Drawing Problems - He, Dayu, 08 August 2017
<p> A graph G is called <i>planar</i> if it can be drawn on the plane such that no two distinct edges intersect each other except at common endpoints. Such a drawing is called a plane embedding of <i>G.</i> A plane graph is a graph with a fixed embedding. A straight-line drawing Γ of a graph <i>G</i> = (<i>V, E</i>) is a drawing where each vertex of <i>V</i> is drawn as a distinct point on the plane and each edge of <i>G</i> is drawn as a line segment connecting its two end vertices. In this thesis, we study a set of planar graph drawing problems. </p><p> First, we consider the problem of <i>monotone drawing:</i> A path <i>P</i> in a straight-line drawing Γ is <i>monotone</i> if there exists a line l such that the orthogonal projections of the vertices of <i>P</i> on l appear along l in the order they appear in <i>P.</i> We call l a monotone line (or <i>monotone direction</i>) of <i>P.</i> Γ is called a monotone drawing of <i>G</i> if it contains at least one monotone path <i>P<sub>uw</sub></i> between every pair of vertices <i>u,w</i> of <i>G.</i> Monotone drawings were recently introduced by Angelini et al.; they represent a new visualization paradigm and are closely related to several other important graph drawing problems. As in many graph drawing problems, one of the main concerns of this research is to reduce the drawing size, i.e., the size of the smallest integer grid such that every graph in the graph class can be drawn in such a grid. We present two approaches to the problem of monotone drawings of trees. Our first approach shows that every <i>n</i>-vertex tree <i>T</i> admits a monotone drawing on a grid of size <i>O</i>(<i>n</i><sup>1.205</sup>) × <i>O</i>(<i>n</i><sup>1.205</sup>). Our second approach further reduces the size of the drawing to 12n × 12n, which is asymptotically optimal. Both drawings can be constructed in <i>O(n)</i> time.</p><p> We also consider monotone drawings of 3-connected plane graphs. We prove that the classical Schnyder drawing of 3-connected plane graphs is a monotone drawing on an <i>f × f</i> grid, which can be constructed in <i>O(n)</i> time. </p><p> Second, we consider the problem of orthogonal drawing. An <i>orthogonal drawing</i> of a plane graph <i>G</i> is a planar drawing of <i>G</i> such that each vertex of <i>G</i> is drawn as a point on the plane, and each edge is drawn as a sequence of horizontal and vertical line segments with no crossings. Orthogonal drawing has attracted much attention due to its various applications in circuit schematics, relationship diagrams, data flow diagrams, etc. Rahman et al. gave a necessary and sufficient condition for a plane graph <i>G</i> of maximum degree 3 to have an orthogonal drawing without bends. An orthogonal drawing <i>D(G)</i> is <i>orthogonally</i> convex if all faces of <i>D(G)</i> are orthogonally convex polygons. Chang et al. gave a necessary and sufficient condition (which strengthens the conditions in the previous result) for a plane graph <i>G</i> of maximum degree 3 to have an orthogonally convex drawing without bends. We further strengthen these results: if <i>G</i> satisfies the same conditions as in the previous papers, it has not only an orthogonally convex drawing, but also a stronger star-shaped orthogonal drawing.</p><p>
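The definition of a monotone path has a direct computational form: for a fixed direction d, the projections of the vertices onto d are strictly increasing exactly when every edge vector of the path has positive dot product with d. A small helper expressing just that definition (not code from the thesis):

```python
# Check whether a drawn path is monotone in a given direction d:
# equivalent to every consecutive edge vector having a positive dot
# product with d, so projections onto d strictly increase.
def is_monotone(path, d):
    """path: list of (x, y) vertex positions; d: direction (dx, dy)."""
    dots = [(bx - ax) * d[0] + (by - ay) * d[1]
            for (ax, ay), (bx, by) in zip(path, path[1:])]
    return all(v > 0 for v in dots)
```

A zig-zag path that always moves rightward is monotone in direction (1, 0) but not in (0, 1), matching the intuition that monotonicity depends on the chosen line l.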
|
186 |
Interactive Data Management and Data Analysis - Yang, Ying, 05 August 2017
<p> Everyone today has a big data problem. Data is everywhere and in different formats; collections of it are referred to as data lakes, data streams, or data swamps. To extract knowledge or insights from the data, or to support decision-making, we need to go through a process of collecting, cleaning, managing, and analyzing it. In this process, data cleaning and data analysis are two of the most important and time-consuming components. </p><p> One common challenge in these two components is a lack of interaction. Data cleaning and data analysis are typically done as a batch process, operating on the whole dataset without any feedback. This leads to long, frustrating delays during which users have no idea whether the process is effective. Lacking interaction, human expert effort is needed to decide which algorithms or parameters the systems should use for these two components. </p><p> We should teach computers to talk to humans, not the other way around. This dissertation focuses on building systems --- Mimir and CIA --- that help users conduct data cleaning and analysis through interaction. Mimir is a system that allows users to clean big data in a cost- and time-efficient way through interaction, a process I call on-demand ETL. Convergent inference algorithms (CIA) are a family of inference algorithms for probabilistic graphical models (PGMs) that enjoy the benefits of both exact and approximate inference through interaction. </p><p> Mimir provides a general language for users to express different data cleaning needs. It acts as a shim layer that wraps around the database, making it possible for the bulk of the ETL process to remain within a classical deterministic system. Mimir also helps users measure the quality of an analysis result and ranks cleaning tasks so that result quality can be improved in a cost-efficient manner. CIA focuses on providing user interaction throughout the process of inference in PGMs. The goal of CIA is to free users from an upfront commitment to either approximate or exact inference, and to give users more control over time/accuracy trade-offs to direct decision-making and the allocation of computation. This dissertation describes the Mimir and CIA frameworks to demonstrate that it is feasible to build efficient interactive data management and data analysis systems.</p><p>
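The cost-efficient ranking of cleaning tasks described above can be caricatured in a few lines. This is not Mimir's actual language or interface; it is an invented sketch of the underlying prioritization idea: order candidate tasks by estimated quality improvement per unit of (hypothetical) curation cost.

```python
# Illustrative sketch, not Mimir's API: rank candidate cleaning tasks
# by expected result-quality gain per unit cost, highest payoff first.
def rank_cleaning_tasks(tasks):
    """tasks: list of dicts with 'name', 'expected_gain' (estimated
    quality improvement), and 'cost' (e.g. curator minutes)."""
    return [t["name"] for t in
            sorted(tasks, key=lambda t: t["expected_gain"] / t["cost"],
                   reverse=True)]
```

An interactive system would recompute such a ranking as user feedback refines the gain estimates, which is where the interaction loop pays off.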
|
187 |
Multi-level behavioral self-organization in computer-animated lifelike synthetic agents - Qin, Hong, 01 January 1999
No description available.
|
188 |
Strong-DISM: A First Attempt to a Dynamically Typed Assembly Language (D-TAL) - Hernandez, Ivory, 05 December 2017
<p> Dynamically Typed Assembly Language (D-TAL) is a lightweight and effective way to close the security gap produced by translating high-level language instructions into low-level ones, and it considerably eases the burden of complexity required to implement typed assembly languages statically. Although there are tradeoffs between the static and dynamic approaches, focusing on a dynamic approach leads to simpler, easier-to-reason-about, and more feasible ways to understand the deployment of types over monomorphically typed or untyped intermediate languages. Here, DISM, a simple but powerful and mature untyped assembly language, is extended with type annotations (on memory and registers) to produce an instance of D-TAL. Strong-DISM, the resulting language, statically lends itself to simpler analysis of type access and security, as the correlation between datatypes and instructions with their respective memory and registers becomes simpler to observe; dynamically, it disallows operations and eliminates conditions that could be used from high-level languages to violate or circumvent security.</p><p>
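The core D-TAL idea of run-time type enforcement at the assembly level can be illustrated with an invented miniature interpreter: values carry type tags, and each instruction checks its operands' tags before executing. This toy is not Strong-DISM's actual instruction set or semantics, only the flavor of dynamic type checking it describes.

```python
# Toy dynamically typed register machine (invented, not Strong-DISM).
# Registers hold tagged values; 'add' is only defined on two ints, so
# mixing a pointer into arithmetic is caught at run time.
class TypeError_(Exception):
    pass

def run(program):
    """program: list of (op, dst, a, b) tuples. Returns final registers."""
    regs = {}
    for op, dst, a, b in program:
        if op == "movi":                    # load integer immediate
            regs[dst] = ("int", a)
        elif op == "movp":                  # load pointer immediate
            regs[dst] = ("ptr", a)
        elif op == "add":                   # int + int -> int only
            (ta, va), (tb, vb) = regs[a], regs[b]
            if ta != "int" or tb != "int":
                raise TypeError_(f"add expects ints, got {ta}, {tb}")
            regs[dst] = ("int", va + vb)
        else:
            raise ValueError(f"unknown op {op}")
    return regs
```

A static typed assembly language would reject the ill-typed program before execution; the dynamic approach traps it at the offending instruction, which is the tradeoff the abstract discusses.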
|
189 |
Programming QR code scanner, communicating Android devices, and unit testing in fortified cards - Patil, Aniket V., 07 December 2017
<p> In the contemporary world, where smartphones have become an essential part of our day-to-day lives, Fortified Cards aims to let people monitor the security of their payments using their smartphones. As a project, Fortified Cards is an endeavor to revolutionize credit and debit card payments using Quick Response (QR) technology and the International Mobile Equipment Identity (IMEI) number. </p><p> The emphasis in the Android application of Fortified Cards is on QR technology, communication between two Android devices, and testing the application under conditions that could negatively affect its successful operation. The project documentation illustrates the working of the application graphically with an activity diagram, providing a step-by-step guide that gives any developer better insight into, and a detailed description of, the implementation of the project.</p><p>
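One concrete, verifiable property of the IMEI numbers the project relies on: the 15th digit is a Luhn check digit, so an application can sanity-check a captured IMEI before using it. The helper below is an illustrative validity check, not code from the project.

```python
# Validate a 15-digit IMEI with the Luhn checksum: double every second
# digit from the left (0-indexed odd positions), subtract 9 from results
# above 9, sum everything, and require the total to be divisible by 10.
def luhn_valid(imei: str) -> bool:
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:          # double every second digit
            d *= 2
            if d > 9:
                d -= 9          # same as summing the two digits
        total += d
    return total % 10 == 0
```

Rejecting malformed or mistyped IMEIs early is exactly the kind of negative-input condition the project's unit testing targets.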
|
190 |
Modeling, Designing, and Implementing an Ad-hoc M-Learning Platform that Integrates Sensory Data to Support Ubiquitous Learning - Nguyen, Hien M., 18 September 2015
Learning at any time, anywhere, using any mobile computing platform (which we refer to as "education in your palm") empowers informal and formal education. It supports the continued creation of knowledge outside the classroom: in after-school programs, community-based organizations, museums, libraries, and shopping malls with under-resourced settings. In doing so, it fosters the continued creation of a cumulative body of knowledge in informal and formal education. Anytime, anywhere, any-device learning means that students are not required to attend traditional classroom settings in order to learn. Instead, students can access and share learning resources from any mobile computing platform, such as smartphones and tablets, over highly dynamic mobile and wireless ad-hoc networks. There has been little research on how to facilitate the integrated use of the service description, discovery, and integration resources available in mobile and wireless ad-hoc networks, including description schemas and mobile learning objects, in particular as it relates to the consistency, availability, security, and privacy of spatio-temporal and trajectory information. Another challenge is finding, combining, and creating suitable learning modules that handle the inherent constraints of mobile learning: resource-poor mobile devices and ad-hoc networks.
The aim of this research is to design, develop, and implement cutting-edge context-aware and ubiquitous self-directed learning methodologies using ad-hoc and sensor networks. The emphasis of our work is on defining an appropriate mobile learning object and its service adaptation descriptions, providing mechanisms for ad-hoc service discovery, and developing concepts for the seamless integration of learning objects and their contents, with a particular focus on preserving data and privacy. The research involves a combination of modeling, designing, and developing a mobile learning system that operates in the absence of a networking infrastructure and integrates sensory data to support ubiquitous learning. The system includes mechanisms that allow content exchange among the mobile ad-hoc nodes to ensure consistency and availability of information. It also provides on-the-fly content service discovery, query requests, and retrieval of data from mobile nodes and sensors.
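The consistency-and-availability-through-content-exchange mechanism described above can be sketched as pairwise anti-entropy between ad-hoc nodes. This miniature is invented for illustration (it is not the dissertation's system): on each contact, two nodes exchange their stores and both keep the newest version of every learning object, so content spreads and converges with no infrastructure.

```python
# Toy ad-hoc content replication: nodes keep versioned learning objects
# and, on contact, both sides adopt the newest version of each object.
class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}           # object id -> (version, payload)

    def publish(self, obj_id, version, payload):
        self.store[obj_id] = (version, payload)

    def sync(self, other):
        """Pairwise anti-entropy exchange with another node in range."""
        for obj_id in set(self.store) | set(other.store):
            mine = self.store.get(obj_id, (-1, None))
            theirs = other.store.get(obj_id, (-1, None))
            newest = max(mine, theirs)     # compare by version number
            self.store[obj_id] = newest
            other.store[obj_id] = newest
```

After a few opportunistic contacts, every reachable node holds the latest version of each object: availability from replication, consistency from version comparison.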
|