11

Supervised and Unsupervised Learning for Semantics Distillation in Multimedia Processing

Liu, Yu 19 October 2018 (has links)
<p> In linguistic, "semantics" stands for the intended meaning in natural language, such as in words, phrases and sentences. In this dissertation, the concept "semantics" is defined more generally: the intended meaning of information in all multimedia forms. The multimedia forms include language domain text, as well as vision domain stationary images and dynamic videos. Specifically, semantics in multimedia are the media content of cognitive information, knowledge and idea that can be represented in text, images and video clips. A narrative story, for example, can be semantics summary of a novel book, or semantics summary of the movie originated from that book. Thus, semantic is a high level abstract knowledge that is independent from multimedia forms. </p><p> Indeed, the same amount of semantics can be represented either redundantly or concisely, due to diversified levels of expression ability of multimedia. The process of a redundantly represented semantics evolving into a concisely represented one is called "semantic distillation". And this evolving process can happen either in between different multimedia forms, or within the same form. </p><p> The booming growth of unorganized and unfiltered information is bringing to people an unwanted issue, information overload, where techniques of semantic distillation are in high demand. However, as opportunities always be side with challenges, machine learning and Artificial Intelligence (AI) today become far more advanced than that in the past, and provide with us powerful tools and techniques. Large varieties of learning methods has made countless of impossible tasks come to reality. Thus in this dissertation, we take advantages of machine learning techniques, with both supervised learning and unsupervised learning, to empower the solving of semantics distillation problems. </p><p> Despite the promising future and powerful machine learning techniques, the heterogeneous forms of multimedia involving many domains still impose challenges to semantics distillation approaches. A major challenge is the definition of "semantics" and the related processing techniques can be entirely different from one problem to another. Varying types of multimedia resources can introduce varying kinds of domain-specific limitations and constraints, where the obtaining of semantics also becomes domain-specific. Therefore, in this dissertation, with text language and vision as the two major domains, we approach four problems of all combinations of the two domains: <b>&bull; Language to Vision Domain:</b> In this study, <i>Presentation Storytelling </i> is formulated as a problem that suggesting the most appropriate images from online sources for storytelling purpose given a text query. Particularly, we approach the problem with a two-step semantics processing method, where the semantics from a simple query is first expanded to a diverse semantic graph, and then distilled from a large number of searched web photos to a few representative ones. This two-step method is empowered by Conditional Random Field (CRF) model, and learned in supervised manner with human-labeled examples. <b>&bull; Vision to Language Domain:</b> The second study, <i> Visual Storytelling</i>, formulates a problem of generating a coherent paragraph from a photo stream. Different from presentation storytelling, visual storytelling goes in opposite way: the semantics extracted from a handful photos are distilled into text. 
In this dissertation, we address this problem by revealing the semantics relationships in visual domain, and distilled into language domain with a new designed Bidirectional Attention Recurrent Neural Network (BARNN) model. Particularly, an attention model is embedded to the RNN so that the coherence can be preserved in language domain at the output being a human-like story. The model is trained with deep learning and supervised learning with public datasets. <b>&bull; Dedicated Vision Domain:</b> To directly approach the information overload issue in vision domain, <i> Image Semantic Extraction</i> formulates a problem that selects a subset from multimedia user's photo albums. In the literature, this problem has mostly been approached with unsupervised learning process. However, in this dissertation, we develop a novel supervised learning method to attach the same problem. We specify visual semantics as a quantizable variables and can be measured, and build an encoding-decoding pipeline with Long-Short-Term-Memory (LSTM) to model this quantization process. The intuition of encoding-decoding pipeline is to imitate human: read-think-and-retell. That is, the pipeline first includes an LSTM encoder scanning all photos for "reading" comprised semantics, then concatenates with an LSTM decoder selecting the most representative ones for "thinking" the gist semantics, finally adds a dedicated residual layer revisiting the unselected ones for "verifying" if the semantics are complete enough. <b> &bull; Dedicated Language Domain:</b> Distinct from above issues, in this part, we introduce a different genre of machine learning method, unsupervised learning. We will address a semantics distillation problem in language domain, <i> Text Semantic Extraction</i>, where the semantics in a letter sequence are extracted from printed images. (Abstract shortened by ProQuest.) </p><p>
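As a rough illustration of the encode-decode photo-selection idea in the "Dedicated Vision Domain" item above, the sketch below wires an LSTM "reader" and an LSTM "selector" together in PyTorch. It is not the dissertation's model: the feature dimension, hidden size, scoring head, and top-k selection rule are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the dissertation's code): an LSTM encoder
# "reads" per-photo CNN features, an LSTM decoder "thinks" over the encoded
# sequence, and a linear head scores each photo for inclusion in the summary.
import torch
import torch.nn as nn

class PhotoSubsetSelector(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)  # "read"
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)    # "think"
        self.score = nn.Linear(hidden, 1)                           # keep/drop logit

    def forward(self, album_feats):        # album_feats: (batch, n_photos, feat_dim)
        enc_out, _ = self.encoder(album_feats)
        dec_out, _ = self.decoder(enc_out)
        return self.score(dec_out).squeeze(-1)   # (batch, n_photos) logits

album = torch.randn(1, 20, 2048)                 # 20 photos' precomputed features
logits = PhotoSubsetSelector()(album)
keep = torch.topk(logits, k=5, dim=1).indices    # pick 5 representative photos
```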
12

Efficient Actor Recovery Paradigm for Wireless Sensor and Actor Networks

Mahjoub, Reem Khalid 16 March 2018 (has links)
Wireless sensor networks (WSNs) are becoming widely used worldwide. Wireless Sensor and Actor Networks (WSANs) are a special category of WSNs in which actors and sensors collaborate to perform specific tasks, and they have become one of the most preeminent emerging types of WSNs. Sensor nodes, which have limited power resources, are responsible for sensing events and transmitting them to actor nodes. Actors are high-performance nodes equipped with rich resources that can collect, process, and transmit data and perform various actions. WSANs have a unique architecture that distinguishes them from WSNs. The characteristics of WSANs give rise to numerous challenges, and the relative importance of each factor usually depends on the application requirements.

The actor nodes are the spine of a WSAN; they collaborate to perform specific tasks in unattended and uneven environments. There is therefore a possibility of a high failure rate in such unfriendly scenarios, due to factors such as power exhaustion of devices, electronic circuit failure, software errors in nodes, physical impairment of the actor nodes, and inter-actor connectivity problems. It is essential to maintain inter-actor connectivity in order to ensure network connectivity. Thus, it is extremely important to detect the failure of a cut-vertex actor and the resulting network partition in order to improve the Quality of Service (QoS) (a standard cut-vertex detection sketch follows this abstract). For the network to recover from actor node failure, optimal re-localization and coordination techniques should take place.

In this work, we propose an efficient actor recovery (EAR) paradigm to guarantee contention-free traffic-forwarding capacity. The EAR paradigm consists of a Node Monitoring and Critical Node Detection (NMCND) algorithm that monitors the activities of the nodes to determine the critical node; in addition, it replaces the critical node with a backup node prior to complete node failure, which helps balance network performance. Packets are handled by a Network Integration and Message Forwarding (NIMF) algorithm that determines the source of packet forwarding (either actor or sensor); this decision-making capability controls the packet-forwarding rate to sustain the network for a longer time. Furthermore, to handle the routing strategy, a Priority-Based Routing for Node Failure Avoidance (PRNFA) algorithm is deployed to decide the priority of the packets to be forwarded based on the significance of the information they carry. To validate the effectiveness of the proposed EAR paradigm, we compare the performance of our proposed work with state-of-the-art localization algorithms. Our experimental results show superior performance with regard to network lifetime, residual energy, reliability, sensor/actor recovery time, and data recovery.
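Since the abstract hinges on detecting cut-vertex actors whose failure partitions the network, the sketch below runs a textbook DFS articulation-point search over an inter-actor adjacency list. It is a generic algorithm, not the NMCND algorithm itself, and the example topology is made up.

```python
# Hedged sketch: find cut-vertex actors in the inter-actor connectivity graph
# with a standard DFS articulation-point search. An actor whose removal would
# partition the graph is "critical" and a candidate for a proactive backup.
def cut_vertices(adj):
    """adj: dict mapping actor id -> list of neighbor actor ids."""
    disc, low, critical = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                       # back edge
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    critical.add(u)             # removing u disconnects v's subtree
        if parent is None and children > 1:
            critical.add(u)                     # DFS root with 2+ subtrees

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return critical

# Example inter-actor topology: actor 'B' is a cut vertex.
print(cut_vertices({'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}))  # {'B'}
```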
13

Computation and Communication Optimization in Many-Core Heterogeneous Server-on-Chip

Reza, Md Farhadur 12 May 2018 (has links)
To make full use of the parallelism of the many cores in network-on-chip (NoC) based server-on-chip designs, this dissertation addresses the problem of computation and communication optimization during task-resource co-allocation of large-scale applications onto heterogeneous NoCs. Both static and dynamic task mapping and resource configuration are performed while keeping the solution aware of the power, thermal, dark/dim silicon, and capacity constraints of the chip. Our objectives are to minimize energy consumption and hotspots, improving NoC performance in terms of latency and throughput while meeting the above chip constraints. The task-resource allocation and configuration problems are formulated as linear programming (LP) optimizations for optimal solutions. Because of the high time complexity of LP solutions, fast heuristic approaches are proposed to obtain near-optimal mapping and configuration solutions in finite time for many-core systems (a toy greedy-mapping sketch follows this abstract).

• We first present the hotspot minimization problems and solutions in NoC-based many-core server-on-chip, considering both the computation and communication demands of the applications while meeting the chip constraints in terms of chip area budget, computational capacity of nodes, and communication capacity of links.

• We then address power and thermal limitations in the dark silicon era by proposing a run-time resource management strategy and mapping that minimize both hotspots and overall chip energy in many-core NoCs.

• We then present power- and thermal-aware load-balanced mapping in heterogeneous CPU-GPU systems on many-core NoCs. A distributed resource management strategy is proposed for the CPU-GPU system, using CPUs for system management and latency-sensitive tasks and GPUs for throughput-intensive tasks.

• We propose a neural network model to dynamically monitor, predict, and configure NoC resources. This work applies local and global neural network classifiers to configure the NoC based on application demands and chip constraints.

• Because many cores are integrated into a single chip, we propose express channels for improving NoC performance in terms of latency and throughput. We also propose mapping methodologies for efficient task-resource co-allocation in express-channel-enabled many-core NoCs.
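The following toy heuristic hints at what a fast, non-LP mapping pass can look like: tasks that exchange the most traffic are placed first, each on the free mesh core that minimizes a hop-weighted communication cost. The traffic matrix, mesh size, and greedy ordering are invented for illustration and are not the dissertation's formulation.

```python
# Illustrative greedy task-to-core mapping on a 2D-mesh NoC: place heavily
# communicating tasks on nearby cores so hop count (a proxy for communication
# energy and latency) stays small.
import itertools

def greedy_map(traffic, mesh_w, mesh_h):
    """traffic[i][j] = bytes exchanged between tasks i and j (symmetric)."""
    n = len(traffic)
    cores = list(itertools.product(range(mesh_w), range(mesh_h)))
    order = sorted(range(n), key=lambda t: -sum(traffic[t]))  # hottest tasks first
    placement = {}
    for task in order:
        best, best_cost = None, float("inf")
        for core in cores:
            if core in placement.values():
                continue
            # Manhattan-distance-weighted cost to already-placed communicators.
            cost = sum(traffic[task][t] * (abs(core[0] - c[0]) + abs(core[1] - c[1]))
                       for t, c in placement.items())
            if cost < best_cost:
                best, best_cost = core, cost
        placement[task] = best
    return placement

traffic = [[0, 8, 1], [8, 0, 2], [1, 2, 0]]      # 3 hypothetical tasks
print(greedy_map(traffic, 2, 2))                 # tasks 0 and 1 land on adjacent cores
```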
14

Algorithms for Graph Drawing Problems

He, Dayu 08 August 2017 (has links)
A graph G is called planar if it can be drawn on the plane such that no two distinct edges intersect except at common endpoints. Such a drawing is called a plane embedding of G. A plane graph is a graph with a fixed embedding. A straight-line drawing Γ of a graph G = (V, E) is a drawing in which each vertex of V is drawn as a distinct point in the plane and each edge of G is drawn as a line segment connecting its two end vertices. In this thesis, we study a set of planar graph drawing problems.

First, we consider the problem of monotone drawing: a path P in a straight-line drawing Γ is monotone if there exists a line l such that the orthogonal projections of the vertices of P onto l appear along l in the order they appear in P. We call l a monotone line (or monotone direction) of P (a small monotonicity check in code follows this abstract). Γ is called a monotone drawing of G if it contains at least one monotone path P_uw between every pair of vertices u, w of G. Monotone drawings were recently introduced by Angelini et al.; they represent a new visualization paradigm and are closely related to several other important graph drawing problems. As in many graph drawing problems, one of the main concerns of this research is to reduce the drawing size, which is the size of the smallest integer grid such that every graph in the graph class can be drawn in such a grid. We present two approaches to the problem of monotone drawings of trees. Our first approach shows that every n-vertex tree T admits a monotone drawing on a grid of size O(n^1.205) × O(n^1.205). Our second approach further reduces the size of the drawing to 12n × 12n, which is asymptotically optimal. Both drawings can be constructed in O(n) time.

We also consider monotone drawings of 3-connected plane graphs. We prove that the classical Schnyder drawing of 3-connected plane graphs is a monotone drawing on an f × f grid, which can be constructed in O(n) time.

Second, we consider the problem of orthogonal drawing. An orthogonal drawing of a plane graph G is a planar drawing of G in which each vertex of G is drawn as a point in the plane and each edge is drawn as a sequence of horizontal and vertical line segments with no crossings. Orthogonal drawing has attracted much attention due to its applications in circuit schematics, relationship diagrams, data flow diagrams, etc. Rahman et al. gave a necessary and sufficient condition for a plane graph G of maximum degree 3 to have an orthogonal drawing without bends. An orthogonal drawing D(G) is orthogonally convex if all faces of D(G) are orthogonally convex polygons. Chang et al. gave a necessary and sufficient condition (which strengthens the condition in the previous result) for a plane graph G of maximum degree 3 to have an orthogonally convex drawing without bends. We further strengthen these results by showing that if G satisfies the same conditions as in the previous papers, it has not only an orthogonally convex drawing but also a stronger star-shaped orthogonal drawing.
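The monotone-path definition above can be checked directly from coordinates: a path is monotone in a direction d exactly when every edge has a positive component along d, so the projections onto d appear in path order. The small check below, with arbitrary example coordinates, illustrates this; it is not code from the thesis.

```python
# Check whether a polygonal path is monotone with respect to a direction d:
# every edge must have a strictly positive dot product with d.
def is_monotone(path_points, direction):
    dx, dy = direction
    return all((x2 - x1) * dx + (y2 - y1) * dy > 0
               for (x1, y1), (x2, y2) in zip(path_points, path_points[1:]))

path = [(0, 0), (2, 1), (3, 3), (5, 2)]
print(is_monotone(path, (1, 0)))   # True: x-coordinates strictly increase
print(is_monotone(path, (0, 1)))   # False: the last edge moves downward
```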
15

Interactive Data Management and Data Analysis

Yang, Ying 05 August 2017 (has links)
Everyone today has a big data problem. Data is everywhere and in different formats; it may be referred to as data lakes, data streams, or data swamps. To extract knowledge or insights from the data, or to support decision-making, we need to go through a process of collecting, cleaning, managing, and analyzing the data. In this process, data cleaning and data analysis are two of the most important and time-consuming components.

One common challenge in these two components is a lack of interaction. Data cleaning and data analysis are typically done as a batch process, operating on the whole dataset without any feedback. This leads to long, frustrating delays during which users have no idea whether the process is effective. Lacking interaction, human expert effort is needed to decide which algorithms or parameters these components should use.

We should teach computers to talk to humans, not the other way around. This dissertation focuses on building systems, Mimir and CIA, that help users conduct data cleaning and analysis through interaction. Mimir is a system that allows users to clean big data in a cost- and time-efficient way through interaction, a process I call on-demand ETL (a toy lazy-cleaning sketch follows this abstract). Convergent inference algorithms (CIA) are a family of inference algorithms for probabilistic graphical models (PGMs) that enjoy the benefits of both exact and approximate inference algorithms through interaction.

Mimir provides a general language for users to express different data cleaning needs. It acts as a shim layer wrapping the database, making it possible for the bulk of the ETL process to remain within a classical deterministic system. Mimir also helps users measure the quality of an analysis result and ranks cleaning tasks so that result quality can be improved in a cost-efficient manner. CIA focuses on providing user interaction during inference in PGMs. The goal of CIA is to free users from an upfront commitment to either approximate or exact inference and to give them more control over time/accuracy trade-offs to direct decision-making and the allocation of computation. This dissertation describes the Mimir and CIA frameworks to demonstrate that it is feasible to build efficient interactive data management and data analysis systems.
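To make the on-demand ETL idea concrete, here is a deliberately tiny sketch in which dirty cells are wrapped as lazy repairs that are cleaned only when a query reads them. The class names, repair rule, and table are hypothetical and are not Mimir's actual language or API.

```python
# Toy sketch of on-demand cleaning: dirty cells stay raw until a query touches
# them, so only the rows a query actually reads pay the cleaning cost.
class LazyCell:
    def __init__(self, raw, repair):
        self.raw, self.repair, self._clean = raw, repair, None

    def value(self):
        if self._clean is None:               # clean on first access only
            self._clean = self.repair(self.raw)
        return self._clean

def repair_price(raw):                        # hypothetical cleaning rule
    return float(str(raw).replace("$", "").strip() or 0.0)

table = [{"item": "pen", "price": LazyCell("$1.50", repair_price)},
         {"item": "ink", "price": LazyCell(" 7.0 ", repair_price)}]

total = sum(r["price"].value() for r in table if r["item"] == "pen")
print(total)                                  # 1.5
```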
16

Strong-DISM: A First Attempt to a Dynamically Typed Assembly Language (D-TAL)

Hernandez, Ivory 05 December 2017 (has links)
Dynamically Typed Assembly Language (D-TAL) is not only a lightweight and effective way to close the security gap that opens when high-level language instructions are translated into low-level ones; it also considerably eases the burden imposed by the complexity of implementing typed assembly languages statically. Although there are tradeoffs between the static and dynamic approaches, focusing on a dynamic approach leads to simpler, easier-to-reason-about, and more feasible ways to understand the deployment of types over monomorphically typed or untyped intermediate languages. In this work, DISM, a simple but powerful and mature untyped assembly language, is extended with type annotations (on memory and registers) to produce an instance of D-TAL. Statically, the resulting language, Strong-DISM, lends itself to simpler analysis of type access and security, because the correlation between datatypes and instructions and their respective memory and registers becomes simpler to observe; dynamically, it disallows operations and eliminates conditions that, from high-level languages, could be used to violate or circumvent security.
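As a hedged illustration of what dynamic type tagging on registers can look like, the interpreter fragment below attaches a (tag, value) pair to each register and checks operand tags before executing an instruction. The instruction names and tags are invented and are not Strong-DISM syntax.

```python
# Toy dynamic type checking in an assembly-like interpreter: every register
# carries a (tag, value) pair, and each instruction validates operand tags at
# runtime before executing.
class TypeFault(Exception):
    pass

regs = {"r0": ("int", 7), "r1": ("int", 5), "r2": ("ptr", 0x1000)}

def add(dst, a, b):
    (ta, va), (tb, vb) = regs[a], regs[b]
    if ta != "int" or tb != "int":            # dynamic check before the op
        raise TypeFault(f"add expects int operands, got {ta}, {tb}")
    regs[dst] = ("int", va + vb)

add("r3", "r0", "r1")
print(regs["r3"])                             # ('int', 12)
try:
    add("r4", "r0", "r2")                     # int + ptr is rejected at runtime
except TypeFault as e:
    print(e)
```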
17

Programming QR code scanner, communicating Android devices, and unit testing in fortified cards

Patil, Aniket V. 07 December 2017 (has links)
In the contemporary world, where smartphones have become an essential part of our day-to-day lives, Fortified Cards aims to let people monitor the security of their payments using their smartphones. Fortified Cards, as a project, is an endeavor to revolutionize credit and debit card payments using Quick Response (QR) technology and the International Mobile Equipment Identity (IMEI) number.

The emphasis in the Android application of Fortified Cards is on QR technology, communication between two Android devices, and testing the application under situations that could negatively affect the successful implementation of the project. The project documentation illustrates how the application works graphically, using an activity diagram: a step-by-step guide that gives any developer better insight into, and a detailed description of, the project's implementation.
18

On detection, analysis and characterization of transient and parametric failures in nano-scale CMOS VLSI

Sanyal, Alodeep 01 January 2010 (has links)
As we move deep into the nanometer regime of CMOS VLSI (the 45 nm node and below), the device noise margin is sharply eroded by the continuous lowering of device threshold voltages together with an ever-increasing rate of signal transitions driven by the constant demand for higher performance. Sharp erosion of the device noise margin vastly increases the likelihood of intermittent failures (also known as parametric failures) during device operation, as opposed to permanent failures caused by physical defects introduced during the manufacturing process. The major sources of intermittent failures are capacitive crosstalk between neighboring interconnects, abnormal drops in power supply voltage (also known as droop), localized thermal gradients, and soft errors caused by the impact of high-energy particles on the semiconductor surface. In nanometer technology, these intermittent failures largely outnumber the permanent failures caused by physical defects. Therefore, it is of paramount importance to develop efficient test generation and test application methods to accurately detect and characterize these classes of failures.

Soft error rate (SER) is an important design metric in the semiconductor industry, expressed as the number of such errors encountered per billion hours of device operation, known as the Failure-In-Time (FIT) rate. Soft errors are rare events. Traditional techniques for SER characterization involve testing multiple devices in parallel, or testing a device inside a high-energy neutron bombardment chamber to artificially accelerate the occurrence of single events. Motivated by the high time and cost overhead of measuring SER, in this thesis we propose a two-step approach: (i) a new filtering technique based on the amplitude of the noise pulse, which significantly reduces the set of soft-error-susceptible nodes to be considered for a given design, followed by (ii) an Integer Linear Program (ILP)-based pattern generation technique that accelerates the SER characterization process by one to two orders of magnitude compared to the current state of the art.

During test application, it is important to distinguish between an intermittent failure and a permanent failure. Motivated by the fact that most intermittent failures are temporally sparse, we present a novel design-for-testability (DFT) architecture that facilitates applying the same test vector twice in a row. The underlying assumption is that a soft fail will not manifest its effect in two consecutive test cycles, whereas the error caused by a physical defect will produce an identically corrupt output signature in both test cycles. Therefore, comparing the output signatures of two consecutive applications of the same test vector accurately distinguishes a soft fail from a hard fail (a small sketch of this comparison follows this abstract). We show the application of this DFT technique to measuring soft error rate as well as other circuit-marginality-related parametric failures, such as delay failures induced by thermal hot spots.

A major contribution of this thesis lies in investigating the effect of multiple sources of noise acting together to exacerbate the noise effect even further. The existing literature on signal integrity verification and test falls short of taking these combined noise effects into account. We focus in particular on capacitive crosstalk on long signal nets. A typical long net is capacitively coupled with multiple aggressors and also tends to have multiple fanout gates. Gate leakage current originating in the fanout receivers flows backward and terminates in the driver, causing a shift in the driver output voltage. This effect becomes more prominent as the gate oxide is scaled more aggressively. In this thesis, we first present a dynamic simulation-based study to establish the significance of the problem, followed by an automatic test pattern generation (ATPG) solution that uses a 0-1 Integer Linear Program (ILP) to maximize the cumulative voltage noise at a given victim net due to crosstalk and gate leakage loading, in conjunction with propagating the fault effect to an observation point. Pattern pairs generated by this technique are useful both for manufacturing test application and for signal integrity verification of nanometer designs. This research opens a new direction for studying nanometer noise effects and motivates us to extend the study to other noise sources acting in tandem, including voltage drop and temperature effects.
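The double-application idea is easy to state in code: compare the two output signatures of the same test vector against each other and against the golden response. The sketch below, with made-up bit-string signatures, captures only the classification logic, not the proposed DFT hardware.

```python
# Sketch of soft-vs-hard failure diagnosis from two applications of the same
# test vector: a hard (permanent) defect corrupts both responses identically,
# while a soft (intermittent) fail is unlikely to repeat.
def classify_failure(golden, response_1, response_2):
    if response_1 == golden and response_2 == golden:
        return "pass"
    if response_1 == response_2:
        return "hard fail"          # same corruption in both cycles -> defect
    return "soft fail"              # non-repeating corruption -> intermittent

golden = "1011"
print(classify_failure(golden, "1011", "1011"))  # pass
print(classify_failure(golden, "1001", "1001"))  # hard fail
print(classify_failure(golden, "1011", "0011"))  # soft fail
```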
19

Software techniques to reduce the energy consumption of low-power devices at the limits of digital abstractions

Salajegheh, Mastooreh Negin 01 January 2012 (has links)
My thesis explores the effectiveness of software techniques that bend digital abstractions in order to allow embedded systems to do more with less energy. Recent years have witnessed a proliferation of low-power embedded devices with power budgets ranging from a few milliwatts down to microwatts. The capabilities and size of embedded systems continue to improve dramatically; however, improvements in battery density and energy harvesting have failed to follow a Moore's-law trajectory. Thus, energy remains a formidable bottleneck for low-power embedded systems. Instead of trying to create hardware with ideal energy proportionality, my dissertation evaluates how unconventional and probabilistic computing that bends traditional abstractions and interfaces can reduce energy consumption while protecting program semantics. My thesis considers four methods that unleash energy otherwise squandered on communication, storage, timekeeping, or sensing: 1) CCCP, which provides an energy-efficient alternative to local non-volatile storage by relying on cryptographic backscatter radio communication; 2) Half-Wits, which reduces energy consumption by 30% by operating embedded systems at below-spec supply voltages and implementing NOR flash memory error recovery in firmware rather than strictly in hardware; 3) TARDIS, which exploits the decay properties of SRAM to estimate the duration of a power failure, ranging from seconds to several hours depending on hardware parameters (a toy decay-to-time sketch follows this abstract); and 4) Nonsensors, which allow operation of analog-to-digital converters at low voltages without any hardware modifications to the existing circuitry.
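The TARDIS-style estimate sketched below maps the fraction of decayed SRAM cells to an off-time through a calibration table and linear interpolation. The calibration numbers and snapshot are made up for illustration; they are not measurements from the thesis.

```python
# Toy estimate of power-off duration from SRAM decay: more cells flip back
# toward their ground state the longer the device is unpowered, so a
# previously measured calibration curve lets firmware infer elapsed time.
import bisect

# Hypothetical calibration: (decayed fraction, seconds without power).
CALIBRATION = [(0.00, 0), (0.05, 30), (0.20, 120), (0.60, 600), (0.95, 3600)]

def estimate_off_time(decayed_fraction):
    fracs = [f for f, _ in CALIBRATION]
    i = bisect.bisect_left(fracs, decayed_fraction)
    if i == 0:
        return CALIBRATION[0][1]
    if i == len(CALIBRATION):
        return CALIBRATION[-1][1]
    (f0, t0), (f1, t1) = CALIBRATION[i - 1], CALIBRATION[i]
    return t0 + (t1 - t0) * (decayed_fraction - f0) / (f1 - f0)  # linear interp.

def decayed_fraction(sram_snapshot, written_value=1):
    # Count cells that flipped away from the value written before power loss.
    return sum(1 for bit in sram_snapshot if bit != written_value) / len(sram_snapshot)

snapshot = [1] * 70 + [0] * 30                        # 30% of cells decayed
print(estimate_off_time(decayed_fraction(snapshot)))  # ~240 seconds
```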
20

Exploiting energy harvesting for passive embedded computing systems

Gummeson, Jeremy 01 January 2014 (has links)
The key limitation in mobile computing systems is energy: without a stable power supply, these systems cannot process, store, or communicate data. This problem is of particular interest because the storage density of battery technologies does not follow scaling trends similar to Moore's law. This means that, depending on application performance requirements and lifetime objectives, a battery may dominate the overall system weight and form factor, which can result in an overall size that is either inconvenient or unacceptable for a particular application. As device features have scaled down in size, entire embedded systems have been implemented on a single die or chip, leaving the battery as the form-factor bottleneck. One way to diminish the impact that batteries have on mobile embedded system design is to decrease reliance on buffered energy by harvesting power from the environment or infrastructure. There is a spectrum of design choices that utilize harvested power; of particular interest are those that use small energy buffers and depend almost entirely on harvested power, since minimizing buffer size decreases form factor and mitigates reliance on batteries (a toy energy-buffer simulation follows this abstract). Because harvested power is not continuously available in embedded computing systems, a unique set of design challenges arises.

First, we address the design challenges that emerge from mobile computing systems that use minimal energy buffers. Specifically, we explore the design space of a computational radio frequency identification (RFID) platform that uses a small solar harvesting unit to replenish a capacitor-based energy storage unit. We show that such a system's performance can be enhanced while in a reader's field of interrogation, and that the device can also operate while completely decoupled from reader infrastructure. We also provide a toolset that simulates system performance using a set of experimentally obtained light-intensity traces gathered from a mobile subject.

Next, we show how energy buffered by such a harvesting-based system can be used to implement an efficient burst protocol that allows a computational RFID to quickly offload buffered data while in contact with a reader. The burst mechanism is implemented by re-purposing existing RFID protocol primitives, which preserves compatibility with existing reader infrastructure. We show that bursts provide significant improvements to individual tag throughput while coexisting with tags that do not use the burst protocol.

Next, we show that energy harvesting can be used to enable a novel security mechanism for embedded devices equipped with Near Field Communication (NFC). NFC is growing in pervasiveness, especially on mobile phones, but many open security questions remain. We enable NFC security by harvesting energy via magnetic induction, using the harvested energy to power an integrated reader chip, and selectively blocking malicious messages via passive load modulation after sniffing the message contents. We show that such a platform is feasible using energy harvested opportunistically from mobile phones, successfully blocking one class of messages while allowing others through.

Finally, we demonstrate that energy harvested from mobile phones can be used to implement wirelessly powered ubiquitous displays. One drawback of illuminated displays is that they need a continuous source of power to maintain their state; this is undesirable, especially since the display is typically the most power-hungry component of an embedded device. Electronic paper technologies eliminate this drawback by providing a display that requires no energy to maintain state. By combining NFC energy harvesting and communication with electronic paper technology, we implement a companion display for mobile phones that obtains all the energy required for a display update while communicating with a user application running on the phone. The companion display assists the phone in displaying static information while the power-hungry built-in display remains unpowered.
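A toy version of the small-buffer behavior described above: a capacitor integrates a harvested-power trace, and a task runs only when the buffer holds enough energy. All constants and the light trace are invented for illustration and do not come from the dissertation's toolset.

```python
# Toy simulation of a capacitor-buffered, harvesting-powered device: charge
# from a harvested power trace, run a sense-and-transmit task whenever enough
# energy is buffered, otherwise keep sleeping.
CAP_SIZE_J   = 0.010      # usable energy the capacitor can hold (joules)
TASK_COST_J  = 0.004      # energy one sense-and-transmit task needs
SLEEP_LOAD_W = 0.0001     # quiescent draw while waiting

def simulate(harvest_trace_w, dt=1.0):
    stored, tasks_done = 0.0, 0
    for p_in in harvest_trace_w:                       # one sample per dt seconds
        stored = min(CAP_SIZE_J, stored + (p_in - SLEEP_LOAD_W) * dt)
        stored = max(stored, 0.0)
        if stored >= TASK_COST_J:                      # enough buffered energy?
            stored -= TASK_COST_J
            tasks_done += 1
    return tasks_done

indoor_light = [0.0005] * 60                           # 0.5 mW harvested for 60 s
print(simulate(indoor_light))                          # a handful of task executions
```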
