About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Modeling, Designing, and Implementing an Ad-hoc M-Learning Platform that Integrates Sensory Data to Support Ubiquitous Learning

Nguyen, Hien M. 18 September 2015 (has links)
Learning at any time and in any place, on any mobile computing platform (which we refer to as "education in your palm"), empowers both informal and formal education. It supports the continued creation of knowledge outside the classroom, in after-school programs, community-based organizations, museums, libraries, and shopping malls, including under-resourced settings, and in doing so fosters a cumulative body of knowledge in informal and formal education. Anytime, anywhere, any-device learning means that students are not required to attend traditional classroom settings in order to learn. Instead, students can access and share learning resources from any mobile computing platform, such as smartphones and tablets, over highly dynamic mobile and wireless ad-hoc networks. There has been little research on how to facilitate the integrated use of the service description, discovery, and integration resources available in mobile and wireless ad-hoc networks, including description schemas and mobile learning objects, particularly as it relates to the consistency, availability, security, and privacy of spatio-temporal and trajectory information. Another challenge is finding, combining, and creating suitable learning modules to handle the inherent constraints of mobile learning, resource-poor mobile devices, and ad-hoc networks. The aim of this research is to design, develop, and implement cutting-edge context-aware and ubiquitous self-directed learning methodologies using ad-hoc and sensor networks. The emphasis of our work is on defining an appropriate mobile learning object and the service adaptation descriptions, as well as providing mechanisms for ad-hoc service discovery and developing concepts for the seamless integration of the learning objects and their contents, with a particular focus on preserving data and privacy. The research involves a combination of modeling, designing, and developing a mobile learning system that operates in the absence of a networking infrastructure and integrates sensory data to support ubiquitous learning. The system includes mechanisms that allow content exchange among the mobile ad-hoc nodes to ensure consistency and availability of information. It also provides on-the-fly content service discovery, query requests, and data retrieval from mobile nodes and sensors.
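The abstract describes on-the-fly content service discovery and replication among ad-hoc nodes but gives no implementation detail; the following minimal Python sketch illustrates the general idea under invented names (Node, LearningObject, broadcast_query) and a one-hop flooding assumption, not the platform's actual protocol.

```python
# Hypothetical sketch of ad-hoc content discovery among mobile learning nodes.
# Names and structure are illustrative only; the dissertation's actual design
# is not described at this level of detail in the abstract.
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    object_id: str
    topic: str
    payload: str           # e.g. a quiz or a sensor-annotated lesson
    version: int = 1

@dataclass
class Node:
    node_id: str
    store: dict = field(default_factory=dict)   # object_id -> LearningObject

    def publish(self, obj: LearningObject) -> None:
        self.store[obj.object_id] = obj

    def answer_query(self, topic: str) -> list:
        """Return local learning objects matching a broadcast topic query."""
        return [o for o in self.store.values() if o.topic == topic]

    def replicate(self, obj: LearningObject) -> None:
        """Keep the newest version to preserve consistency across replicas."""
        existing = self.store.get(obj.object_id)
        if existing is None or obj.version > existing.version:
            self.store[obj.object_id] = obj

def broadcast_query(requester: Node, peers: list, topic: str) -> list:
    """Simulate an on-the-fly discovery request flooded to one-hop peers."""
    hits = []
    for peer in peers:
        hits.extend(peer.answer_query(topic))
    for obj in hits:                 # cache results locally for availability
        requester.replicate(obj)
    return hits

# Usage: node B discovers a 'geometry' lesson published by node A.
a, b = Node("A"), Node("B")
a.publish(LearningObject("lo-1", "geometry", "Angles in a triangle..."))
print([o.object_id for o in broadcast_query(b, [a], "geometry")])
```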
192

Sketch-based digital storyboards and floor plans for authoring computer-generated film pre-visuals

Matthews, Timothy January 2012 (has links)
Pre-visualisation is an important tool for planning films during the pre-production phase of filmmaking. Existing pre-visualisation authoring tools do not effectively support the user in authoring pre-visualisations without impairing software usability. These tools require the user to either have programming skills, be experienced in modelling and animation, or use drag-and-drop style interfaces. These interaction methods do not intuitively fit with pre-production activities such as floor planning and storyboarding, and existing tools that apply a storyboarding metaphor do not automatically interpret user sketches. The goal of this research was to investigate how sketch-based user interfaces and methods from computer vision could be used for supporting pre-visualisation authoring using a storyboarding approach. The requirements for such a sketch-based storyboarding tool were determined from literature and an interview with Triggerfish Animation Studios. A framework was developed to support sketch-based pre-visualisation authoring using a storyboarding approach. Algorithms for describing user sketches, recognising objects and performing pose estimation were designed to automatically interpret user sketches. A proof of concept prototype implementation of this framework was evaluated in order to assess its usability benefit. It was found that the participants could author pre-visualisations effectively, efficiently and easily. The results of the usability evaluation also showed that the participants were satisfied with the overall design and usability of the prototype tool. The positive and negative findings of the evaluation were interpreted and combined with existing heuristics in order to create a set of guidelines for designing similar sketch-based pre-visualisation authoring tools that apply the storyboarding approach. The successful implementation of the proof of concept prototype tool provides practical evidence of the feasibility of sketch-based pre-visualisation authoring. The positive results from the usability evaluation established that sketch-based interfacing techniques can be used effectively with a storyboarding approach for authoring pre-visualisations without impairing software usability.
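The thesis's sketch-description, recognition and pose-estimation algorithms are not spelled out in the abstract; as a generic illustration of automatic sketch interpretation, the sketch below classifies a user stroke by comparing a crude geometric descriptor against stored templates. All names and features are hypothetical stand-ins, not the author's method.

```python
# Illustrative-only sketch classifier: describes a stroke with simple geometric
# features and matches it to the nearest stored template.
import math

def features(points):
    """Crude shape descriptor: aspect ratio, point count, total turning angle."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w, h = (max(xs) - min(xs)) or 1e-9, (max(ys) - min(ys)) or 1e-9
    turning = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        turning += abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
    return (w / h, len(points), turning)

def classify(stroke, templates):
    """Return the label of the template whose descriptor is closest."""
    f = features(stroke)
    def dist(t):
        g = features(t["points"])
        return sum((a - b) ** 2 for a, b in zip(f, g))
    return min(templates, key=dist)["label"]

# Usage: a roughly square stroke matches the hypothetical 'prop:box' template.
square = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
circleish = [(10 * math.cos(t / 10), 10 * math.sin(t / 10)) for t in range(63)]
templates = [{"label": "prop:box", "points": square},
             {"label": "character", "points": circleish}]
print(classify([(0, 0), (9, 1), (10, 11), (1, 10), (0, 0)], templates))
```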
193

Using discrimination graphs to represent visual knowledge

Mulder, Jan A. January 1985 (has links)
This dissertation is concerned with the representation of visual knowledge. Image features often have many different local interpretations. As a result, visual interpretations are often ambiguous and hypothetical. In many model-based vision systems the problem of representing ambiguous and hypothetical interpretations is not very specifically addressed. Generally, specialization hierarchies are used to suppress a potential explosion in local interpretations. Such a solution has problems, as many local interpretations cannot be represented by a single hierarchy. As well, ambiguous and hypothetical interpretations tend to be represented along more than one knowledge representation dimension, limiting modularity in representation and control. In this dissertation a better solution is proposed. Classes of objects which have local features with similar appearance in the image are represented by discrimination graphs. Such graphs are directed and acyclic. Their leaves represent classes of elementary objects. All other nodes represent abstract (and sometimes unnatural) classes of objects, which intensionally represent the set of elementary object classes that descend from them. Rather than interpreting each image feature as an elementary object, we use the abstract class that represents the complete set of possible (elementary) objects. Following the principle of least commitment, the interpretation of each image feature is repeatedly forced into more restrictive classes as the context for the image feature is expanded, until the image no longer provides subclassification information. This approach is called discrimination vision, and it has several attractive features. First, hypothetical and ambiguous interpretations can be represented along one knowledge representation dimension. Second, the number of hypotheses represented for a single image feature can be kept small. Third, in an interpretation graph competing hypotheses can be represented in the domain of a single variable. This often eliminates the need for restructuring the graph when a hypothesis is invalidated. Fourth, the problem of resolving ambiguity can be treated as a constraint satisfaction problem, which is well researched in Computational Vision. Our system has been implemented as Mapsee-3, a program for interpreting sketch maps. A hierarchical arc consistency algorithm has been used to deal with the inherently hierarchical discrimination graphs. Experimental data show that, for the domain implemented, this algorithm is more efficient than standard arc consistency algorithms. / Faculty of Science / Department of Computer Science / Graduate
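As a rough illustration of the data structure the abstract describes, the following minimal sketch builds a discrimination graph as a DAG and applies least-commitment refinement: an image feature is held at the most abstract class covering all of its remaining interpretations and is pushed into a more restrictive class only when context rules alternatives out. The class names are invented; the actual Mapsee-3 hierarchy and hierarchical arc consistency algorithm are not reproduced.

```python
# Minimal sketch of a discrimination graph: a DAG whose leaves are elementary
# object classes and whose internal nodes intensionally represent the set of
# leaves below them.  Hypothetical sketch-map classes, not Mapsee-3's own.
class DNode:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

    def leaves(self):
        """Elementary classes intensionally represented by this node."""
        if not self.children:
            return {self.name}
        return set().union(*(c.leaves() for c in self.children))

    def refine(self, still_possible):
        """Least commitment: descend only while a single child still covers
        every remaining possible elementary interpretation."""
        node = self
        while node.children:
            covering = [c for c in node.children if still_possible <= c.leaves()]
            if len(covering) != 1:
                break                    # context does not discriminate yet
            node = covering[0]
        return node

# A toy hierarchy for linear features in a sketch map.
road, river, shore = DNode("road"), DNode("river"), DNode("shoreline")
waterway = DNode("waterway", [river, shore])
linear   = DNode("linear-feature", [road, waterway])

# A chain is first interpreted as the abstract class of all possibilities...
print(linear.refine({"road", "river", "shoreline"}).name)   # linear-feature
# ...and forced into more restrictive classes as context excludes alternatives.
print(linear.refine({"river", "shoreline"}).name)           # waterway
print(linear.refine({"river"}).name)                        # river
```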
194

On detection, analysis and characterization of transient and parametric failures in nano-scale CMOS VLSI

Sanyal, Alodeep 01 January 2010 (has links)
As we move deep into the nanometer regime of CMOS VLSI (the 45nm node and below), the device noise margin is sharply eroded by the continuous lowering of device threshold voltages together with the ever-increasing rate of signal transitions driven by the consistent demand for higher performance. Sharp erosion of the device noise margin vastly increases the likelihood of intermittent failures (also known as parametric failures) during device operation, as opposed to permanent failures caused by physical defects introduced during the manufacturing process. The major sources of intermittent failures are capacitive crosstalk between neighboring interconnects, abnormal drops in the power supply voltage (also known as droop), localized thermal gradients, and soft errors caused by the impact of high-energy particles on the semiconductor surface. In nanometer technology, these intermittent failures largely outnumber the permanent failures caused by physical defects. Therefore, it is of paramount importance to come up with efficient test generation and test application methods to accurately detect and characterize these classes of failures. Soft error rate (SER) is an important design metric used in the semiconductor industry, expressed as the number of such errors encountered per billion hours of device operation, known as the Failure-In-Time (FIT) rate. Soft errors are rare events. Traditional techniques for SER characterization involve testing multiple devices in parallel, or testing the device while keeping it in a high-energy neutron bombardment chamber to artificially accelerate the occurrence of single events. Motivated by the fact that measurement of SER incurs high time and cost overhead, in this thesis we propose a two-step approach: (i) a new filtering technique based on the amplitude of the noise pulse, which significantly reduces the set of soft-error-susceptible nodes to be considered for a given design; followed by (ii) an Integer Linear Program (ILP)-based pattern generation technique that accelerates the SER characterization process by 1-2 orders of magnitude compared to the current state of the art. During test application, it is important to distinguish between an intermittent failure and a permanent failure. Motivated by the fact that most intermittent failures are temporally sparse in nature, we present a novel design-for-testability (DFT) architecture which facilitates application of the same test vector twice in a row. The underlying assumption here is that a soft fail will not manifest its effect in two consecutive test cycles, whereas the error caused by a physical defect will produce an identically corrupt output signature in both test cycles. Therefore, comparing the output signatures for two consecutive applications of the same test vector will accurately distinguish between a soft fail and a hard fail. We show application of this DFT technique in measuring soft error rate as well as other circuit-marginality-related parametric failures, such as thermal hot-spot-induced delay failures. A major contribution of this thesis lies in investigating the effect of multiple sources of noise acting together in exacerbating the noise effect even further. The existing literature on signal integrity verification and test falls short of taking these combined noise effects into account. We particularly focus on capacitive crosstalk on long signal nets. A typical long net is capacitively coupled with multiple aggressors and also tends to have multiple fanout gates.
Gate leakage current that originates in the fanout receivers flows backward and terminates in the driver, causing a shift in the driver output voltage. This effect becomes more prominent as the gate oxide is scaled more aggressively. In this thesis, we first present a dynamic simulation-based study to establish the significance of the problem, followed by an automatic test pattern generation (ATPG) solution which uses a 0-1 Integer Linear Program (ILP) to maximize the cumulative voltage noise at a given victim net due to crosstalk and gate leakage loading, in conjunction with propagating the fault effect to an observation point. Pattern pairs generated by this technique are useful both for manufacturing test application and for signal integrity verification of nanometer designs. This research opens up a new direction for studying nanometer noise effects and motivates us to extend the study to other noise sources in tandem, including voltage drop and temperature effects.
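A minimal sketch of the signature-comparison rule behind the proposed DFT architecture, under the stated assumption that a soft fail does not repeat in two consecutive applications of the same vector while a hard defect corrupts both responses identically. The function and signatures below are illustrative, not the thesis's implementation.

```python
# Illustrative classification of a failing test response: apply the same test
# vector twice and compare the two output signatures against the expected one.
def classify_failure(expected, response_1, response_2):
    """Distinguish soft (intermittent) from hard (permanent) fails."""
    fail_1 = response_1 != expected
    fail_2 = response_2 != expected
    if not (fail_1 or fail_2):
        return "pass"
    if fail_1 and fail_2 and response_1 == response_2:
        return "hard fail (permanent defect)"       # identical corruption twice
    return "soft fail (intermittent/parametric)"    # transient, non-repeating

# Usage with hypothetical 8-bit output signatures.
print(classify_failure(0b10110001, 0b10110101, 0b10110101))  # hard fail
print(classify_failure(0b10110001, 0b10110101, 0b10110001))  # soft fail
```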
195

Software techniques to reduce the energy consumption of low-power devices at the limits of digital abstractions

Salajegheh, Mastooreh Negin 01 January 2012 (has links)
My thesis explores the effectiveness of software techniques that bend digital abstractions in order to allow embedded systems to do more with less energy. Recent years have witnessed a proliferation of low-power embedded devices with power budgets ranging from a few milliwatts down to microwatts. The capabilities and size of embedded systems continue to improve dramatically; however, improvements in battery density and energy harvesting have failed to mimic Moore's law. Thus, energy remains a formidable bottleneck for low-power embedded systems. Instead of trying to create hardware with ideal energy proportionality, my dissertation evaluates how to use unconventional and probabilistic computing that bends traditional abstractions and interfaces in order to reduce energy consumption while protecting program semantics. My thesis considers four methods that unleash energy otherwise squandered on communication, storage, timekeeping, or sensing: 1) CCCP, which provides an energy-efficient alternative to local non-volatile storage by relying on cryptographic backscatter radio communication; 2) Half-Wits, which reduces energy consumption by 30% by allowing operation of embedded systems at below-spec supply voltages and implementing NOR flash memory error recovery in firmware rather than strictly in hardware; 3) TARDIS, which exploits the decay properties of SRAM to estimate the duration of a power failure, ranging from seconds to several hours depending on hardware parameters; and 4) Nonsensors, which allow operation of analog-to-digital converters at low voltages without any hardware modifications to the existing circuitry.
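As an illustration of the TARDIS idea only, the sketch below maps the fraction of decayed SRAM bits to an estimate of how long power was lost, using an invented per-device calibration table; real decay curves depend on the chip, temperature, and SRAM size, as the abstract notes.

```python
# Illustrative TARDIS-style estimate: map the fraction of decayed SRAM bits to
# elapsed off-time via a per-device calibration table.  The calibration points
# below are invented for the example.
import bisect

# (decayed fraction, elapsed seconds) measured offline on the target device.
CALIBRATION = [(0.00, 0), (0.05, 30), (0.20, 120), (0.45, 600),
               (0.80, 3600), (0.98, 6 * 3600)]

def estimate_off_time(decayed: float) -> float:
    """Linearly interpolate elapsed power-off time from decayed-bit fraction."""
    fracs = [f for f, _ in CALIBRATION]
    i = bisect.bisect_left(fracs, decayed)
    if i == 0:
        return CALIBRATION[0][1]
    if i >= len(CALIBRATION):
        return CALIBRATION[-1][1]
    (f0, t0), (f1, t1) = CALIBRATION[i - 1], CALIBRATION[i]
    return t0 + (t1 - t0) * (decayed - f0) / (f1 - f0)

def decayed_fraction(written: bytes, read_back: bytes) -> float:
    """Fraction of bits that flipped in an SRAM region seeded with a pattern."""
    flipped = sum(bin(a ^ b).count("1") for a, b in zip(written, read_back))
    return flipped / (8 * len(written))

# Usage: three flipped bits out of 32 suggest roughly a minute without power.
written   = bytes([0xFF] * 4)
read_back = bytes([0xFF, 0xFD, 0x7F, 0xEF])
print(round(estimate_off_time(decayed_fraction(written, read_back))), "seconds")
```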
196

Exploiting energy harvesting for passive embedded computing systems

Gummeson, Jeremy 01 January 2014 (has links)
The key limitation in mobile computing systems is energy: without a stable power supply, these systems cannot process, store, or communicate data. This problem is of particular interest since the storage density of battery technologies does not follow scaling trends similar to Moore's law. This means that, depending on application performance requirements and lifetime objectives, a battery may dominate the overall system weight and form factor; this could result in an overall size that is either inconvenient or unacceptable for a particular application. As device features have scaled down in size, entire embedded systems have been implemented on a single die or chip, leaving the battery as the form-factor bottleneck. One way to diminish the impact that batteries have on mobile embedded system design is to decrease reliance on buffered energy by providing the ability to harvest power from the environment or infrastructure. There is a spectrum of design choices available that utilize harvested power, but of particular interest are those that use small energy buffers and depend almost entirely on harvested power; by minimizing buffer size, we decrease form factor and mitigate reliance on batteries. Since harvested power is not continuously available in embedded computing systems, this brings forth a unique set of design challenges. First, we address the design challenges that emerge from mobile computing systems that use minimal energy buffers. Specifically, we explore the design space of a computational radio frequency identification (RFID) platform that uses a small solar harvesting unit to replenish a capacitor-based energy storage unit. We show that such a system's performance can be enhanced while in a reader's field of interrogation, and that the design also allows for device operation while completely decoupled from reader infrastructure. We also provide a toolset that simulates system performance using a set of experimentally obtained light intensity traces gathered from a mobile subject. Next, we show how energy buffered by such a harvesting-based system can be used to implement an efficient burst protocol that allows a computational RFID to quickly offload buffered data while in contact with a reader. The burst mechanism is implemented by re-purposing existing RFID protocol primitives, which allows for compatibility with existing reader infrastructure. We show that bursts provide significant improvements to individual tag throughput while co-existing with tags that do not use the burst protocol. Next, we show that energy harvesting can be used to enable a novel security mechanism for embedded devices equipped with Near Field Communication (NFC). NFC is growing in pervasiveness, especially on mobile phones, but many open security questions remain. We enable NFC security by harvesting energy via magnetic induction, using the harvested energy to power an integrated reader chip, and selectively blocking malicious messages via passive load modulation after sniffing message contents. We show that such a platform is feasible based on energy harvested opportunistically from mobile phones, successfully blocking a class of messages while allowing others through. Finally, we demonstrate that energy harvested from mobile phones can be used to implement wirelessly powered ubiquitous displays.
One drawback of illuminated displays is that they need a continuous source of power to maintain their state; this is an undesirable property, especially since the display is typically the highest-power-consumption component of embedded devices. Electronic paper technologies eliminate this drawback by providing a display that requires no energy to maintain state. By combining NFC energy harvesting and communication with electronic paper technologies, we implement a companion display for mobile phones that obtains all of the energy required for a display update while communicating with a user application running on a mobile phone. The companion display assists the phone in displaying static information while the power-hungry display remains unpowered.
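In the spirit of the simulation toolset mentioned above, the following toy model charges a small capacitor from a harvested-power trace and executes a task only when the buffered energy can pay for it. The capacitance, voltage thresholds, task cost, and trace are all invented for illustration.

```python
# Toy simulation of a capacitor-buffered, harvesting-driven duty cycle.
CAP_F = 100e-6                        # 100 uF storage capacitor
V_MAX, V_ON, V_OFF = 3.0, 2.4, 1.8    # regulator limit and operating window
TASK_JOULES = 80e-6                   # invented energy cost to sense + log once

def stored_energy(v):                 # E = 1/2 C V^2
    return 0.5 * CAP_F * v * v

def simulate(harvest_watts_trace, dt=0.1):
    """Count how many task executions a harvested-power trace supports."""
    v, tasks = V_OFF, 0
    for p in harvest_watts_trace:
        e = min(stored_energy(v) + p * dt, stored_energy(V_MAX))
        v = (2 * e / CAP_F) ** 0.5
        # Fire the task only when the buffer can pay for it and stay above V_OFF.
        if v >= V_ON and stored_energy(v) - TASK_JOULES >= stored_energy(V_OFF):
            e = stored_energy(v) - TASK_JOULES
            v = (2 * e / CAP_F) ** 0.5
            tasks += 1
    return tasks

# Usage: 60 s of alternating bright/dim harvesting, sampled at 10 Hz.
trace = [2e-3] * 300 + [0.1e-3] * 300      # watts harvested per time step
print(simulate(trace), "task executions")
```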
197

Automatic and Systematic Detection of Software-exploitable Hardware Vulnerabilities

Xiao, Yuan January 2020 (has links)
No description available.
198

Architecture and Compiler Support for Parallel Consistency, Coherence, and Security

Zhang, Rui January 2020 (has links)
No description available.
199

Effective Resource and Workload Management in Data Centers

Lu, Lei 01 January 2014 (has links)
The increasing demand for storage, computation, and business continuity has driven the growth of data centers. Managing data centers efficiently is a difficult task because of the wide variety of data center applications, their ever-changing intensities, and the fact that application performance targets may differ widely. Server virtualization has been a game-changing technology for IT, providing the possibility to support multiple virtual machines (VMs) simultaneously. This dissertation focuses on how virtualization technologies can be utilized to develop new tools for maintaining high resource utilization, for achieving high application performance, and for reducing the cost of data center management.
For multi-tiered applications, bursty workload traffic can significantly deteriorate performance. This dissertation proposes an admission control algorithm, AWAIT, for handling overload conditions in multi-tier web services. AWAIT places requests of accepted sessions on hold and refuses to admit new sessions when the system experiences a sudden workload surge. To meet the service-level objective, AWAIT serves the requests in the blocking queue with high priority. The size of the queue is dynamically determined according to the workload burstiness.
Many admission control policies are triggered by instantaneous measurements of system resource usage, e.g., CPU utilization. This dissertation first demonstrates that directly measuring virtual machine resource utilizations with standard tools cannot always lead to accurate estimates. A directed factor graph (DFG) model is defined to model the dependencies among multiple types of resources across physical and virtual layers.
Virtualized data centers enable sharing of resources among hosted applications to achieve high resource utilization. However, it is difficult to satisfy application SLOs on a shared infrastructure, as application workload patterns change over time. AppRM, an automated management system, not only allocates the right amount of resources to applications to meet their performance targets but also adjusts to dynamic workloads using an adaptive model.
Server consolidation is one of the key applications of server virtualization. This dissertation proposes a VM consolidation mechanism, first by extending the fair load balancing scheme for multi-dimensional vector scheduling, and then by using a queueing network model to capture the service contentions for a particular virtual machine placement.
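A simplified sketch of the AWAIT admission-control idea described above: during a surge, new sessions are refused while requests from already-admitted sessions are held in a bounded blocking queue and served with priority. The fixed queue capacity below stands in for the burstiness-driven sizing the dissertation describes; all names and numbers are illustrative.

```python
# Simplified admission-control sketch in the spirit of AWAIT (illustrative only).
from collections import deque

class AwaitLikeController:
    def __init__(self, max_concurrent=100, queue_capacity=50):
        self.max_concurrent = max_concurrent   # requests in service
        self.queue_capacity = queue_capacity   # blocking-queue bound (stand-in)
        self.in_service = 0
        self.blocked = deque()                 # held requests of admitted sessions
        self.admitted_sessions = set()

    def on_new_session(self, session_id):
        """Refuse new sessions while the system is saturated."""
        if self.in_service >= self.max_concurrent:
            return "rejected"
        self.admitted_sessions.add(session_id)
        return "admitted"

    def on_request(self, session_id):
        if session_id not in self.admitted_sessions:
            return "rejected"
        if self.in_service < self.max_concurrent:
            self.in_service += 1
            return "served"
        if len(self.blocked) < self.queue_capacity:
            self.blocked.append(session_id)    # hold rather than drop
            return "queued"
        return "rejected"

    def on_completion(self):
        """Serve held requests of accepted sessions with high priority."""
        self.in_service -= 1
        if self.blocked:
            self.blocked.popleft()
            self.in_service += 1

# Usage: once 100 requests are in flight, admitted sessions queue instead of failing.
ctl = AwaitLikeController()
ctl.on_new_session("s1")
for _ in range(100):
    ctl.on_request("s1")
print(ctl.on_request("s1"))        # 'queued'
print(ctl.on_new_session("s2"))    # 'rejected' during the surge
```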
200

Soft Error Event Monte Carlo Modeling and Simulation: Impacts of Soft Error Events on Computer Memory

Unknown Date (has links)
This dissertation addresses the creation of a unique, adaptable, and lightweight core methodology for soft error modeling and simulation. This core methodology was successfully tailored, validated, and expanded to work with a diverse cross-section of realistic memory devices, reliability techniques, and soft error event behaviors. The simulated devices were shielded by a mutually supporting trio of reliability techniques while under the threat of soft error events: (1) error correction codes, (2) interleaving distance, and (3) scrubbing. The strike times, soft error event types, and bit-error severities of the soft error events were stochastically estimated using publicly available research findings published from a variety of proprietary reliability data sources gathered from vendor-specific computer memory devices. Both logically oriented and physically oriented memory-cell organizational perspectives were incorporated into the core methodology, which was tailored to create the simulators implemented within this dissertation. These simulators calculate the failure probabilities of memory devices, and their results were validated for specific test cases against published literature models. The core methodology was applied to create scalable simulators covering a variety of soft error event behavioral characteristics, memory device design constraints, and reliability technique parameters; the methodology and the simulators created from it may be used by researchers to address a variety of open research questions in the field. One such open research question was answered within this dissertation as proof of the effectiveness of the core methodology: establishing the significance of soft error event (SEE) topography by studying the impact of topographically reflective SEEs on the overall failure probability, and corresponding reliability, of the simulated memory device over time. To address this question, the Topographic 2-Parameter Weibull Soft Error (T2P-WSE) Simulator stochastically estimates the topographic strike patterns of SEE severities based on the most commonly encountered multiple-cell-upset (MCU) shapes gathered by a commercial-grade 3D-TCAD-based neutron particle strike simulation of a generic 45 nm SRAM (Static Random Access Memory) device. Both the failure probability and reliability results generated by the T2P-WSE Simulator were shown to be significantly different from those of the Row-Depth-Only 2-Parameter Weibull Soft Error (S2P-WSE) Simulator when given equivalent inputs. As documented within this dissertation, this conclusion was verified and confirmed from both a visual and a statistical standpoint. Topography was observed to play a significant role in the overall failure probability of the device: the failure probability computed by the T2P-WSE Simulator was significantly lower than that of the S2P-WSE Simulator, which, across a variety of input parameters, consistently over-estimated the failure probability of the device. The reason for this outcome is directly related to the row-depth-only bit-error severity assumption of the S2P-WSE Simulator.
The row-depth-only assumption forces every MCU SEE that impacts the device to spread its bit errors in a fixed row-depth-only pattern, as opposed to a more realistic topographic pattern such as those encoded into the T2P-WSE Simulator for the 45 nm memory chip geometry. This conclusion reinforces the initial observation that taking the topographic spread of the bit errors into account significantly reduces the overall failure probability estimated for a memory storage device implemented with an interleaving-distance architecture. The core methodology calls for the stochastic estimation of the strike time, type, and bit-error severity of every simulated soft error event destined to impact the simulated device at some simulation time unit over the total simulation run time. These soft error events strike the device at the appointed strike time and are mitigated by the chosen set of mutually supporting reliability techniques: (1) error correcting codes, (2) interleaving distance, and (3) scrubbing. The core methodology was fitted to the Compound Poisson failure distribution and a logical memory-cell organization for the Compound Poisson Soft Error (CPSE) Simulator, and was also successfully applied to the 2-Parameter Weibull failure distribution and a physical memory-cell organization. Both the CPSE and S2P-WSE Simulators proved equally capable of calculating the failure probability of a variety of simulated memory storage devices shielded by the three integrated reliability techniques under the impact of these stochastically determined soft error events. This failure probability over simulated time was used to evaluate all of the secondary results of the core methodology, such as the Mean Time To Failure and the Failures-In-Time number at the conclusion of each simulation run. All of the simulators presented within this dissertation were implemented within a Matlab programming environment. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester 2016. / July 12, 2016. / Compound Poisson, Computer Memory, Reliability, Soft Errors, Stochastic Simulation, Weibull Distribution / Includes bibliographical references. / Michael Mascagni, Professor Directing Dissertation; Dennis Duke, University Representative; Robert van Engelen, Committee Member; Piyush Kumar, Committee Member.
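To make the simulation flow concrete, here is a compact Monte Carlo stand-in for the kind of run described above: Weibull-distributed strike times, randomly sized multiple-cell upsets, single-error-correcting ECC, bit interleaving, and periodic scrubbing, with an empirical failure probability over many runs. The dissertation's simulators were implemented in Matlab; this Python sketch and all of its parameters (Weibull shape and scale, MCU size mix, interleaving distance, scrub interval, memory geometry) are invented for illustration.

```python
# Compact, illustrative Monte Carlo soft error simulation: Weibull strike times,
# multi-cell upsets, SEC ECC, bit interleaving, and periodic scrubbing.
import math, random

def run_once(sim_hours=10_000, words=4096, bits=64, shape=0.8, scale=2_000,
             interleave=4, scrub_every=168):
    """Return True if an uncorrectable word error occurs within sim_hours."""
    group = interleave * bits         # physical cells per interleave group
    total_cells = words * bits
    errors = {}                       # word index -> pending bit-error count
    t, next_scrub = 0.0, scrub_every
    while True:
        # Weibull-distributed inter-arrival time (hours) to the next event.
        t += scale * (-math.log(1.0 - random.random())) ** (1.0 / shape)
        if t > sim_hours:
            return False
        while next_scrub <= t:        # scrubbing corrects pending single-bit errors
            errors.clear()            # (a multi-bit word would already have failed)
            next_scrub += scrub_every
        severity = random.choice([1, 1, 1, 2, 2, 3, 5])   # invented MCU sizes
        start = random.randrange(total_cells)
        for c in range(start, start + severity):
            c %= total_cells
            # Bit interleaving: adjacent physical cells map to different words.
            w = (c % interleave) + interleave * (c // group)
            errors[w] = errors.get(w, 0) + 1
            if errors[w] > 1:         # SEC ECC corrects only one bit per word
                return True

def failure_probability(runs=2_000):
    """Empirical failure probability over many independent simulation runs."""
    return sum(run_once() for _ in range(runs)) / runs

print(f"estimated failure probability over 10,000 h: {failure_probability():.3f}")
```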
