111

Power-Efficient and Low-Latency Memory Access for CMP Systems with Heterogeneous Scratchpad On-Chip Memory

Chen, Zhi 01 January 2013
The steadily widening speed disparity between CPU and memory has become an overwhelming bottleneck in the development of Chip Multiprocessor (CMP) systems. In addition, increasing penalties caused by frequent on-chip memory accesses raise critical challenges for delivering high memory access performance within tight power and latency budgets. To overcome the memory wall and energy wall, this thesis proposes a new heterogeneous scratchpad memory architecture composed of SRAM, MRAM, and Z-RAM. Based on this architecture, we propose two algorithms, one using dynamic programming and one using a genetic algorithm, to allocate data to the different memory units, thereby reducing memory access cost in terms of power consumption and latency. Extensive experiments demonstrate the merits of the heterogeneous scratchpad architecture over a traditional homogeneous memory system and the effectiveness of the proposed algorithms.
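A minimal sketch of the dynamic-programming side of this idea, under assumed unit names, capacities, and per-access costs (all illustrative placeholders, not the thesis's actual parameters): given per-object access counts, pick the placement that minimizes total access cost.

```python
# Toy DP allocation of data objects to a heterogeneous scratchpad.
# Units, capacities (in equal-sized blocks), and costs are assumptions.
from functools import lru_cache

UNITS = ("SRAM", "MRAM", "ZRAM")
CAPACITY = (2, 4, 8)               # blocks available per unit
ACCESS_COST = (1.0, 2.5, 4.0)      # cost per access (power+latency proxy)

def allocate(access_counts):
    """Return (total_cost, placement) minimizing sum(count * unit_cost)."""
    n = len(access_counts)

    @lru_cache(maxsize=None)
    def best(i, caps):
        if i == n:
            return 0.0, ()
        choices = []
        for u, cap in enumerate(caps):
            if cap == 0:
                continue
            rest_cost, rest_place = best(i + 1, caps[:u] + (cap - 1,) + caps[u + 1:])
            choices.append((access_counts[i] * ACCESS_COST[u] + rest_cost,
                            (UNITS[u],) + rest_place))
        return min(choices)

    return best(0, CAPACITY)

# Hot data should land in the cheap, fast unit first:
cost, placement = allocate((900, 40, 300, 5))
print(cost, placement)
```

With three units the state space stays small; a genetic-algorithm variant would search the same placement space heuristically instead of exhaustively.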
112

MiSFIT: Mining Software Fault Information and Types

Kidwell, Billy R 01 January 2015
As software becomes more important to society, the number, age, and complexity of software systems grow. Software organizations require continuous process improvement to maintain the reliability, security, and quality of these systems. Data from manual fault classification can support such process improvement, but many organizations lack the expertise or resources to perform the classification correctly. This dissertation addresses that need by automating software fault classification. Validation results show that automated fault classification, as implemented in the MiSFIT tool, can group faults of a similar nature, and the resulting classifications show good agreement for common software faults with no manual effort. To evaluate the method and tool, I develop and apply an extended change taxonomy to classify the source code changes that repaired software faults in an open source project. MiSFIT clusters the faults based on those changes, and I manually inspect a random sample of faults from each cluster to validate the results. The automatically classified faults are then used to analyze the evolution of a software application over seven major releases. The contributions of this dissertation are an extended change taxonomy for software fault analysis, a method for clustering faults by the syntax of the repair, empirical evidence that fault distribution varies according to the purpose of the module, and the identification of project-specific trends from the analysis of the changes.
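As a rough illustration of clustering repairs by the syntax of the fix, the sketch below groups invented one-line change descriptions using token-count features; MiSFIT's actual features and taxonomy are richer than this.

```python
# Toy clustering of fault-repair descriptions; the sample texts and
# feature choice are invented for illustration, not MiSFIT's pipeline.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

repairs = [
    "add null check before dereference",
    "add null check in constructor",
    "fix off by one in loop bound",
    "fix loop bound comparison operator",
]
X = CountVectorizer().fit_transform(repairs)          # token counts
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(list(zip(labels, repairs)))                      # two syntax groups
```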
113

Design and Development of Geographical Information System (GIS) Map for Nuclear Waste Streams

Appunni, Sandhya 21 November 2014
A nuclear waste stream is the complete flow of waste material from origin to treatment facility to final disposal. The objective of this study was to design and develop a Geographic Information Systems (GIS) module, using the Google Maps Application Programming Interface (API), for better visualization of nuclear waste streams, identifying and displaying the streams' parameters. A proper display of these parameters enables managers at Department of Energy waste sites to visualize the information needed to plan waste transport. The study also developed an algorithm using quadratic Bézier curves to make the map more understandable and usable. Microsoft Visual Studio 2012 and Microsoft SQL Server 2012 were used to implement the project. The study shows that the combination of several technologies can successfully provide dynamic mapping functionality. Future work should explore further Google Maps API functionality to enhance the visualization of nuclear waste streams.
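The quadratic Bézier construction mentioned above is simple to sketch: each curve is B(t) = (1−t)²P₀ + 2(1−t)tP₁ + t²P₂, with the control point P₁ offset from the chord so overlapping routes bow apart on the map. The coordinates and offset heuristic below are illustrative assumptions, not the study's actual values.

```python
# Sample a quadratic Bézier arc between two waste-stream endpoints.
def quadratic_bezier(p0, p1, p2, steps=32):
    """Sample B(t) = (1-t)^2*p0 + 2(1-t)*t*p1 + t^2*p2 for t in [0, 1]."""
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        points.append((x, y))
    return points

# Offsetting the control point away from the chord makes the arc visible:
origin, disposal = (25.77, -80.19), (36.17, -115.14)   # illustrative lat/lng
mid = ((origin[0] + disposal[0]) / 2 + 2.0,            # +2 degrees of "bow"
       (origin[1] + disposal[1]) / 2)
arc = quadratic_bezier(origin, mid, disposal)
```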
114

Scheduling Medical Application Workloads on Virtualized Computing Systems

Delgado, Javier 30 March 2012
This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are widely adopted by providers of “cloud computing” services. In the context of provisioning resources for medical applications, they allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that abstract away infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, relieving them of the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads in virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; a performance prediction methodology applicable to the target environment; and a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In addressing these issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, tested with medical image processing workloads, outperforms two baseline scheduling solutions in both the number of jobs processed and resource utilization by 20-30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
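As a toy illustration of execution-time prediction judged by average error (the 15% figure above is such an average), the sketch below fits past runs and scores new predictions; the features, data, and linear model are invented, and the dissertation's methodology is more involved.

```python
# Fit runtimes from past jobs, predict a new one, and compute mean
# absolute percentage error (MAPE) over the training runs.
import numpy as np
from sklearn.linear_model import LinearRegression

sizes = np.array([[64], [128], [256], [512]])      # e.g. image slices per job
runtimes = np.array([30.0, 61.0, 118.0, 242.0])    # observed seconds

model = LinearRegression().fit(sizes, runtimes)
pred = model.predict(np.array([[384]]))
mape = np.mean(np.abs(model.predict(sizes) - runtimes) / runtimes) * 100
print(f"predicted {pred[0]:.0f}s, fit MAPE {mape:.1f}%")
```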
115

Design and Development of a Comprehensive and Interactive Diabetic Parameter Monitoring System - BeticTrack

Chowdhury, Nusrat 01 December 2019
A novel, interactive Android app has been developed that monitors the health of type 2 diabetic patients in real time, providing patients and their physicians with feedback on all relevant parameters of diabetes. The app includes modules for recording carbohydrate intake and blood glucose; for reminding patients to take medications on schedule; and for tracking physical activity, using movement data received via Bluetooth from a pair of wearable insole devices. Two machine learning models were developed to detect seven physical activities: sitting, standing, walking, running, stair ascent, stair descent, and use of an elliptical trainer. The SVM and decision tree models produced an average accuracy of 85% across these seven activities. The decision tree model is implemented in the app, classifying human activity in real time.
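A minimal sketch of a decision-tree activity classifier of this kind, assuming windowed features derived from the insole movement stream; the feature set and synthetic training data below are illustrative assumptions, not the app's actual pipeline.

```python
# Train a decision tree on per-window movement features for the seven
# activities named above; data here is synthetic and separable by design.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

ACTIVITIES = ["sit", "stand", "walk", "run",
              "stair_up", "stair_down", "elliptical"]

rng = np.random.default_rng(0)
# Pretend features: [mean |accel|, accel variance, step frequency] per window.
X = rng.normal(size=(700, 3)) + np.repeat(np.arange(7), 100)[:, None]
y = np.repeat(np.arange(7), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```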
116

Processor Temperature and Reliability Estimation Using Activity Counters

Chhablani, Mayank 23 March 2016
With the advent of technology scaling, lifetime reliability is an emerging threat in high-performance and deadline-critical systems. High on-chip thermal gradients accelerate localised thermal elevations (hotspots), which increase the aging rate of semiconductor devices. As a result, reliable operation of processors has become a challenging task, and cost-effective schemes for estimating temperature and reliability are crucial. In this work we present a reliability estimation scheme based on a lightweight temperature estimation technique that monitors hardware events. Unlike previously proposed hardware counter-based approaches, our approach employs a linear-temporal-feedback estimator that takes into account the effects of thermal inertia, and it achieves a low average absolute temperature estimation error. We then present a counter-based technique to estimate the thermal accelerated aging factor (TAAF), an indicator of lifetime reliability. Results demonstrate that the estimation error is within [−3, +5].
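A sketch of what a linear-temporal-feedback estimator of this kind can look like: the previous estimate carries the thermal inertia, and weighted activity counters model fresh heat input. The coefficients, counter choices, and ambient value below are illustrative placeholders, not fitted values from the thesis.

```python
# Estimate a temperature trace from per-interval activity counters.
def estimate_temperature(counter_trace, t_ambient=45.0,
                         inertia=0.9, weights=(2e-6, 5e-6, 1e-5)):
    """counter_trace: per-interval (retired_insts, fp_ops, cache_misses)."""
    t_est = t_ambient
    trace = []
    for counters in counter_trace:
        heat_in = sum(w * c for w, c in zip(weights, counters))
        # Feedback term keeps the estimate from jumping with activity spikes,
        # modeling thermal inertia; with zero activity it decays to ambient.
        t_est = inertia * t_est + (1 - inertia) * t_ambient + heat_in
        trace.append(t_est)
    return trace

print(estimate_temperature([(1e6, 2e5, 1e4)] * 5))   # warming-up trace
```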
117

A Hardware Framework for Yield and Reliability Enhancement in Chip Multiprocessors

Pan, Abhisek 01 January 2009
Device reliability and manufacturability have emerged as dominant concerns in end-of-roadmap CMOS devices. Today an increasing number of hardware failures are attributed to device reliability problems that cause partial system failure or shutdown. Maintaining an acceptable manufacturing yield is also seen as a challenge because of smaller feature sizes, process variation, and reduced headroom for burn-in tests. In this project we investigate a hardware-based scheme for improving the yield and reliability of a homogeneous chip multiprocessor (CMP). The proposed solution is a hardware framework that utilizes the redundancies inherent in a multi-core system to keep the system operational in the face of partial failures due to hard faults (faults caused by manufacturing defects or permanent faults that develop during the system's lifetime). A micro-architectural modification allows a faulty core in a multiprocessor system to use another core as a coprocessor to service any instruction that the faulty core cannot execute correctly by itself. This service improves yield and reliability at the cost of some loss of performance. To quantify this loss we used a cycle-accurate architectural simulator to simulate the performance of dual-core and quad-core systems with one or more cores sustaining partial failure. Simulation studies indicate that when a large and sparingly used unit such as a floating-point unit fails in a core, we can continue to run the faulty core with as little as 10% performance impact and minimal area overhead, even for a floating-point-intensive benchmark. Incorporating this recovery mechanism entails some modifications to the microprocessor micro-architecture, which are described here through a simplified model of a superscalar processor.
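A toy software model of the recovery mechanism, under the assumption of a core whose floating-point unit is marked faulty: FP instructions trap and are serviced by a healthy core, while everything else runs locally. Class and operation names are invented for illustration; the real mechanism is a micro-architectural change, not software.

```python
# Model a faulty core forwarding FP instructions to a coprocessor core.
class Core:
    FP_OPS = {"fadd": lambda a, b: a + b, "fmul": lambda a, b: a * b}
    INT_OPS = {"add": lambda a, b: a + b}

    def __init__(self, fpu_ok=True):
        self.fpu_ok = fpu_ok

    def execute(self, op, a, b, helper=None):
        if op in self.FP_OPS:
            if not self.fpu_ok:
                # Trap: ship the instruction to the healthy core and stall.
                return helper.execute(op, a, b)
            return self.FP_OPS[op](a, b)
        return self.INT_OPS[op](a, b)

faulty, healthy = Core(fpu_ok=False), Core()
print(faulty.execute("fmul", 1.5, 2.0, helper=healthy))   # serviced remotely
```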
118

Computational Delay in Vehicles and Its Effect on Real Time Scheduling

Jain, Abhinna 01 January 2012
Present research into critical embedded control systems tends to focus on the computational elements and largely ignores the link between the computational and physical elements. This link is important because the computational capability of the computer can greatly affect the performance and dynamics of the system it controls. The control computer sits in the feedback loop of the control system and contributes feedback delay in addition to already existing mechanical delays. While mechanical delays are compensated for in control design, variable computational delays cause the system to underperform its intended physical behavior and impose a cost in terms of fuel or time. For this reason, the scheduler in a real-time operating system should focus not only on task deadlines but also on scheduling that minimizes the effect of computational delay on the controlled plant. The proposed work provides a systematic framework to manage and evaluate the implications of computational delay in vehicles. The work also includes cost-sensitive real-time control task scheduling heuristics and Dynamic Voltage Scaling (DVS) for better energy/thermal control. We show through simulations that our heuristic achieves a significant improvement in cost over the traditional real-time scheduling algorithm Earliest Deadline First (EDF) and that it can adjust to energy constraints imposed on the system.
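For reference, a compact sketch of the EDF baseline used in the comparison: at each dispatch point, run the ready job with the nearest absolute deadline. This version is non-preemptive for brevity, and the job tuples are illustrative.

```python
# Non-preemptive Earliest Deadline First over a fixed job set.
import heapq

def edf_schedule(jobs):
    """jobs: list of (release, deadline, exec_time). Returns the run
    order, or None if some job misses its deadline."""
    pending = sorted(jobs)                      # by release time
    ready, order, t, i = [], [], 0, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= t:
            r, d, c = pending[i]
            heapq.heappush(ready, (d, r, c))    # keyed by absolute deadline
            i += 1
        if not ready:                           # idle until next release
            t = pending[i][0]
            continue
        d, r, c = heapq.heappop(ready)
        t += c
        if t > d:
            return None                         # deadline miss
        order.append((r, d, c))
    return order

print(edf_schedule([(0, 10, 3), (1, 5, 2), (2, 20, 4)]))
```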
119

N3ASICs: Designing Nanofabrics with Fine-Grained CMOS Integration

Panchapakeshan, Pavan 01 January 2012
Nanoscale computing fabrics based on novel materials such as semiconductor nanowires, carbon nanotubes, and graphene have been proposed in recent years. These fabrics employ unconventional manufacturing techniques like nanoimprint lithography or superlattice nanowire pattern transfer to produce ultra-dense nanostructures. However, one key challenge that has received limited attention is interfacing these unconventional, self-assembly-based approaches with conventional CMOS manufacturing to build integrated systems. We propose a novel approach that combines unconventional nanomanufacturing with the CMOS manufacturing flow and design rules to build a reliable nanowire-CMOS 3-D integrated fabric, called N3ASICs, with no new manufacturing constraints. In N3ASICs, active devices are formed on a dense semiconductor nanowire array, while standard area-distributed pins/vias and metal interconnects route signals in 3-D. The N3ASICs fabric is fully described and thoroughly evaluated at all design levels. Novel nanowire-based devices are envisioned and characterized through 3-D physics modeling. The overall fabric design, associated circuits, interconnection approach, and a layer-by-layer assembly sequence are introduced. System-level metrics such as power, performance, and density for a nanoprocessor built using N3ASICs were evaluated and compared against a functionally equivalent CMOS design. We show that the N3ASICs version of the processor is 3X denser and 5X more power efficient, at comparable performance, than the 16-nm scaled CMOS version, without any new or unknown manufacturing requirements. Systematic yield implications due to mask overlay misalignment have been evaluated, and a partitioning approach for building complex circuits has been studied.
120

A Neural Network Approach to Border Gateway Protocol Peer Failure Detection and Prediction

White, Cory B. 01 December 2009
The size and speed of computer networks continue to expand at a rapid pace, as do the corresponding errors, failures, and faults inherent in such extensive networks. This thesis introduces a novel approach that interfaces Border Gateway Protocol (BGP) computer networks with neural networks to learn the precursor connectivity patterns that emerge prior to a node failure. Details are presented of the design and construction of a framework that uses neural networks to learn and monitor BGP connection states as a means of detecting and predicting BGP peer node failure. This framework is used to monitor a BGP network, and a suite of tests is conducted to establish this neural network approach as a viable strategy for predicting BGP peer node failure. In all experiments, both of the proposed neural network architectures succeed in memorizing and utilizing the network connectivity patterns. Lastly, a discussion of the framework's generic design acknowledges how other types of networks and alternate machine learning techniques can be accommodated with relative ease.
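A toy version of the idea, assuming a peer's health is predicted from a sliding window of recent BGP FSM state codes; the window length, state encoding, synthetic data, and single-hidden-layer network are all assumptions, not the thesis's two architectures.

```python
# Flag imminent BGP peer failure from windowed connection-state codes.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
WINDOW = 8                       # last 8 sampled FSM states per peer
# States coded 0..5 (Idle..Established); failing peers drift away from 5.
healthy = np.full((200, WINDOW), 5) - (rng.random((200, WINDOW)) < 0.05)
failing = 5 - rng.integers(0, 4, size=(200, WINDOW))
X = np.vstack([healthy, failing]).astype(float)
y = np.r_[np.zeros(200), np.ones(200)]   # 1 = failure imminent

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print("train accuracy:", net.score(X, y))
```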
