1

Built-In Fault Masking For Defect Tolerance And Parameter Variation Mitigation In Nano-Processors

Joshi, Prachi 01 January 2011 (has links) (PDF)
Nanoscale manufacturing techniques enable very high density nano fabrics but may cause orders of magnitude higher levels of defects and variability than today's CMOS processes. As a result, nanoscale architectures typically introduce redundancy at multiple levels of abstraction to mask faults. Schemes such as Triple Modular Redundancy (TMR) and structural redundancy are tailored to maximize yield but can impact performance significantly; for example, due to increases in circuit fan-in and fan-out, a quadratic performance impact is often projected. In this thesis, we introduce a new class of redundancy schemes called FastTrack, designed to provide fault tolerance without the performance penalty of conventional schemes. FastTrack relies on combining non-uniform structural redundancy with uniquely biased nanoscale voters. A variety of such techniques are employed on a Wire Streaming Processor (WISP-0) implemented on the Nanoscale Application Specific Integrated Circuits (NASIC) nanowire fabric. We show that FastTrack schemes can provide 23% higher effective yield than conventional redundancy schemes even at 10% defect rates. Most importantly, this yield improvement is achieved in conjunction with 79% less performance impact at a 10% defect rate. This is the first redundancy scheme we are aware of to achieve such a degree of fault masking without the considerable performance impact of conventional approaches. The same setup is also used to mask the effects of parameter variation. FastTrack techniques show up to 6X performance improvement over more traditional redundancy schemes even at higher defect rates; in the absence of defects, a FastTrack scheme can be up to 7X faster than a traditional redundancy scheme.
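The abstract does not detail FastTrack's biased voters, but the conventional TMR baseline it improves upon can be sketched in a few lines. This is a hypothetical behavioral model, not the thesis's circuit-level implementation: three replicas of a logic function run in parallel and a majority voter masks any single faulty replica.

```python
# Behavioral sketch of conventional triple modular redundancy (TMR).
# The replica functions and the stuck-at fault below are illustrative
# placeholders, not the WISP-0/NASIC circuits from the thesis.

def majority_vote(bits):
    """Return the majority value of an odd-length list of 0/1 outputs."""
    return 1 if sum(bits) > len(bits) // 2 else 0

def tmr(replicas, x):
    """Run three (possibly defective) replicas and vote on their outputs."""
    return majority_vote([f(x) for f in replicas])

# Example: one stuck-at-0 defective replica is masked by the two good ones.
good = lambda x: x & 1
faulty = lambda x: 0            # stuck-at-0 defect
result = tmr([good, good, faulty], 3)
```

The fan-in of the voter and the tripled logic are exactly where the quadratic performance impact mentioned above comes from, which is the overhead FastTrack's non-uniform redundancy is designed to avoid.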
2

Parameter Variation Sensing and Estimation in Nanoscale Fabrics

Zhang, Jianfeng 01 January 2013 (has links) (PDF)
Parameter variations introduced by manufacturing imprecision have a growing influence on circuit performance. This is especially true in emerging nanoscale fabrics because of unconventional manufacturing steps (e.g., nano-imprint) and aggressive scaling. These parameter variations can lead to performance deterioration and, consequently, yield loss. Parameter variations are typically addressed pre-fabrication by designing circuits for worst-case timing scenarios. However, this approach is pessimistic, and much of the performance benefit can be lost. By contrast, if parameter variations can be estimated post-manufacturing, adaptive techniques or reconfiguration could provide a more nearly optimal level of tolerance. To estimate parameter variations at run-time, on-chip variation sensors are gaining importance because of their ease of implementation. In this thesis, we propose novel on-chip variation sensors to estimate variations in physical parameters of emerging nanoscale fabrics. Based on the characteristics of systematic and random variations, two separate sensors are designed: one estimates the extent of systematic variations and the other the statistical distribution of random variations, both from fall and rise times measured in the sensors. The proposed sensor designs are evaluated through HSPICE Monte Carlo simulations with known variation cases injected. Simulation results show that the estimation error of the systematic-variation sensor is less than 1.2% for all simulated cases; for the random-variation sensor, the worst-case estimation error is 12.7% and the average estimation error is 8% across all simulations. In addition, to address the placement of on-chip sensors, we calculate the sensor area and the effective range of the systematic-variation sensor. Then, using a processor designed in nanoscale fabrics as a target, an example of sensor placement is presented.
Based on the sensor placement, external noise sources that may affect the measured fall and rise times of outputs are identified. Through careful analysis, we find that these noise sources do not degrade the accuracy of the systematic-variation sensor, but they do affect the accuracy of the random-variation sensor. We believe that the proposed on-chip variation sensors, in conjunction with post-fabrication compensation techniques, could improve system-level performance in nanoscale fabrics and offer an efficient alternative to making worst-case assumptions about parameter variations in nanoscale designs.
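The statistical idea behind separating the two variation components can be illustrated with a toy model (this is not the thesis's sensor circuit): if each measured delay is nominal value plus a systematic shift plus zero-mean random noise, then the mean of many measurements estimates the systematic component and the spread estimates the random distribution. All numbers below are hypothetical.

```python
# Toy Monte Carlo illustration of separating systematic and random
# variation from measured delays. NOMINAL, the injected shift (8.0),
# and the noise sigma (3.0) are made-up values for illustration only.
import random
import statistics

NOMINAL = 100.0          # hypothetical nominal delay (arbitrary units)

def measure(systematic, sigma, n=2000, seed=1):
    """Simulate n delay measurements: nominal + systematic + Gaussian noise."""
    rng = random.Random(seed)
    return [NOMINAL + systematic + rng.gauss(0.0, sigma) for _ in range(n)]

delays = measure(systematic=8.0, sigma=3.0)
sys_est = statistics.mean(delays) - NOMINAL   # estimate of systematic shift
rand_est = statistics.stdev(delays)           # estimate of random sigma
```

With enough samples, both estimates converge to the injected values; the thesis's sensors extract the same two quantities from physical fall and rise times rather than from simulated draws.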
3

Dynamic HIV/AIDS parameter estimation with Applications

Filter, Ruben Arnold 13 June 2005 (has links)
This dissertation is primarily concerned with dynamic HIV/AIDS parameter estimation, set against the background of engineering, biology and medical science. The marriage of these seemingly divergent fields creates a dynamic research environment that is the source of many novel results and practical applications for people living with HIV/AIDS. A method is presented to extract model parameters for the three-dimensional HIV/AIDS model in situations where an orthodox least-squares (LSQ) method would fail. This method allows information from outside the dataset to be added to the cost functional so that parameters can be estimated even from sparse data. Estimates in the literature were for at most two parameters per dataset, whereas the procedures described herein can estimate all six parameters. A standard table for data acquisition in hospitals and clinics is analyzed to show that it would contain enough information to extract a suitable parameter estimate for the model. Comparison with a published experiment validates the method and shows that it becomes increasingly hard to coordinate assumptions and implicit information when analyzing real data. Parameter variations during the course of HIV/AIDS are not well understood. The results show that parameters vary over time. The analysis of parameter variation is augmented with a novel two-stage approach to model identification for the six-dimensional model. In this context, the higher-dimensional models allow an explanation for the onset of AIDS from HIV without any variation in the model parameters. The developed estimation procedure was successfully used to analyze data from forty-four patients from southern Africa in the HIVNET 28 vaccine readiness trial. These results form an important benchmark for the study of vaccination.
The results show that after approximately 17 months from seroconversion, oscillations in viremia flattened to a log10-based median set point of 4.08, appearing no different from reported studies in subtype B HIV-1 infected male cohorts. Together with these main outcomes, an analysis of confidence intervals for the set point, days to set point, and the individual parameters is presented. When estimates for the HIVNET 28 cohort are combined, the data allow a meaningful first estimate of the parameters of the three-dimensional HIV/AIDS model for patients from southern Africa. The theoretical basis is used to develop an application that allows medical practitioners to estimate the three-dimensional model parameters for HIV/AIDS patients. The program demands little background knowledge from the user, but for practitioners with experience in mathematical modeling there is ample opportunity to fine-tune the procedures for special needs. / Dissertation (MEng)--University of Pretoria, 2006. / Electrical, Electronic and Computer Engineering / Unrestricted
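The key trick described above — adding information from outside the dataset to the cost functional so sparse data still identifies the parameters — is regularized least squares. The sketch below shows the structure of such a cost function with a toy exponential-decay model standing in for the actual HIV/AIDS dynamics; the model, data, prior values, and weight are hypothetical placeholders, not the dissertation's fit.

```python
# Sketch of a least-squares cost augmented with prior information
# (Tikhonov-style penalty). The toy model and all numbers below are
# illustrative assumptions, not the thesis's three-dimensional model.

def model(t, params):
    """Toy discrete-time decay standing in for the viral-load model."""
    v0, c = params
    return v0 * (1.0 - c) ** t      # 0 <= c < 1 is a decay fraction

def cost(params, data, prior, weight=0.1):
    """LSQ misfit plus a penalty pulling parameters toward prior values."""
    misfit = sum((model(t, params) - y) ** 2 for t, y in data)
    penalty = sum((p - q) ** 2 for p, q in zip(params, prior))
    return misfit + weight * penalty

data = [(0, 100.0), (1, 50.0), (2, 25.0)]   # sparse synthetic samples
prior = (100.0, 0.5)                        # assumed outside knowledge
best = cost((100.0, 0.5), data, prior)      # zero: fits data and prior
```

Without the penalty term, three data points could not pin down six parameters; the prior term is what makes the under-determined problem well-posed, which is the essence of the method the abstract describes.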
4

Eco-Inspired Robustness Analysis of Linear Uncertain Systems Using Elemental Sensitivities

Dande, Ketan Kiran 19 June 2012 (has links)
No description available.
5

Influence of geometry and placement configuration on side forces in compression springs

Rahul Deshmukh (7847843) 12 November 2019 (has links)
A leading cause of premature failure and excessive wear in mechanical components that rely on compression springs is the development of unwanted side forces when the spring is compressed. These side forces are usually around 10%-20% of the magnitude of the axial load and point in different directions in the plane perpendicular to the axis of the spring. The magnitude and direction of the resultant side force vary non-linearly and unpredictably, even though the axial force behavior of the spring is very consistent and predictable. Since these side forces must be resisted by the housing components that hold the spring in place, it is difficult to design those components for optimal operation.

The hypothesis of this study is that side forces are highly sensitive to small changes in spring geometry and its placement configuration in the housing. Several experiments are conducted to measure the axial and side forces in barrel springs, and two different types of finite element models are developed and calibrated to model the spring behavior. Spring geometry and placement are parameterized using several control variables, and an approach based on design of experiments is used to identify the critical parameters that control the behavior of side forces. The models yield deeper insight into how side forces develop as the spring is progressively loaded and how its contact interactions with the housing change the side force. It was found that side forces are indeed sensitive to variations in spring geometry and placement. These sensitivities are quantified to enable designers and manufacturers of such springs to gain more control over side-force variations between different spring specimens.
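The design-of-experiments screening step mentioned above can be sketched generically: run a two-level full factorial over the control variables and rank parameters by their main effects on the response. The parameter names and the linear toy response below are invented for illustration; in the study the response would come from the calibrated finite element models.

```python
# Two-level full-factorial main-effects screening (generic DOE sketch).
# The parameters "pitch", "coil_dia", "tilt" and the toy response are
# hypothetical stand-ins for the study's control variables and FE output.
from itertools import product

def side_force(pitch, coil_dia, tilt):
    """Toy response surface; a real run would query the FE model."""
    return 5.0 * pitch + 1.0 * coil_dia + 0.2 * tilt

names = ("pitch", "coil_dia", "tilt")
runs = [dict(zip(names, combo), y=side_force(*combo))
        for combo in product((-1, 1), repeat=3)]   # 2^3 = 8 runs

def main_effect(name):
    """Average response at the high level minus at the low level."""
    hi = [r["y"] for r in runs if r[name] == 1]
    lo = [r["y"] for r in runs if r[name] == -1]
    return (sum(hi) - sum(lo)) / len(hi)

effects = {k: main_effect(k) for k in names}
critical = max(effects, key=lambda k: abs(effects[k]))
```

Ranking the absolute main effects identifies which geometric or placement parameter dominates the side-force variation, which is the screening role DOE plays in the abstract.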
6

Ultra High Compression For Weather Radar Reflectivity Data

Makkapati, Vishnu Vardhan 17 November 2006 (has links)
Honeywell Technology Solutions Lab, India / Weather is a major contributing factor in aviation accidents, incidents and delays. Doppler weather radar has emerged as a potent tool to observe weather. Aircraft carry onboard radars, but their range and angular resolution are limited. Networks of ground-based weather radars provide extensive coverage of weather over large geographic regions, and it would be helpful if these data could be transmitted to the pilot. However, these data are highly voluminous, and the bandwidth of ground-air communication links is limited and expensive. Hence, the data must be compressed to an extent where they are suitable for transmission over low-bandwidth links. Several methods have been developed to compress pictorial data, but general-purpose schemes do not take the nature of the data into account and hence do not yield high compression ratios. This thesis develops a scheme for extreme compression of weather radar data that does not significantly degrade the meteorological information contained in these data. The method is based on contour encoding: it approximates a contour by a set of systematically chosen 'control points' that preserve its fine structure up to a certain level. The contours may be obtained using a thresholding process based on NWS (National Weather Service) or custom reflectivity levels. This process may result in region and hole contours, enclosing 'high' or 'low' areas, which may be nested; a tag bit is used to label region and hole contours. The control point extraction method first obtains a smoothed reference contour by averaging the original contour. Then the points on the original contour with maximum deviation from the smoothed contour between the crossings of these contours are identified and designated as control points. Additional control points are added midway between a control point and the crossing points on either side of it if the length of the segment between the crossing points exceeds a certain length.
The control points, referenced with respect to the top-left corner of each contour for compact quantification, are transmitted to the receiving end, where the contour is retrieved using spline interpolation. The region and hole contours are identified using the tag bit, and the pixels between the region and hole contours at a given threshold level are filled with the corresponding color. This is repeated until all the contours for a given threshold level are exhausted, and the process is carried out for all other thresholds, resulting in a composite picture of the reconstructed field. Extensive studies have been conducted using metrics such as compression ratio, fidelity of reconstruction and visual perception. In particular, the effects of the smoothing factor, the degree of spline interpolation and the choice of thresholds are studied. It is shown that a smoothing percentage of about 10% is optimal for most data, and a spline interpolation of degree 2 is best suited for smooth contour reconstruction. Augmenting the NWS thresholds improves visual perception, but at the expense of a decrease in compression ratio. Two enhancements to the basic method are proposed: adjustments to the control points to achieve better reconstruction, and bit manipulations on the control points to obtain higher compression. Spline interpolation inherently tends to move the reconstructed contour away from the control points; this is somewhat compensated by stretching the control points away from the smoothed reference contour, with the amount and direction of stretch optimized against actual data fields. In the bit manipulation study, the effects of discarding the least significant bits of the control point addresses are analyzed in detail.
Simple bit truncation introduces a bias in the contour description and reconstruction, which is removed to a great extent by employing a bias compensation mechanism. The results obtained are compared with other methods devised for encoding weather radar contours.
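The control-point extraction described above (smooth, find crossings of the original against the reference, keep the maximum-deviation point in each segment) can be sketched on a 1-D signal as a stand-in for a contour coordinate sequence. This is an illustrative reduction of the idea, not the thesis's 2-D implementation, and the window size and example values are assumptions.

```python
# 1-D sketch of control-point extraction: smooth with a moving average,
# split the signal at crossings of the smoothed reference, and keep the
# maximum-deviation index in each segment as a control point.

def moving_average(xs, w=3):
    """Centered moving average with shrinking windows at the ends."""
    half = w // 2
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def control_points(xs, w=3):
    """Indices of maximum deviation between reference crossings."""
    ref = moving_average(xs, w)
    dev = [x - r for x, r in zip(xs, ref)]
    points, start, last_sign = [], 0, 0
    for i, d in enumerate(dev):
        s = (d > 0) - (d < 0)
        if s and last_sign and s != last_sign:   # crossed the reference
            points.append(max(range(start, i), key=lambda j: abs(dev[j])))
            start = i
        if s:
            last_sign = s
    points.append(max(range(start, len(dev)), key=lambda j: abs(dev[j])))
    return points

wiggly = [0, 4, 0, -4, 0, 4, 0]
picked = control_points(wiggly)   # captures each excursion's extreme point
```

The receiver's spline interpolation through these sparse indices is what recovers an approximation of the original sequence, trading fine structure for the large compression ratios reported above.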
