251 |
Modeling and Mitigation of Soft Errors in Nanoscale SRAMs. Jahinuzzaman, Shah M. January 2008
Energetic particle (alpha particle, cosmic neutron, etc.) induced single event data upset, or soft error, has emerged as a key reliability concern in SRAMs in sub-100 nanometre technologies. Low operating voltage, small node capacitance, high packing density, and lack of error masking mechanisms are primarily responsible for the soft error susceptibility of SRAMs. In addition, since SRAM occupies the majority of die area in system-on-chips (SoCs) and microprocessors, different leakage reduction techniques, such as supply voltage reduction and gated grounding, are applied to SRAMs in order to limit the overall chip leakage. These leakage reduction techniques exponentially increase the soft error rate in SRAMs. The soft error rate is further accentuated by process variations, which are prominent in scaled-down technologies. In this research, we address these concerns and propose techniques to characterize and mitigate soft errors in nanoscale SRAMs.
We develop a comprehensive analytical model of the critical charge, which is key to assessing the soft error susceptibility of SRAMs. The model is based on the dynamic behaviour of the cell and a simple decoupling technique for the non-linearly coupled storage nodes. The model describes the critical charge in terms of NMOS and PMOS transistor parameters, cell supply voltage, and noise current parameters. Consequently, it enables characterizing the spread of critical charge due to process-induced variations in these parameters and to manufacturing defects, such as resistive contacts or vias. In addition, the model can estimate the improvement in critical charge when MIM capacitors are added to the cell in order to improve the soft error robustness. The model is validated by SPICE simulations in a 90 nm CMOS technology and by radiation testing. The critical charge calculated by the model is in good agreement with SPICE simulations, with a maximum discrepancy of less than 5%. The soft error rate estimated by the model for low-voltage (sub-0.8 V) operation is within 10% of the soft error rate measured in the radiation test. Therefore, the model can serve as a reliable alternative to time-consuming SPICE simulations for optimizing the critical charge, and hence the soft error rate, at the design stage.
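As an illustration of the kind of dynamic-cell analysis such a critical-charge model formalizes, the sketch below numerically estimates the critical charge of a toy two-inverter cell by bisecting the injected charge of a double-exponential strike pulse until the cell flips. All component values, the tanh inverter characteristic, and the pulse shape are assumptions for illustration only; this is not the analytical model developed in the thesis.

```python
# Toy estimate of critical charge for a two-inverter storage cell (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

VDD = 0.9                       # supply voltage [V] (assumed)
C = 1.0e-15                     # storage-node capacitance [F] (assumed)
R = 10e3                        # inverter output resistance [ohm] (assumed)
TAU_R, TAU_F = 5e-12, 50e-12    # strike-pulse rise/fall time constants [s] (assumed)

def inv(v):
    """Idealised inverter transfer curve (assumed tanh shape)."""
    return 0.5 * VDD * (1.0 - np.tanh(8.0 * (v - 0.5 * VDD) / VDD))

def strike(t, q):
    """Double-exponential particle-strike current whose time integral is q."""
    return q / (TAU_F - TAU_R) * (np.exp(-t / TAU_F) - np.exp(-t / TAU_R))

def cell(t, y, q):
    va, vb = y
    dva = ((inv(vb) - va) / R - strike(t, q)) / C   # strike discharges node A
    dvb = ((inv(va) - vb) / R) / C
    return [dva, dvb]

def flips(q):
    sol = solve_ivp(cell, (0.0, 2e-9), [VDD, 0.0], args=(q,), max_step=1e-12)
    va, vb = sol.y[:, -1]
    return va < vb                                  # stored state inverted -> upset

# Bisection on injected charge: smallest q that upsets the cell.
lo, hi = 0.0, 50e-15
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if flips(mid) else (mid, hi)
print(f"estimated critical charge ~ {hi * 1e15:.2f} fC")
```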
In order to limit the soft error rate further, we propose an area-efficient multiword-based error correction code (MECC) scheme. The MECC scheme combines four 32-bit data words to form a composite 128-bit ECC word and uses an optimized 4-input transmission-gate XOR logic. Thus, MECC significantly reduces the area overhead for check-bit storage and the delay penalty for error correction. In addition, MECC interleaves two composite words in a row to limit cosmic neutron induced multi-bit errors. The ground potentials of the composite words are controlled to minimize leakage power without compromising the read data stability. However, use of composite words involves a unique write operation where one data word is written while the other three data words are read to update the check-bits. A power-efficient word line signaling technique is developed to facilitate the write operation. A 64 kb SRAM macro with MECC is designed and fabricated in a commercial 90 nm CMOS technology. Measurement results show that the SRAM consumes 534 μW at 100 MHz with a data latency of 3.3 ns for a single bit error correction. This translates into 82% per-bit energy saving and 8x speed improvement over recently reported multiword ECC schemes. An accelerated neutron radiation test carried out at TRIUMF in Vancouver confirms that the proposed MECC scheme can correct up to 85% of soft errors.
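To illustrate why one set of check bits shared across a 128-bit composite word is cheaper than per-word protection, the sketch below encodes four 32-bit words with a plain Hamming single-error-correcting code and corrects an injected bit flip. It is a generic Hamming construction under assumed word values, not the optimized transmission-gate XOR encoder or the exact SEC-DED organization used in the fabricated SRAM.

```python
# Generic Hamming SEC over a 128-bit composite word built from four 32-bit words.

def hamming_encode(bits):
    """Place data bits at non-power-of-two positions; fill parity positions."""
    r = 0
    while (1 << r) < len(bits) + r + 1:
        r += 1
    code, it = {}, iter(bits)
    for pos in range(1, len(bits) + r + 1):
        code[pos] = 0 if (pos & (pos - 1)) == 0 else next(it)  # parity slots = powers of two
    for p in range(r):
        mask = 1 << p
        code[mask] = sum(v for pos, v in code.items() if pos & mask and pos != mask) % 2
    return code

def hamming_correct(code):
    """Return the position of a single flipped bit (0 if the word is clean)."""
    syndrome = 0
    for pos, v in code.items():
        if v:
            syndrome ^= pos
    return syndrome

# Four 32-bit words form one 128-bit composite ECC word (the multiword idea):
words = [0xDEADBEEF, 0x01234567, 0x89ABCDEF, 0x0F0F0F0F]
data = [(w >> b) & 1 for w in words for b in range(32)]
cw = hamming_encode(data)
print(len(cw) - len(data), "check bits for 128 data bits")       # -> 8
# Per-word ECC would need 4 x 6 = 24 check bits for the same data (plain SEC, no DED).

cw[37] ^= 1                                                      # inject a single-bit soft error
print("flipped position found at", hamming_correct(cw))          # -> 37
```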
|
252 |
Statistical Power in Ergonomic Intervention Studies. Hurley, Kevin 12 April 2010
As awareness of the costs of workplace injury and illness continues to grow, there has been an increased demand for effective ergonomic interventions to reduce the prevalence of musculoskeletal disorders (MSDs). The goal of ergonomic interventions is to reduce exposures (mechanical and psychosocial); however, there is conflicting evidence about the impact of these interventions, as many studies produce inconclusive or conflicting results. In order to provide a clearer picture of the effectiveness of these interventions, we must determine whether methodological issues, particularly statistical power, are limiting this research. The purpose of this study was to review and examine factors influencing statistical power in ergonomic intervention papers from five peer-reviewed journals in 2008. A standardized review was performed by two reviewers. Twenty-eight ergonomic intervention papers met the inclusion criteria and were fully reviewed. Data and trends from the reviewed papers were summarized, looking specifically at the research designs and outcome measures used, whether statistical power was mentioned, whether a rationale for sample size was reported, whether standardized and unstandardized effect sizes were reported, whether confidence intervals were reported, the alpha levels used, whether pair-wise correlation values were provided, whether mean values and standard deviations were provided for all measures, and the location of the studies. The studies were also rated, based on their intervention outcomes, into one of three categories (shown to be effective, inconclusive, and not shown to be effective). Between these three groupings, comparisons of post hoc power, standardized effect sizes, unstandardized effect sizes, and coefficients of variation were made. The results indicate that, in general, a lack of statistical power is indeed a concern and may be due to the sample sizes used, the effect sizes produced, extremely high variability in some of the measures, the lack of attention paid to statistical power during research design, and the lack of appropriate statistical reporting guidelines in journals where ergonomic intervention research may be published. A total of 69.6% of the studies reviewed had a majority of measures with power below 0.50, and 71.4% of all measures used had coefficients of variation greater than 0.20.
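The sketch below illustrates the post hoc power and coefficient-of-variation calculations referred to above for a hypothetical two-group comparison; the effect size, group size, and measure statistics are invented, not values from the reviewed papers.

```python
# Post hoc power and CV for a hypothetical two-group comparison (illustrative only).
from statsmodels.stats.power import TTestIndPower

d = 0.4           # hypothetical standardized effect size (Cohen's d)
n_per_group = 12  # hypothetical sample size per group
alpha = 0.05

power = TTestIndPower().power(effect_size=d, nobs1=n_per_group, alpha=alpha, ratio=1.0)
print(f"post hoc power = {power:.2f}")        # well below the usual 0.80 target

# Sample size needed per group to reach 0.80 power for the same effect:
n_needed = TTestIndPower().solve_power(effect_size=d, power=0.80, alpha=alpha, ratio=1.0)
print(f"n per group for 0.80 power = {n_needed:.0f}")

# Coefficient of variation of a measure (variability relative to its mean):
mean, sd = 18.3, 6.1                          # hypothetical measure
print(f"CV = {sd / mean:.2f}")                # CV > 0.20 flags high variability
```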
|
253 |
Characterization of a vertical two axis lathe. Leclerc, Michael Edward 14 April 2005
The primary barrier to the production of better machined parts is machine tool error. Present-day applications require ever closer part tolerances. The errors in dimensional part accuracy derive from the machine, in this case a vertical two-axis CNC lathe. A two-axis vertical lathe can be used to produce a variety of parts, ranging from cylindrical to spherical features. A vertical lathe requires a spindle to rotate the work at speeds reaching 3000 rpm, while simultaneously requiring the tool to be positioned so as to remove material and produce an accurate part. For this to be possible, the machine tool must be precisely controlled in order to produce the correct contours on the part. There are many sources of error to be considered in the two-axis vertical lathe. Each axis of importance contains six degrees of freedom. The machine exhibits linear displacement, angular, spindle thermal drift, straightness, parallelism, orthogonality, machine tool offset, and roundness errors. These error components must be measured in order to determine the resultant error.
The characterization of the machine addresses thermal behavior and geometric errors. This thesis presents an approach for determining the machine tool errors and using them to bring the actual tool path closer to the nominal tool path via compensation schemes. One of these schemes uses a laser interferometer in conjunction with a homogeneous transformation matrix to construct the compensated path for circular arc, facing, and turning operations. The other scheme uses a ball bar system to directly construct the compensated tool path for a circular arc. Test parts were created to verify the improvement in part accuracy achieved with the compensated tool paths.
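A minimal sketch of the homogeneous-transformation idea behind such compensation schemes is shown below: a small-angle error matrix built from hypothetical measured axis errors is inverted to obtain the compensated command point. The error values and the single-axis form of the matrix are assumptions for illustration, not the machine model or measurements from the thesis.

```python
# Compensating a commanded point with a small-angle homogeneous error transform.
import numpy as np

def error_htm(dx, dy, dz, ex, ey, ez):
    """Small-angle homogeneous transform for one axis position:
    dx, dy, dz = displacement/straightness errors; ex, ey, ez = angular errors [rad]."""
    return np.array([[1.0, -ez,  ey, dx],
                     [ ez, 1.0, -ex, dy],
                     [-ey,  ex, 1.0, dz],
                     [0.0, 0.0, 0.0, 1.0]])

# Hypothetical measured errors at one commanded X position (placeholder for interferometer data)
E = error_htm(dx=4e-3, dy=1e-3, dz=-2e-3, ex=10e-6, ey=-25e-6, ez=15e-6)  # mm, rad

p_nominal = np.array([120.0, 0.0, 35.0, 1.0])     # desired point on the arc [mm]
p_actual = E @ p_nominal                          # where the uncompensated tool ends up
p_command = np.linalg.solve(E, p_nominal)         # compensated command: E @ p_command = p_nominal

print("uncompensated error [mm]:", np.round(p_actual[:3] - p_nominal[:3], 4))
print("compensated command [mm]:", np.round(p_command[:3], 4))
```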
|
254 |
A Study of Problem-Solving Strategies and Errors in linear equations with two unknowns for Junior High School Students. Lee, Yi-chin 10 June 2007
This research drew on the Basic Competency Tests from 2001 to 2006 to construct a test, and analyzed 207 ninth-graders' problem-solving strategies and errors in solving linear equations with two unknowns. Furthermore, the investigator drew on interview content to investigate the factors that cause students' mistakes.
Results show that the main strategy for solving equations is the 'addition and subtraction elimination' approach, while the main strategy for solving application problems is 'organizing side by side'. The errors in solving equations mainly involve mistaken concepts, including the Equality Axiom. The errors in solving application problems mostly concern translation and holistic mistakes. Analysis of the interview data shows that the reasons for mistakes in solving equations are mutual interference of prior experience, mixing up different operation rules, or solving a problem with an incorrect self-constructed concept. The reasons for mistakes in solving application problems are insufficient language ability, lack of self-monitoring, and limited problem-solving strategies.
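For readers unfamiliar with the strategy name, a short worked example (not an item from the study's test) of the addition and subtraction elimination approach for two unknowns:

```latex
% Worked example of addition-and-subtraction elimination (illustrative, not a test item).
\begin{align*}
  2x + 3y &= 12 \\
  4x - 3y &= 6
\end{align*}
Adding the two equations eliminates $y$:
\begin{align*}
  6x = 18 \;\Rightarrow\; x = 3, \qquad 2(3) + 3y = 12 \;\Rightarrow\; y = 2.
\end{align*}
```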
Finally, based on the results of this research, the researcher offers suggestions in three areas, in the hope that this research can help teachers adopt a greater variety of teaching methods to benefit students' learning.
|
255 |
A Study on Manufacturing Errors and Positioning Accuracy of Curvic Couplings. Lin, Cheng-Ta 04 July 2000
Curvic couplings are widely applied in industry and can be found in almost every index table. With the development of the precision machine industry, not only manufacturing precision but also positioning accuracy must meet the demands placed on curvic couplings.
The aim of this thesis is to investigate the manufacturing errors and positioning accuracy of curvic couplings. First, the geometric and indexing characteristics of curvic couplings are discussed, and a parametric tooth surface of curvic couplings is derived from tooth generating theory. Then, mathematical models are established to simulate curvic couplings as currently manufactured on a grinding machine with different setting errors. Secondly, practical examples of the effects of elemental errors on the manufacturing error are investigated. Finally, an experimental apparatus is built to measure the actual positioning accuracy of curvic couplings at different input speeds.
|
256 |
A study of point-contact polishing tool system design for axially symmetric free surface. Lee, Keng-yi 20 July 2009
The goal of this thesis is to develop a novel polishing tool system. This system can be attached to a CNC machine and execute a precision polishing job, mainly for an axially symmetric free surface. The precision polishing job removes the error surface profile left on the workpiece by the previous machining process, thereby improving its form precision. An inferential rule based on a top-down planning strategy was used to gradually decompose the design goals of the tool system and to facilitate the generation of all possible design proposals. The major design goal is to make all the rotational axes of the tool system intersect exactly at the tool center. To analyze the effects of the structure and interface stiffness of the tool system on this goal, the finite element method was adopted. Further, a homogeneous transformation scheme is applied to establish the forward kinematic error of the designed system and to analyze the effect of different manufacturing and assembly errors on the major goal. Accordingly, two novel polishing tool systems were developed. The simulation study indicated that the total errors after assembly at the tool center and the two rotation axes were dominated by the stiffness at the interfaces of the tool system, in addition to the influence of structure stiffness. An assembly strategy was then proposed in the study to reduce the total error.
|
257 |
Accuracy and precision of a technique to assess residual limb volume with a measuring-tape. Jarl, Gustav January 2003
Transtibial stump volume can change dramatically postoperatively and jeopardise prosthetic fitting. Differences between individuals make it hard to give general recommendations of when to fit a definitive prosthesis. Measuring the stump volume of every patient could solve this, but most methods for volume assessment are too complicated for clinical use.

The aim of this study was to evaluate the accuracy and intra- and interrater precision of a method to estimate stump volume from circumferential measurements. The method approximates the stump as a number of cut cones and the tip as a sphere segment.

Accuracy was evaluated theoretically on six scanned stump models in CAPOD software and manually on six stump models. Precision was evaluated by comparing measurements made by four CPOs on eight stumps. The measuring devices were a wooden rule and a metal circumference rule. The errors were assessed with the intraclass correlation coefficient (ICC), where 0.85 was considered acceptable, and with a clinical criterion that a volume error of ±5% was acceptable (5% corresponds to one stocking).

The method was accurate on all models in theory but accurate on only four models in reality. The ICC was 0.95-1.00 for intrarater precision but only 0.76 for interrater precision. Intra- and interrater precision was unsatisfactory by the clinical criterion. Variations in estimated tip heights and circumferences caused the errors.

The method needs further development and is not suitable for stumps with narrow ends. Using a longer rule (about 30 cm) with a set-square end to assess tip heights is recommended to improve precision. Using a flexible measuring-tape (possible to disinfect) with a spring-loaded handle could improve the precision of the circumferential measurements.
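A minimal sketch of the cut-cone-plus-sphere-segment volume calculation is given below; the circumferences, segment height, and tip height are invented example values, not measurements from the study.

```python
# Stump volume from circumferences: stacked truncated cones plus a spherical-cap tip.
import math

def truncated_cone(c1, c2, h):
    """Volume between two parallel circumferences c1, c2 a height h apart."""
    r1, r2 = c1 / (2 * math.pi), c2 / (2 * math.pi)
    return math.pi * h / 3 * (r1 * r1 + r1 * r2 + r2 * r2)

def spherical_cap(c_base, h):
    """Volume of the tip, modelled as a sphere segment of height h."""
    a = c_base / (2 * math.pi)
    return math.pi * h / 6 * (3 * a * a + h * h)

# Circumferences [cm] from proximal to distal, measured every 4 cm, plus tip height (invented)
circumferences = [34.0, 33.0, 31.5, 29.0, 26.0]
segment_height = 4.0
tip_height = 2.5

volume = sum(truncated_cone(c1, c2, segment_height)
             for c1, c2 in zip(circumferences, circumferences[1:]))
volume += spherical_cap(circumferences[-1], tip_height)
print(f"estimated stump volume = {volume:.0f} cm^3")   # a 5% error corresponds to one stocking
```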
|
258 |
An elementary mathematics teacher's use of discourse practices in supporting English learner students in classroom repair. Shein, Paichi Pat. January 1900
Thesis (Ph. D.)--UCLA, 2009. / Vita. Description based on print version record. Includes bibliographical references (leaves 119-125).
|
259 |
Linear estimation for data with error ellipses. Amen, Sally Kathleen 21 August 2012
When scientists collect data to be analyzed, regardless of what quantities are being measured, there are inevitably errors in the measurements. In cases where both variables are measured with errors, many existing techniques can produce an estimated least-squares linear fit to the data, taking into consideration the size of the errors in both variables. Yet some experiments yield data that contain not only errors in both variables but also a non-zero covariance between the errors. In such situations, the measurements carry error ellipses whose tilts are specified by the covariance terms.
Following an approach suggested by Dr. Edward Robinson, Professor of Astronomy at the University of Texas at Austin, this report describes a methodology that finds estimates of the linear regression parameters, as well as an estimated covariance matrix, for a dataset with tilted error ellipses. An appendix contains the R code for a program that produces these estimates according to the methodology. The report also describes the results of running the program on a dataset of measurements of the surface brightness and Sérsic index of galaxies in the Virgo cluster.
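A generic sketch of fitting a line to points with tilted error ellipses is shown below: it minimizes a chi-square in which each residual is weighted by the point's covariance projected along the line. This is one common formulation applied to invented data, written in Python rather than R, and is not necessarily the estimator derived in the report.

```python
# Line fit for points with full 2x2 error covariances (tilted error ellipses).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 40
x_true = rng.uniform(0.0, 10.0, n)
y_true = 1.8 * x_true + 0.5
covs = np.array([[[0.20, 0.12], [0.12, 0.30]]] * n)       # per-point tilted error ellipses
xy = np.column_stack([x_true, y_true]) + np.array(
    [rng.multivariate_normal([0.0, 0.0], c) for c in covs])
x, y = xy[:, 0], xy[:, 1]

def chi2(params):
    m, b = params
    resid = y - (m * x + b)
    # effective variance of the residual, from the covariance projected along the line
    var = covs[:, 1, 1] - 2.0 * m * covs[:, 0, 1] + m * m * covs[:, 0, 0]
    return np.sum(resid ** 2 / var)

fit = minimize(chi2, x0=[1.0, 0.0])
print("slope, intercept:", np.round(fit.x, 3))             # should recover ~1.8, ~0.5
```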
|
260 |
Linear Regression with Errors in Both Axes: Application to Calibration and Comparison of Analytical Methods. Riu Rusell, Jordi 12 April 1999
An important subject in analytical chemistry is the comparison of analytical methods at different concentration levels using linear regression. As the two methods normally have associated errors, the regression line should be found using regression methods that take into account the errors in both axes. Another field in which these regression methods can be applied is linear calibration, since in some analytical techniques (for instance, X-ray fluorescence) the calibration line is found with certified reference materials (CRMs) of the analyte of interest, each of which has uncertainties associated with its concentration value.

To find the regression line, we used the bivariate least squares (BLS) regression method, which takes into account the individual errors in both axes, and we developed the following tests:

· Individual tests on the coefficients of the regression line. Useful for detecting constant or proportional errors. We developed and validated the expressions for calculating the probabilities of β error, taking into account the selected probabilities of α error and the bias set by the analyst.

· Joint confidence interval on the coefficients of the regression line. This test can be used to compare two analytical methods when one wants to check whether the intercept of the regression line differs significantly from zero and, simultaneously, whether the slope differs significantly from unity. We developed and validated the joint confidence interval for the coefficients of the BLS regression method, and the joint confidence interval for the coefficients of the MLS (multivariate least squares) regression method; the latter is useful for comparing more than two analytical methods.

· Confidence intervals in prediction. Useful for finding the concentration value and its confidence interval from the instrumental response. Also used in method comparison studies when one wants to know the confidence interval of a sample from the results of the same sample analysed by another method.
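As a rough illustration of the joint-test idea (intercept equal to zero and slope equal to unity simultaneously), the sketch below applies the classical F-based joint region to an ordinary least squares fit of invented paired results; the thesis derives the corresponding region for the BLS estimator, which is not reproduced here.

```python
# Joint test of intercept = 0 and slope = 1 for method comparison (OLS approximation).
import numpy as np
from scipy import stats

# Hypothetical paired results of the same samples analysed by two methods
x = np.array([1.2, 2.4, 3.1, 4.8, 6.0, 7.5, 9.1, 10.2])   # reference method
y = np.array([1.1, 2.6, 3.0, 5.1, 5.8, 7.9, 9.4, 10.0])   # new method

X = np.column_stack([np.ones_like(x), x])
beta_hat, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
n, p = len(x), 2
s2 = rss[0] / (n - p)                         # residual variance estimate

beta0 = np.array([0.0, 1.0])                  # H0: intercept 0 and slope 1 (methods agree)
diff = beta_hat - beta0
F = diff @ (X.T @ X) @ diff / (p * s2)        # classical joint F statistic
p_value = 1.0 - stats.f.cdf(F, p, n - p)
print(f"intercept={beta_hat[0]:.3f}, slope={beta_hat[1]:.3f}, joint p={p_value:.3f}")
```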
|