111

Combination of trace and scan signals for debuggability enhancement in post-silicon validation

Han, Kihyuk 19 July 2013
Pre-silicon verification is an essential part of integrated circuit design for capturing functional design errors. Complex simulation, emulation, and formal verification tools are used in a virtual environment before the device is manufactured in silicon. However, as design complexity increases and design cycles shorten for fast time-to-market, design errors are more likely to escape pre-silicon verification, and functional bugs are found during actual operation. Since manufacturing test focuses primarily on physical defects, post-silicon validation is the final gatekeeper for capturing these escaped design bugs. Consequently, post-silicon validation has become a critical path in shortening the development cycle of System-on-Chip (SoC) design.

A major challenge in post-silicon validation is the limited observability of internal states, caused by the limited storage capacity available for silicon debugging. Because post-silicon validation operates on a fabricated chip, recording the value of each and every internal signal is not possible, so acquiring the circuit's internal behavior with the limited available resources is a very challenging task. There are two main approaches to expanding observability: trace-based and scan-based. The real-time system response during silicon debug can be acquired with a trace-based technique; however, because trace buffer space is limited, the selection of trace signals is critical to maximizing the observability of internal states. The scan-based approach provides high observability and requires no additional design overhead, but designers cannot acquire the real-time system response, since circuit operation must be stopped to transfer the internal states. Recent research has shown that observability can be enhanced if trace and scan signals are efficiently combined, compared with debugging scenarios where only trace signals are monitored.

This dissertation proposes an enhanced and systematic algorithm for efficiently combining trace and scan signals using restorability values to maximize the observability of internal circuit states. To achieve this goal, we first introduce a technique to calculate restorability values accurately by considering both the local and the global connectivity of the circuit. Based on these restorability values, a dynamic trace-signal selection algorithm is proposed to restore a higher number of states regardless of the incoming test vectors. Instead of using total restorability values, we separate 0- and 1-restorability values to differentiate the circuit's responses to different incoming test vectors, and the two groups of trace signals can be selected dynamically based on the characteristics of the incoming test vectors to minimize performance degradation. Second, we propose a new algorithm to find the optimal number of trace signals when trace and scan signals are combined for better observability. Our technique uses restorability values to find the optimal number of trace signals so that the remaining trace buffer space can be utilized for scan signals. Third, observability can be enhanced further with a dictionary-based data compression technique: since the dictionary entries are determined from the golden simulation, a high compression ratio can be achieved with little extra hardware overhead. Experimental results on benchmark circuits and a real industry design show that the proposed technique restores a higher number of states than existing techniques.
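As a rough illustration of restorability-driven trace-signal selection, here is a greedy sketch with hypothetical signal names and scores (the dissertation's algorithm derives restorability from circuit connectivity and selects signal groups dynamically; this sketch simply sums the separated 0- and 1-restorability values):

```python
# Minimal sketch of restorability-driven trace-signal selection.
# The restorability scores below are invented; the dissertation
# computes them from the local and global connectivity of the circuit.

def select_trace_signals(restorability, buffer_width):
    """Greedily pick the signals whose combined 0/1 restorability
    values promise the most restored states per traced bit."""
    # restorability maps signal name -> (r0, r1): separate scores for
    # how many other states a logic-0 / logic-1 observation restores.
    scored = sorted(
        restorability.items(),
        key=lambda item: item[1][0] + item[1][1],
        reverse=True,
    )
    return [name for name, _ in scored[:buffer_width]]

# Example with made-up scores and a 2-signal trace buffer slice.
scores = {"s1": (4.0, 2.5), "s2": (1.0, 6.0), "s3": (3.0, 3.0), "s4": (0.5, 0.5)}
print(select_trace_signals(scores, buffer_width=2))  # ['s2', 's1']
```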
112

Designs and methodologies for post-silicon timing characterization

Jang, Eun Jung 24 October 2013
Timing analysis is a key sign-off step in the design of today's chips, but technology scaling introduces many sources of variability and uncertainty that are difficult to model and predict. The result of these uncertainties is a degradation in our ability to predict the performance of fabricated chips, i.e., a lack of model-to-hardware matching. The prediction of circuit performance rests on a complex hierarchy of models, ranging from the basic MOSFET device model to full-chip models of important performance metrics such as power and frequency of operation. Assessing the quality of such models is an important activity, but it is becoming harder and more complex with rising levels of variability and the growing number of systematic effects observed in modern CMOS processes.

The purpose of this research is (i) to introduce special-purpose test structures that specifically focus on ensuring the accuracy of gate timing models, and (ii) to introduce methods that analyze the extracted information, in the form of path delay measurements, using the proposed test structures. The certification of digital design correctness (the so-called sign-off) is based largely on the results of Static Timing Analysis (STA), which, in turn, is based entirely on the gate timing models. The proposed test structures compare favorably to alternative approaches: they are far easier to measure than direct delay measurement, and they are much more general than simple ring-oscillator structures. Furthermore, the structures are specified at a high level, allowing them to be synthesized using a standard ASIC place-and-route flow, thus capturing the local layout systematic effects that can be lost by simpler (e.g., ring-oscillator) structures.

For the silicon timing analysis, we propose methods that deduce segment delays from the path delay measurements. The segment delays estimated with our methods can be compared directly with the timing models, making it easy to identify the cause of timing mismatches. Deducing segment delays from path delays, however, is not an easy problem: the difficulty of deconvolving segment delays from measured path delays comes from insufficient sampling points. To overcome this limitation, we first group the segments based on certain segment characteristics and adapt the Moore-Penrose pseudo-inverse method to solve approximately for the segment delays. Second, we use equality-constrained least-squares methods, which enable us to find a unique, optimized solution for the segment delays from underdetermined systems. We also propose an improved test structure that has a built-in test pattern generator and hence does not require ATPG (Automatic Test Pattern Generation). It is a self-timed circuit, which lets the test structure run as fast as it can, so measurements can be made under high-speed switching conditions. Finally, the new test structure allows us to study dynamic effects such as the timing impact of different levels of switching activity and voltage drop.
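The pseudo-inverse step can be sketched numerically with a made-up path matrix (the thesis additionally groups segments and applies equality-constrained least squares, which this sketch omits):

```python
import numpy as np

# Sketch of deducing segment delays from measured path delays.
# Rows of A indicate which segments each measured path traverses;
# the system is underdetermined, so the Moore-Penrose pseudo-inverse
# gives the minimum-norm least-squares solution.
A = np.array([[1, 1, 0, 0],    # path 1 = seg1 + seg2
              [0, 1, 1, 0],    # path 2 = seg2 + seg3
              [1, 0, 1, 1]])   # path 3 = seg1 + seg3 + seg4
path_delays = np.array([2.1, 1.9, 3.2])  # hypothetical measurements (ns)

segment_delays = np.linalg.pinv(A) @ path_delays
print(segment_delays)            # approximate per-segment delays
print(A @ segment_delays)        # should reproduce the path delays
```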
113

Testing concurrent software systems

Kilgore, Richard Brian 28 August 2008
Not available
114

The Influence of Validation of Pain-Related Thoughts and Feelings on Positive and Negative Affect

Edmond, Sara Nicole January 2015
There are an unlimited number of ways a person may respond to someone sharing pain-related thoughts or feelings. Understanding which types of responses may produce positive outcomes for individuals with pain is important, yet limited research has been conducted in this area. The purpose of this dissertation was to understand how validation, as a response to verbal disclosures about pain, influences positive and negative affect, pain intensity, and pain tolerance compared with other responses. To examine this question, an experimental design with best-friend dyads was used. Participants engaged in a pain-induction task and were asked to verbally share their pain, and either their friend or a research assistant delivered validating, neutral, or invalidating responses. Results showed that receiving validating responses was related to greater positive affect and reduced negative affect compared with receiving invalidating responses, and some group differences emerged between participants who received responses from friends and those who received responses from research assistants. / Dissertation
115

Validation of PCR assays for detection of Shiga toxin-producing E. coli O104:H4 and O121 in food

Tawe, Johanna January 2014
Shiga toxin-producing Escherichia coli (STEC) can cause infections in humans which can be serious and sometimes fatal. There is a great need for methods that are able to detect different serogroups of STEC. In this project, conventional and real-time PCR assays for detection of STEC O104:H4 and O121, as recommended by the European Union Reference Laboratory (EU-RL) for STEC, were validated. The specificity, limit of detection, repeatability, efficiency and robustness were determined for three real-time PCR assays. The validation showed that the real-time PCR reactions were specific and sensitive, although some additional tests are required.
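Amplification efficiency, one of the parameters validated here, is conventionally estimated from the slope of a standard curve (Ct versus log10 template concentration); a worked sketch with invented Ct values:

```python
import numpy as np

# Sketch: real-time PCR efficiency from a standard curve.
# Ct values are hypothetical; an ideal assay has a slope near -3.32,
# i.e. an efficiency near 100%.
log10_conc = np.array([5.0, 4.0, 3.0, 2.0, 1.0])   # log10 copies/reaction
ct = np.array([15.1, 18.4, 21.8, 25.1, 28.5])       # measured Ct values

slope, intercept = np.polyfit(log10_conc, ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
```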
116

Validation Methodologies for Construction Engineering and Management Research

Liu, Jiali 11 July 2013
Validation of results is an important phase in the organization of a researcher's work. Libraries and the internet offer a number of sources for guidance on conducting validation in a variety of fields; however, construction engineering and management (CEM) is an area for which such information is unavailable. CEM is an interdisciplinary field comprising a variety of subjects: human resources management, project planning, social sciences, etc. This broad range means that the choice of appropriate validation methodologies is critical for ensuring a high level of confidence in research outcomes; in other words, selecting appropriate validation methodologies represents a significant challenge for CEM researchers. To assist civil engineering researchers as well as students undertaking master's or doctoral CEM studies, this thesis therefore presents a comprehensive review of validation methodologies in this area. The validation methodologies commonly applied include experimental studies, observational studies, empirical studies, case studies, surveys, functional demonstration, and archival data analysis.

The author randomly selected 365 papers spanning three main perspectives: industry best practices in construction productivity, factors that affect labour productivity, and technologies for improving construction productivity. The validation methodologies applied in each category of studies were examined and recorded in analysis tables. Based on the analysis and discussion of the findings, the author summarized the final results, indicating such items as the highest percentage of a particular methodology employed in each category and the top categories in which that methodology was applied. The research also demonstrates a significant increasing trend in the use of functional demonstration over the past 34 years. As well, a comparison of the period from 1980 to 2009 with the period from 2010 to the present revealed a decrease in the number of papers whose validation methodology was unclear. These results were validated through analysis of variance (ANOVA) and least significant difference (LSD) tests. Furthermore, the relationship between the degree of validation and the number of citations was explored: the number of citations is positively related to the degree of validation in a specific category, based on the data acquired from articles in the Constructability and Factors categories. However, based on the articles from 2010, we could not conclude that a significant difference existed between the clear-validation and unclear-validation groups at the 95% confidence level.
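A minimal sketch of the one-way ANOVA comparison used to validate such trend results, with invented counts (the thesis applies ANOVA and LSD tests to its own categorized paper counts):

```python
from scipy import stats

# Hypothetical yearly counts of papers using functional demonstration,
# grouped by decade, to illustrate the one-way ANOVA comparison.
decade_1980s = [2, 3, 1, 4, 2]
decade_1990s = [5, 6, 4, 7, 5]
decade_2000s = [9, 11, 10, 12, 9]

f_stat, p_value = stats.f_oneway(decade_1980s, decade_1990s, decade_2000s)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> group means differ
```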
117

Cross-Validation for Model Selection in Model-Based Clustering

O'Reilly, Rachel 04 September 2012
Clustering is a technique used to partition unlabelled data into meaningful groups. This thesis focuses on model-based clustering, where it is assumed that the data arise from a finite number of subpopulations, each of which follows a known statistical distribution. The number of groups and the shape of each group are unknown in advance, and thus one of the most challenging aspects of clustering is selecting these features. Cross-validation is a model selection technique often used in regression and classification because it tends to choose models that predict well and are not over-fit to the data; however, it has rarely been applied in a clustering framework. Herein, cross-validation is applied to select the number of groups and the covariance structure within a family of Gaussian mixture models. Results are presented for both real and simulated data. / Ontario Graduate Scholarship Program
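A minimal sketch of the approach on synthetic data, assuming scikit-learn (the thesis also selects among covariance structures within the Gaussian family, which this sketch omits):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
# Synthetic data drawn from three Gaussian subpopulations.
X = np.vstack([rng.normal(loc, 0.5, size=(100, 2)) for loc in (0.0, 3.0, 6.0)])

def cv_loglik(X, n_components, n_splits=5):
    """Average held-out log-likelihood for an n_components mixture."""
    scores = []
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        gmm = GaussianMixture(n_components, random_state=0).fit(X[train])
        scores.append(gmm.score(X[test]))   # mean per-sample log-likelihood
    return np.mean(scores)

for g in range(1, 6):
    print(g, cv_loglik(X, g))   # the best G maximizes held-out likelihood
```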
118

Experimental Validation of an Elastic Registration Algorithm for Ultrasound Images

Leung, Corina 29 October 2007
Ultrasound is a favorable tool for intra-operative surgical guidance due to its fast imaging speed and non-invasive nature. However, deformations of the anatomy caused by breathing, heartbeat, and patient movement make it difficult to track the location of anatomical landmarks during intra-operative ultrasound-guided interventions. While elastic registration can be used to compensate for image misalignment, its adoption for clinical use has been gradual due to the lack of standardized guidelines for quantifying the performance of different registration techniques. Evaluating elastic registration algorithms is difficult because the point-to-point correspondence between images is usually unknown, which poses a major challenge in validating non-rigid registration techniques for performance comparisons. Current validation guidelines for non-rigid registration algorithms exist for comparing techniques on magnetic resonance images of the brain. These frameworks provide users with standardized brain datasets and performance measures based on brain-region alignment, intensity differences between images, and inverse consistency of transformations. These metrics may not all be suitable for ultrasound registration algorithms, owing to the different properties of the imaging modalities, and additional metrics are required for validating registration performance on anatomical images with large deformations, such as the liver.

This work presents a validation framework dedicated to ultrasound elastic registration algorithms. Quantitative validation metrics are evaluated for ultrasound images, including a simulation technique to measure registration accuracy, a segmentation algorithm to extract anatomical landmarks for measuring feature overlap, and a technique to measure image alignment using similarity metrics. An extensive study of an ultrasound temporal registration algorithm is conducted using the proposed framework: experiments are performed on a large database of 2D and 3D ultrasound images of the carotid artery and the liver to assess the algorithm's performance. In addition, two graphical user interfaces that integrate the image registration and segmentation techniques have been developed to visualize the performance of these algorithms on ultrasound images captured in real time. In the future, these interfaces may be used to enhance ultrasound examination. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2007
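As one illustration of a feature-overlap measure of the kind such a framework evaluates, here is a generic Dice-coefficient sketch on binary masks (not the thesis's exact metric):

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect feature overlap."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy masks standing in for segmented anatomical landmarks before
# and after registration.
fixed = np.zeros((64, 64), dtype=bool); fixed[20:40, 20:40] = True
moved = np.zeros((64, 64), dtype=bool); moved[22:42, 22:42] = True
print(f"Dice = {dice_overlap(fixed, moved):.3f}")
```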
119

Using Cluster Analysis, Cluster Validation, and Consensus Clustering to Identify Subtypes

Shen, Jess Jiangsheng 26 November 2007
Pervasive Developmental Disorders (PDDs) are neurodevelopmental disorders characterized by impairments in social interaction, communication and behaviour [Str04]. Given the diversity and varying severity of PDDs, diagnostic tools attempt to identify homogeneous subtypes within PDDs. The diagnostic system Diagnostic and Statistical Manual of Mental Disorders – Fourth Edition (DSM-IV) divides PDDs into five subtypes, but several limitations have been identified with its categorical diagnostic criteria. The goal of this study is to identify putative subtypes in multidimensional data collected from a group of patients with PDDs by using cluster analysis. Cluster analysis is an unsupervised machine learning method; it offers a way to partition a dataset into subsets that share common patterns. We apply cluster analysis to data collected from 358 children with PDDs and validate the resulting clusters. Notably, there are many cluster analysis algorithms to choose from, each making certain assumptions about the data and about how clusters should be formed. A way to arrive at a meaningful solution is to use consensus clustering, which integrates the results of several clustering attempts (a cluster ensemble) into a unified consensus answer and can provide robust and accurate results [TJPA05]. In this study, using cluster analysis, cluster validation, and consensus clustering, we identify four clusters that are similar to – and further refine – three of the five subtypes defined in the DSM-IV. This study thus confirms the existence of these three subtypes among patients with PDDs. / Thesis (Master, Computing) -- Queen's University, 2007 / OGS, QGA
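A minimal sketch of the consensus idea, assuming scikit-learn and k-means as a stand-in base clusterer (the study's ensemble and validation procedure are more elaborate):

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.4, size=(50, 2)) for c in (0.0, 2.5, 5.0)])

# Build a co-association matrix: the fraction of ensemble runs in which
# each pair of points lands in the same cluster.
n, runs = len(X), 20
coassoc = np.zeros((n, n))
for seed in range(runs):
    labels = KMeans(n_clusters=3, n_init=5, random_state=seed).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= runs

# Re-cluster the consensus: treat 1 - coassoc as a distance matrix
# (scikit-learn >= 1.2 uses `metric`; older versions call it `affinity`).
consensus = AgglomerativeClustering(
    n_clusters=3, metric="precomputed", linkage="average"
).fit_predict(1.0 - coassoc)
print(np.bincount(consensus))   # sizes of the consensus clusters
```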
120

Clinical validation of the Walking Impairment Questionnaire in patients with peripheral arterial disease: defining high and low walking performance values

Sagar, Stephen Peter 25 August 2011
Objective: The validity of the Walking Impairment Questionnaire (WIQ) as a clinical tool for use by clinicians in the conservative management of patients with peripheral arterial disease (PAD) has not been well established. The objective of this study was to determine the validity of the WIQ as a tool to identify high and low walking ability (performance) in patients with PAD.

Methods: We conducted a cross-sectional study and enrolled 132 new and existing PAD patients who consecutively attended the vascular clinic at Kingston General Hospital between May 2010 and May 2011. Patients with an Ankle Brachial Index (ABI) ≤0.9 were approached for study inclusion. Participants were excluded if they had (a) severe ischemia requiring intervention; (b) comorbid conditions that limited walking (angina, congestive heart failure, chronic obstructive pulmonary disease or severe arthritis); (c) a wheelchair, cane or walker requirement; (d) non-compressible arteries; and/or (e) severe cognitive impairment. Walking performance was assessed with the Walking Impairment Questionnaire (surrogate measure) and a standardized graded treadmill test (gold-standard measure). Other study variables were obtained via questionnaire (age, sex, comorbid conditions and smoking status) or direct measurement (weight, height, waist circumference).

Results: 123 patients completed the treadmill test (70.7% male; mean age 66.5 years; mean ABI 0.6, range 0-0.9). Scores on the WIQ ranged from 0 to 100, and absolute claudication distance (ACD) ranged from 0.03 to 0.98 miles. All WIQ subscale and overall scores were positively and moderately associated with the ACD (r values 0.63 to 0.68, p<0.05). Based on the area under the curve (AUC) of the receiver operating characteristic (ROC) analysis, an overall WIQ score of 42.5 or less identified low performers (sensitivity 0.9, specificity 0.7, AUC 0.89), while a combined distance and stair score of 75.5 or more identified high performers (sensitivity 0.4, specificity 0.9, AUC 0.81).

Conclusions: Based on these findings, the WIQ, an easily administered self-report questionnaire, together with the cutoffs identified, could be used to quantify and classify walking ability in PAD patients, making it a potentially useful tool for clinicians managing PAD patients. / Thesis (Master, Community Health & Epidemiology) -- Queen's University, 2011
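The cutoff-selection step can be sketched with a standard ROC analysis on simulated scores (the study derived its actual cutoffs from treadmill-defined performance groups):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
# Simulated WIQ-like scores: low performers (label 1) tend to score lower.
low = rng.normal(30, 12, 60)    # hypothetical low-performance group
high = rng.normal(60, 15, 60)   # hypothetical high-performance group
scores = np.concatenate([low, high])
is_low = np.concatenate([np.ones(60), np.zeros(60)])

# ROC for "score <= cutoff predicts low performer": negate the score so
# that larger values indicate the positive (low-performing) class.
fpr, tpr, thresholds = roc_curve(is_low, -scores)
best = np.argmax(tpr - fpr)                   # Youden's J statistic
print(f"AUC = {roc_auc_score(is_low, -scores):.2f}, "
      f"cutoff = {-thresholds[best]:.1f}")
```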
