1

Analysis and Optimization for Testing Using IEEE P1687

Ghani Zadegan, Farrokh January 2010 (has links)
The IEEE P1687 (IJTAG) standard proposal aims at providing a standardized interface between on-chip embedded test, debug and monitoring logic (instruments), such as scan chains and temperature sensors, and the Test Access Port of IEEE Standard 1149.1, which is mainly used for board test. A key feature of P1687 is the inclusion of Segment Insertion Bits (SIBs) in the scan path. SIBs make it possible to construct a multitude of different P1687 networks for the same set of instruments and provide flexibility in test scheduling. The work presented in this thesis consists of two parts. The first part analyzes test application time for P1687 networks under two test schedule types, namely concurrent and sequential scheduling, and presents formulas and novel algorithms for computing the test time of a given P1687 network under a given schedule type. The algorithms are implemented and employed in extensive experiments on realistic industrial designs. The second part studies the design of IEEE P1687 networks. Designing the P1687 network that yields the least test application time for a given set of instruments is a time-consuming task in the absence of automatic design tools. This thesis therefore presents novel algorithms for automated design of P1687 networks that are optimized with respect to test application time and the required number of SIBs. The algorithms are implemented and demonstrated in experiments on industrial SOCs.
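As a rough illustration of how schedule type affects test application time in a SIB-based network, the sketch below estimates shift-cycle counts for a flat network in which every instrument sits behind its own SIB. The cycle accounting and the instrument parameters are simplifying assumptions for illustration only; the thesis's formulas model the actual P1687 access protocol.

```python
# Illustrative sketch (not the thesis's exact formulas): estimate shift cycles
# needed to apply tests through a flat P1687 network in which each instrument
# is placed behind its own SIB on a single scan path.

def flat_network_test_time(instruments, concurrent=True):
    """instruments: list of (register_length, patterns) tuples.
    Returns an approximate cycle count under a simplified model that
    counts only shift cycles (capture/update overhead is ignored)."""
    n_sibs = len(instruments)
    if concurrent:
        # All SIBs open: every scan cycle shifts through all SIB bits plus
        # every instrument's shift register; repeat for the longest test.
        scan_len = n_sibs + sum(length for length, _ in instruments)
        max_patterns = max(p for _, p in instruments)
        return scan_len * max_patterns
    else:
        # Sequential: open one SIB at a time; each closed SIB adds one bit.
        total = 0
        for length, patterns in instruments:
            scan_len = n_sibs + length  # one open instrument, others bypassed
            total += scan_len * patterns
        return total

# Hypothetical instruments: (register length in bits, number of test patterns)
instruments = [(64, 100), (16, 10), (512, 1000)]
print(flat_network_test_time(instruments, concurrent=True))
print(flat_network_test_time(instruments, concurrent=False))
```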
2

Staff Prediction Analysis : Effort Estimation In System Test

Vukovic, Divna, Wester, Cecilia January 2001 (has links)
This master's thesis was carried out in 2001 at Blekinge Institute of Technology and Symbian, a software company in Ronneby, Sweden. The purpose of the thesis is to find a suitable prediction and estimation model for the test effort. To do this, we studied the state of the art in cost/effort estimation and fault prediction. The conclusion of this thesis is that it is hard to make a general proposal that is applicable to all organisations. For Symbian we have proposed a model based on use and test cases to predict the test effort.
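The abstract does not state the form of the proposed model; purely as an illustration of estimating test effort from historical test-case counts, a minimal least-squares sketch with hypothetical data might look like this:

```python
# Hedged sketch: fit a simple linear model that predicts system-test effort
# (person-hours) from the number of test cases, using hypothetical history.
import numpy as np

# Hypothetical history: (number of test cases, actual effort in person-hours)
history = np.array([[120, 300], [80, 210], [200, 520], [150, 380]], dtype=float)
test_cases, effort = history[:, 0], history[:, 1]

slope, intercept = np.polyfit(test_cases, effort, deg=1)  # least-squares line

def predict_effort(n_test_cases: float) -> float:
    return slope * n_test_cases + intercept

print(f"Predicted effort for 170 test cases: {predict_effort(170):.0f} person-hours")
```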
3

Protocols of DPOAE Measurements Aimed at Reducing Test Time

Smurzynski, Jacek, Janssen, Thomas 05 March 2015 (has links)
Routine clinical distortion product otoacoustic emission (DPOAE) tests use monaural sequential presentation of primary tones. To reduce testing time, multiple DPOAEs (mDPOAEs) can be measured by stimulating the ear with two tone pairs simultaneously. Moreover, both ears can be tested at the same time with a portable device, Sentiero (PATH medical GmbH), equipped with two sound probes. The purpose of the study was to evaluate whether mDPOAE measurements can be done in both ears simultaneously without mutual influence of primary tone pairs in the ipsilateral and the contralateral ear. Data were collected in 20 normal-hearing young adults. The DP-grams were obtained for seven f2 frequencies varying in the 1.5-8 kHz range with the level L2 set at 65 and 45 dB SPL, whereas the level L1 was adjusted according to the scissor paradigm. For each subject, a set of DP-gram data was collected using single- and multi-frequency presentations of the primaries for both monaural and binaural conditions. The mean DPOAE and noise levels collected with mDPOAE and binaural presentation conditions were highly reproducible when compared to those obtained with the single-frequency monaural paradigm. Thus, multi-frequency and binaural measurements could be applied to reduce DPOAE testing time considerably.
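The "scissor paradigm" mentioned above refers to level-dependent primary settings; a commonly cited formulation (Kummer et al.) is L1 = 0.4 * L2 + 39 dB SPL, though the exact rule used in this study may differ. A minimal sketch:

```python
# Sketch of the commonly cited "scissor" primary-level rule for DPOAE recording,
# L1 = 0.4 * L2 + 39 dB SPL (Kummer et al.); the study may use a variant, so
# treat these constants as assumptions rather than the paper's exact settings.

def scissor_l1(l2_dB: float) -> float:
    """Return the L1 primary level for a given L2 under the scissor rule."""
    return 0.4 * l2_dB + 39.0

for l2 in (65, 45):  # the two L2 levels used for the DP-grams
    print(f"L2 = {l2} dB SPL -> L1 = {scissor_l1(l2):.0f} dB SPL")
```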
4

Test Scheduling with Power and Resource Constraints for IEEE P1687

Asani, Golnaz January 2012 (has links)
IEEE P1687 (IJTAG) is proposed to add more flexibility, compared with IEEE 1149.1 JTAG, for accessing on-chip embedded test features called instruments. This flexibility makes it possible to include and exclude instruments from the scan path. To reach a minimal test time, all instruments should be accessed concurrently. However, constraints such as power and resource constraints might limit concurrency. There is therefore a need to consider power and resource constraints while developing the test schedule. This thesis consists of two parts. In the first part, three test time calculation approaches are proposed, namely session-based test scheduling with a fixed scan path, session-based test scheduling with a reconfigurable scan path, and session-less test scheduling with a reconfigurable scan path. In the second part, three test scheduling approaches are studied, namely session-based test scheduling, optimized session-based test scheduling, and optimized session-less test scheduling, and an algorithm is presented for each of them. Experiments are carried out using the test scheduling approaches, and the results show that optimized session-less test scheduling can significantly reduce the test time compared with session-based test scheduling.
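To make the session-based idea concrete, the sketch below greedily packs instruments into test sessions under a power budget; the instrument data, the greedy heuristic, and the cost model are illustrative assumptions, not the algorithms proposed in the thesis.

```python
# Illustrative greedy packing of instruments into test sessions under a power
# budget; a sketch of the session-based idea only, not the thesis's algorithms.
# Assumes each instrument's power fits within the budget on its own.

def session_based_schedule(instruments, power_budget):
    """instruments: list of (name, test_time, power).
    Returns a list of sessions; each session's total power respects the budget
    and a session's length is its longest instrument test."""
    remaining = sorted(instruments, key=lambda x: x[1], reverse=True)  # longest first
    sessions = []
    while remaining:
        session, power_used = [], 0.0
        for inst in remaining[:]:
            name, t, p = inst
            if power_used + p <= power_budget:
                session.append(inst)
                power_used += p
                remaining.remove(inst)
        sessions.append(session)
    return sessions

# Hypothetical instruments: (name, test time in cycles, power in mW)
instruments = [("bist1", 900, 40), ("sensor", 50, 5), ("bist2", 700, 35), ("mbist", 400, 30)]
for i, s in enumerate(session_based_schedule(instruments, power_budget=70)):
    length = max(t for _, t, _ in s)
    print(f"session {i}: {[n for n, _, _ in s]}, length = {length}")
```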
5

Uncertainty Estimation in Volumetric Image Segmentation

Park, Donggyun January 2023 (has links)
The performance of deep neural networks and the estimation of their robustness have developed rapidly. In contrast, despite the broad use of deep convolutional neural networks (CNNs) [1] for medical image segmentation, far less research has been conducted on their uncertainty estimation. Deep learning tools do not by nature capture model uncertainty, and in this sense the output of deep neural networks needs to be critically analysed with quantitative measurements, especially for applications in the medical domain. In this work, epistemic uncertainty, one of the two main types of uncertainty (epistemic and aleatoric), is analyzed and measured for volumetric medical image segmentation tasks (and possibly more diverse methods for 2D images) at the pixel level and the structure level. The deep neural network employed as a baseline is the 3D U-Net architecture [2], which shares its essential structural concept with the U-Net architecture [3], and various techniques are applied to quantify the uncertainty and obtain statistically meaningful results, including test-time data augmentation and deep ensembles. The distribution of the pixel-wise predictions is estimated by Monte Carlo simulations, and the entropy is computed to quantify and visualize how uncertain (or certain) the prediction for each pixel is. Given the increased network training time in volumetric image segmentation, training an ensemble of networks is extremely time-consuming, and thus the focus is on data augmentation and test-time dropout. The desired outcome is to reduce the computational cost of measuring the uncertainty of the model predictions while maintaining the same level of estimation performance, and to increase the reliability of the uncertainty estimation map compared to conventional methods. The proposed techniques are evaluated on a publicly available volumetric image dataset, Combined Healthy Abdominal Organ Segmentation (CHAOS, a set of 3D in-vivo images) from Grand Challenge (https://chaos.grand-challenge.org/). Experiments with the liver segmentation task in 3D Computed Tomography (CT) show the relationship between the prediction accuracy and the uncertainty map obtained by the proposed techniques. / 
The performance of deep neural networks and estimates of their robustness have developed rapidly. In contrast, despite the broad use of deep convolutional neural networks (CNNs) for medical image segmentation, less research is conducted on their uncertainty estimation. Deep learning tools do not capture model uncertainty, and the output of deep neural networks must therefore be analysed critically with quantitative measurements, especially for applications in the medical domain. In this work, epistemic uncertainty, one of the main types of uncertainty (epistemic and aleatoric), is analyzed and measured for volumetric medical image segmentation tasks (and possibly more diverse methods for 2D images) at the pixel level and the structure level. The deep neural network used as a baseline is a 3D U-Net architecture [2], and various techniques are applied to quantify the uncertainty and obtain statistically meaningful results, including test-time data augmentation and deep ensembles. The distribution of the pixel-wise predictions is estimated by Monte Carlo simulations, and the entropy is computed to quantify and visualize how uncertain (or certain) the predictions for each pixel are. Given the increased network training time in volumetric image segmentation, training an ensemble of networks is extremely time-consuming, and the focus is therefore on data augmentation and test-time dropout. The desired outcome is to reduce the computational cost of measuring the uncertainty of the model predictions while maintaining the same level of estimation performance and increasing the reliability of the uncertainty estimation map compared to conventional methods. The proposed techniques will be evaluated on publicly available volumetric image datasets, Combined Healthy Abdominal Organ Segmentation (CHAOS, a set of 3D in-vivo images) from Grand Challenge (https://chaos.grand-challenge.org/). Experiments with the liver segmentation task in 3D Computed Tomography (CT) show the relationship between the prediction accuracy and the uncertainty map obtained with the proposed techniques.
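As an illustration of the pixel-wise entropy measure described above, a minimal NumPy sketch (the array shapes and the number of Monte Carlo passes are assumptions, not the thesis's implementation):

```python
# Minimal sketch (assumed interface, not the thesis code): estimate pixel-wise
# predictive entropy from T Monte Carlo forward passes, e.g. obtained via
# test-time augmentation or test-time dropout, for a volumetric segmentation.
import numpy as np

def predictive_entropy(mc_probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """mc_probs: array of shape (T, C, D, H, W) with per-pass class probabilities.
    Returns an entropy map of shape (D, H, W); higher values mean more uncertain voxels."""
    mean_probs = mc_probs.mean(axis=0)                        # (C, D, H, W)
    return -(mean_probs * np.log(mean_probs + eps)).sum(axis=0)

# Toy example: 8 stochastic passes, 2 classes, a 4x8x8 volume
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 2, 4, 8, 8))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax over classes
entropy_map = predictive_entropy(probs)
print(entropy_map.shape, float(entropy_map.max()))
```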
6

Graded Exercise Stress Testing: Treadmill Protocols Comparison Of Peak Exercise Times In Cardiac Patients

Salameh, Ahlam 05 October 2009 (has links)
No description available.
7

Wall Heat Transfer Effects In The Endwall Region Behind A Reflected Shock Wave At Long Test Times

Frazier, Corey 01 January 2007 (has links)
Shock-tube experiments are typically performed at high temperatures (≥1200 K) because of test-time constraints. These test times are usually ~1 ms in duration, and the source of this short test-time constraint is temperature loss due to heat transfer. At short test times, there is very little appreciable heat transfer between the hot gas and the cold walls of the shock tube, and a high test temperature can be maintained. However, some experiments use lower temperatures (approx. 800 K) to achieve ignition and require much longer test times (up to 15 ms) to fully study the chemical kinetics and combustion chemistry of a reaction in a shock-tube experiment. Using mathematical models, an analysis was performed of the effects of temperature, pressure, shock-tube inner diameter, and test-port location at various test times (from 1 to 20 ms) on temperature maintenance. Three models, each more complex than the previous, were used to simulate test conditions in the endwall region behind the reflected shock wave with Ar and N2 as bath gases. Temperature profiles, thermal boundary-layer thickness, and other parametric results are presented herein. It was observed that higher temperatures and lower pressures contributed to a thicker thermal boundary layer, as did a smaller inner diameter. It was thus found that a test case such as 800 K and 50 atm in a 16.2-cm-diameter shock tube in argon maintained thermal integrity much better than other cases, as evidenced by a thermal boundary layer ≤ 1 mm thick and an average temperature ≥ 799.9 K from 1 to 20 ms.
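As a rough, back-of-envelope cross-check of the boundary-layer scale quoted above (not the thesis's models), one can use the one-dimensional conduction scaling delta ~ sqrt(pi * alpha * t) with assumed argon properties at 800 K; all property values in the sketch are approximations.

```python
# Back-of-envelope check (not the thesis's models): thermal boundary-layer growth
# at the endwall from the 1-D conduction scaling delta ~ sqrt(pi * alpha * t).
# Gas properties below are rough assumed values for argon near 800 K.
import math

k = 0.040          # W/(m K), assumed thermal conductivity of argon near 800 K
cp = 520.0         # J/(kg K), argon specific heat (monatomic, roughly constant)
M = 0.039948       # kg/mol, molar mass of argon
R = 8.314          # J/(mol K)
T, P = 800.0, 50 * 101325.0   # the 800 K, 50 atm case from the abstract

rho = P * M / (R * T)         # ideal-gas density, kg/m^3
alpha = k / (rho * cp)        # thermal diffusivity, m^2/s

for t_ms in (1, 5, 10, 20):
    delta = math.sqrt(math.pi * alpha * t_ms * 1e-3)
    print(f"t = {t_ms:2d} ms -> boundary layer ~ {delta*1e3:.2f} mm")
```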
8

Improving software testing speed : using combinatorics

Mwanje, Sami January 2023 (has links)
Embedded systems hold immense potential, but their integration into advanced devices comes with significant costs. Malfunctions in these systems can result in equipment failures, posing serious risks and potential accidents. To ensure their proper functionality, embedded system components undergo rigorous testing phases, which can be time-consuming, especially for components with numerous connections. Therefore, it is crucial to reduce test time while maintaining high-quality testing to detect and address failures early in the development cycle, resulting in improved and safer products. This report delves into various techniques and algorithms aimed at expediting testing processes, such as machine learning, risk analysis, test parallelization, and combinatorial testing. It examines the practicality of mathematical models and automated approaches in real-world companies through experimentation and implementation. In essence, the report tackles the challenges involved in testing embedded systems, explores different approaches to reduce test time, and presents a suitable model for maintaining test quality. The ultimate goal is to present and implement a method that effectively reduces test time while upholding an acceptable level of test quality. The obtained results provide valuable insights for future test groups and researchers seeking to optimize their testing processes and deliver safer products.
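To illustrate why combinatorial testing can shrink a test suite, the sketch below greedily builds a pairwise (2-way) covering suite for a hypothetical parameter model; the parameters, values, and greedy strategy are assumptions for illustration, not the report's tool or data.

```python
# Sketch of greedy pairwise (2-way) combinatorial test generation: pick the
# candidate test that covers the most uncovered parameter-value pairs until
# every pair is covered. Parameters and values below are hypothetical.
from itertools import combinations, product

params = {
    "voltage": ["low", "nominal", "high"],
    "baud":    [9600, 115200],
    "mode":    ["boot", "run", "sleep"],
    "sensor":  ["on", "off"],
}
names = list(params)

# All parameter-value pairs that a 2-way suite must cover.
uncovered = {((a, va), (b, vb))
             for a, b in combinations(names, 2)
             for va in params[a] for vb in params[b]}

def pairs_of(test):
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

suite = []
candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
while uncovered:
    best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"exhaustive: {len(candidates)} tests, pairwise: {len(suite)} tests")
```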
9

Deep Learning Approaches to Low-level Vision Problems

Liu, Huan January 2022 (has links)
Recent years have witnessed tremendous success in using deep learning approaches to handle low-level vision problems. Most deep learning based methods address a low-level vision problem by training a neural network to approximate the mapping from the inputs to the desired ground truths. However, directly learning this mapping is usually difficult and cannot achieve ideal performance. Moreover, under the unsupervised learning setting, this general approach cannot be used. In this thesis, we investigate and address several problems in low-level vision using the proposed approaches. To learn a better mapping using the existing data, an indirect domain shift mechanism is proposed to add explicit constraints inside the neural network for single image dehazing. This allows the neural network to be optimized across several identified neighbours, resulting in better performance. Beyond learning an improved mapping from inputs to targets, three problems in unsupervised learning are also investigated. For unsupervised monocular depth estimation, a teacher-student network is introduced to strategically integrate the benefits of both supervised and unsupervised learning. The teacher network is formed by learning under the binocular depth estimation setting, and the student network is constructed as the primary network for monocular depth estimation. Observing that the performance of the teacher network is far better than that of the student network, a knowledge distillation approach is proposed to help improve the mapping learned by the student. For single image dehazing, the current network cannot handle different types of haze patterns because it is trained on a particular dataset. The problem is formulated as a multi-domain dehazing problem. To address this issue, a test-time training approach is proposed that leverages a helper network to assist the dehazing network in adapting to a particular domain using self-supervision. In a lossy compression system, the target distribution can differ from that of the source, and ground truths are not available for reference. The objective is thus to transform the source to the target under a rate constraint, which generalizes optimal transport. To address this problem, theoretical analyses of the trade-off between compression rate and minimal achievable distortion are carried out for the cases with and without common randomness. A deep learning approach is also developed using our theoretical results to address super-resolution and denoising tasks. Extensive experiments and analyses have been conducted to demonstrate the effectiveness of the proposed deep learning based methods in handling problems in low-level vision. / Thesis / Doctor of Philosophy (PhD)
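As a sketch of the knowledge distillation idea for monocular depth described above (the networks, loss weighting, and the stand-in self-supervised loss are assumptions, not the thesis's implementation):

```python
# Hedged sketch of teacher-student distillation for monocular depth: the student
# matches both a self-supervised objective and the frozen teacher's predictions.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, image, self_sup_loss_fn, optimizer, lam=0.5):
    teacher.eval()
    with torch.no_grad():
        teacher_depth = teacher(image)          # teacher trained with binocular cues
    student_depth = student(image)
    loss = (self_sup_loss_fn(student_depth, image)
            + lam * F.l1_loss(student_depth, teacher_depth))  # distillation term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with placeholder convolutional "depth networks" and a stand-in loss
student = torch.nn.Conv2d(3, 1, 3, padding=1)
teacher = torch.nn.Conv2d(3, 1, 3, padding=1)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
img = torch.rand(2, 3, 64, 64)
photometric = lambda depth, image: depth.abs().mean()   # stand-in self-supervised loss
print(distillation_step(student, teacher, img, photometric, opt))
```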
10

Label-Efficient Visual Understanding with Consistency Constraints

Zou, Yuliang 24 May 2022 (has links)
Modern deep neural networks are proficient at solving various visual recognition and understanding tasks, as long as a sufficiently large labeled dataset is available during training. However, progress on these visual tasks is limited by the number of manual annotations. On the other hand, annotating visual data is usually time-consuming and error-prone, which makes scaling up human labeling for many visual tasks challenging. Fortunately, it is easy to collect large-scale, diverse unlabeled visual data from the Internet, and a large amount of synthetic visual data with annotations can be acquired effortlessly from game engines. In this dissertation, we explore how to utilize unlabeled data and synthetic labeled data for various visual tasks, aiming to replace or reduce the direct supervision from manual annotations. The key idea is to encourage deep neural networks to produce consistent predictions across different transformations (e.g., geometric, temporal, photometric). We organize the dissertation as follows. In Part I, we propose to use consistency over different geometric formulations and a cycle consistency over time to tackle low-level scene geometry perception tasks in a self-supervised learning setting. In Part II, we tackle high-level semantic understanding tasks in a semi-supervised learning setting, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem. By encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly-augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains with one single forward pass, without model training or optimization at inference time. / Doctor of Philosophy / Recently, deep learning has emerged as one of the most powerful tools for solving various visual understanding tasks. However, the development of deep learning methods is significantly limited by the amount of manually labeled data. On the other hand, annotating visual data is usually time-consuming and error-prone, making the human labeling process not easily scalable. Fortunately, it is easy to collect large-scale, diverse raw visual data from the Internet (e.g., search engines, YouTube, Instagram), and a large amount of synthetic visual data with annotations can be acquired effortlessly from game engines. In this dissertation, we explore how we can utilize raw visual data and synthetic data for various visual tasks, aiming to replace or reduce the direct supervision from manual annotations. The key idea behind this is to encourage deep neural networks to produce consistent predictions of the same visual input across different transformations (e.g., geometric, temporal, photometric). We organize the dissertation as follows. In Part I, we propose using consistency over different geometric formulations and a forward-backward cycle consistency over time to tackle low-level scene geometry perception tasks, using unlabeled visual data only. In Part II, we tackle high-level semantic understanding tasks using both a small amount of labeled data and a large amount of unlabeled data jointly, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem.
By encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly-augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains.
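As an illustration of the augmentation-consistency constraint used in Part II (a minimal sketch with an assumed model, augmentation, and loss weighting, not the dissertation's code):

```python
# Hedged sketch of a semi-supervised objective: a supervised loss on labeled data
# plus a consistency loss that makes predictions for two augmentations of the
# same unlabeled input agree. All components below are illustrative assumptions.
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, labeled_x, labels, unlabeled_x, augment, lam=1.0):
    sup = F.cross_entropy(model(labeled_x), labels)
    view1, view2 = augment(unlabeled_x), augment(unlabeled_x)   # two random views
    p1 = F.log_softmax(model(view1), dim=1)
    with torch.no_grad():                                       # stop-gradient target
        p2 = F.softmax(model(view2), dim=1)
    consistency = F.kl_div(p1, p2, reduction="batchmean")
    return sup + lam * consistency

# Toy usage with a linear classifier and additive-noise augmentation
model = torch.nn.Linear(16, 4)
augment = lambda x: x + 0.1 * torch.randn_like(x)
lx, ly, ux = torch.randn(8, 16), torch.randint(0, 4, (8,)), torch.randn(32, 16)
print(semi_supervised_loss(model, lx, ly, ux, augment).item())
```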
