11

DATA REDUCTION AND PROCESSING SYSTEM FOR FLIGHT TEST OF NEXT GENERATION BOEING AIRPLANES

Cardinal, Robert W. October 1993
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper describes the recently developed Loral Instrumentation ground-based equipment used to select and process post-flight test data from the Boeing 777 airplane as it is played back from a digital tape recorder (e.g., the Ampex DCRSi II) at very high speeds. Gigabytes (GB) of data, stored on recorder cassettes in the Boeing 777 during flight testing, are played back on the ground at a 15-30 MB/sec rate into ten multiplexed Loral Instrumentation System 500 Model 550s for high-speed decoding, processing, time correlation, and subsequent storage or distribution. The ten Loral 550s are multiplexed for independent data path processing from ten separate tape sources simultaneously. This system features a parallel multiplexed configuration that allows Boeing to perform critical 777 flight test processing at unprecedented speeds. Boeing calls this system the Parallel Multiplexed Processing Data (PMPD) System. The key advantage of the ground station's design is that Boeing engineers can add their own application-specific control and setup software. The Loral 550 VMEbus allows Boeing to add VME modules when needed, ensuring system growth with the addition of other LI-developed products, Boeing-developed products, or purchased VME modules. With hundreds of third-party VME modules available, system expansion is virtually unlimited. The system as delivered can input data at 15 MB/sec per tape source, for a present aggregate throughput of 150 MB/sec across all ten 24-bit Decoders. The 24-bit Decoder was designed to support the 30 MB/sec DCRSi III, so the system can eventually support a total aggregate throughput of 300 MB/sec. Clearly, such high-speed data selection, rejection, and processing will significantly accelerate flight certification and production testing of today's state-of-the-art aircraft. The system was supplied with low-level software interfaces so that the customer could develop its own application-specific code and displays. The Loral 550 lends itself to this kind of application thanks to its VME chassis, its VxWorks operating system, and the modularity of its software.
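The throughput figures quoted above follow from simple multiplication over the parallel decoders; a minimal sketch, using only the rates given in the abstract (function and constant names are illustrative):

```python
# Minimal sketch of the aggregate-throughput arithmetic quoted in the
# abstract (names are illustrative, not from the PMPD system software).
DECODERS = 10  # ten multiplexed Loral 550 decoders, one tape source each

def aggregate_mb_per_sec(per_tape_rate_mb, decoders=DECODERS):
    """Throughput scales linearly with the number of parallel decoders."""
    return per_tape_rate_mb * decoders

print(aggregate_mb_per_sec(15))  # DCRSi II playback: 150 MB/sec aggregate
print(aggregate_mb_per_sec(30))  # DCRSi III playback: 300 MB/sec aggregate
```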
12

Empirical study on strategy for Regression Testing

Hsu, Pai-Hung 03 August 2006
Software testing plays a necessary role in software development and maintenance, and is performed to support quality assurance. It is very common for test engineers to design a number of test suites for their programs manually, yet designing test data manually is an expensive and labor-intensive process. For this reason, generating software test data automatically has become a popular research topic, and most studies use meta-heuristic search methods such as genetic algorithms or simulated annealing to obtain the test data. In most circumstances, test engineers generate a test suite when they first receive a new program; when they later debug or change some of its code, they design yet another new test suite to test it, and hardly anyone keeps the original test data for reuse. In this research, we examine whether it is useful to store and reuse the original test data.
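As a sketch of the reuse question the thesis raises (not code from the thesis; all names and the example defect are illustrative), stored test inputs can be replayed against two versions of a program to flag behavioural changes:

```python
# Replay a stored test suite against old and new program versions and
# report the inputs whose output changed (candidate regressions).
def replay_stored_tests(old_fn, new_fn, stored_inputs):
    """Return inputs whose output differs between program versions."""
    regressions = []
    for args in stored_inputs:
        if old_fn(*args) != new_fn(*args):
            regressions.append(args)
    return regressions

# Hypothetical change: floor division replaced by rounding division.
old = lambda a, b: a // b
new = lambda a, b: round(a / b)
print(replay_stored_tests(old, new, [(7, 2), (8, 2), (9, 3)]))  # [(7, 2)]
```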
13

Multiple Scan Trees Synthesis for Test Time/Data and Routing Length Reduction under Output Constraint

Hung, Yu-Chen 29 July 2009
A synthesis methodology for multiple scan trees that simultaneously considers the output pin limitation, scan chain routing length, test application time, and test data compression rate is proposed in this thesis. Multiple scan trees, also known as a scan forest, greatly reduce test data volume and test application time in SOC testing. However, previous research on scan tree synthesis rarely considered issues such as routing length and output port limitation, and hence created scan trees with a large number of scan output ports and excessively long routing paths. The proposed algorithm provides a mechanism that effectively reduces test time, test data volume, and routing length under the output port constraint. As a result, no output compressors are required, which significantly reduces the hardware overhead.
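For illustration, the hedged sketch below shows the core idea that lets scan trees compress test data: scan cells whose care bits never conflict across the pattern set can share one tree branch driven by a single broadcast bit. This is not the thesis's synthesis algorithm, which additionally weighs routing length and output ports:

```python
# Greedy grouping of compatible scan cells into shared tree branches.
def compatible(col_a, col_b):
    """Per-pattern value columns for two scan cells; 'x' is a don't-care."""
    return all(a == b or 'x' in (a, b) for a, b in zip(col_a, col_b))

def group_cells(columns):
    """Merge each cell into the first branch it is compatible with."""
    groups = []
    for idx, col in enumerate(columns):
        for group in groups:
            if all(compatible(col, columns[j]) for j in group):
                group.append(idx)
                break
        else:
            groups.append([idx])
    return groups

# Four scan cells, three test patterns (patterns transposed into columns):
cells = [['1', '0', 'x'], ['1', 'x', '1'], ['0', '0', 'x'], ['x', '0', '1']]
print(group_cells(cells))  # [[0, 1, 3], [2]] -> two branches instead of four
```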
14

Efficient verification/testing of system-on-chip through fault grading and analog behavioral modeling

Jeong, Jae Hoon 10 February 2014
This dissertation presents several cost-effective production test solutions using fault grading, together with mixed-signal design verification cases enabled by analog behavioral modeling. Although the latest System-on-Chip (SOC) is getting denser, faster, and more complex, manufacturing is increasingly dominated by the subtle defects introduced by small-scale technology, so SOCs require more mature testing strategies. By performing various types of testing, better quality SOCs can be manufactured, but test resources are too limited to accommodate all those tests. To create the most efficient production test flow, any redundant or ineffective tests need to be removed or minimized. Chapter 3 proposes a new method of test data volume reduction that combines the nonlinear property of the feedback shift register (FSR) with dictionary coding. Instead of using the nonlinear FSR for actual hardware implementation, the test set expanded by nonlinear expansion is used as the one-column test set, providing a large reduction ratio for the test data volume. The experimental results show that the combined method reduced the total test data volume and increased the fault coverage, although total test time increased due to the larger number of test patterns. Chapter 4 addresses the whole process of functional fault grading. Fault grading has always been a "desire-to-have" flow because it can bring significant value for cost saving and yield analysis; however, it is very hard to perform on a complex large-scale SOC. A commercial tool called Z01X is used as the fault grading platform, the whole fault grading process is coordinated, and each detailed step is executed. Simulation-based functional fault grading identifies the quality of the given functional tests against static faults and transition delay faults. With the structural and functional tests together, functional fault grading can indicate how to achieve the same test coverage while spending minimal test time. Compared to the time and resources it consumes, the contribution of fault grading to test time saving may not seem very promising, but the fault grading data can be reused for yield analysis and test flow optimization, and confident decisions on functional test selection for final production testing can be made based on the results. Chapter 5 addresses the challenges of Package-on-Package (POP) testing. Because POP devices have pins on both the top and the bottom of the package, the increased pin count requires more test channels to detect packaging defects. Boundary scan chain testing is used to detect continuity defects by relying on leakage current from the power supply; the proposed test scheme does not require direct test channels on the top pins. Based on a counting algorithm, a minimal number of test cycles is generated, and the test achieves full coverage for any combination of pin-to-pin short defects on the top pins of the POP package. The experimental results show roughly 10 times higher leakage current from a shorted defect, and the scheme can be expanded to multi-site testing with fewer test channels for high-volume production. Fault grading is applied within different structural test categories in Chapter 6. Stuck-at faults can be considered TDFs having infinite delay; hence, the TDF Automatic Test Pattern Generation (ATPG) tests can detect both TDFs and stuck-at faults. By removing the stuck-at faults already detected by the given TDF ATPG tests, the tests that target stuck-at faults can be reduced, and the reduced stuck-at fault set results in fewer stuck-at ATPG patterns. Structural test time is reduced while keeping the same test coverage. This TDF grading is performed with the same ATPG tool used to generate the stuck-at and TDF ATPG tests. To expedite mixed-signal design verification of a complex SOC, analog behavioral modeling methods and strategies are addressed in Chapter 7, and case studies of detailed verification with actual mixed-signal designs are addressed in Chapter 8. Analog modeling effort can enhance verification quality for a mixed-signal design with less turnaround time, and it enables compatible integration of mixed-signal design cores into the SOC. The modeling process may reveal potential design errors or incorrect testbench setup, minimizing unnecessary debugging time. I verified two mixed-signal design cases using the analog models. A fully hierarchical digital-to-analog converter (DAC) model was implemented; silicon mismatches caused by process variation were modeled and inserted into the DAC model, and the calibration algorithm for the DAC was successfully verified by model-based simulation at the full DAC level. When the mismatch amount was increased beyond the calibration capability of the DAC, the simulation results showed increased calibration error with some outliers. This verification method can identify the saturation range of the DAC and predict the yield of devices under process variation. A phase-locked loop (PLL) design case was also verified using the analog model, covering both an open-loop and a closed-loop PLL model. Quick bring-up of the open-loop PLL model provides low simulation overhead for the widely used PLLs in the SOC and enables early design verification of the upper-level logic that uses the PLL-generated clocks. An accurate closed-loop PLL model was implemented for a DCO-based PLL design, and mixed simulation with analog models and schematic designs enables flexible analog verification: only the analog block under focus is kept as a schematic design while the rest of the analog design is replaced by the analog model. This scaled-down SPICE simulation runs about 10 to 100 times faster than a full-scale SPICE simulation. The analog model of the focused block is compared against the scaled-down SPICE simulation result, and the quality of the model is iteratively enhanced. Hence, the analog model enables both compatible integration and flexible analog design verification. This dissertation contributes to reducing test time and enhancing test quality, and helps to set up efficient production testing flows. Depending on the size and performance of the CUT, proper testing schemes can maximize the efficiency of production testing. The topics covered here can be used to optimize the test flow and select the final production tests for maximum test capability. In addition, the strategies and benefits of the analog behavioral modeling techniques I implemented are presented, and actual verification cases show the effectiveness of analog modeling for better quality SOC products.
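As a small illustration of the Chapter 6 idea (a sketch, not the dissertation's ATPG tool flow; fault names are hypothetical), the stuck-at target list can be reduced by a set difference against the faults the TDF patterns already detect:

```python
# A stuck-at fault is a transition fault with infinite delay, so stuck-at
# faults already detected by the TDF pattern set need no dedicated patterns.
def reduce_stuck_at_targets(all_stuck_at, detected_by_tdf):
    """Return the residual stuck-at faults that still need ATPG patterns."""
    return set(all_stuck_at) - set(detected_by_tdf)

stuck_at = {"U1/A sa0", "U1/A sa1", "U2/Z sa0", "U3/B sa1"}
graded = {"U1/A sa0", "U2/Z sa0"}  # hypothetical fault-grading result
print(reduce_stuck_at_targets(stuck_at, graded))  # residual: U1/A sa1, U3/B sa1
```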
15

Data Consistency Checks on Flight Test Data

Mueller, G. October 2014
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / This paper reflects the principal results of a study performed internally by Airbus's flight test centers. The purpose of this study was to share the body of knowledge concerning data consistency checks between all Airbus business units. An analysis of the test process is followed by the identification of the process stakeholders involved in ensuring data consistency. In the main part of the paper several different possibilities for improving data consistency are listed; it is left to the discretion of the reader to determine the appropriateness of these methods.
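The paper itself stays at the process level, but as an illustration of what an elementary data consistency check can look like, here is a minimal sketch (the parameter and its plausible range are hypothetical) that flags non-monotonic timestamps and physically implausible values in a recorded channel:

```python
# Two basic consistency checks on one flight-test parameter channel.
def check_consistency(samples, lo, hi):
    """samples: list of (time, value) pairs. Returns a list of findings."""
    findings = []
    for i, (t, v) in enumerate(samples):
        if i > 0 and t <= samples[i - 1][0]:
            findings.append(f"sample {i}: non-increasing timestamp {t}")
        if not lo <= v <= hi:
            findings.append(f"sample {i}: value {v} outside [{lo}, {hi}]")
    return findings

# Hypothetical static-air-temperature channel, plausible range -80..60 degC.
data = [(0.00, 15.2), (0.05, 15.1), (0.05, 14.9), (0.10, 181.0)]
print(check_consistency(data, -80, 60))
```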
16

Evaluation of Test Data Generation Techniques for String Inputs

Li, Junyang, Xing, Xueer January 2017
Context. The effective generation of test data is regarded as very important in software testing. However, mature and effective techniques for generating string test data have seldom been explored, owing to the complexity and flexibility of string expressions compared to other data types. Objectives. Given this problem, this study investigates the strengths and limitations of existing string test data generation techniques, to support future work on an effective technique for generating string test data. This main goal was pursued via two objectives: first, investigating existing techniques for string test data generation, as well as identifying the criteria and Classes-Under-Test (CUTs) used for evaluating string test generation ability; second, assessing representative techniques by comparing their effectiveness and efficiency. Methods. For the first objective, we used a systematic mapping study to collect data about existing techniques, criteria, and CUTs. For the second objective, a comparison study was conducted on representative techniques selected from the results of the systematic mapping study; its data were analysed quantitatively using statistical methods. Results. The existing techniques, criteria, and CUTs related to string test generation were identified, and a multidimensional categorisation was proposed to classify the existing string test data generation techniques. We selected representative techniques from the search-based, symbolic execution, and random generation methods of the categorisation, together with the corresponding automated test generation tools that implement them, EvoSuite, Symbolic PathFinder (SPF), and Randoop, and assessed them by comparing effectiveness and efficiency when applied to 21 CUTs. Conclusions. We concluded that the search-based method has the highest effectiveness and efficiency of the three selected solution methods; the random generation method has low efficiency but a high fault-detecting ability for some specific CUTs; and the symbolic execution solution implemented by SPF cannot currently support string test generation well, possibly due to an incomplete string constraint solver or string generator.
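For a flavour of the simplest of the three compared families, random string generation (the approach Randoop represents), here is a hedged sketch with a hypothetical class under test; the search-based and symbolic execution techniques are considerably more elaborate:

```python
# Random string test data generation: draw strings of random length and
# content, run them through the code under test, and keep crashing inputs.
import random
import string

def random_string(max_len=20, alphabet=string.printable):
    """Draw a string of random length and content from a fixed alphabet."""
    n = random.randint(0, max_len)
    return ''.join(random.choice(alphabet) for _ in range(n))

def random_test(cut, trials=1000):
    """Feed the code under test random strings; collect inputs that raise."""
    failures = []
    for _ in range(trials):
        s = random_string()
        try:
            cut(s)
        except Exception as exc:
            failures.append((s, exc))
    return failures

def parse_id(s):  # hypothetical CUT with an input-dependent failure
    if s and s[0].isdigit():
        raise ValueError("identifier may not start with a digit")
    return s.strip()

print(len(random_test(parse_id)), "failing inputs found")
```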
17

Model validation for robust control

Davis, Robert Andrew January 1995
No description available.
18

Audit v počítačovém prostředí / Auditing in computer environment

Heger, Martin January 2009
The work shows the risks and opportunities that a computer environment brings to the audit of financial statements. First it describes the general principles of auditing and their legislative definition. For basic orientation, the work also deals with new trends in IS/ICT in accounting, and it then describes the different approaches auditors take to the computer environment. The main part focuses on computer-assisted audit tools and techniques (CAATTs), which auditors can use depending on their approach to the computer environment.
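As an illustration of the kind of CAATT the thesis surveys (illustrative data, not from the thesis), the sketch below scans an invoice ledger for duplicate and missing document numbers, a routine automated audit test:

```python
# Scan a nominally gapless invoice number sequence for duplicates and gaps.
def audit_invoice_sequence(invoice_numbers):
    """Report duplicated and skipped numbers in the ledger."""
    seen = sorted(invoice_numbers)
    duplicates = {n for i, n in enumerate(seen[1:], 1) if n == seen[i - 1]}
    expected = set(range(seen[0], seen[-1] + 1))
    gaps = expected - set(seen)
    return duplicates, gaps

dupes, gaps = audit_invoice_sequence([1001, 1002, 1002, 1004, 1005])
print(dupes, gaps)  # {1002} {1003}
```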
19

Generation of Software Test Data from the Design Specification Using Heuristic Techniques. Exploring the UML State Machine Diagrams and GA Based Heuristic Techniques in the Automated Generation of Software Test Data and Test Code.

Doungsa-ard, Chartchai January 2011
Software testing is a tedious and very expensive undertaking. Automatic test data generation is, therefore, proposed in this research to help testers reduce their work as well as ascertain software quality. The concept of test driven development (TDD) has become increasingly popular during the past several years. According to TDD, test data should be prepared before the beginning of code implementation. Therefore, this research asserts that the test data should be generated from the software design documents, which are normally created prior to software code implementation. Among such design documents, UML state machine diagrams are selected as the platform for the proposed automated test data generation mechanism, because they show the behaviour of a single object in the system. A genetic algorithm (GA) based approach has been developed and applied in the process of searching for the right amount of quality test data. Finally, the generated test data have been used together with UML class diagrams for JUnit test code generation. The GA-based test data generation methods have been enhanced to handle the parallel path and loop problems of UML state machines, and the proposed approach also targets diagrams with parameterised triggers. As a result, the proposed framework generates test data from the basic state machine diagram and the basic class diagram without any additional nonstandard information, while most other approaches require additional information or generate test data from other formal languages. The transition coverage values for the approach introduced here are also high; therefore, the generated test data can cover most of the behaviour of the system. / EU Asia-Link project TH/Asia Link/004(91712) East-West and CAMT
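A hedged sketch of the GA idea the thesis builds on, reduced to its essentials (the state machine is hypothetical, and parallel paths and parameterised triggers are omitted): an individual is a sequence of trigger events, and fitness rewards the number of distinct transitions the sequence exercises:

```python
import random

TRANSITIONS = {  # hypothetical state machine: (state, event) -> next state
    ("Idle", "start"): "Run",
    ("Run", "pause"): "Hold",
    ("Hold", "resume"): "Run",
    ("Run", "stop"): "Idle",
}
EVENTS = ["start", "pause", "resume", "stop"]

def fitness(seq):
    """Distinct transitions covered when firing the event sequence from Idle."""
    state, covered = "Idle", set()
    for e in seq:
        if (state, e) in TRANSITIONS:
            covered.add((state, e))
            state = TRANSITIONS[(state, e)]
    return len(covered)

# Evolve: rank by fitness, keep the better half, refill with mutated copies.
population = [[random.choice(EVENTS) for _ in range(6)] for _ in range(20)]
for _ in range(30):  # a few generations suffice for this toy machine
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    children = [s[:] for s in survivors]
    for c in children:
        c[random.randrange(len(c))] = random.choice(EVENTS)  # point mutation
    population = survivors + children

best = max(population, key=fitness)
print(best, fitness(best), "of", len(TRANSITIONS), "transitions covered")
```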
20

Framework de geração de dados de teste para programas orientados a objetos / Test data generation framework for object-oriented software

Ferreira, Fernando Henrique Inocêncio Borba 13 December 2012
Test data generation is a mandatory activity of the software testing process. In general, it is carried out by testing practitioners, which makes it costly and makes its automation necessary. Existing frameworks that support this activity are restricted, providing only a single data generation technique, a single fitness function to evaluate individuals, and a unique selection algorithm. This work describes the JaBTeG (Java Bytecode Test Generation) framework for test data generation. The main characteristic of JaBTeG is to allow the development of test data generation methods by selecting the data generation technique, the fitness function, the selection algorithm, and the structural testing criterion. Using JaBTeG, new test data generation techniques can be developed and experimented with. The framework is associated with the JaBUTi (Java Bytecode Understanding and Testing) tool to support test data creation. Four data generation techniques, two fitness functions, and four selection algorithms were developed to validate the approach proposed by the framework. In addition, five programs with different characteristics were tested with data generated using the methods provided by JaBTeG.
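As an illustration of the framework idea, the sketch below (a hedged Python analogue, not JaBTeG's actual Java API) composes a pluggable generation technique, fitness function, and selection algorithm into one test data generator:

```python
import random
from typing import Callable, List

class TestDataGenerator:
    """Pluggable slots mirror the framework idea: technique x fitness x selection."""
    def __init__(self,
                 generate: Callable[[], List[int]],      # data generation technique
                 fitness: Callable[[List[int]], float],  # fitness function
                 select: Callable[[list], list]):        # selection algorithm
        self.generate, self.fitness, self.select = generate, fitness, select

    def run(self, pop_size=30, generations=10):
        population = [self.generate() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=self.fitness, reverse=True)
            population = self.select(ranked)
            population += [self.generate() for _ in range(pop_size - len(population))]
        return max(population, key=self.fitness)

# Plug one concrete, illustrative choice into each slot:
generator = TestDataGenerator(
    generate=lambda: [random.randint(-100, 100) for _ in range(3)],
    fitness=lambda ind: -abs(sum(ind) - 42),  # toy goal: inputs summing to 42
    select=lambda ranked: ranked[:10],        # truncation selection
)
print(generator.run())
```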
