1. Scan test data compression using alternate Huffman coding (Baltaji, Najad Borhan, 13 August 2012)
Huffman coding is a good method for statistically compressing test data, achieving high compression rates. Unfortunately, the on-chip decoder needed to decompress the encoded test data after it is loaded onto the chip may be too complex. With limited die area, decoder complexity becomes a drawback, making plain Huffman coding less than ideal for scan data compression. Selectively encoding test data with Huffman coding can provide similarly high compression rates while reducing decoder complexity. A smaller, simpler decoder makes Alternate Huffman Coding a viable option for compressing and decompressing scan test data.
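To make the selective-encoding idea concrete, here is a minimal Python sketch (not from the thesis) of selective Huffman coding: only the most frequent scan blocks receive Huffman codewords, marked with a '1' prefix bit, while the rest pass through raw behind a '0' prefix, which is what keeps the on-chip decoder small. The block size, the number of coded symbols and the sample data are all illustrative assumptions.

```python
import heapq
from collections import Counter

def build_huffman(freqs):
    """Build a prefix-code table {symbol: bitstring} from symbol counts."""
    heap = [(count, i, {sym: ""}) for i, (sym, count) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        c1, _, t1 = heapq.heappop(heap)
        c2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in t1.items()}
        merged.update({s: "1" + code for s, code in t2.items()})
        heapq.heappush(heap, (c1 + c2, tick, merged))
        tick += 1
    return heap[0][2]

def selective_huffman_encode(blocks, n_coded=2):
    """Huffman-encode only the n_coded most frequent blocks ('1' prefix);
    emit every other block raw behind a '0' prefix, so the on-chip
    decoder needs only a tiny lookup table plus a pass-through path."""
    table = build_huffman(dict(Counter(blocks).most_common(n_coded)))
    return "".join("1" + table[b] if b in table else "0" + b
                   for b in blocks), table

# 4-bit scan blocks; the frequent all-zero blocks compress well
blocks = ["0000", "0000", "1111", "0000", "1010", "0000", "1111", "0110"]
bits, table = selective_huffman_encode(blocks)
print(len(bits), "encoded bits vs", 4 * len(blocks), "raw bits")  # 22 vs 32
```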
2. Heuristic generation of software test data (Holmes, Stephen Terry, January 1996)
Incorrect system operation can, at worst, be life-threatening or financially devastating. Software testing is a destructive process that aims to reveal software faults, and selecting good test data can be extremely difficult. To ease and assist test data selection, several test data generators have emerged, using a diverse range of approaches. Adaptive test data generators use existing test data to produce further effective test data, yet there is little empirical data on the adaptive approach. This thesis presents the Heuristically Aided Testing System (HATS), an adaptive test data generator that uses several heuristics, where each heuristic embodies a test data generation technique. Four heuristics have been developed. The first, Direct Assignment, generates test data for conditions involving an input variable and a constant. The Alternating Variable heuristic determines a promising direction in which to modify input variables, then takes ever-increasing steps in that direction. The Linear Predictor heuristic performs linear extrapolations on input variables. The final heuristic, Boundary Follower, uses input domain boundaries as a guide to locate hard-to-find solutions. Several Ada procedures have been tested with HATS: a quadratic equation solver, a triangle classifier, a remainder calculator and a linear search. Collectively they present some common and rare test data generation problems. The weakest testing criterion HATS has attempted to satisfy is all-branches coverage; stronger, mutation-based criteria have been used on two of the procedures. HATS has achieved complete branch coverage on each procedure, except where a higher level of control-flow complexity is combined with non-linear input variables. Both branch and mutation testing criteria have enabled a better understanding of the test data generation problems and contributed to the evolution of existing heuristics and the development of new ones. This thesis contributes the following to knowledge: empirical data on the adaptive heuristic approach to test data generation; how input domain boundaries can be used as guidance for a heuristic; an effective heuristic termination technique based on the heuristic's progress; and a comparison of HATS with random testing, identifying properties of the test software that indicate when HATS will take less effort than random testing.
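The thesis gives no code here, but the Alternating Variable heuristic is easy to sketch: probe each input variable with a unit step in both directions, and once a step improves a branch-distance cost, keep doubling the step in that direction. The Python fragment below is a rough illustration under assumed names (the cost function stands in for a branch-distance measure); it is not HATS itself.

```python
def alternating_variable_search(start, cost, max_rounds=100):
    """Minimise cost(x) (e.g. a branch-distance measure) by tuning one
    variable at a time: probe a unit step in each direction, and while
    the cost keeps improving, double the step in that direction."""
    x, best = list(start), cost(start)
    for _ in range(max_rounds):
        if best == 0:                      # target branch condition met
            return x
        improved = False
        for i in range(len(x)):
            for direction in (+1, -1):
                step = 1
                while True:
                    trial = x[:]
                    trial[i] += direction * step
                    c = cost(trial)
                    if c >= best:
                        break              # this direction stopped paying off
                    x, best, improved = trial, c, True
                    step *= 2              # accelerate in the promising direction
        if not improved:
            return None                    # stuck; another heuristic's turn
    return None

# Find a, b with a * a == b, i.e. drive |a*a - b| to zero
print(alternating_variable_search([3, 50], lambda v: abs(v[0] * v[0] - v[1])))
```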
3. Development and application of automatic monitoring system for standard penetration test in site investigation (Yang, Wenwei 楊文衛, January 2006)
Doctoral thesis (Doctor of Philosophy), Civil Engineering; published or final version, with abstract.
4. An experimental review of some aircraft parameter identification techniques (Baek, Youn Hyeong, January 1998)
No description available.
5. Examining the utility of a clustering method for analysing psychological test data (Dawes, Sharron Elizabeth, January 2004)
The belief that certain disorders will produce specific patterns of cognitive strengths and weaknesses on psychological testing is pervasive and entrenched in clinical neuropsychology, both with respect to expectations about the behaviour of individuals and about clinical groups. However, there is little support in the literature for such a belief. To the contrary, studies examining patterns of cognitive performance in different clinical samples find, without exception, more than one pattern of test scores. Lange (2000), in his comprehensive analysis of WAIS-R/WMS-R data for a large sample of mixed clinical cases, found that three to five profiles described variations in test performance within clinical diagnoses, and showed that these profiles occurred with approximately equal frequency in all diagnostic groups. He additionally found four profiles in an exploratory analysis of WAIS-III/WMS-III data from a similar sample. The goals of the current dissertation were to: a) replicate Lange's findings in a larger clinical sample; b) extend the scope of these findings to a wider array of psychological tests; and c) develop a method to classify individual cases in terms of their psychological test profile. The first study assessed 849 cases with a variety of neurological and psychiatric diagnoses using hierarchical cluster and K-Means analysis. Four WAIS-III/WMS-III profiles were identified, each containing approximately equal numbers of cases from the sample. Two of these profiles were uniquely related to two of Lange's profiles, while the remaining two demonstrated relationships with more than one of Lange's clusters. The second study expanded the neuropsychological test battery to include the Trail Making Test, Boston Naming Test, Wisconsin Card Sorting Test, Controlled Oral Word Association Test, and Word Lists from the WMS-III, reducing the number of clinical cases to 420. To compensate for the impact of the reduced number of cases and increased number of variables on potential cluster stability, the 22 test score variables were reduced to six factor scores using factor analysis; these were then analysed with hierarchical cluster and K-Means analysis, yielding five cognitive profiles. The third study examined the potential clinical utility of the five cognitive profiles by developing a single-case methodology for allocating individual cases to cognitive profiles. This was achieved using a combination of a multivariate outlier statistic, the Mahalanobis distance, and equations derived from a discriminant function analysis. This combination resulted in classification accuracies exceeding 88% when predicting profile membership based upon the K-Means analysis. The potential utility of this method was illustrated with three age-, education-, gender-, and diagnostically-matched cases that demonstrated different cognitive test profiles. The implications of the small number of cognitive profiles that characterise test performance in a diverse sample of neurological and psychiatric cases, as well as the clinical utility of an accurate classification method at the individual case level, were discussed. The role of such a classification system in the design of individualised rehabilitation programmes was also highlighted. This research raises the intriguing possibility of developing a typology based on human behaviour rather than a medical nosology: in effect, replacing the medical diagnosis, so ill-suited to encompassing the complexities of human behaviour, with a more appropriate "psychological diagnosis" based on cognitive test performance.
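As an illustration of the single-case allocation method described above, the sketch below (hypothetical, not the dissertation's code) assigns one case's factor scores to the nearest K-Means profile centroid by Mahalanobis distance, and withholds a profile when the case is a multivariate outlier. The centroids, covariance matrix and cutoff are made-up stand-ins.

```python
import numpy as np

def assign_profile(scores, centroids, pooled_cov, cutoff):
    """Allocate one case's factor scores to a cognitive profile:
    pick the nearest centroid by squared Mahalanobis distance, but flag
    the case as an outlier (no profile) if even the nearest profile lies
    beyond the chi-square-based cutoff."""
    inv_cov = np.linalg.inv(pooled_cov)
    d2 = [(scores - c) @ inv_cov @ (scores - c) for c in centroids]
    best = int(np.argmin(d2))
    return (best, d2[best]) if d2[best] <= cutoff else (None, d2[best])

# Toy example: 3 profiles in a 2-factor space (values are illustrative only)
centroids = np.array([[1.0, 0.5], [-0.5, -1.0], [0.0, 1.5]])
pooled_cov = np.eye(2)
case = np.array([0.8, 0.7])
# cutoff ~ chi-square critical value for 2 degrees of freedom at p = .05
profile, dist2 = assign_profile(case, centroids, pooled_cov, cutoff=5.99)
print(profile, round(dist2, 2))   # -> 0 0.08
```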
6. A Problem Analysis at Tieto Leading to the Development of a Test-Data-Handler Application (Hallgren, Ellen, January 2012)
The purpose of this thesis is to give the Maftaki team at Tieto a proposal for a new tool, or an improvement to one of their current tools, that will support their processes. To find a suitable tool, the problem analysis model described by Goldkuhl and Röstlinger (1988) was used. To find out what kinds of problems existed, members of the Maftaki team were interviewed. Of the problems that came up during the interviews, the difficulty of finding telephone numbers that can be used in the testing environment was chosen. To solve this problem, a tool that handles test data was developed. First, requirements were elicited by interviewing potential users of the system, yielding use cases and functional requirements. The application was built with the Struts2 web framework, the Hibernate object-relational mapping framework and the Spring inversion-of-control container, with Maven as the build tool. During development, demos were held to elicit more requirements from the users and to clarify existing ones, and the code was refactored continuously. Finally, a number of test cases were written and some basic testing of the application was performed.
7. Nejčastější problémy s testovacími daty a možnosti jejich řešení / The most common test data problems and possible solutions (Langrová, Kamila, January 2014)
This thesis focuses on testing, test data, the most frequent test data issues and possible solutions to them. The theoretical part explains testing, test data and test data management; it categorises testing by type, class and approach, categorises test data, and introduces the differences between manual and automated testing. The practical part presents a survey questionnaire on the most frequent test data issues and their solutions, including the survey's design, goal formulation and evaluation. The contribution of this thesis is an integrated view of testing, test data and their importance across the whole testing domain, together with the obstacles that testing practitioners have to deal with. The thesis also contributes a summary of solutions to test data issues and of ways to prevent or handle them.
8. Kartläggning av olika testdatahanteringsverktyg / Mapping of different test data management tools: comparison and evaluation of different test data management tools (Viking, Jakob, January 2019)
Due to the new GDPR regulation, a whole industry had to change the way it handles data: the test data management industry, which bases its products on managing PII (personally identifiable information). The regulation raises the demands on how data is stored, which in turn leads to different solutions and to several companies trying to establish themselves in this market. The overall purpose of this study is to draw out the good and bad aspects of five different test data management tools. In addition to collecting facts, tests are performed to gain hands-on experience with each program, and both are then summarised. The result consists of the outcomes of the test cases and of the comparison matrix, which together form the grade for each test data management tool. The conclusion that can be drawn from this mapping is that the programs with the highest flexibility have a greater chance of success, but there are also simple programs showing that simplicity is at least as important.
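As a sketch of how test-case results and a comparison matrix might combine into a single grade per tool, consider the Python fragment below. The criteria, weights, tool names and scores are invented for illustration; the thesis's actual matrix and weighting are not reproduced here.

```python
# Hypothetical evaluation criteria and weights (sum to 1.0)
criteria_weights = {"flexibility": 0.4, "simplicity": 0.3, "gdpr_masking": 0.3}

# Hypothetical per-tool scores (0-5 per criterion) and test-case outcomes
tools = {
    "ToolA": {"flexibility": 5, "simplicity": 2, "gdpr_masking": 4, "tests_passed": 8},
    "ToolB": {"flexibility": 2, "simplicity": 5, "gdpr_masking": 3, "tests_passed": 9},
}

def grade(name, total_tests=10):
    """Combine the weighted comparison-matrix score with the test-case
    pass rate (rescaled to 0-5), giving each half equal weight."""
    scores = tools[name]
    matrix_score = sum(w * scores[c] for c, w in criteria_weights.items())
    test_score = 5 * scores["tests_passed"] / total_tests
    return 0.5 * matrix_score + 0.5 * test_score

for name in tools:
    print(name, round(grade(name), 2))   # ToolA 3.9, ToolB 3.85
```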
9. Investigating Metrics that are Good Predictors of Human Oracle Costs: An Experiment (Chilla, Kartheek Arun Sai Ram; Chelluboina, Kavya, January 2017)
Context. Human oracle cost, the cost of estimating the correctness of the output for given test inputs, is incurred through manual evaluation by humans; this cost is significant and is a concern in the software test data generation field. This study was designed to assess metrics that might predict human oracle cost. Objectives. The major objective of this study is to address the human oracle cost; to this end, the study identifies metrics that are good predictors of human oracle cost and can further help solve the oracle problem. In this process, suitable metrics identified from the literature are applied to the test input, to see if they can help predict the correctness of the output for a given test input. Methods. Initially, a literature review was conducted to find metrics relevant to test data, including possible code metrics that can be applied to test data. Before the actual experiment, two pilot experiments were conducted. To accomplish the research objectives, an experiment was conducted at BTH university with master students as the sample population, and group interviews were then held to check whether the participants perceived any new metrics that might affect the correctness of the output. The data obtained from the experiment and the interviews was analysed using a linear regression model in the SPSS suite; to analyse the accuracy-versus-metric data, a linear discriminant model in SPSS was used. Results. The literature review yielded 4 metrics suitable to this study; as the test input is HTML, we took HTML depth, size, compression size and number of tags as our metrics. From the group interviews another 4 metrics were drawn, namely number of lines of code and the counts of <div>, anchor <a> and paragraph <p> tags as individual metrics. The linear regression model, which analyses time-versus-metric data, shows significant results, but with multicollinearity affecting the result there was no variance among the considered metrics, so the results are reported after adjusting for multicollinearity. Besides this analysis, the linear discriminant model analysing accuracy-versus-metric data was used to identify the metrics that influence accuracy. The results show that the metrics positively correlate with time and accuracy. Conclusions. In the time-versus-metric data, when multicollinearity is adjusted for by a step-wise regression reduction technique, program size, compression size and the <div> tag count influence the time taken by the sample population. In the accuracy-versus-metric data, the number of <div> tags and the number of lines of code influence the accuracy of the sample population.
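For illustration, the metrics named above can be computed with a few lines of Python, with an ordinary least-squares fit standing in for the SPSS regression. This is a hypothetical sketch, not the study's analysis pipeline; the sample documents and timings are made up.

```python
import zlib
from html.parser import HTMLParser
import numpy as np

class DepthCounter(HTMLParser):
    """Record maximum nesting depth and per-tag counts while parsing."""
    def __init__(self):
        super().__init__()
        self.depth = self.max_depth = 0
        self.tags = {}
    def handle_starttag(self, tag, attrs):
        self.depth += 1
        self.max_depth = max(self.max_depth, self.depth)
        self.tags[tag] = self.tags.get(tag, 0) + 1
    def handle_endtag(self, tag):
        self.depth -= 1

def oracle_metrics(html):
    """Four candidate predictors: size, compressed size, depth, <div> count."""
    p = DepthCounter()
    p.feed(html)
    return [len(html),
            len(zlib.compress(html.encode())),
            p.max_depth,
            p.tags.get("div", 0)]

# Made-up (document, seconds-to-judge) pairs, for illustration only
samples = [("<div><p>a</p></div>", 12.0),
           ("<div><div><a href='#'>x</a></div></div>", 30.0),
           ("<p>plain</p>", 7.0)]
X = np.array([oracle_metrics(h) for h, _ in samples], dtype=float)
y = np.array([t for _, t in samples])
coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)
print(coef)   # intercept followed by one weight per metric
```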
10. A RELATIONAL APPROACH FOR MANAGING LARGE FLIGHT TEST PARAMETER LISTS (Penna, Sérgio D.; Espeschit, Antônio Magno L., October 2005)
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada

The number of aircraft parameters used in flight-testing has constantly increased over the years, and there is no sign that this situation will change in the near future. On the contrary, in modern, software-driven, digital avionic systems, all sorts of parameters circulate through digital buses and can be transferred to on-board data acquisition systems more easily than those converted from traditional analog transducers, feeding the request for more and more parameters to be acquired, processed, visualized, stored and retrieved at any given time.

The constant imbalance between the parameter quantity engineers believe to be "sufficient" for developing and troubleshooting systems in a new aircraft, which tends to grow with aircraft complexity, and the associated cost of instrumenting a test prototype accordingly, which tends to grow beyond budget limits, pushes for new, creative ways of handling both tendencies without compromising the ease of performing an engineering analysis directly from flight test data.

This paper presents an alternative for handling large collections of flight test parameters through a relational approach, particularly in two important scenarios: the very basic creation and administration of the traditional "Flight Test Parameter List", and the transmission of selected data over a telemetry link for visualization in a Ground Station.
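To illustrate what a relational approach to a Flight Test Parameter List can look like in practice, here is a small self-contained sketch using SQLite from Python: parameters live in one master table, and a telemetry selection is just a join table, so building the list for a Ground Station becomes a query instead of file editing. The schema, parameter names and link identifiers are assumptions for illustration, not the paper's actual design.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE parameter (
    param_id   INTEGER PRIMARY KEY,
    name       TEXT NOT NULL UNIQUE,   -- e.g. 'ENG1_N1'
    source     TEXT NOT NULL,          -- 'ARINC429', 'analog', ...
    units      TEXT,
    sample_hz  REAL
);
CREATE TABLE telemetry_selection (
    link_id    TEXT NOT NULL,          -- which telemetry link/format
    param_id   INTEGER NOT NULL REFERENCES parameter(param_id),
    PRIMARY KEY (link_id, param_id)
);
""")
db.executemany("INSERT INTO parameter VALUES (?,?,?,?,?)",
               [(1, "ENG1_N1", "ARINC429", "%", 32.0),
                (2, "PITCH_ANGLE", "analog", "deg", 64.0)])
db.execute("INSERT INTO telemetry_selection VALUES ('TM1', 2)")

# The parameter list for one telemetry link is now a simple join
for row in db.execute("""SELECT p.name, p.units, p.sample_hz
                         FROM parameter p
                         JOIN telemetry_selection s USING (param_id)
                         WHERE s.link_id = 'TM1'"""):
    print(row)   # ('PITCH_ANGLE', 'deg', 64.0)
```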