1 |
A Risk Identification Technique for Requirements Assessment
Silva, Liliane Sheyla da, 01 March 2012 (has links)
Previous issue date: 2012-03-01 / CAPES, CNPq / One recurrent issue in software development is the effective creation of test cases from requirements. Several techniques and methods are applied to minimize the risks associated with building test cases, aiming to correctly satisfy the specified requirements. Risk identification for requirements assessment is essential to test case generation. However, test engineers still face difficulties applying it in practice, due to a lack of solid knowledge about Risk Management activities and of tool support for them. This work proposes a technique that helps test engineers identify risks from requirements for software testing. Building on studies that used the concept of similarity to compare software projects in order to reuse previously identified risks, the developed technique applies the same idea to requirements. Within this context, this work aims to: (i) define an analogy-based technique that categorizes requirements, making it possible to identify risks through a database of similar requirements; and (ii) reuse risks previously identified for existing requirements in the evaluation of new ones.
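The analogy-based idea can be illustrated with a small sketch: given a database of previously assessed requirements and their identified risks, a new requirement inherits the risks attached to sufficiently similar past requirements. The similarity measure, requirement texts, risks, and threshold below are illustrative assumptions, not the thesis's actual technique or data:

```python
# Sketch of analogy-based risk identification: reuse risks attached to
# previously assessed requirements that are textually similar to a new one.
# Requirement texts, risks, and the threshold are hypothetical examples.

def jaccard_similarity(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two requirement texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Hypothetical database mapping assessed requirements to identified risks.
risk_db = {
    "the system shall encrypt all stored user credentials":
        ["weak key management", "unsalted hashes"],
    "the system shall respond to queries within two seconds":
        ["unrealistic load assumptions", "missing performance tests"],
}

def suggest_risks(new_req: str, threshold: float = 0.3) -> list[str]:
    """Return risks reused from sufficiently similar past requirements."""
    suggestions = []
    for past_req, risks in risk_db.items():
        if jaccard_similarity(new_req, past_req) >= threshold:
            suggestions.extend(risks)
    return suggestions

print(suggest_risks("the system shall respond to search queries within two seconds"))
# → ['unrealistic load assumptions', 'missing performance tests']
```

A production technique would likely use a richer similarity measure (e.g., over categorized requirement attributes rather than raw tokens), but the reuse-by-analogy structure is the same.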
|
2 |
Optimal Data-driven Methods for Subject Classification in Public Health Screening
Sadeghzadeh, Seyedehsaloumeh, 01 July 2019 (has links)
Biomarker testing, wherein the concentration of a biochemical marker is measured to predict the presence or absence of a certain binary characteristic (e.g., a disease) in a subject, is an essential component of public health screening. For many diseases, the concentration of disease-related biomarkers may exhibit a wide range, particularly among the disease positive subjects, in part due to variations caused by external and/or subject-specific factors. Further, a subject's actual biomarker concentration is not directly observable by the decision maker (e.g., the tester), who has access only to the test's measurement of the biomarker concentration, which can be noisy. In this setting, the decision maker needs to determine a classification scheme in order to classify each subject as test negative or test positive. However, the inherent variability in biomarker concentrations and the noisy test measurements can increase the likelihood of subject misclassification.
We develop an optimal data-driven framework, which integrates optimization and data analytics methodologies, for subject classification in disease screening, with the aim of minimizing classification errors. In particular, our framework utilizes data analytics methodologies to estimate the posterior disease risk of each subject, based on both subject-specific and external factors, coupled with robust optimization methodologies to derive an optimal robust subject classification scheme, under uncertainty on actual biomarker concentrations. We establish various key structural properties of optimal classification schemes, show that they are easily implementable, and develop key insights and principles for classification schemes in disease screening.
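The posterior-risk idea can be illustrated with a minimal sketch: combine a prior prevalence with likelihoods for the noisy biomarker measurement via Bayes' rule, then classify against a cutoff. All parameter values (prevalence, concentration means, noise level, cutoff) and the Gaussian noise model are illustrative assumptions, and the sketch omits the dissertation's robust-optimization component:

```python
# Sketch of posterior disease-risk classification from a noisy biomarker
# measurement. Gaussian likelihoods and all parameter values are assumptions
# for illustration, not the dissertation's actual model or data.
from math import exp, pi, sqrt

def gaussian_pdf(x: float, mu: float, sigma: float) -> float:
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def posterior_risk(measurement: float, prior: float,
                   mu_pos: float, mu_neg: float, sigma: float) -> float:
    """P(disease | measurement) via Bayes' rule with Gaussian measurement noise."""
    l_pos = gaussian_pdf(measurement, mu_pos, sigma)  # likelihood if positive
    l_neg = gaussian_pdf(measurement, mu_neg, sigma)  # likelihood if negative
    return prior * l_pos / (prior * l_pos + (1 - prior) * l_neg)

def classify(measurement: float, prior: float = 0.01, mu_pos: float = 80.0,
             mu_neg: float = 40.0, sigma: float = 15.0,
             cutoff: float = 0.5) -> str:
    """Classify as test positive if the posterior risk reaches the cutoff."""
    risk = posterior_risk(measurement, prior, mu_pos, mu_neg, sigma)
    return "positive" if risk >= cutoff else "negative"

print(classify(100.0))  # high measurement → "positive"
print(classify(50.0))   # near the negative mean → "negative"
```

In the dissertation's setting the cutoff itself would be chosen by optimization, with robustness to uncertainty in the actual (unobserved) biomarker concentration.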
As one application of our framework, we study newborn screening for cystic fibrosis in the United States. Cystic fibrosis is one of the most common genetic diseases in the United States. Early diagnosis of cystic fibrosis can substantially improve health outcomes, while a delayed diagnosis can result in severe symptoms of the disease, including fatality. We demonstrate our framework on a five-year newborn screening data set from the North Carolina State Laboratory of Public Health. Our study underscores the value of optimization-based approaches to subject classification, and shows that substantial reductions in classification error can be achieved through the use of the proposed framework over current practices. / Doctor of Philosophy / A biomarker is a measurable characteristic that is used as an indicator of a biological state or condition, such as a disease or disorder. Biomarker testing, where a biochemical marker is used to predict the presence or absence of a disease in a subject, is an essential tool in public health screening. For many diseases, related biomarkers may have a wide range of concentration among subjects, particularly among the disease positive subjects. Furthermore, biomarker levels may fluctuate based on external factors (e.g., temperature, humidity) or subject-specific characteristics (e.g., weight, race, gender). These sources of variability can increase the likelihood of subject misclassification based on a biomarker test.
|
3 |
Optimal Risk-based Pooled Testing in Public Health Screening, with Equity and Robustness Considerations
Aprahamian, Hrayer Yaznek Berg, 03 May 2018 (has links)
Group (pooled) testing, i.e., testing multiple subjects simultaneously with a single test, is essential for classifying a large population of subjects as positive or negative for a binary characteristic (e.g., presence of a disease, genetic disorder, or a product defect). While group testing is used in various contexts (e.g., screening donated blood or for sexually transmitted diseases), a lack of understanding of how an optimal grouping scheme should be designed to maximize classification accuracy under a budget constraint hampers screening efforts.
We study Dorfman and Array group testing designs under subject-specific risk characteristics, operational constraints, and imperfect tests, considering classification accuracy-, efficiency-, robustness-, and equity-based objectives, and characterize important structural properties of optimal testing designs. These properties provide us with key insights and allow us to model the testing design problems as network flow problems, develop efficient algorithms, and derive insights on the trade-offs between equity, robustness, and accuracy. One of our models reduces to a constrained shortest path problem, for a special case of which we develop a polynomial-time algorithm. We also show that determining an optimal risk-based Dorfman testing scheme that minimizes the expected number of tests is tractable, resolving an open conjecture.
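For intuition, the classic homogeneous-risk special case of Dorfman testing (independent subjects with identical prevalence p and a perfect test) admits a closed-form expected number of tests per subject, 1/n + 1 - (1-p)^n for pool size n > 1: one pooled test shared by n subjects, plus individual retests whenever the pool is positive. The dissertation's risk-based setting generalizes this to subject-specific risks and imperfect tests. A small sketch of the special case:

```python
# Classic homogeneous-risk Dorfman pooling: expected tests per subject and
# the cost-minimizing pool size. This is the textbook special case only;
# the dissertation's risk-based, imperfect-test designs are more general.

def dorfman_tests_per_subject(n: int, p: float) -> float:
    """Expected tests per subject with pool size n and prevalence p,
    assuming independent subjects and a perfect (error-free) test."""
    if n == 1:
        return 1.0  # individual testing: one test each
    # One pooled test per n subjects, plus n retests if the pool is positive,
    # which happens with probability 1 - (1 - p)^n.
    return 1.0 / n + 1.0 - (1.0 - p) ** n

def best_pool_size(p: float, max_n: int = 100) -> int:
    """Pool size minimizing expected tests per subject, by enumeration."""
    return min(range(1, max_n + 1), key=lambda n: dorfman_tests_per_subject(n, p))

print(best_pool_size(0.01))  # → 11 at 1% prevalence (the classic result)
```

At 1% prevalence the optimal pool of 11 needs only about 0.196 tests per subject, roughly a five-fold saving over individual testing, which is why pooling is attractive for screening large low-prevalence populations.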
Our case studies, on chlamydia screening and screening of donated blood, demonstrate the value of optimal risk-based testing designs, which are shown to be less expensive, more accurate, more equitable, and more robust than current screening practices.
|
4 |
A Tool Prototype Supporting Risk-Based Testing in Agile Embedded Software Development
Jasem, Saef, January 2022 (has links)
Risk-Based Testing is a testing approach in software development that involves identifying, analyzing, controlling, testing, and reporting risks. The strategy provides several benefits and helps companies control and manage risks effectively. However, the testing strategy may become challenging with new technologies, increased deployment and development of new features, and larger projects. Westermo is a manufacturer and vendor of industrial Ethernet network and data communications products for mission-critical systems in harsh environments. Risk-based testing is a critical component of their software development process to maintain high-quality deployments. Westermo's current approach to documenting and monitoring risks relies on spreadsheets. Over time, as new features are implemented and deployed, these spreadsheets become more complex and challenging to manage. As such, Westermo is currently seeking to replace them with a new risk management tool supporting risk-based testing. In this thesis, I investigated how one can prototype a risk management tool to support the risk-based testing process at Westermo. To this end, a deeper understanding of how risk-based testing is currently performed and managed during software development was required. I also had to identify the challenges of the current approach for documenting and monitoring risks, as well as the requirements for a new tool. I investigated these issues using a combination of qualitative research strategies and divided the work into three phases. In the first phase, I reviewed internal process documentation and observed three risk analysis workshops, with a total of 14 participants, held by Westermo. This was followed by interviews with two software developers and one project manager to identify requirements for a new tool. The next step was to develop a prototype, and in the final phase, I evaluated the utility of the design with two focus groups with a total of six participants.
Ideally, according to the requirements I identified, the risk management tool should facilitate the documenting and monitoring of risks and provide functions to add, manage, and visualize risks from both a larger release perspective and a smaller feature perspective, in a simple and efficient manner.
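A minimal sketch of what such a tool's underlying data model might look like: a risk record that can be filtered by release or by feature, prioritized by the common probability-times-impact score used in risk-based testing. All field names, values, and the scoring rule are illustrative assumptions; the thesis prototype's actual design is not shown in this abstract:

```python
# Hypothetical data model for a risk management tool supporting risk-based
# testing: risks are recorded per release and feature, and viewed from either
# perspective, ordered by a probability-times-impact priority score.
from dataclasses import dataclass, field

@dataclass
class Risk:
    title: str
    probability: int                 # assumed scale: 1 (low) .. 5 (high)
    impact: int                      # assumed scale: 1 (low) .. 5 (high)
    release: str
    feature: str
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        """Common risk-based-testing prioritization: probability x impact."""
        return self.probability * self.impact

def risks_for_release(risks: list[Risk], release: str) -> list[Risk]:
    """The larger release perspective: all risks in a release, highest first."""
    return sorted((r for r in risks if r.release == release),
                  key=lambda r: r.score, reverse=True)

def risks_for_feature(risks: list[Risk], feature: str) -> list[Risk]:
    """The smaller feature perspective."""
    return sorted((r for r in risks if r.feature == feature),
                  key=lambda r: r.score, reverse=True)

risks = [
    Risk("regression in ring redundancy", 4, 5, "22.04", "FRNT"),
    Risk("config migration fails", 2, 3, "22.04", "WebUI"),
]
print([r.title for r in risks_for_release(risks, "22.04")])
# → ['regression in ring redundancy', 'config migration fails']
```

Compared with a spreadsheet, an explicit model like this makes the release-level and feature-level views two queries over one shared store rather than two drifting copies of the data.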
|
5 |
Huvudaspekter att Överväga för Mjukvarutestning i Komplexa Inbyggda System : En Fallstudie av Mjukvaruutveckling i Bilindustrin / Key Aspects to Consider for Software Testing in Complex Embedded Systems : A Case Study of Software Development in the Automotive Industry
Haglund El Gaidi, Gabriel, January 2016 (has links)
Software development in the complex environment of the automotive industry puts high pressure on developers to develop high-quality, robust software aligned with customers' requirements. High-quality software is foremost ensured by testing the product under development. However, software testing in the automotive industry poses the challenge of testing early in the development process, due to the limited possibilities for conducting tests in full-scale vehicle environments. This challenge must be addressed by software development teams building software that communicates with the complex on-board embedded system in vehicles. This study was conducted as a case study at Scania CV AB in Södertälje in order to understand the drivers of defects that emerge in finalized software products. Defects, and the drivers behind them, were identified by conducting interviews with the SCPT team responsible for the development of the product Escape. Escape is delivered to the production department and enables functions such as calibrating, setting parameters, and running quality assurance tests on the on-board embedded system in vehicles. The identified defects and drivers were subsequently discussed with experienced professionals and researchers within software testing. This yielded applicable testing techniques and activities to undertake in order to address the identified drivers causing defects in finalized software products. The contribution of this study highlights the importance of incorporating software testing into early development phases of complex embedded systems, as defects are more costly to correct the later they are identified. Static analysis tools have further been found to provide suitable support for addressing the immense number of possible parameter combinations in vehicles.
Furthermore, Software in the Loop environments have been found to be an applicable way of incorporating integration testing and system testing earlier in the development phase, enabling identification of interoperability defects generally found late in the development process. Involving the persons responsible for testing the software in early requirements discussions has further been found to be of great importance, as it minimizes the risk of misunderstandings between customers and developers.
|