About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Probabilistic Program Analysis for Software Component Reliability

Mason, Dave January 2002 (has links)
Components are widely seen by software engineers as an important technology to address the "software crisis". An important aspect of components in other areas of engineering is that system reliability can be estimated from the reliability of the components. We show how commonly proposed methods of reliability estimation and composition for software are inadequate because of differences between the models and the actual software systems, and we show where the assumptions from system reliability theory cause difficulty when applied to software. This thesis provides an approach to reliability that makes it possible, if not currently plausible, to compose component reliabilities so as to accurately and safely determine system reliability. Firstly, we extend previous work on input sub-domains, or partitions, such that our sub-domains can be sampled in a statistically sound way. We provide an algorithm to generate the most important partitions first, which is particularly important when there are an infinite number of input sub-domains. We combine analysis and testing to provide useful reliabilities for the various input sub-domains of a system, or component. This provides a methodology for calculating true reliability for a software system for any accurate statistical distribution of input values. Secondly, we present a calculus for probability density functions that permits accurately modeling the input distribution seen by each component in the system - a critically important issue in dealing with reliability of software components. Finally, we provide the system structuring calculus that allows a system designer to take components from component suppliers that have been built according to our rules and to determine the resulting system reliability. This can be done without access to the actual components.
This work raises many issues, particularly about scalability of the proposed techniques and about the ability of the system designer to know the input profile to the level and kind of accuracy required. There are also large classes of components where the techniques are currently intractable, but we see this work as an important first step.
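The composition idea underlying the first step above can be sketched in a few lines. This is a deliberately simplified discrete illustration (the function and variable names are ours, not the thesis's, and the thesis develops a full calculus over probability density functions rather than a weighted sum):

```python
def system_reliability(partitions):
    """Expected reliability over input sub-domains.

    partitions: list of (probability, reliability) pairs, where the
    probabilities describe the operational input profile and must sum to 1,
    and each reliability is the estimate for that sub-domain.
    """
    total_p = sum(p for p, _ in partitions)
    if abs(total_p - 1.0) > 1e-9:
        raise ValueError("sub-domain probabilities must sum to 1")
    return sum(p * r for p, r in partitions)

# Illustrative profile: three sub-domains with different usage weights.
profile = [(0.7, 0.999), (0.2, 0.95), (0.1, 0.80)]
print(system_reliability(profile))  # 0.7*0.999 + 0.2*0.95 + 0.1*0.80
```

The point the thesis makes is that such a composition is only sound if the sub-domains can be sampled in a statistically valid way and the input profile seen by each component is modeled accurately.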
22

Personalized Defect Prediction

Jiang, Tian January 2013 (has links)
Academia and industry expend much effort to predict software defects, and researchers have proposed many defect prediction algorithms and metrics. While previous defect prediction techniques often take the author of the code into consideration, none of them builds a separate prediction model for each developer. Different developers have different coding styles, commit frequencies, and experience levels, which would result in different defect patterns. When the defects of different developers are combined, such differences are obscured, hurting prediction performance. This thesis proposes two techniques to improve defect prediction performance: personalized defect prediction and confidence-based hybrid defect prediction. Personalized defect prediction builds a separate prediction model for each developer to predict software defects. Confidence-based hybrid defect prediction combines different models by picking the prediction from the model with the highest confidence. As a proof of concept, we apply the two techniques to classify defects at the file change level. We implement the state-of-the-art change classification as the baseline and compare it with the personalized defect prediction approach; confidence-based defect prediction combines these two models. We evaluate the techniques on six large and popular software projects written in C and Java: the Linux kernel, PostgreSQL, Xorg, Eclipse, Lucene, and Jackrabbit.
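The confidence-based combination rule described above is simple enough to sketch directly. All names here are illustrative stand-ins (the actual thesis uses trained classifiers over change features, not hard-coded rules):

```python
def hybrid_predict(change, models):
    """Confidence-based hybrid prediction (simplified sketch).

    models: callables, each returning a (label, confidence) pair for a
    change - e.g. a per-developer "personalized" model plus a global one.
    Returns the label from whichever model is most confident.
    """
    label, confidence = max((m(change) for m in models), key=lambda lc: lc[1])
    return label

# Toy stand-ins for a personalized model and a global baseline model.
personalized = lambda c: ("buggy", 0.9) if c["author"] == "alice" else ("clean", 0.6)
global_model = lambda c: ("clean", 0.7)

print(hybrid_predict({"author": "alice"}, [personalized, global_model]))  # 'buggy'
print(hybrid_predict({"author": "bob"}, [personalized, global_model]))    # 'clean'
```

For "alice" the personalized model is more confident (0.9 vs. 0.7), so its prediction wins; for "bob" the global model wins, which is exactly the fallback behavior that makes the hybrid useful for developers with little history.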
23

Developing and Evaluating Methods for Mitigating Sample Selection Bias in Machine Learning

Pelayo Ramirez, Lourdes Unknown Date
No description available.
24

Software Reliability Assessment

Kaya, Deniz 01 September 2005 (has links) (PDF)
Although software reliability studies attracted a great deal of attention from different disciplines in the 1970s, applications of the subject have rarely been taken up by the software industry. With the rise of technological advances, especially in the field of military electronics, the reliability of software systems has gained importance. In this study, a company in the defense industry is examined for its abilities and needs regarding software reliability, and an improvement proposal with a metrics measurement system is formed. A computer tool is developed to evaluate the performance of the improvement proposal. Results obtained via this tool indicate improved abilities in the development of reliable software products.
25

A Formal Application of Safety and Risk Assessment in Software Systems

Williamson, Christopher Loyal. January 2004 (has links) (PDF)
Thesis (Ph. D. in Software Engineering)--Naval Postgraduate School, Sept. 2004. / Thesis Advisor(s): Luqi. Includes bibliographical references. Also available online.
26

Sensitivity to Variations in the Operational Profile of Two Software Reliability Models Based on Coverage

Silva, Odair Jacinto da, 1967- 25 August 2018 (has links)
Advisors: Mario Jino, Adalberto Nobiato Crespo / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Several published studies indicate that the predictive ability of software reliability models that use the test coverage information observed during testing is better than that of models based on the time domain; such models have therefore been proposed by researchers as an alternative to time-domain models. However, to reach a conclusion about the superiority of this class of models, it is necessary to evaluate their sensitivity to variations in the operational profile. A desirable quality of software reliability models is that their predictive ability not be affected by variations in the operational profile of a program. This dissertation evaluates, by means of an experiment, the sensitivity of two software reliability models based on code coverage information: the "Binomial Model Based on Coverage" and the "Infinite Failure Model Based on Coverage". The experiment applies the models to failure data observed during the execution of a program under three statistically distinct operational profiles. Additionally, six traditional software reliability models are used to estimate reliability from the same failure data: Musa-Okumoto, Musa Basic, Littlewood-Verrall Linear, Littlewood-Verrall Quadratic, Jelinski-Moranda, and Geometric.
The results show that the predictive ability of the "Binomial Model Based on Coverage" and the "Infinite Failure Model Based on Coverage" is not affected by varying the operational profile of the software. The same result was not observed for the time-domain models; that is, changing the operational profile influences their predictive ability. For example, none of the traditional models could be used to estimate the software's reliability from the failure data generated by one of the operational profiles.
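For context, the time-domain models named above have simple closed forms. A sketch of the Musa-Okumoto logarithmic Poisson model follows; the parameter values are purely illustrative, not fitted to this dissertation's failure data:

```python
import math

def mu(t, lam0, theta):
    """Musa-Okumoto expected cumulative failures by time t:
    mu(t) = (1/theta) * ln(lam0 * theta * t + 1)."""
    return math.log(lam0 * theta * t + 1.0) / theta

def reliability(dt, t, lam0, theta):
    """Probability of no failure in (t, t+dt], given the fitted model:
    R(dt | t) = exp(-(mu(t+dt) - mu(t)))."""
    return math.exp(-(mu(t + dt, lam0, theta) - mu(t, lam0, theta)))

# Illustrative parameters: initial failure intensity lam0, intensity-decay
# parameter theta; reliability over the next 10 hours after 100 hours of test.
print(reliability(10.0, 100.0, lam0=0.05, theta=0.02))
```

Because the failure intensity decays as testing proceeds, the same 10-hour window yields higher reliability later in testing. Models of this family take time-stamped failure data as input, which is exactly why a shift in operational profile, which changes when failures occur, can disturb their predictions.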
27

System Support for Improving the Reliability of MPI Applications and Libraries

Chen, Zhezhe 19 December 2013 (has links)
No description available.
28

Runtime Support for Improving Reliability in System Software

Gao, Qi 23 August 2010 (has links)
No description available.
29

Formalization Of Input And Output In Modern Operating Systems: The Hadley Model

Gerber, Matthew 01 January 2005 (has links)
We present the Hadley model, a formal descriptive model of input and output for modern computer operating systems. Our model is intentionally inspired by the Open Systems Interconnection model of networking; I/O as a process is defined as a set of translations between a set of computer-sensible forms, or layers, of information. To illustrate an initial application domain, we discuss the utility of the Hadley model and a potential associated I/O system as a tool for digital forensic investigators. To illustrate practical uses of the Hadley model we present the Hadley Specification Language, an essentially functional language designed to allow the translations that comprise I/O to be written in a concise format that allows for relatively easy verifiability. To further illustrate the utility of the language we present read/write Microsoft DOS FAT12 and read-only Linux ext2 file system specifications written in the new format. We prove the correctness of the read-only side of these specifications. We present test results from operating our HSL-driven system both in user mode on stored disk images and as part of a Linux kernel module that allows file systems to be read. We conclude by discussing future directions for the research.
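The model's central idea, I/O as composed translations between layers of representation, can be sketched outside of HSL. Everything below is an illustrative toy (the layer names and byte layout are invented, and are much simpler than a real FAT12 directory entry):

```python
# Sketch of I/O as composed translations between layers of representation.

def compose(*translations):
    """Chain per-layer translation functions into a single I/O pipeline."""
    def pipeline(data):
        for translate in translations:
            data = translate(data)
        return data
    return pipeline

# Toy layers: raw bytes -> one fixed-size record -> decoded fields -> name.
raw_to_record = lambda b: b[:32]                        # take a 32-byte record
record_to_entry = lambda r: {"name": r[0:8].rstrip()}   # decode one fixed field
entry_to_name = lambda e: e["name"].decode("ascii")

read_name = compose(raw_to_record, record_to_entry, entry_to_name)
print(read_name(b"README  TXT" + b" " * 32))  # 'README'
```

Writing each layer as a small, pure translation is what makes the read-only direction amenable to correctness proofs: each stage can be verified independently and the pipeline inherits the result.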
30

Compiler-Assisted Software Fault Tolerance for Microcontrollers

Bohman, Matthew Kendall 01 March 2018 (has links)
Commercial off-the-shelf (COTS) microcontrollers can be useful for non-critical processing on spaceborne platforms. Many of these microprocessors are inexpensive and consume little power. However, the software running on these processors is vulnerable to radiation upsets, which can cause unpredictable program execution or corrupt data. Space missions cannot allow these errors to interrupt functionality or destroy gathered data. As a result, several techniques have been developed to reduce the effect of these upsets. Some proposed techniques involve altering the processor hardware, which is impossible for a COTS device. Alternately, the software running on the microcontroller can be modified to detect or correct data corruption. There have been several proposed approaches for software mitigation. Some take advantage of advanced architectural features, others modify software by hand, and still others focus their techniques on specific microarchitectures. However, these approaches do not consider the limited resources of microcontrollers and are difficult to use across multiple platforms. This thesis explores fully automated software-based mitigation to improve the reliability of microcontrollers and microcontroller software in a high radiation environment. Several difficulties associated with automating software protection in the compilation step are also discussed. Previous mitigation techniques are examined, resulting in the creation of COAST (COmpiler-Assisted Software fault Tolerance), a tool that automatically applies software protection techniques to user code. Hardened code has been verified by a fault injection campaign; the mean work to failure increased, on average, by 21.6x. When tested in a neutron beam, the neutron cross sections of programs decreased by an average of 23x, and the average mean work to failure increased by 5.7x.
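The protection COAST automates is, at its core, redundancy plus voting. The sketch below shows the idea at runtime in Python; it is only an illustration (COAST itself transforms the program at compile time, duplicating or triplicating instructions and data rather than re-running whole functions):

```python
def vote(a, b, c):
    """Majority vote over three redundant copies; flags silent corruption."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: uncorrectable upset detected")

def protected(fn):
    """Run a computation three times and vote on the results (TMR-style).

    This only masks an upset that corrupts one of the three executions;
    compiler-level protection also replicates the data being computed on.
    """
    def wrapper(*args):
        return vote(fn(*args), fn(*args), fn(*args))
    return wrapper

@protected
def checksum(data):
    return sum(data) & 0xFF

print(checksum([10, 20, 30]))  # 60
```

The cost of this masking, extra instructions and extra state, is what the thesis's fault injection and neutron-beam experiments quantify against the gain in mean work to failure.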
