181

An independent evaluation of subspace facial recognition algorithms

Surajpal, Dhiresh Ramchander 23 December 2008 (has links)
In traversing the diverse field of biometric security and face recognition techniques, this investigation explores a rather rare comparative study of three of the most popular Appearance-based Face Recognition projection classes: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA). Both the linear and kernel alternatives are investigated, along with the four most widely accepted similarity measures: City Block (L1), Euclidean (L2), Cosine and Mahalanobis. Although comparisons between these classes can become fairly complex given the different task natures, algorithm architectures and distance metrics that must be taken into account, an important aspect of this study is that all methods are evaluated under completely equal working conditions, so that comparisons are fair and meaningful. In doing so, one is able to realise an independent study that contributes significantly to prior findings in the literature, either by verifying previous results, offering further insight into why certain conclusions were drawn, or by providing a better understanding of why certain claims should be disputed and under which conditions they may hold true. The experimental procedure examines ten algorithms in the categories of expression, illumination, occlusion and temporal delay; the results are then evaluated using a sequential combination of assessment tools that provide both intuitive and statistical decisiveness across the intra- and inter-class comparisons. In a bid to boost the overall efficiency and accuracy of the identification system, the ‘best’ algorithm in each category is then incorporated into a hybrid methodology, where the advantageous effects of fusion strategies are considered. This investigation explores the weighted-sum approach which, by fusing at the matching-score level, effectively harnesses the complementary strengths of the component algorithms and in doing so highlights the improved performance that hybrid implementations can provide. In the process, by first examining the previous literature works against one another and then relating the important findings of this work back to them, the primary objective is also met: to provide a newcomer with an insightful understanding of publicly available subspace techniques and how they compare when applied to face recognition.
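The score-level fusion described above can be illustrated with a minimal sketch. This is not the thesis's implementation: the min-max normalisation, the weights and the function names are assumptions made for illustration, and each matcher (PCA, LDA or ICA based) is simply assumed to output one similarity score per gallery identity.

```python
import numpy as np

def min_max_normalise(scores):
    """Map raw matching scores onto [0, 1] so scores from different matchers are comparable."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def weighted_sum_fusion(score_sets, weights):
    """Fuse per-algorithm similarity scores at the matching-score level and
    return the index of the best-matching gallery identity."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                 # weights sum to 1
    fused = sum(w * min_max_normalise(s) for w, s in zip(weights, score_sets))
    return int(np.argmax(fused))

# Example: hypothetical PCA and LDA similarity scores for one probe against 3 gallery identities.
pca_scores = [0.62, 0.91, 0.40]
lda_scores = [0.55, 0.80, 0.70]
print(weighted_sum_fusion([pca_scores, lda_scores], weights=[0.4, 0.6]))   # -> 1
```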
182

Total Synthesis of (+)-Discodermolide by Catalytic Stereoselective Borylation Reactions

Yu, Zhiyong January 2014 (has links)
Thesis advisor: James P. Morken / (+)-Discodermolide is a marine natural product and is one of the most potent microtubule stabilizers in human cell lines. Because of its unique linear structure and important properties, a number of total syntheses of (+)-discodermolide and its derivatives have been reported. Herein, an efficient, highly convergent, and stereocontrolled total synthesis is presented (Chapter 2). The synthesis relied on the development of three catalytic and stereoselective processes: platinum-catalyzed asymmetric diene diboration, nickel-catalyzed diastereoselective hydroboration of chiral dienes (Chapter 1), and nickel-catalyzed borylative diene-aldehyde coupling (see Chapter 4). Combination of these reactions allows preparation of the target in a short sequence. Moreover, the development of rhodium-catalyzed asymmetric hydroformylation (Chapter 3) makes this approach the first Roche ester free (+)-discodermolide synthesis. / Thesis (PhD) — Boston College, 2014. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Chemistry.
183

Ativação de componentes de software com a utilização de uma ontologia de componentes / Component loading with utilization of a components ontology

Lorza, Augusto Carbol 16 July 2007 (has links)
Many studies currently seek to add value to the information available on the Web in order to improve the results of users' interaction with it. One line of research is the Semantic Web, which proposes adding semantic information to the current Web by means of ontologies. The W3C, the international organization that defines Web standards, has already proposed several standards to make the Semantic Web viable; beyond standards, however, it is also necessary to create or adapt tools that exploit its potential. One tool that provides significant support to the current Web and that can be adapted to work with the Semantic Web is the Application Server. By adding semantic information in the form of ontologies, one obtains an Ontology-Based Application Server (OBAS). In this work a prototype system was developed to offer the minimum characteristics of an OBAS, and to that end Semantic Web technologies were investigated that could provide a solution compliant with the standards recommended by the W3C. The software components of an OBAS have their properties and behaviors related semantically through ontologies. Because an ontology is an explicit conceptual model, its component descriptions can be queried and reasoned over, improving the performance of the server by combining the components most appropriate to a task and simplifying programming, since it is no longer necessary to know all the details of a component in order to activate it.
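As a rough illustration of selecting components through an ontology rather than through hard-coded references, the sketch below builds a tiny component description in RDF and queries it with SPARQL. It assumes the Python `rdflib` library; the `obas:` namespace and the class and property names are invented for the example and are not taken from the thesis.

```python
from rdflib import Graph, Namespace, Literal, RDF

OBAS = Namespace("http://example.org/obas#")   # hypothetical ontology namespace

g = Graph()
# Describe two components and the services they provide.
g.add((OBAS.ReportRenderer, RDF.type, OBAS.Component))
g.add((OBAS.ReportRenderer, OBAS.providesService, Literal("pdf-rendering")))
g.add((OBAS.ChartRenderer, RDF.type, OBAS.Component))
g.add((OBAS.ChartRenderer, OBAS.providesService, Literal("chart-rendering")))

# The server picks a component for a task by querying the ontology
# instead of hard-coding component details in application code.
query = """
PREFIX obas: <http://example.org/obas#>
SELECT ?c WHERE {
    ?c a obas:Component ;
       obas:providesService "pdf-rendering" .
}
"""
for row in g.query(query):
    print("activate:", row.c)   # -> http://example.org/obas#ReportRenderer
```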
184

Conception et mise en oeuvre d'un langage réflexif de modélisation et programmation par composants / Design and Implementation of a Reflective Component-Oriented Programming and Modeling Language

Spacek, Petr 17 December 2013 (has links)
Component-based Software Engineering (CBSE), which produces software by connecting off-the-shelf, ready-to-use components, promises cost reduction during the development, maintenance and evolution of software. The recent period has seen a very important set of new results in this field. As the term "component" is very general, it encompasses much research with different objectives, offering various kinds of abstractions and mechanisms. One broadly accepted idea, however, is to model software with components organized into architectures and to generate code from such abstract descriptions. This is a good idea, but it raises the question of which languages are good candidates for the generated code. In current practice the design phase happens in the component world while the programming phase occurs in the object-oriented world, so the languages and technologies used for component-based development are only partially component-based. Our first claim is that using component-based languages to write the executable code is important precisely because the original component-based design artifacts (e.g. requirements, architectures) do not vanish at run time, making programs more understandable and reversible. It then becomes possible to imagine that design (modeling) and programming can be done at the same conceptual level, and why not in the same language. Objects are almost always chosen to implement component-based designs; an object is certainly the existing executable entity closest to a component as components are understood today, but close is not exactly the same. Our second claim is that component-based programming languages can be achieved by smoothly modifying object-oriented ones. Following these ideas, this thesis presents a new pure component-based programming and modeling language, named Compo, that incorporates in a simple and uniform way the core concepts needed to describe and implement components and component-based architectures: component, port, service and connection, together with the necessary mechanisms: instantiation, service invocation, composition and substitution. We also claim that the description of components, their architectures (structure) and their services (behavior) would benefit, as object descriptions do, from differential descriptions based on inheritance. We therefore propose a specification and implementation of an inheritance system built on a covariant specialization policy, with a corresponding dedicated substitution mechanism. We finally claim that making such a language fully reflective opens an interesting new alternative, in the component context, for any kind of model or program checking or architecture transformation. We revisit some standard solutions to obtain an original component-oriented reification of concepts, building an executable meta-model designed around the idea that "everything is a component". A complete prototype implementation of the Compo language has been achieved and is described in this thesis.
185

Extração de características de imagens de faces humanas através de wavelets, PCA e IMPCA / Features extraction of human faces images through wavelets, PCA and IMPCA

Bianchi, Marcelo Franceschi de 10 April 2006 (has links)
Image pattern recognition is an area of great interest to the scientific community. Feature extraction methods are able to extract representative characteristics from images while reducing the dimensionality of the data, producing the so-called feature vector. Given a query image, the goal of a human face recognition system is to search an image database and return the image most similar to the query according to a given criterion. This research addresses the generation of feature vectors for an image recognition system operating on databases of human face images, in order to support such queries. A feature vector is an n-dimensional numeric representation of an image, or part of it, describing its most representative details; this compact representation benefits the recognition process by reducing the dimensionality of the data and allows fast retrieval. An alternative approach to characterizing images for a face recognition system is a domain transform, whose main advantage is its effective characterization of local image properties. Wavelets differ from traditional Fourier techniques in the way they localize information in the time-frequency plane; in particular, they can trade one type of resolution for another, which makes them especially suitable for the analysis of non-stationary signals, representing the signal in different frequency bands, each with a resolution matching its scale. Wavelets have been successfully applied to image compression, enhancement, analysis, classification, characterization and retrieval, and one area where these properties have proved especially relevant is computer vision, through the representation and description of images. This work describes an approach to human face image recognition in which feature extraction is based on multiresolution wavelet decomposition using the Haar, Daubechies, Biorthogonal, Reverse Biorthogonal, Symlet and Coiflet filters. The PCA (Principal Component Analysis) and IMPCA (Image Principal Component Analysis) techniques were tested in combination with these filters, and the best results were obtained using the Biorthogonal wavelet together with IMPCA.
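A minimal sketch of a wavelet-plus-PCA feature extraction pipeline of the kind described is given below, using the PyWavelets and scikit-learn libraries. The filter (`bior2.2`), decomposition level, component count and the random stand-in images are illustrative assumptions, and the IMPCA variant (PCA applied directly to 2-D image matrices) is not shown.

```python
import numpy as np
import pywt                              # PyWavelets
from sklearn.decomposition import PCA

def wavelet_features(image, wavelet="bior2.2", level=2):
    """Return the flattened approximation sub-band of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx = coeffs[0]                   # low-frequency sub-band keeps the coarse face structure
    return approx.ravel()

# Toy "face database": 20 random 64x64 images stand in for real face images.
rng = np.random.default_rng(0)
faces = rng.random((20, 64, 64))

X = np.stack([wavelet_features(f) for f in faces])   # one feature vector per image
pca = PCA(n_components=10).fit(X)                    # further dimensionality reduction
X_reduced = pca.transform(X)

# Nearest-neighbour query: find the database image closest to a probe image.
probe = pca.transform(wavelet_features(faces[3]).reshape(1, -1))
best_match = int(np.argmin(np.linalg.norm(X_reduced - probe, axis=1)))
print(best_match)   # expected to be 3, since the probe is faces[3] itself
```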
186

Structural characterisation of Histidine Kinase 2

Wang, Liang January 2018 (has links)
Two-component systems (TCSs) are the predominant signal transduction pathways in prokaryotes and are also present in eukaryotic organisms such as algae, fungi, yeast and higher plants. TCSs play an important role in perceiving and responding to environmental signals, essentially implementing adaptation to the surrounding environment. Histidine Kinase 2 (Hik2) in cyanobacteria is a typical sensor histidine kinase, one component of a TCS, and has been identified as a homologue of the Arabidopsis Chloroplast Sensor Kinase (CSK). Previous research has shown that Hik2 regulates photosynthetic gene transcription through phosphorylation of two response regulators, Rre1 and RppA. A typical histidine kinase contains a variable sensor domain and a conserved kinase domain, and usually functions as a homodimer. This thesis describes the structural characterisation of Hik2, probing in particular its oligomeric states. Results obtained from size exclusion chromatography, native-PAGE, chemical cross-linking analyses and mass spectrometry, amongst others, show that a variety of Hik2 structural populations exists, a finding further validated by negative-stain transmission electron microscopy coupled with single-particle analysis. The Hik2 protein exists predominantly as a hexamer under low-salt conditions, and the addition of NaCl dissociates the hexamers into tetramers, a transition critical for the autophosphorylation activity of Hik2. A model is therefore proposed for how the oligomeric composition of Hik2 changes with salt concentration. In addition, although the sensor domain is typically responsible for detecting environmental input, it is not yet clear how Hik2 and CSK sense signals. In this thesis, the structures of the Hik2 and CSK sensor domains are analysed and discussed to aid our understanding of their mechanism of signal perception and transduction.
187

Hypothesis formulation in medical records space

Ba-Dhfari, Thamer Omer Faraj January 2017 (has links)
Patient medical records are a valuable resource that can be used for many purposes, including managing and planning for future health needs as well as clinical research. Health databases such as the Clinical Practice Research Datalink (CPRD) and many other similar initiatives can provide researchers with a useful data source on which they can test their medical hypotheses. However, this is only the case when researchers have a good set of hypotheses to test on the data. Conversely, the data may have other equally important areas that remain unexplored, and there is a chance that some important signals in the data could be missed. Further analysis is therefore required to make such hidden areas more obvious and attainable for future exploration and investigation. Data mining techniques can be effective tools for discovering patterns and signals in large-scale patient data sets, and they have been widely applied to different areas in the medical domain. Analysing patient data using such techniques therefore has the potential to explore the data and to provide a better understanding of the information in patient records. However, the heterogeneity and complexity of medical data can be an obstacle to applying data mining techniques, and much of the potential value of this data goes untapped. This thesis describes a novel methodology that reduces the dimensionality of primary care data, to make it more amenable to visualisation, mining and clustering. The methodology employs a combination of ontology-based semantic similarity and principal component analysis (PCA) to map the data into an appropriate and informative low-dimensional space. The aim of this thesis is to develop a novel methodology that provides a visualisation of patient records. This visualisation provides a systematic method that allows the formulation of new and testable hypotheses which can be fed to researchers to carry out the subsequent phases of research. In a small-scale study based on Salford Integrated Record (SIR) data, I have demonstrated that this mapping provides informative views of patient phenotypes across a population and allows the construction of clusters of patients sharing common diagnoses and treatments. The next phase of the research was to develop this methodology and explore its application using larger patient cohorts. Such data contains more precise relationships between features than small-scale data, and it also leads to an understanding of distinct population patterns and the extraction of common features. For these reasons, I applied the mapping methodology to patient records from the CPRD database. The study data set consisted of anonymised patient records for a population of 2.7 million patients. The work done in this analysis shows that the methodology scales as O(n) and did not require large computing resources. The low-dimensional visualisation of high-dimensional patient data allowed the identification of different subpopulations of patients across the study data set, where each subpopulation consisted of patients sharing similar characteristics such as age, gender and certain types of disease. A key finding of this research is the wealth of data that can be produced. In the first use case, looking at the stratification of patients with falls, the methodology yielded important hypotheses; however, this work has barely scratched the surface of how this mapping could be used.
It opens up the possibility of applying a wide range of data mining strategies that have not yet been explored. What the thesis has shown is one strategy that works, but there could be many more. Furthermore, there is no aspect of the implementation of this methodology that restricts it to medical data. The same methodology could equally be applied to the analysis and visualisation of many other sources of data that are described using terms from taxonomies or ontologies.
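The following sketch illustrates the general shape of such a pipeline: pairwise similarity between patient records, a PCA embedding of the similarity profiles, and clustering of the embedding. The diagnosis codes and the Jaccard overlap used here are toy stand-ins for real ontology-based semantic similarity over clinical codes, and none of the names or numbers come from the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy patient records: each patient is a set of diagnosis codes.  A real system
# would use ontology-based semantic similarity over clinical coding systems;
# here a simple Jaccard overlap of made-up codes stands in for that measure.
patients = [
    {"C10", "H33"}, {"C10", "G20"}, {"H33", "H34"},
    {"G20", "G21"}, {"C10", "C11"}, {"H34", "H36"},
]

def jaccard(a, b):
    """Overlap of two code sets, used here as a toy similarity measure."""
    return len(a & b) / len(a | b)

n = len(patients)
similarity = np.array([[jaccard(patients[i], patients[j]) for j in range(n)] for i in range(n)])

# Use each patient's similarity profile as a feature vector, reduce with PCA,
# then cluster the low-dimensional embedding.
embedding = PCA(n_components=2).fit_transform(similarity)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(labels)   # patients sharing codes tend to land in the same cluster
```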
188

Reverse engineering encapsulated components from legacy code

Arshad, Rehman January 2018 (has links)
Component-based development is an approach that revolves around the construction of systems from pre-built modular units (components). If legacy code can be reverse engineered to extract components, the extracted components can provide architectural reusability across multiple systems of the same domain. Current component-directed reverse engineering approaches are based on component models that belong to architecture description languages (ADLs). ADL-based components cannot be reused without configurational changes at the code level and without binding every required and provided service. Moreover, these component models support neither code-independent composition after the extraction of components nor the re-deposition of a composed configuration of components for future reuse. This thesis presents a reverse engineering approach that extracts components and addresses the limitations of current approaches, together with a tool called RX-MAN. Unlike ADL-based approaches, the presented approach is based on an encapsulated component model called X-MAN. X-MAN components are encapsulated because computation cannot go outside a component. X-MAN components cannot interact directly but only exogenously (composition is defined outside the components). Our approach offers code-independent composition after extracting components and, unlike ADLs, does not require the binding of all services. The evaluation of our approach shows that it can facilitate the reusability of legacy code by providing code-independent composition and the re-deposition of composed configurations of components for further reuse and composition.
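The notion of encapsulated components composed exogenously can be sketched in a few lines. The classes below are illustrative only; they are not the X-MAN component model or the RX-MAN tool's API, merely a hedged picture of composition being defined outside the components themselves.

```python
from typing import Callable, Dict

class AtomicComponent:
    """An encapsulated unit: it exposes named services but never calls other components."""
    def __init__(self, services: Dict[str, Callable[..., object]]):
        self._services = services

    def invoke(self, service: str, *args):
        return self._services[service](*args)

class Sequencer:
    """An exogenous composition connector: control flow lives here, outside the components."""
    def __init__(self, *steps):
        self._steps = steps   # (component, service) pairs executed in order

    def invoke(self, value):
        for component, service in self._steps:
            value = component.invoke(service, value)
        return value

# Two components extracted (hypothetically) from legacy code...
parser    = AtomicComponent({"parse": lambda text: text.split(",")})
validator = AtomicComponent({"validate": lambda fields: [f for f in fields if f.strip()]})

# ...and composed without touching their code: the connector alone defines the interaction.
pipeline = Sequencer((parser, "parse"), (validator, "validate"))
print(pipeline.invoke("a, ,b,c"))   # ['a', 'b', 'c']
```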
189

Intensity mapping : a new approach to probe the large-scale structure of the Universe

Collis Olivari, Lucas January 2018 (has links)
Intensity mapping (IM) is a new observational technique to survey the large-scale structure of matter using emission lines, such as the 21 cm emission line of atomic hydrogen (HI) and the rotational lines of the carbon monoxide molecule (CO). Sensitive radio surveys have the potential to detect the HI power spectrum at low redshifts (z < 1) in order to constrain the properties of dark energy and massive neutrinos. Observations of the HI signal will be contaminated by instrumental noise and, more significantly, by astrophysical foregrounds, such as the Galactic synchrotron emission, which is at least four orders of magnitude brighter than the HI signal. In this thesis, we study the ability of the Generalized Needlet Internal Linear Combination (GNILC) method to subtract radio foregrounds and to recover the cosmological HI signal for HI IM experiments. The GNILC method is a new technique that uses both frequency and spatial information to separate the components of the observed data. For simulated radio observations including HI emission, Galactic synchrotron, Galactic free-free, extragalactic point sources and thermal noise, we find that it can reconstruct the HI plus noise power spectrum with 7.0% accuracy for 0.13 < z < 0.48 (960 - 1260 MHz) and l < 400. In this work, GNILC is also applied to a particular CO IM experiment: the CO Mapping Array Pathfinder (COMAP). In this case, the simulated radio observations include CO emission, Galactic synchrotron, Galactic free-free, Galactic anomalous microwave emission, extragalactic point sources and thermal noise. We find that GNILC can reconstruct the CO plus noise power spectra with 7.3% accuracy for COMAP phase 1 (l < 1800) and 6.3% for phase 2 (l < 3000). In both cases, we have 2.4 < z < 3.4 (26 - 34 GHz). In this work, we also forecast the uncertainties on cosmological parameters for the upcoming HI IM experiments BINGO (BAO from Integrated Neutral Gas Observations) and SKA (Square Kilometre Array) phase-1 dish array operating in auto-correlation mode. For the optimal case of BINGO with no foregrounds, the combination of the HI angular power spectra with Planck results allows w to be measured with a precision of 4%, while the combination of the BAO acoustic scale with Planck gives a precision of 7%. We consider a number of potentially complicating effects, including foregrounds and redshift-dependent bias, which increase the uncertainty on w but not dramatically; in all cases the final uncertainty is found to be less than 8% for BINGO. For the combination of SKA-MID in auto-correlation mode (total power) with Planck, we find that, in ideal conditions, w can be measured with a precision of 4% for the redshift range 0.35 < z < 3 (350 - 1050 MHz) and 2% for 0 < z < 0.49 (950 - 1421 MHz). Extending the model to include the sum of neutrino masses yields a 95% upper limit of less than 0.30 eV for BINGO and less than 0.12 eV for SKA phase 1, competitive with the current best constraints in the case of BINGO and significantly better in the case of SKA.
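GNILC itself works in needlet space and estimates the dimension of the foreground subspace from the data, which is beyond a short example, but the internal linear combination weighting at its core can be sketched for the simpler case of a known mixing vector, as below. All inputs here are synthetic toy data, not simulations from the thesis.

```python
import numpy as np

def ilc_weights(freq_maps, mixing):
    """Standard ILC weights: minimise the output variance subject to unit
    response to the wanted component (weights @ mixing == 1).

    freq_maps : (n_freq, n_pix) array of observed maps
    mixing    : (n_freq,) response of the wanted component in each channel
    """
    cov = np.cov(freq_maps)                       # frequency-frequency covariance
    cinv_a = np.linalg.solve(cov, mixing)
    return cinv_a / (mixing @ cinv_a)

# Toy example: 4 channels, a flat-spectrum signal buried under a smooth bright foreground.
rng = np.random.default_rng(1)
n_freq, n_pix = 4, 5000
signal = rng.normal(0.0, 1.0, n_pix)              # wanted component (same in all channels)
foreground = np.outer(np.array([4.0, 3.0, 2.5, 2.0]), rng.normal(0.0, 5.0, n_pix))
noise = rng.normal(0.0, 0.3, (n_freq, n_pix))
maps = signal + foreground + noise                # broadcasting adds the signal to every channel

w = ilc_weights(maps, mixing=np.ones(n_freq))
recovered = w @ maps
print(np.corrcoef(recovered, signal)[0, 1])       # close to 1 if the cleaning worked
```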
190

Coverage-based testing strategies and reliability modeling for fault-tolerant software systems. / CUHK electronic theses & dissertations collection

January 2006 (has links)
Software permeates our modern society, and its complexity and criticality are ever increasing. Thus the need for the capability to tolerate software faults, particularly in critical applications, is evident. While fault-tolerant software is seen as a necessity, it also remains a controversial technique, and there is a lack of conclusive assessment about its effectiveness. / This thesis aims at providing a quantitative assessment scheme for a comprehensive evaluation of fault-tolerant software, including reliability model comparisons and trade-off studies with software testing techniques. First of all, we propose a comprehensive procedure for assessing fault-tolerant software for software reliability engineering, composed of four tasks: modeling, experimentation, evaluation and economics. Our ultimate objective is to construct a systematic approach to predicting the achievable reliability based on the software architecture and testing evidence, through an investigation of testing and modeling techniques for fault-tolerant software. / Motivated by the lack of real-world project data for investigating software testing and fault tolerance techniques together, we conduct a real-world project and engage multiple programming teams to independently develop program versions based on an industry-scale avionics application. Detailed experiments are conducted to study the nature, source, type, detectability and effect of the faults uncovered in the program versions, and to learn the relationship among these faults and the correlation of their resulting failures. Coverage-based testing as well as mutation testing techniques are adopted to reproduce mutants with real faults, which facilitates the investigation of the effectiveness of data flow coverage, mutation coverage and fault coverage for design diversity. / Then, based on the preliminary experimental data, further experimentation and detailed analyses of the correlations among these faults and their relation to the resulting failures are carried out. The results are further applied to current reliability modeling techniques for fault-tolerant software to examine their effectiveness and accuracy. / Next, we investigate the effect of code coverage on fault detection, which is the underlying intuition of coverage-based testing strategies. From our experimental data, we find that code coverage is a moderate indicator of the capability of fault detection on the whole test set, but the effect of code coverage on fault detection varies under different testing profiles. The correlation between the two measures is high with exceptional test cases, but weak in normal testing. Moreover, our study shows that code coverage can be used as a good filter to reduce the size of the effective test set, although this is more evident for exceptional test cases. / Furthermore, to investigate some "variants" as well as "invariants" of fault-tolerant software, we perform an empirical investigation evaluating reliability features through a comprehensive comparison between two projects: our project and the NASA 4-University project. Based on the same specification for program development, these two projects exhibit some common as well as different features. The testing results of two comprehensive operational testing procedures involving hundreds of thousands of test cases are collected and compared. Similar as well as dissimilar faults are observed and analyzed, indicating common problems related to the same application in both projects. The small number of coincident failures in the two projects nevertheless provides supportive evidence for N-version programming, while the observed reliability improvement suggests some trends in software development over the past twenty years. / Finally, we formulate the relationship between code coverage and fault detection. Although our two current models are in simple mathematical forms, they can predict the percentage of faults detected from the code coverage achieved by a certain test set. We further incorporate this formulation into traditional reliability growth models, not only for fault-tolerant software but also for general software systems. Our empirical evaluations show that our new reliability model can achieve more accurate reliability assessment than the traditional Non-homogeneous Poisson model. / Cai Xia. / "September 2006." / Adviser: Rung Tsong Michael Lyu. / Source: Dissertation Abstracts International, Volume: 68-03, Section: B, page: 1715. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 165-181). / Abstracts in English and Chinese.
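For context on the "traditional" reliability growth modeling mentioned above, the sketch below fits the classical Goel-Okumoto non-homogeneous Poisson process model, whose mean value function is m(t) = a(1 - e^(-b t)), to made-up cumulative failure counts; it is not the thesis's coverage-based model.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Mean cumulative number of failures by time t under the Goel-Okumoto NHPP model."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical test data: cumulative failures observed at the end of each test week.
weeks = np.arange(1, 11)
cum_failures = np.array([6, 11, 15, 18, 21, 23, 24, 26, 27, 27])

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(30.0, 0.2))
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.2f}")
print(f"expected remaining faults after week 10: {a_hat - goel_okumoto(10, a_hat, b_hat):.1f}")
```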
