71

Modelo de qualidade para componentes de software / Software component quality model

Darley Rosa Peres 18 December 2006
Among the software development technologies that promote reuse with the goal of building systems in less time and at lower cost, without sacrificing product quality, is Component-Based Development (CBD). CBD consists of building systems by composing software components according to a specific development process. To guarantee the quality of those systems, it is important to guarantee the quality of their components. The lack of quality assurance for software components intended for reuse is one of the factors inhibiting CBD, and research on software component quality is scarce. The main objective of this work was therefore the definition of a quality model specific to software components, providing the basis for specifying their quality requirements and for evaluating their quality. The model is grounded in the ISO/IEC 9126 and ISO/IEC 12119 standards and in the specialized literature. A tool to support evaluations of components (and of software products in general) was also developed. Four component evaluations were then carried out as case studies to verify the applicability and usefulness of the quality model and the tool. Two questionnaires answered by the evaluators responsible for these evaluations collected their assessments of the quality model and the tool.
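The abstract does not reproduce the model itself. As a rough sketch of what an ISO/IEC 9126-grounded component quality model can look like in practice, the fragment below arranges characteristics and sub-characteristics into a weighted hierarchy; the characteristic names come from ISO/IEC 9126, but the weights, scores, and aggregation rule are illustrative assumptions, not the thesis's model.

```python
# Minimal sketch of an ISO/IEC 9126-style quality model for a component.
# The characteristic names follow ISO/IEC 9126; the weights and scores
# are illustrative assumptions, not values from the thesis.
from dataclasses import dataclass, field

@dataclass
class Characteristic:
    name: str
    weight: float                               # relative importance; weights sum to 1.0
    scores: dict = field(default_factory=dict)  # sub-characteristic -> score in [0, 1]

    def score(self) -> float:
        # Average of the sub-characteristic scores for this characteristic.
        return sum(self.scores.values()) / len(self.scores)

def evaluate(model: list[Characteristic]) -> float:
    """Weighted aggregate quality score of a component, in [0, 1]."""
    return sum(c.weight * c.score() for c in model)

model = [
    Characteristic("functionality",   0.30, {"suitability": 0.9, "accuracy": 0.8}),
    Characteristic("reliability",     0.25, {"maturity": 0.7, "recoverability": 0.6}),
    Characteristic("usability",       0.15, {"understandability": 0.8}),
    Characteristic("efficiency",      0.10, {"time_behaviour": 0.9}),
    Characteristic("maintainability", 0.10, {"changeability": 0.7}),
    Characteristic("portability",     0.10, {"replaceability": 0.8}),
]
print(f"aggregate quality: {evaluate(model):.2f}")
```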
72

Introducing Mode Switch in Component-Based Software Development

Yin, Hang January 2015
Self-adaptivity, characterized by the ability to dynamically adjust behavior at runtime, is a growing trend in the evolution of modern embedded systems. While self-adaptive systems tend to be flexible and autonomous, self-adaptivity may inevitably complicate software design, testing and analysis. A strategy for taming the growing software complexity of self-adaptive systems is to partition system behaviors into different operational modes specified at design time. Such a multi-mode system can change behavior by switching between modes at runtime under certain circumstances. Multi-mode systems can benefit from a complementary approach to the software development of complex systems: Component-Based Software Engineering (CBSE), which fosters reuse of independently developed software components. However, state-of-the-art component-based development of multi-mode systems does not take full advantage of CBSE, as reuse of modes at the component level is barely addressed. Modes are often treated as system properties, while mode switches are handled by a global mode manager. This centralized mode management requires global information about all components, which may be inaccessible in component-based systems. Another potential problem is that a single mode manager does not scale well, particularly at design time, for a large number of components and modes.

In this thesis we propose a distributed solution to the component-based development of multi-mode systems, aiming for more efficient and scalable mode management. Our goal is to fully incorporate modes in software component reuse, supporting reuse of multi-mode components, i.e., components able to run in multiple modes. We have developed a generic framework, the Mode-Switch Logic (MSL), which not only supports reuse of multi-mode components but also provides runtime mechanisms for handling mode switches. MSL includes three fundamental elements: (1) a mode-aware component model with a formal specification of reusable multi-mode software components; (2) a mode mapping mechanism for the seamless composition of multi-mode components; and (3) a mode-switch runtime mechanism which is executed by each component in isolation from its functional execution and which coordinates the mode switches of different components without the need for global mode information. The mode-switch runtime mechanism has been verified by model checking in conjunction with mathematical proofs. We also provide a mode-switch timing analysis for the runtime mechanism so that real-time requirements can be respected.

MSL addresses the mode aspect of a system irrespective of component execution semantics, and is thus independent of the choice of component model. We have integrated MSL into the ProCom component model, extending it with support for reuse of multi-mode components and distributed mode-switch handling. Although the distributed mode-switch handling of MSL is more flexible and scalable than the conventional centralized approach, when components are deployed on a single hardware platform and global mode information is available, centralized mode-switch handling is more efficient in terms of runtime overhead and mode-switch time. Hence, MSL is supplemented with a mode transformation technique that enhances runtime mode-switch efficiency by converting the distributed mechanism into a centralized one. MSL, together with the mode transformation technique, has been implemented in a prototype tool in which one can build multi-mode systems by reusing multi-mode components.
The applicability of MSL is demonstrated in two proof-of-concept case studies. / ARROWS - Design Techniques for Adaptive Embedded Systems
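The formal specification of MSL lives in the thesis itself. As a minimal intuition for mode mapping and distributed mode-switch propagation, the sketch below lets each component carry only its own local mode mapping rather than global mode information; all names and the propagation scheme are simplified illustrations, not the MSL runtime mechanism.

```python
# Rough sketch of mode mapping and local mode-switch propagation,
# loosely inspired by the thesis's Mode-Switch Logic (MSL).
# All class and method names here are hypothetical illustrations.

class Component:
    def __init__(self, name, modes, mode_mapping=None, children=None):
        self.name = name
        self.modes = modes
        self.mode = modes[0]                    # first listed mode is the initial mode
        # mode_mapping: own new mode -> {child name: child's new mode}
        self.mode_mapping = mode_mapping or {}
        self.children = children or []

    def switch_mode(self, new_mode):
        """Switch this component's mode and propagate via its local mode mapping.

        Each component needs only its own mapping, not global mode
        information -- the distributed flavour of MSL's runtime mechanism.
        """
        assert new_mode in self.modes, f"{self.name} has no mode {new_mode}"
        self.mode = new_mode
        for child in self.children:
            child_mode = self.mode_mapping.get(new_mode, {}).get(child.name)
            if child_mode is not None:
                child.switch_mode(child_mode)

# A composite whose subcomponents' modes are mapped from the parent's mode.
sensor = Component("sensor", ["normal", "low_power"])
ctrl   = Component("ctrl",   ["active", "standby"])
top    = Component("top", ["nominal", "degraded"],
                   mode_mapping={"degraded": {"sensor": "low_power", "ctrl": "standby"}},
                   children=[sensor, ctrl])

top.switch_mode("degraded")
print(sensor.mode, ctrl.mode)   # low_power standby
```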
73

Bezpečnost a důvěra v komponentovém modelu DEECo / Security and Trust in the DEECo Component Model

Štumpf, Ondřej January 2015
DEECo represents an example of a Cyber-Physical System (CPS) consisting of a potentially vast number of components able to share data with each other. So far, access to data has not been restricted, enabling components to exploit sensitive data owned by other components. The goal of this work is to analyze security threats in distributed environments such as DEECo and to propose a security solution that provides both physical security of component data and an access control mechanism. However, while confidentiality may be critical to certain applications, data integrity is crucial to almost all of them. This work therefore also proposes a trust model, which prevents components from operating on defective or malicious data. Both proposed models are realized in jDEECo, a Java implementation of DEECo.
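The abstract names two mechanisms: access control over shared knowledge and a trust model over received data. The sketch below illustrates only these two ideas under assumed names; it is not the jDEECo API.

```python
# Illustrative sketch of the two ideas named in the abstract:
# (1) access control on a component's shared knowledge, and
# (2) a trust threshold applied before accepting received data.
# Names and the trust heuristic are hypothetical, not the jDEECo API.

class Knowledge:
    def __init__(self, owner, readers, data):
        self.owner = owner
        self.readers = set(readers)   # components allowed to read this knowledge
        self.data = data

def read(knowledge, requester):
    """Access-control check: only listed readers may see the data."""
    if requester not in knowledge.readers:
        raise PermissionError(f"{requester} may not read {knowledge.owner}'s data")
    return knowledge.data

def accept(value, sender_trust, threshold=0.5):
    """Trust check: reject data coming from components below a trust threshold."""
    return value if sender_trust >= threshold else None

k = Knowledge("vehicle_A", readers={"vehicle_B"}, data={"position": (3, 7)})
print(read(k, "vehicle_B"))                 # allowed reader -> data returned
print(accept({"position": (3, 7)}, 0.9))    # trusted sender -> accepted
print(accept({"position": (0, 0)}, 0.2))    # untrusted sender -> None
```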
74

Implementation of the DEECo component framework for embedded systems

Matěna, Vladimír January 2014
Recent developments in the field of distributed and decentralized cyber-physical systems have led to the emergence of the DEECo model. Since many DEECo use cases are embedded applications, it is interesting to evaluate DEECo on embedded hardware. Currently the only implementation is the reference one, written in Java, which cannot be used for embedded applications. As part of this thesis, a C++ mapping of DEECo and the embedded CDEECo++ framework were designed, using the FreeRTOS operating system for task scheduling and synchronization. An example application designed for the STM32F4 board demonstrates the usability of the framework. This thesis contains a description of the DEECo mapping into the C++ language, the source code of the CDEECo++ framework, documentation, and the example application, including basic measurements of its real-time properties.
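As a language-neutral illustration of the concepts the thesis maps onto C++, a DEECo component is essentially local knowledge plus periodic processes over it; in CDEECo++ each process would become a FreeRTOS task, whereas the toy scheduler below merely simulates a few rounds. All names here are hypothetical.

```python
# Language-neutral sketch of the DEECo concepts the thesis maps to C++:
# a component is local knowledge plus periodic processes over that knowledge.
# In CDEECo++ each process becomes a FreeRTOS task; here a toy loop
# simulates a few scheduling rounds. All names are illustrative.

class DEECoComponent:
    def __init__(self, knowledge):
        self.knowledge = knowledge          # the component's local data
        self.processes = []                 # (period_in_ticks, function) pairs

    def process(self, period):
        """Register a function as a periodic process over this knowledge."""
        def register(fn):
            self.processes.append((period, fn))
            return fn
        return register

def run(component, ticks):
    # Toy scheduler: in CDEECo++ this role is played by FreeRTOS tasks.
    for t in range(1, ticks + 1):
        for period, fn in component.processes:
            if t % period == 0:
                fn(component.knowledge)

robot = DEECoComponent({"battery": 100, "position": 0})

@robot.process(period=2)
def move(k):
    k["position"] += 1
    k["battery"] -= 5

run(robot, ticks=10)
print(robot.knowledge)   # position advanced, battery drained
```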
75

Quality prediction for component-based software development: techniques and a generic environment.

January 2002
Cai Xia.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 105-110). Abstracts in English and Chinese.

Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Component-Based Software Development and Quality Assurance Issues --- p.1
Chapter 1.2 --- Our Main Contributions --- p.5
Chapter 1.3 --- Outline of This Thesis --- p.6
Chapter 2 --- Technical Background and Related Work --- p.8
Chapter 2.1 --- Development Framework for Component-based Software --- p.8
Chapter 2.1.1 --- Common Object Request Broker Architecture (CORBA) --- p.9
Chapter 2.1.2 --- Component Object Model (COM) and Distributed COM (DCOM) --- p.12
Chapter 2.1.3 --- Sun Microsystems's JavaBeans and Enterprise JavaBeans --- p.14
Chapter 2.1.4 --- Comparison among Different Frameworks --- p.17
Chapter 2.2 --- Quality Assurance for Component-Based Systems --- p.19
Chapter 2.2.1 --- Traditional Quality Assurance Issues --- p.19
Chapter 2.2.2 --- The Life Cycle of Component-based Software Systems --- p.25
Chapter 2.2.3 --- Differences between Components and Objects --- p.26
Chapter 2.2.4 --- Quality Characteristics of Components --- p.27
Chapter 2.3 --- Quality Prediction Techniques --- p.32
Chapter 2.3.1 --- ARMOR: A Software Risk Analysis Tool --- p.33
Chapter 3 --- A Quality Assurance Model for CBSD --- p.35
Chapter 3.1 --- Component Requirement Analysis --- p.38
Chapter 3.2 --- Component Development --- p.39
Chapter 3.3 --- Component Certification --- p.40
Chapter 3.4 --- Component Customization --- p.42
Chapter 3.5 --- System Architecture Design --- p.43
Chapter 3.6 --- System Integration --- p.44
Chapter 3.7 --- System Testing --- p.45
Chapter 3.8 --- System Maintenance --- p.46
Chapter 4 --- A Generic Quality Assessment Environment: ComPARE --- p.48
Chapter 4.1 --- Objective --- p.50
Chapter 4.2 --- Metrics Used in ComPARE --- p.53
Chapter 4.2.1 --- Metamata Metrics --- p.55
Chapter 4.2.2 --- JProbe Metrics --- p.57
Chapter 4.2.3 --- Application of Metamata and JProbe Metrics --- p.58
Chapter 4.3 --- Models Definition --- p.61
Chapter 4.3.1 --- Summation Model --- p.61
Chapter 4.3.2 --- Product Model --- p.62
Chapter 4.3.3 --- Classification Tree Model --- p.62
Chapter 4.3.4 --- Case-Based Reasoning Model --- p.64
Chapter 4.3.5 --- Bayesian Network Model --- p.65
Chapter 4.4 --- Operations in ComPARE --- p.66
Chapter 4.5 --- ComPARE Prototype --- p.68
Chapter 5 --- Experiments and Discussions --- p.70
Chapter 5.1 --- Data Description --- p.71
Chapter 5.2 --- Experiment Procedures --- p.73
Chapter 5.3 --- Modeling Methodology --- p.75
Chapter 5.3.1 --- Classification Tree Modeling --- p.75
Chapter 5.3.2 --- Bayesian Belief Network Modeling --- p.80
Chapter 5.4 --- Experiment Results --- p.83
Chapter 5.4.1 --- Classification Tree Results Using CART --- p.83
Chapter 5.4.2 --- BBN Results Using Hugin --- p.86
Chapter 5.5 --- Comparison and Discussion --- p.90
Chapter 6 --- Conclusion --- p.92
Chapter A --- Classification Tree Report of CART --- p.95
Chapter B --- Publication List --- p.104
Bibliography --- p.105
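According to this table of contents, ComPARE predicts component quality from code metrics (Metamata, JProbe) using models including classification trees built with CART and Bayesian belief networks built with Hugin. A minimal sketch of the classification-tree idea follows; the metric values and labels below are invented for illustration and are not the thesis's data.

```python
# Minimal sketch of quality prediction from code metrics with a
# classification tree, the CART-style model the thesis evaluates.
# The metric values and fault labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [lines_of_code, cyclomatic_complexity, fan_out]
X = [[120, 4, 2], [900, 25, 14], [300, 8, 5],
     [1500, 40, 20], [80, 2, 1], [700, 18, 9]]
y = [0, 1, 0, 1, 0, 1]   # 1 = fault-prone component, 0 = clean

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(tree.predict([[400, 12, 6]]))   # predicted fault-proneness of a new component
```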
76

Variable selection in principal component analysis: using measures of multivariate association.

Sithole, Moses M. January 1992
This thesis is concerned with the problem of selecting important variables in Principal Component Analysis (PCA) in such a way that the selected subsets of variables retain, as much as possible, the overall multivariate structure of the complete data. Throughout the thesis, the criteria used to meet this requirement are collectively referred to as measures of Multivariate Association (MVA). Most of the currently available selection methods may lead to inappropriate subsets, while Krzanowski's (1987) M₂-Procrustes criterion successfully identifies structure-bearing variables, particularly when groups are present in the data. Our major objective, however, is to utilize the idea of multivariate association to select subsets of the original variables which preserve any (unknown) multivariate structure that may be present in the data.

The first part of the thesis is devoted to a study of the choice of the number of components (say, k) to be used in the variable selection process. Various methods that exist in the literature for choosing k are described, and comparative studies on these methods are reviewed. Currently available methods based exclusively on the eigenvalues of the covariance or correlation matrices, and those based on cross-validation, are unsatisfactory. Hence, we propose a new technique for choosing k based on the bootstrap methodology. A full comparative study of this new technique and the cross-validatory choice of k proposed by Eastment and Krzanowski (1982) is then carried out using data simulated in Monte Carlo experiments.

The remainder of the thesis focuses on variable selection in PCA using measures of MVA. Various existing selection methods are described, and comparative studies on these methods available in the literature are reviewed. New methods for selecting variables, based on measures of MVA, are then proposed and compared among themselves as well as with the M₂-Procrustes criterion. This comparison is based on Monte Carlo simulation, and the behaviour of the selection methods is assessed in terms of the performance of the selected variables.

In summary, the Monte Carlo results suggest that the proposed bootstrap technique for choosing k generally performs better than the cross-validatory technique of Eastment and Krzanowski (1982). Similarly, the Monte Carlo comparison of the variable selection methods shows that the proposed methods are comparable with or better than Krzanowski's (1987) M₂-Procrustes criterion. These conclusions are mainly based on data simulated by means of Monte Carlo experiments; however, the techniques for choosing k and the various variable selection techniques are also evaluated on some real data sets. Some comments on alternative approaches and suggestions for possible extensions conclude the thesis.
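The abstract does not spell out the bootstrap criterion itself. The sketch below shows one generic way a bootstrap can inform the choice of k, retaining components whose explained-variance ratio stays above an average-eigenvalue threshold across resamples; it illustrates the idea only, not the author's exact procedure.

```python
# Generic bootstrap sketch for choosing the number of PCA components:
# resample the data, recompute eigenvalue ratios, and keep components whose
# explained-variance ratio is stably above an (assumed) threshold.
# This illustrates the idea only; it is not the thesis's exact criterion.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=100)   # induce one strong component

def explained_ratios(data):
    cov = np.cov(data, rowvar=False)
    vals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # eigenvalues, descending
    return vals / vals.sum()

B, threshold = 200, 1.0 / X.shape[1]   # threshold: average-eigenvalue rule
ratios = np.array([explained_ratios(X[rng.integers(0, len(X), len(X))])
                   for _ in range(B)])
lower = np.percentile(ratios, 5, axis=0)   # bootstrap lower bound per component
k = int(np.sum(lower > threshold))
print("retain k =", k)
```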
77

Optimisation of the Biocatalytic Component in a Ferricyanide Mediated Approach to Rapid Biochemical Oxygen Demand Analysis

Morris, Kristy January 2005
A novel rapid method for the determination of biochemical oxygen demand (BOD) has been developed. By replacing oxygen, the terminal electron acceptor in the microbial oxidation of organic substrate, with the ferricyanide ion, a significant increase in the rate of the biochemical reaction could be achieved. This arises from the high solubility of the ferricyanide ion compared to oxygen, which allows elevated microbial populations without rapid depletion of the electron acceptor. As a result, the BOD of a sample can be determined within 1-3 hours, compared to 5 days with the standard BOD5 assay. A range of microorganisms were shown to be able to use the ferricyanide ion as an alternative electron acceptor for the biodegradation of a range of organic compounds in the ferricyanide-mediated BOD (FM-BOD) assay. The most suitable biocatalyst for the FM-BOD method, however, was shown to be a mixture of microorganisms capable of degrading large amounts and many types of compounds. These mixed consortia included a synthetic mixture prepared in our laboratory and two commercially available consortia, BODseed™ and Bi-Chem™. When these seed materials were employed in the FM-BOD assay, the method was shown to closely estimate the BOD5 values of real wastewater samples. The linear dynamic working range of the FM-BOD method was also greatly extended compared to the standard BOD5 assay (nearly 50 times greater) and other oxygen-based BOD biosensors. The immobilisation of the microbial consortia by both gel entrapment and freeze-drying was shown to greatly reduce the preparation and handling time of the mixed consortia for use in the FM-BOD method. Immobilisation of the mixed microbial consortium in LentiKats®, a PVA hydrogel, resulted in a marked increase in the stability of the biocatalyst. Diffusion limitations arising from the gel matrix, however, reduced the rate and extent of the bioreaction as well as the linear dynamic working range of the method. Freeze-drying was shown to circumvent some of the limitations identified with gel entrapment for the immobilisation of the mixed consortia. The freeze-dried consortia could be used off the shelf and showed reduced diffusional restrictions. A marked decrease in the viability of the microorganisms was observed directly following the freeze-drying process and in subsequent storage. Carrageenan, however, was shown to afford a significant degree of protection to the cells during the freeze-drying process.
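The oxygen-equivalence behind the method rests on standard electron-acceptor stoichiometry. Assuming the usual half-reactions (they are not quoted in the abstract), four moles of ferricyanide accept the same number of electrons as one mole of oxygen, so an oxygen-equivalent demand can be recovered from the ferrocyanide produced:

```latex
% Assumed standard half-reactions, not quoted from the thesis:
%   O2 + 4H^+ + 4e^- -> 2H2O                  (4 e^- per O2)
%   [Fe(CN)6]^{3-} + e^- -> [Fe(CN)6]^{4-}    (1 e^- per ferricyanide)
% Hence 4 mol of ferricyanide accept the electrons of 1 mol of O2:
\mathrm{BOD_{eq}} \;=\; \frac{n_{\mathrm{[Fe(CN)_6]^{4-}}}}{4}
      \cdot \frac{32\ \mathrm{g\,O_2\,mol^{-1}}}{V_{\mathrm{sample}}}
```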
78

The ontological evaluation of the requirements model when shifting from a traditional to a component-based paradigm in information systems re-engineering

Valverde, Raul January 2008
The vast majority of present legacy information systems were implemented using the traditional paradigm. The traditional paradigm consists of modeling techniques used by system analysts, such as System Flow Charts and Data Flow Diagrams (DFD), to capture, during the analysis phase, the activities within a system. However, with recent developments, particularly trends towards e-Commerce applications, platform independence, reusability of pre-built components, capacity for reconfiguration and higher reliability, many organizations are realizing they will need to re-engineer their systems into new component-based systems that meet these trends, given the limitations of legacy systems in adapting to these new technical requirements. There is a high degree of interest and concern in establishing whether or not a full migration to a more portable and scalable component-based architecture will be able to represent the legacy business requirements in the underlying requirements model of the re-engineered information systems.

As a result, this study poses the question: is the resulting component-based requirements model ontologically equivalent to the legacy requirements model when shifting paradigms in the re-engineering process? After a literature review, the research study is justified given the differences in requirements modeling between component-based and traditional paradigms, which give an indication that the resulting component model might not represent the same business requirements represented in the legacy system requirements model.

The study evaluated the requirements models generated by the component-based and traditional approaches when shifting paradigms in the re-engineering process, in order to verify that the re-engineered component-based requirements model was capable of representing the same business requirements as the legacy system. Design science and an ontological evaluation using the Bunge-Wand-Weber (BWW) model were the central research methodologies for this study. A legacy system was selected as part of the case study and re-engineered using the component-based paradigm with the help of UML diagrams. The requirements model of the legacy system was recovered using reverse engineering and compared to the component-based requirements model using normalized reference models generated with the help of BWW transformation maps. These maps revealed that the re-engineered requirements models were capable of representing the same business requirements as the legacy system. A set of rules was suggested for re-engineering legacy into component-based information systems, to ensure the same representation of the legacy system's requirements in the re-engineered requirements model. Finally, this research included directions for future research that put emphasis on the development of automated software tools for systems re-engineering that could implement the rules suggested in this study and the ontological evaluation approach used.
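As a toy illustration of what such an ontological comparison amounts to, each requirements model can be normalized to the set of BWW constructs it represents and the two sets compared; the construct names below are illustrative, and the thesis's transformation maps are far richer than this.

```python
# Toy illustration of an ontological comparison in the spirit of the
# Bunge-Wand-Weber (BWW) evaluation the thesis performs: each requirements
# model is normalized to the set of BWW constructs it represents, and the
# two sets are compared. Construct names here are illustrative only.

legacy_model    = {"thing", "property", "state", "transformation", "coupling"}
component_model = {"thing", "property", "state", "transformation", "coupling"}

missing = legacy_model - component_model   # constructs lost in re-engineering
extra   = component_model - legacy_model   # constructs with no legacy counterpart

print("ontologically equivalent:", not missing and not extra)
```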
79

An integrated methodology for assessing physical and technological life of products for reuse

Rugrungruang, Fatida, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2008
Strategies for reuse of components are important in order to create a closed-loop manufacturing system. Over recent decades the notion has been gaining ground for environmental and legislative reasons. Reuse of components is desirable and in many cases may be economically beneficial. However, the implementation of reuse strategies has been hindered by the need for reliable methodologies to assess the remaining life and reuse potential of used components. The estimation of the remaining life is problematic, as the useful life of a component is affected by several causes of obsolescence, the most common being physical and technological. So far, little research has attempted to address these issues simultaneously and to integrate them. This thesis seeks to develop methodologies that aid in predicting the integrated remaining lifetime of used components. There are three core parts to this research. First, the methodology determines the remaining life of used components from the physical-lifetime perspective, derived from the estimation of physical failure using failure-rate data and the statistical analysis of usage intensity and age as obtained from a customer survey. Second, the research presents the use of technological forecasting to predict the remaining technological life. As this is influenced by technological progress, the forecast was developed on the basis of product technology clusters and market trend extrapolation analysis. Finally, the resulting estimations from the two aspects were combined to obtain an integrated estimate of the remaining life of components. Reuse of the components in a product is justified when the remaining life is greater than the average expected lifespan of the product. Two cases of domestic appliances, televisions and washing machines, were used to highlight and demonstrate the validity of the proposed methodology. The results show that the proposed method provides practitioners with a promising tool for end-of-life decision making. This is particularly attractive when used as a preliminary decision-support tool prior to time-consuming and costly processes such as disassembly and quality testing.
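One natural reading of the integration step, sketched below, is that remaining life is bounded by whichever of physical wear-out or technological obsolescence comes first; taking the minimum is an assumption about the combination rule, which the thesis may refine.

```python
# Sketch of the integrated reuse decision described in the abstract:
# remaining life is bounded by both physical wear-out and technological
# obsolescence; reuse is justified if it exceeds the product's expected
# lifespan. Taking the minimum is an assumed reading of the integration step.

def remaining_life(physical_years: float, technological_years: float) -> float:
    return min(physical_years, technological_years)

def reusable(physical_years, technological_years, expected_lifespan):
    return remaining_life(physical_years, technological_years) > expected_lifespan

# A television whose hardware could last 9 more years but whose technology
# generation is expected to be obsolete in 4 (illustrative numbers only):
print(remaining_life(9.0, 4.0))   # 4.0 -- technology is the binding constraint
print(reusable(9.0, 4.0, 7.0))    # False: reuse is not justified
```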
80

Quasi-objective Nonlinear Principal Component Analysis and applications to the atmosphere

Lu, Beiwei 05 1900
NonLinear Principal Component Analysis (NLPCA) using three-hidden-layer feed-forward neural networks can produce solutions that over-fit the data and are non-unique. These problems have been dealt with by subjective methods during network training. This study shows that these problems are intrinsic to the three-hidden-layer architecture. A simplified two-hidden-layer feed-forward neural network that has no encoding layer and no bottleneck or output biases is proposed. This new, compact NLPCA model alleviates these problems without employing the subjective methods and is therefore called quasi-objective.

The compact NLPCA is applied to the zonal winds observed at seven pressure levels between 10 and 70 hPa in the equatorial stratosphere to represent the Quasi-Biennial Oscillation (QBO) and investigate its variability and structure. The two nonlinear principal components of the dataset offer a clear picture of the QBO. In particular, their structure shows that the QBO phase consists of a predominant 28.4-month cycle that is modulated by an 11-year cycle and a longer-period cycle. The significant difference in wind variability between cold and warm seasons and the tendency for seasonal synchronization of the QBO phases are well captured. The one-dimensional NLPCA approximation of the dataset provides a better representation of the QBO than classical principal component analysis, and a better description of the asymmetry of the QBO between westerly and easterly shear zones and between their transitions.

The compact NLPCA is then applied to the Arctic Oscillation (AO) index together with the aforementioned zonal winds to investigate the relationship of the AO with the QBO. The NLPCA of the AO index and zonal-wind dataset shows clearly that, in the covariation of the two oscillations, the phase defined by the two nonlinear principal components progresses with a predominant 28.4-month periodicity, plus the 11-year and longer-period modulations. Large positive values of the AO index occur when westerlies prevail near the middle and upper levels of the equatorial stratosphere; large negative values arise when easterlies occupy more than half the layer of the equatorial stratosphere.
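Under a direct reading of that description, the compact network maps the input straight to a nonlinear bottleneck without bias, then through one biased decoding layer to an unbiased output. The forward pass below follows that reading; the layer sizes and tanh transfer functions are assumptions, not the thesis's exact configuration.

```python
# Forward pass of the compact two-hidden-layer NLPCA network as described:
# input -> nonlinear bottleneck (no bias) -> hidden decoding layer (biased)
# -> output (no bias). Layer sizes and tanh nonlinearities are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_pc, n_hidden = 7, 2, 5     # e.g. 7 pressure levels, 2 nonlinear PCs

W1 = rng.normal(size=(n_pc, n_in))       # input -> bottleneck, no bias
W2 = rng.normal(size=(n_hidden, n_pc))   # bottleneck -> decoding layer
b2 = rng.normal(size=n_hidden)
W3 = rng.normal(size=(n_in, n_hidden))   # decoding layer -> output, no bias

def nlpca_forward(x):
    u = np.tanh(W1 @ x)          # the nonlinear principal components
    h = np.tanh(W2 @ u + b2)     # decoding layer (the only biased layer)
    return W3 @ h, u             # reconstruction and the two NPCs

x = rng.normal(size=n_in)        # one (standardized) zonal-wind profile
x_hat, npcs = nlpca_forward(x)
print(npcs)                      # the two nonlinear principal components
```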
