11

Dependability aspects of COM+ and EJB in multi-tiered distributed systems

Karásson, Robert January 2002 (has links)
COM+ and Enterprise JavaBeans are two component-based technologies that can be used to build enterprise systems. They are competing technologies in today's software industry, and choosing which one a company should use to build its enterprise system is not an easy task. There are many factors to consider, and in this project we evaluate the two technologies with a focus on scalability and the dependability aspects of security, availability, and reliability. Each technology is evaluated theoretically against these criteria. We use a 4-tier architecture for the evaluation, and the center of attention is a persistence layer, which typically resides in an application server, and how it can be realized using each technology. The evaluation results in a recommendation about which technology is the better approach to building a scalable and dependable distributed system. We find that COM+ is the better approach for building this kind of multi-tier distributed system.
12

A Maintainability Analysis of Dependability Evaluation of an Avionic System using AADL to PNML Transformation

Mehmood, Qaiser January 2016 (has links)
Context. In the context of software architecture, AADL (Architecture Analysis and Design Language) is one of the latest standards (SAE Standard AS5506) used for analyzing and designing the architectures of software systems. A dependability evaluation of an avionic system, modeled in AADL, is conducted using the Petri net standard PNML (ISO/IEC 15909-2), and a maintainability analysis of the PNML dependability model is also conducted. Objectives. In this study we investigate the maintainability of the PNML dependability model of an avionic system designed in AADL. Structural, functional, fault-tolerance, and recovery dependencies are modeled, implemented, simulated, and validated in PNML. A maintainability analysis with respect to the 'changeability' factor is also conducted. Methods. This study combines the 'case study' and 'implementation' research methodologies. The case-study system is modeled in AADL using the OSATE2 tool, and the dependability models are simulated in PNML using the Wolfgang tool. The PNML dependability models are validated by comparison with the GSPN dependability models of previously published research. Results. A PNML dependability model was obtained, and the difficulties encountered with the AADL Error Model Annex and the OSATE2 tool are analyzed and documented. PNML and GSPN are compared for complexity, and a maintainability analysis of the PNML dependability model with respect to the 'changeability' factor is a further outcome. This research is recommended for software testing at the architecture level as a standardized way of testing software components for faults and errors and their impact on dependable components. Conclusions. We conclude that PNML, an ISO standard, is an alternative to GSPN for dependability modeling. The AADL Error Model Annex is still evolving, and proper literature needs to be publicly available for a better understanding of it. The PNML dependability model possesses the 'changeability' factor of maintainability analysis and is therefore able to accommodate changes in the architecture. Finally, the dependability factors of software can be tested at the architecture level using the AADL and PNML standards.
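The GSPN-style models this thesis validates against can be illustrated with a few lines of simulation. The following Python sketch is a hedged assumption (a generic two-place failure/repair net with invented rates), not the thesis's avionic PNML model; it races the enabled exponential transitions and estimates steady-state availability:

```python
# A minimal GSPN-style stochastic Petri net: two places (Up, Down) and two
# exponential transitions (fail, repair). Rates are illustrative assumptions.
import random

places = {"Up": 1, "Down": 0}
transitions = [  # (name, input place, output place, exponential rate)
    ("fail",   "Up",   "Down", 0.01),
    ("repair", "Down", "Up",   0.10),
]

rng = random.Random(42)
t, horizon, up_time = 0.0, 100_000.0, 0.0
while t < horizon:
    enabled = [tr for tr in transitions if places[tr[1]] > 0]
    # race policy: every enabled transition samples a firing delay
    delay, (name, src, dst, _) = min(
        (rng.expovariate(tr[3]), tr) for tr in enabled)
    dt = min(delay, horizon - t)
    up_time += dt if places["Up"] else 0.0
    t += delay
    if delay == dt:              # fire only if still inside the horizon
        places[src] -= 1
        places[dst] += 1
print(f"simulated availability ≈ {up_time / horizon:.3f}")
```

The simulated availability converges to mu/(lambda+mu) ≈ 0.909, the analytic value for this toy net, which is the kind of cross-check the thesis performs between PNML and GSPN models.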
13

Systems Dependability Engineering for Wind Power Applications : Challenges, Concepts, and System-Level Methodologies

El-Thalji, Idriss January 2010 (has links)
Complexity and uncertainty affect wind power systems and their applications. A commercial wind power asset exhibits complex system behaviour due to the stochastic loading characteristics of its installation context, and different stakeholders' practices across the whole life cycle attempt to treat multi-disciplinary complexity issues. Moreover, wind power system failures, stoppages, faults, support delays, and human/organizational errors clearly demonstrate the increasing need for dependability. Dependability is therefore considered an aggregate theory for RAMS (Reliability, Availability, Maintainability, Safety & Supportability) to cope with complex systems (i.e. physical systems and their asset management systems) and their behaviour. Consequently, to address the practical problems of wind power as one of the modern complex and interdependent systems, it is worth enhancing both the way we look at dependability and the method of inquiry. Technical system complexity, system interdependency, and system learn-ability are the main challenges within the system dependability field. This research work therefore integrates terotechnology and systems engineering methodologies to enhance systems dependability theory and practice. In particular, it focuses on three main aspects within systems dependability engineering: challenges, practitioners' concepts, and system-level methodologies. The research methodology of this thesis uses a mixed approach of qualitative and quantitative methods to extract the empirical findings required to validate the developments in dependability theory. A qualitative survey is used to identify the challenges of dependability theory within wind power applications. Grounded theory is used to define the practical understanding of wind power stakeholders concerning dependability and asset management concepts. A case study is implemented to validate systems dependability engineering as a theory at the intersection of dependability and systems engineering. Moreover, the phenomenography method is used to capture the individual experiences and understanding of purposefully selected stakeholders, given the different site-specific circumstances of each wind farm. In general, the thesis contributes to the body of knowledge of five fields: dependability, terotechnology, asset management, systems engineering, and wind energy. In particular, it contributes a retrospective review to serve as a reference line for system dependability theory and, on the basis of the empirical findings, a pivot point for further enhancements through both academic contributions and industrial developments.
14

Markov Chains as a Real-time System Monitoring Service : Numerical Repair Rate Optimization (RRO)

Carmegren, Emil January 2022 (has links)
The expansion and increasing complexity of technology show no sign of slowing, and one can intuitively suppose that this trajectory will not deviate in the years to come. On a continuous basis, concepts that started off as hypothetical or abstract notions without practical relevance are transferred into the modern state of our technology. In times when a subset of our technology is responsible for human safety, research in dependability theory must keep pace with technology, and one cannot emphasize enough the importance of validating system dependability attributes both before and after development. With the objective of contributing findings to the research field and potentially deriving new propositions, this thesis assesses the stochastic modeling concepts used within dependability theory. In particular, discrete-time and continuous-time Markov chains are outlined in detail, searching for possibilities to extend these processes in the context of real-time system monitoring. As an outcome, numerical 'repair rate optimization' (RRO) through CTMC uniformization is introduced: a technique that deduces a proposed allocation of repair-rate adjustments based on the model's parametric sensitivities (gradient ascent). The theoretical aspects are verified by developing a Matlab algorithm that utilizes the above. Additionally, an approach to combining dependability attributes into a unified measure is proposed, in which the (bounded) transient probabilities are regarded as vectors in the L2(R, B(R), λ) Hilbert space; a normalized dependability norm is then obtained using the induced norm and the triangle inequality. This serves as a metric for comparing distinct architectures in terms of several quantitative attributes. The results imply that, under the hypothesis that the system or company can adapt to an increased demand on maintenance periodicity, reliability and availability can be significantly improved, mitigating the risk of failure while optimally preserving resources such as core capacity, maintenance personnel, budget, and/or required redundancy, conditioned on the actual system behaviour.
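The uniformization step can be made concrete with a short numerical sketch. The Python fragment below is an illustrative assumption (the thesis's implementation is in Matlab; the rates `lam_f` and `mu` are invented): it computes the transient distribution of a two-state CTMC via uniformization and a finite-difference sensitivity of availability with respect to the repair rate, i.e., the quantity a gradient-ascent RRO step would use.

```python
# Transient CTMC probabilities via uniformization (Jensen's method), plus a
# finite-difference availability gradient w.r.t. the repair rate mu.
import numpy as np

def transient(Q, p0, t):
    """p(t) = p0 · exp(Qt) via the uniformized DTMC and Poisson weights."""
    lam = max(-np.diag(Q))                 # uniformization rate >= max |q_ii|
    P = np.eye(len(Q)) + Q / lam           # DTMC kernel of the uniformized chain
    kmax = int(lam * t + 10 * np.sqrt(lam * t) + 10)  # Poisson tail cutoff
    term, vec = np.exp(-lam * t), p0.copy()
    acc = term * p0                        # k = 0 term of the series
    for k in range(1, kmax + 1):
        vec = vec @ P                      # p0 · P^k
        term *= lam * t / k                # Poisson weight e^{-Λt}(Λt)^k / k!
        acc += term * vec
    return acc

def availability(mu, lam_f=0.01, t=100.0):
    """P(system up at time t) for a two-state chain with repair rate mu."""
    Q = np.array([[-lam_f, lam_f], [mu, -mu]])
    return transient(Q, np.array([1.0, 0.0]), t)[0]

mu = 0.1
grad = (availability(mu + 1e-4) - availability(mu)) / 1e-4  # dA/dmu sensitivity
print(f"A(t) = {availability(mu):.4f}, dA/dmu ≈ {grad:.4f}")
```

Scaling such a sensitivity by a step size and allocating it across components is the essence of the gradient-ascent repair-rate adjustment the abstract describes.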
15

DDI: A Novel Technology And Innovation Model for Dependable, Collaborative and Autonomous Systems

Armengaud, E., Schneider, D., Reich, J., Sorokos, I., Papadopoulos, Y., Zeller, M., Regan, G., Macher, G., Veledar, O., Thalmann, S., Kabir, Sohag 06 April 2022 (has links)
Yes / Digital transformation fundamentally changes established practices in the public and private sectors. It thus represents an opportunity to improve value creation processes (e.g., "Industry 4.0") and to rethink how to address customers' needs, such as through data-driven business models and Mobility-as-a-Service. Dependable, collaborative, and autonomous systems play a central role in this transformation process. Furthermore, the emergence of data-driven approaches combined with autonomous systems will lead to new business models and market dynamics, requiring innovative approaches to re-organise the value creation ecosystem, to enable distributed engineering of dependable systems, and to answer urgent questions such as liability. Consequently, digital transformation requires a comprehensive multi-stakeholder approach that properly balances technology, ecosystem, and business innovation. The targets of this paper are (a) to introduce digital transformation and the role of, and opportunities provided by, autonomous systems; (b) to introduce Digital Dependability Identities (DDI), a technology for dependability engineering of collaborative, autonomous CPS; and (c) to propose an appropriate agile approach for innovation management based on business model innovation and co-entrepreneurship. / Science Foundation Ireland grant 13/RC/2094; the Horizon 2020 programme within the OpenInnoTrain project (grant agreement 823971); H2020 SESAME project (grant agreement 101017258).
16

Improving the process of analysis and comparison of results in dependability benchmarks for computer systems

Martínez Raga, Miquel 05 November 2018 (has links)
Thesis by compendium / Dependability benchmarks are designed to assess, by quantifying performance and dependability attributes, the behavior of systems in the presence of faults. In this type of benchmark, where systems are assessed in the presence of perturbations, not being able to select the most suitable system may have serious implications (economic, reputational, or even loss of lives). For that reason, dependability benchmarks are expected to meet certain properties, such as non-intrusiveness, representativeness, repeatability, or reproducibility, that guarantee the robustness and accuracy of their process.

However, despite the importance of comparing systems or components, the field of dependability benchmarking has a problem regarding the analysis and comparison of results. While the main research focus has been on developing and improving experimental procedures to obtain the required measures in the presence of faults, the processes involved in analyzing and comparing results were mostly unattended. As a consequence, many works in this field analyze and compare the results of different systems in an ambiguous way: the analysis is based on argumentation, or is not even made explicit. Under these circumstances, benchmark users find it difficult to use these benchmarks and to compare their results with those of others, so extending the application of these dependability benchmarks and cross-exploiting results across works is currently not viable. This thesis has focused on developing a methodology to assist dependability benchmark performers in tackling the problems present in the analysis and comparison of benchmark results. Designed to guarantee the fulfillment of a dependability benchmark's properties, this methodology seamlessly integrates the analysis of results within the procedural flow of a dependability benchmark. Inspired by procedures from the field of operational research, it provides evaluators with the means to make their analysis process explicit and more representative of the given context. The results obtained from applying this methodology to several case studies in different application domains show the contributions of this work to improving the process of analysis and comparison of results in dependability benchmarking for computer systems. / Martínez Raga, M. (2018). Improving the process of analysis and comparison of results in dependability benchmarks for computer systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/111945 / Compendio
17

A synthesis of logic and bio-inspired techniques in the design of dependable systems

Papadopoulos, Y., Walker, M., Parker, D., Sharvia, S., Bottaci, L., Kabir, Sohag, Azevedo, L., Sorokos, I. 21 October 2019 (has links)
Yes / Much of the development of model-based design and dependability analysis in the design of dependable systems, including software intensive systems, can be attributed to the application of advances in formal logic and its application to fault forecasting and verification of systems. In parallel, work on bio-inspired technologies has shown potential for the evolutionary design of engineering systems via automated exploration of potentially large design spaces. We have not yet seen the emergence of a design paradigm that effectively combines these two techniques, schematically founded on the two pillars of formal logic and biology, from the early stages of, and throughout, the design lifecycle. Such a design paradigm would apply these techniques synergistically and systematically to enable optimal refinement of new designs which can be driven effectively by dependability requirements. The paper sketches such a model-centric paradigm for the design of dependable systems, presented in the scope of the HiP-HOPS tool and technique, that brings these technologies together to realise their combined potential benefits. The paper begins by identifying current challenges in model-based safety assessment and then overviews the use of meta-heuristics at various stages of the design lifecycle covering topics that span from allocation of dependability requirements, through dependability analysis, to multi-objective optimisation of system architectures and maintenance schedules.
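The meta-heuristic architecture optimisation mentioned above can be sketched in miniature. The Python fragment below is an illustrative assumption, not HiP-HOPS's actual optimiser (which handles richer models and true multi-objective Pareto fronts): a mutation-only genetic algorithm searching redundancy allocations for a four-subsystem series system, trading unreliability against cost via a weighted sum.

```python
# Tiny evolutionary search over redundancy allocations; all component
# reliabilities, costs, and GA settings are invented for illustration.
import random

rel  = [0.95, 0.90, 0.92, 0.97]   # per-component reliability (assumed)
cost = [3.0, 5.0, 4.0, 2.0]       # per-component cost (assumed)

def fitness(genome, w=50.0):       # genome[i] = redundancy level (1..3)
    r = 1.0
    for g, p in zip(genome, rel):
        r *= 1 - (1 - p) ** g      # parallel redundancy within a subsystem
    c = sum(g * ci for g, ci in zip(genome, cost))
    return w * r - c               # weighted-sum scalarization of two objectives

rng = random.Random(7)
pop = [[rng.randint(1, 3) for _ in range(4)] for _ in range(30)]
for _ in range(60):                # generational loop: truncation + mutation
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [
        [g if rng.random() > 0.2 else rng.randint(1, 3)
         for g in rng.choice(parents)]
        for _ in range(20)]
best = max(pop, key=fitness)
print("best allocation:", best, "fitness:", round(fitness(best), 2))
```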
18

Quantifying the Reliability of Performance Time and User Perceptions Obtained from Passive Exoskeleton Evaluations

Noll, Alexander Baldrich Benoni 16 August 2024 (has links)
Work-related musculoskeletal disorders (WMSDs) cost US industries billions annually and reduce quality of life for those afflicted. Passive exoskeletons (EXOs) have emerged as a potential intervention to reduce worker exposure to WMSD risk factors. As EXO adoption rises, EXO manufacturers are designing and producing new EXOs in accordance with growing demand. However, there are no standardized EXO evaluation protocols or EXO use recommendations, due in part to insufficient information on the reliability of EXO evaluation measures. The purpose of this thesis was to quantify the reliability of common EXO evaluation measures, using both traditional approaches and a more advanced statistical approach (i.e., Generalizability Theory), while also identifying potential effects of EXO type, work task, and individual differences. This work used data from a recently completed EXO evaluation study conducted in Virginia Tech's Occupational Ergonomics and Biomechanics Lab. Forty-two participants completed simulated occupational tasks, in two separate experimental sessions on different days, while using an arm-support EXO (ASE) and a back-support EXO (BSE). Several outcome measures reached excellent within-session reliability within four trials for many of the tasks considered. Between-session reliability levels were lower than within-session levels, with outcome measures reaching moderate-to-good reliability for most tasks. Interindividual differences accounted for the largest proportion of variance in measurement reliability, followed by the experimental session. For all tasks, outcome measures reached excellent dependability levels, with many achieving excellent levels within five total trials. Inconsistencies observed in between-session reliability and dependability levels suggest that additional training and EXO familiarity may affect the measurement reliability of outcome measures differently for some tasks, unique to each EXO type. These discrepancies emphasize the importance of additional research on this topic. Overall, the current findings indicate that many of the commonly used EXO evaluation measures are reliable and dependable within five trials and one experimental session, providing a potential foundation for standardized EXO assessment protocols. / Master of Science / Work-related musculoskeletal disorders (WMSDs) are a substantial economic burden and impair the quality of life of affected workers. Passive exoskeletons (EXOs), which use springs or elastic material to distribute the load placed on workers during manual labor, are a possible solution to reduce worker exposure to WMSD risk factors. EXO adoption is rising, but there are no standardized procedures to test the effectiveness of EXOs and no standardized recommendations for EXO use. The purpose of this thesis was to determine the reliability of EXO evaluation measures commonly used in prior research, using traditional reliability calculation methods alongside a more advanced method (i.e., Generalizability Theory). Data from a recently completed study were used, collected from 42 participants in two separate experimental sessions on two different days. Participants completed tasks intended to simulate manual work, using either an arm-support exoskeleton, which supported their upper arms during relevant tasks, or a back-support exoskeleton, which supported their lower back during relevant tasks. Many of the tasks and outcome measures reached excellent reliability within four repetitions in a single day.
When examining reliability of evaluations across days, we found reliability levels were lower than levels obtained from a single day. All tasks and outcome measures reached excellent dependability levels, with many requiring only five trials to reach excellent levels. Reliability increased with the number of trials in an EXO evaluation experiment. Moreover, our results revealed that the EXO type being used and the biological sex of a participant both influence reliability, but individual participant differences had the greatest effect on measurement reliability. This research reveals possible experimental conditions required for reliable, efficient, and cost-effective EXO research, facilitating the development of a standardized EXO evaluation protocol.
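The Generalizability Theory machinery referred to above reduces to a variance-component decomposition. The sketch below, using synthetic data rather than the study's measurements, estimates components for a one-facet participants × trials design from ANOVA mean squares and computes the dependability coefficient Phi for k trials (by common convention, coefficients above roughly 0.9 are labeled excellent).

```python
# One-facet (participants x trials) G-study on synthetic scores: estimate
# variance components and compute Phi(k) for a D-study over k trials.
import numpy as np

rng = np.random.default_rng(3)
n_p, n_t = 42, 5
scores = (rng.normal(10, 2, (n_p, 1))       # person effect (assumed SD = 2)
          + rng.normal(0, 0.5, (1, n_t))    # trial effect
          + rng.normal(0, 1, (n_p, n_t)))   # residual error

grand = scores.mean()
ms_p = n_t * ((scores.mean(1) - grand) ** 2).sum() / (n_p - 1)
ms_t = n_p * ((scores.mean(0) - grand) ** 2).sum() / (n_t - 1)
ss_res = ((scores - scores.mean(1, keepdims=True)
           - scores.mean(0, keepdims=True) + grand) ** 2).sum()
ms_res = ss_res / ((n_p - 1) * (n_t - 1))

var_p = max((ms_p - ms_res) / n_t, 0.0)     # person variance component
var_t = max((ms_t - ms_res) / n_p, 0.0)     # trial variance component
for k in (1, 5):   # D-study: dependability when averaging over k trials
    phi = var_p / (var_p + (var_t + ms_res) / k)
    print(f"Phi({k} trials) = {phi:.2f}")
```

With these assumed components, Phi rises from about 0.76 for a single trial to about 0.94 for five, mirroring the "excellent dependability within five trials" pattern reported above.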
19

Quality of service of crash-recovery failure detectors

Ma, Tiejun January 2007 (has links)
This thesis presents the results of an investigation into the failure detection problem. We consider the specific case of the quality of service (QoS) of crash failure detection. In contrast to previous work, we address the crash failure detection problem when the monitored target is resilient and recovers after failure. To the best of our knowledge, this is the first work to provide an analysis of crash-recovery failure detection from the QoS perspective. We develop a probabilistic model of the behavior of a crash-recovery target, i.e. one which has the ability to recover from the crash state. We show that the fail-free run and the crash-stop run are special cases of the crash-recovery run, with mean time to failure (MTTF) approaching infinity and mean time to recovery (MTTR) approaching infinity, respectively. We extend the previously published QoS metrics to allow the measurement of recovery speed and the definition of the completeness property of a failure detector. The impact of the dependability of the crash-recovery target on the QoS bounds of such a crash-recovery failure detector is then analyzed using general dependability metrics, such as MTTF and MTTR, based on an approximate probabilistic model of the two-process failure detection system. According to this approximate model, we show analytically how to estimate the failure detector's parameters to achieve a required QoS, based on Chen et al.'s NFD-S algorithm, and how to execute the configuration procedure of this crash-recovery failure detector. In order to make the failure detector adaptive to the target's crash-recovery behavior and to enable the autonomy of the monitoring procedure, we propose two types of recovery detection protocol. One is a reliable recovery detection protocol, which guarantees the detection of every failure and recovery by adopting persistent storage. The other is a lightweight recovery detection protocol, which does not guarantee the detection of every failure and recovery but reduces the system overhead. Both protocols improve completeness without reducing the other QoS aspects of a failure detector. In addition, we demonstrate how to estimate the inputs, such as the dependability metrics, using the failure detector itself. To evaluate our analytical work, we simulate the following failure detection algorithms for various values of MTTF and MTTR: the simple heartbeat timeout algorithm, the NFD-S algorithm, and the NFD-S algorithm with the lightweight recovery detection protocol. The simulation results show that the dependability of a recoverable monitored target can have a significant impact on the QoS of such a failure detector, which conforms well to our models and analysis. We show that for reasonably long MTTF, the NFD-S algorithm with the lightweight recovery detection protocol exhibits better QoS than the NFD-S algorithm for the completeness of a crash-recovery failure detector, and similarly for other QoS metrics.
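To ground the QoS analysis, the heartbeat-timeout baseline over a crash-recovery target can be simulated in a few lines. The sketch below is an illustrative assumption (exponential up/down times, no message delay or loss, invented parameters), not the thesis's NFD-S implementation:

```python
# Toy simulation: a target alternates exponential up (MTTF) and down (MTTR)
# periods; a detector suspects a crash when a heartbeat (period eta) misses
# its deadline by more than the timeout margin delta.
import random

def simulate(mttf=100.0, mttr=5.0, eta=1.0, delta=0.5,
             horizon=100_000.0, seed=1):
    rng, t, up, delays = random.Random(seed), 0.0, True, []
    while t < horizon:
        dur = rng.expovariate(1.0 / (mttf if up else mttr))
        if up:
            last_hb = t + (dur // eta) * eta   # last heartbeat before the crash
            crash = t + dur
            # suspicion fires when the next expected heartbeat is overdue
            delays.append(last_hb + eta + delta - crash)
        t, up = t + dur, not up
    return sum(delays) / len(delays), len(delays)

td, n = simulate()
print(f"mean detection time ≈ {td:.2f} time units over {n} crashes")
```

Sweeping `mttf` and `mttr` in such a harness reproduces, qualitatively, the dependence of detection QoS on the target's dependability that the thesis analyzes.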
20

Autonomic Failure Identification and Diagnosis for Building Dependable Cloud Computing Systems

Guan, Qiang 05 1900 (has links)
The increasingly popular cloud-computing paradigm provides on-demand access to computing and storage with the appearance of unlimited resources. Users are given access to a variety of data and software utilities to manage their work, and they rent virtual resources and pay for only what they use. In spite of the many benefits that cloud computing promises, the lack of dependability in shared virtualized infrastructures is a major obstacle to its wider adoption, especially for mission-critical applications. Virtualization and multi-tenancy increase system complexity and dynamicity, and they introduce new sources of failure that degrade the dependability of cloud computing systems. To assure cloud dependability, in my dissertation research I develop autonomic failure identification and diagnosis techniques that are crucial for understanding emergent, cloud-wide phenomena and for self-managing resource burdens, enhancing cloud availability and productivity. We study runtime cloud performance data collected from a cloud test-bed and traces from production cloud systems. We define cloud signatures comprising the metrics that are most relevant to failure instances. We exploit profiled cloud performance data in both the time and frequency domains to identify anomalous cloud behaviors, and we leverage cloud metric subspace analysis to automate the diagnosis of observed failures. We implement a prototype of the anomaly identification system and conduct experiments in an on-campus cloud computing test-bed and with the Google datacenter traces. Our experimental results show that our proposed anomaly detection mechanism can achieve 93% detection sensitivity while keeping the false positive rate as low as 6.1%, and that it outperforms other tested anomaly detection schemes. In addition, the anomaly detector adapts itself by recursively learning from newly verified detection results to refine future detection.
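As a rough illustration of the metric-subspace idea described above (not the dissertation's actual detector), the following sketch builds a PCA "normal" subspace from fault-free cloud metrics and flags samples whose residual energy exceeds an empirical control limit; all data here are synthetic:

```python
# PCA subspace anomaly detection on synthetic "cloud metric" vectors:
# project onto the normal subspace, score by residual (squared prediction
# error), and flag samples above a baseline-derived threshold.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))  # correlated metrics
X = np.vstack([normal, normal[-20:] + rng.normal(5, 1, size=(20, 8))])  # inject anomalies

mu, sd = normal.mean(0), normal.std(0)
Z = (X - mu) / sd                              # z-score using fault-free baseline
_, _, Vt = np.linalg.svd((normal - mu) / sd, full_matrices=False)
V = Vt[:3].T                                   # top-3 principal (normal) subspace
resid = Z - Z @ V @ V.T                        # residual-subspace component
spe = (resid ** 2).sum(axis=1)                 # squared prediction error per sample
threshold = np.percentile(spe[:500], 99)       # empirical control limit
print("flagged:", np.where(spe > threshold)[0][-5:])  # indices of suspect samples
```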
