  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Dynamic Fault Tree Analysis: State-of-the-Art in Modeling, Analysis, and Tools

Aslansefat, K., Kabir, Sohag, Gheraibia, Y., Papadopoulos, Y. 04 August 2020 (has links)
Safety and reliability are two important aspects of dependability that need to be rigorously evaluated throughout the development life-cycle of a system. Over the years, several methodologies have been developed for analyzing the failure behavior of systems. Fault tree analysis (FTA) is one of the well-established and widely used methods for the safety and reliability engineering of systems. The fault tree, in its classical static form, is inadequate for modeling dynamic interactions between components and is unable to include temporal and statistical dependencies in the model. Several attempts have been made to alleviate these limitations of static fault trees (SFT). Dynamic fault trees (DFT) were introduced to enhance the modeling power of their static counterpart: the expressiveness of the fault tree was improved by introducing new dynamic gates. While the dynamic gates overcome many limitations of SFTs and allow a wide range of complex systems to be analyzed, they bring some overhead with them. One such overhead is that the existing combinatorial approaches used for the qualitative and quantitative analysis of SFTs are no longer applicable to DFTs. This has led to several successful attempts to develop new approaches for DFT analysis. The methodologies used so far include, but are not limited to, algebraic solutions, Markov models, Petri nets, Bayesian networks, and Monte Carlo simulation. To illustrate the usefulness of the modeling capability of DFTs, many benchmark studies have been performed in different industries, and software tools have been developed to aid the DFT analysis process. This chapter first provides a brief description of the DFT methodology. Secondly, it reviews a number of prominent DFT analysis techniques, such as Markov chains, Petri nets, Bayesian networks, and the algebraic approach, and provides insight into their working mechanisms, applicability, strengths, and challenges.
The reviewed techniques cover both qualitative and quantitative analysis of DFTs. Thirdly, we discuss the emerging trend of machine-learning-based approaches to DFT analysis. Fourthly, the research performed on sensitivity analysis of DFTs is reviewed. Finally, we suggest some potential future research directions for DFT-based safety and reliability analysis.
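As an illustration of why SFT combinatorics break down for dynamic gates, consider a priority-AND (PAND) gate, which fires only if its inputs fail in a given order. Its failure probability can be estimated with Monte Carlo simulation, one of the techniques the chapter reviews. This is a minimal sketch, not the chapter's own code; the failure rates and mission time are assumed values.

```python
import math
import random

def pand_probability(rate_a, rate_b, mission_time, trials=100_000, seed=1):
    """Monte Carlo estimate of a priority-AND (PAND) gate with
    exponentially distributed component lifetimes: the gate fires only
    if A fails strictly before B and both fail within the mission."""
    rng = random.Random(seed)
    fires = 0
    for _ in range(trials):
        t_a = rng.expovariate(rate_a)  # time to failure of component A
        t_b = rng.expovariate(rate_b)  # time to failure of component B
        if t_a < t_b <= mission_time:  # order matters: A before B
            fires += 1
    return fires / trials

# For rate_a = rate_b = 1 and mission_time = 1, integrating the joint
# density over {t_a < t_b <= 1} gives the closed form below.
closed_form = 0.5 + math.exp(-2) / 2 - math.exp(-1)
estimate = pand_probability(1.0, 1.0, 1.0)
```

Note that an ordinary AND gate would ignore the failure order, so its probability would be strictly larger; this ordering constraint is exactly what static combinatorial methods cannot express.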
2

Variable ordering heuristics for binary decision diagrams

Bartlett, Lisa Marie January 2000 (has links)
Fault tree analysis, FTA, is one of the most commonly used techniques for safety system assessment. Over the past five years the Binary Decision Diagram (BDD) methodology has been introduced, which significantly aids the analysis of the fault tree diagram. The approach has been shown to improve both the efficiency of determining the minimal cut sets of the fault tree and the accuracy of the calculation procedure used to quantify the top event parameters. To utilise the BDD technique the fault tree structure needs to be converted into the BDD format. Converting the fault tree is relatively straightforward but requires the basic events of the tree to be placed in an ordering. The ordering of the basic events is critical to the resulting size of the BDD, and ultimately affects the performance and benefits of this technique. There are a number of variable ordering heuristics in the literature; however, the performance of each depends on the tree structure being analysed. These heuristic approaches do not always yield a minimal BDD structure for all trees: some generate orderings that are better for some trees but worse for others. Within this thesis three pattern recognition approaches (machine learning classifier systems, multi-layer perceptron networks and radial basis function neural networks) have been investigated to select a variable ordering heuristic for a given fault tree from a set of alternatives. In addition, a completely new heuristic based on component structural importance measures has been proposed, with significant improvement in producing the smallest BDD over the methods currently in the literature.
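The sensitivity of BDD size to variable ordering can be demonstrated with a toy reduced-ordered-BDD builder (a sketch for illustration, not the thesis's code). The same Boolean function, built under two different orderings, yields different node counts:

```python
def robdd_size(f, order):
    """Count the internal nodes of the reduced ordered BDD of boolean
    function f (which takes a dict of variable assignments) under the
    given variable ordering."""
    unique = {}              # unique table: (var, lo, hi) -> node id
    def mk(var, lo, hi):
        if lo == hi:         # redundant test: skip the node entirely
            return lo
        key = (var, lo, hi)
        if key not in unique:
            unique[key] = 2 + len(unique)   # ids 0/1 are the terminals
        return unique[key]
    def build(i, assign):
        if i == len(order):
            return 1 if f(assign) else 0    # terminal node
        var = order[i]
        lo = build(i + 1, {**assign, var: False})
        hi = build(i + 1, {**assign, var: True})
        return mk(var, lo, hi)              # Shannon expansion on var
    build(0, {})
    return len(unique)

# f = (x1 AND x2) OR (x3 AND x4): the classic ordering example
f = lambda a: (a["x1"] and a["x2"]) or (a["x3"] and a["x4"])
size_good = robdd_size(f, ["x1", "x2", "x3", "x4"])  # pairs kept adjacent
size_bad = robdd_size(f, ["x1", "x3", "x2", "x4"])   # pairs interleaved
```

Keeping the AND-pairs adjacent in the ordering produces 4 internal nodes, while interleaving them produces 6; for wider functions of this family the gap grows exponentially, which is why ordering heuristics matter.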
3

Constructing Decision Tree Using Learners' Portfolio for Supporting e-Learning

Liao, Shen-Jai 01 July 2003 (has links)
In recent years, with the development of electronic media, e-learning has begun to supplement traditional teaching and learning with Internet services. Newly developed technology gives e-learning teachers the opportunity to use students' learning logs, recorded via the Web site, to understand the learning state of their students. This research presents an analytical mechanism that integrates multidimensional logs so that teachers can immediately observe all of a student's learning behaviors and status, and applies decision tree analysis to detect when and where students may hit a learning bottleneck. Teachers can then use these results to give the right student the right remedial instruction at the right time. In summary, we draw four conclusions: (1) the decision rules differ from course to course (for example, in instruction and assessment methods); assignments are a basis for assessing students' learning effectiveness, and the attributes that combine with learning effectiveness relate to students' learning behaviors. (2) Accumulating learning behavior attributes over successive time points can detect a learner's probable effectiveness early; the variation of effectiveness across different time intervals is not marked, but all time intervals allow early detection. (3) When detecting learning effectiveness with different grade-level classifications, every classification yields clear decision rules, but none detects the effectiveness of all students. (4) Although detecting the learning effectiveness of high-grade students is difficult, that of lower-grade students can be detected. Overall, this research allows teachers to observe students' learning states immediately and to detect learning effectiveness early, so that they can make decisions about learning activities that promote learning.
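The core decision-tree step, choosing the log attribute that best separates effective from ineffective learners, can be sketched with an entropy-based split selection. The attribute names, records, and values below are hypothetical, invented for illustration; they are not the thesis's actual portfolio data or algorithm.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr):
    """Entropy reduction from splitting the rows on attribute attr."""
    labels = [r["result"] for r in rows]
    gain = entropy(labels)
    for value in set(r[attr] for r in rows):
        subset = [r["result"] for r in rows if r[attr] == value]
        gain -= len(subset) / len(rows) * entropy(subset)
    return gain

# Hypothetical learning-log records: which attribute best predicts pass/fail?
logs = [
    {"logins": "high", "posts": "low",  "assignments": "done",   "result": "pass"},
    {"logins": "low",  "posts": "low",  "assignments": "done",   "result": "pass"},
    {"logins": "high", "posts": "high", "assignments": "done",   "result": "pass"},
    {"logins": "low",  "posts": "high", "assignments": "missed", "result": "fail"},
    {"logins": "high", "posts": "low",  "assignments": "missed", "result": "fail"},
    {"logins": "low",  "posts": "high", "assignments": "missed", "result": "fail"},
]
root_split = max(["logins", "posts", "assignments"],
                 key=lambda a: information_gain(logs, a))
```

In this constructed data the assignment attribute separates the classes perfectly, so it becomes the root split, mirroring the thesis's finding that assignments are a basis for assessing learning effectiveness.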
4

Identifying responders to melphalan and dexamethasone for newly diagnosed multiple myeloma patients

Esmaeili, Abbas 22 July 2008 (has links)
Background: The MY7 clinical trial compared dexamethasone plus melphalan (MD) with prednisone plus melphalan (MP) for multiple myeloma treatment and found no statistically significant difference in overall survival (OS) between the two groups. However, individual patients responded to treatment differently. We aimed to identify patients who might have benefited from dexamethasone and to characterize them by their baseline demographic and clinical factors. Methods: First, a prognostic model for OS was developed on the MP arm. The estimated coefficients and baseline hazard were applied to the MD arm to derive martingale residuals (MR). Classification and regression tree analysis was performed to identify independent predictive factors for OS, with MR as the response variable. All covariates, in categorical form, were used as independent variables to develop the predictive model in the MD arm; the MP arm was divided accordingly. Subgroups with negative mean MR (survival longer than expected) were candidates for positive responders, while those with positive mean MR (survival shorter than expected) were candidates for negative responders. The mean MR in each subgroup and p values from comparisons of OS (log rank test stratified by subgroup) were used to combine the appropriate subgroups into the positive or negative responders. Results: A total of 97 patients (42%) in the MD arm were identified as positive responders, and their OS (median 44.5 months) was significantly longer than that (median 33 months) of the corresponding subgroups in the MP arm (HR = 0.56, 95% CI 0.4-0.8; p = 0.0014). All positive responders had three baseline characteristics in common: age ≤75 years, calcium concentration ≤2.6 mmol/L, and Durie-Salmon stage 2 or 3. Among patients with ECOG performance status <2, those with either HGB ≥100 mg/dl, or HGB <100 mg/dl with WBC ≥4,000 and <4 lytic bone lesions, were categorized as positive responders. Also, among patients with ECOG performance status ≥2, males with >3 lytic bone lesions were positive responders.
Negative responders (HR = 1.56, 95% confidence interval 1.1-2.2; p = 0.006) included patients aged >75, patients aged ≤75 with calcium concentration >2.6 mmol/L, and patients aged ≤75 with calcium concentration ≤2.6 mmol/L but Durie-Salmon stage 1. Conclusions: Evaluation of the validity of these hypotheses warrants further study. / Thesis (Master, Community Health & Epidemiology) -- Queen's University, 2008-07-21 13:46:53.748
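The martingale residual at the heart of the Methods can be sketched for a Cox-type model: the residual is the observed event count minus the expected count, M_i = δ_i − H0(t_i)·exp(β'x_i). The values below are illustrative assumptions, not trial data.

```python
import math

def martingale_residual(had_event, cum_baseline_hazard, linear_predictor):
    """Cox-model martingale residual: observed minus expected events,
    M_i = delta_i - H0(t_i) * exp(beta' x_i)."""
    return had_event - cum_baseline_hazard * math.exp(linear_predictor)

# A censored patient (no event) with high expected hazard: negative
# residual, i.e. survived longer than the model expected.
m_neg = martingale_residual(0, 0.8, 0.0)
# A patient with an early event despite low expected hazard: positive
# residual, i.e. survived less than expected.
m_pos = martingale_residual(1, 0.2, 0.0)
```

This matches the sign convention used in the abstract: subgroups with negative mean MR survived longer than expected (candidate positive responders), and subgroups with positive mean MR survived less than expected (candidate negative responders).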
5

The Application of Mineral Processing Techniques to the Scrap Recycling Industry

Koermer, Scott Carl 09 November 2015 (has links)
The scrap metal recycling industry is a growing industry that plays an important role in sustaining a large global metal supply. Unfortunately, recycling lacks many of the standards and test procedures that are in place for mineral processing. These standards and practices, if used in recycling, could aid recyclers in determining and achieving optimal separations for their plant. New regulations for scrap imports into China make it difficult to obtain the metal recoveries that have been achieved in the past. To help scrap yards adhere to the new regulations, the Eriez RCS eddy current separator system was tested at full scale. The principles this system uses, called circuit analysis, have been used by the mining industry for years and can be applied to any separation system. The Eriez RCS system surpassed the requirements of the Chinese regulations while simultaneously increasing the recovery of metals. To further analyze eddy current separator circuits, tree analysis was attempted for single eddy current separators, as well as for more complex circuits mimicked using locked cycle tests. The circuits used in the locked cycle tests were a rougher-cleaner, a rougher-scavenger, and a rougher-cleaner-scavenger. It was found that tree analysis can be used to compare different eddy current separator circuits using the same settings; however, standards for this practice need to be established for it to be useful. Using the data analysis methods developed for this particular tree analysis, the rougher-cleaner-scavenger test had the best overall performance. This is the same result as the full scale testing of the Eriez RCS system, but more testing should be conducted to confirm the data analysis techniques for calculating theoretical efficiency, recovery efficiency, and rejection efficiency. / Master of Science
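The benefit of a rougher-cleaner-scavenger circuit can be sketched with a steady-state locked-cycle mass balance. This is an idealization, not the thesis's experimental data: it assumes each unit recovers the same fraction p of the metal to its concentrate, with cleaner tails and scavenger concentrate recycled to the rougher feed.

```python
def rcs_circuit_recovery(p, iters=200):
    """Steady-state metal recovery of a rougher-cleaner-scavenger
    circuit in which cleaner tails and scavenger concentrate recycle
    to the rougher feed; p is each unit's recovery to concentrate."""
    feed_to_rougher = 1.0            # fresh feed normalized to 1 unit of metal
    for _ in range(iters):           # iterate the recycle loop to steady state
        rough_conc = feed_to_rougher * p
        rough_tail = feed_to_rougher * (1 - p)
        clean_tail = rough_conc * (1 - p)   # recycled to rougher feed
        scav_conc = rough_tail * p          # recycled to rougher feed
        feed_to_rougher = 1.0 + clean_tail + scav_conc
    return feed_to_rougher * p * p   # metal reaching the final concentrate

# Closed form of the same balance: p**2 / (1 - 2*p*(1 - p))
recovery = rcs_circuit_recovery(0.9)
```

With a per-unit recovery of 0.9, the circuit as a whole recovers roughly 0.988 of the metal; the recycle streams give misplaced material a second chance, which is why the rougher-cleaner-scavenger arrangement performed best in the thesis's tests.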
6

UAS Risk Analysis using Bayesian Belief Networks: An Application to the Virginia Tech ESPAARO

Kevorkian, Christopher George 27 September 2016 (has links)
Small Unmanned Aerial Vehicles (SUAVs) are rapidly being adopted in the National Airspace System (NAS) but experience a much higher failure rate than traditional aircraft. These SUAVs are quickly becoming complex enough to warrant investigating alternative methods of failure analysis. This thesis proposes a method of expanding the Fault Tree Analysis (FTA) method into a Bayesian Belief Network (BBN) model. FTA is demonstrated to be a special case of BBN, and BBN allows for more complex interactions between nodes than FTA does. A model can be investigated to determine the components to which failure is most sensitive, allowing for redundancies or mitigations against those failures. The introduced method is then applied to the Virginia Tech ESPAARO SUAV. / Master of Science
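The claim that FTA is a special case of BBN can be sketched in a few lines: a deterministic conditional probability table (CPT) reproduces a fault-tree AND gate exactly, while a "noisy" CPT expresses a failure mode no classical gate can. The component names and probabilities below are assumptions for illustration, not ESPAARO data.

```python
from itertools import product

# Assumed component failure priors for a two-component example
P_FAIL = {"sensor": 0.05, "actuator": 0.02}

def top_event_probability(cpt):
    """Marginalize over all component states: prior probability of each
    state times the conditional probability of the top event given it."""
    total = 0.0
    for sensor, actuator in product([True, False], repeat=2):
        prior = ((P_FAIL["sensor"] if sensor else 1 - P_FAIL["sensor"])
                 * (P_FAIL["actuator"] if actuator else 1 - P_FAIL["actuator"]))
        total += prior * cpt(sensor, actuator)
    return total

# A deterministic CPT reproduces an FTA AND gate exactly ...
p_and = top_event_probability(lambda s, a: 1.0 if (s and a) else 0.0)
# ... while a noisy CPT (actuator failure alone is fatal 10% of the
# time) is expressible in a BBN but not as a classical gate.
p_noisy = top_event_probability(
    lambda s, a: 1.0 if (s and a) else (0.1 if a else 0.0))
```

The deterministic table gives the familiar product 0.05 × 0.02 = 0.001, and the noisy table raises the top-event probability because it admits a partial failure path, which is the kind of interaction the thesis uses BBNs to capture.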
7

Computing Most Probable Sequences of State Transitions in Continuous-time Markov Systems.

Levin, Pavel 22 June 2012 (has links)
Continuous-time Markov chains (CTMCs) form a convenient mathematical framework for analyzing random systems across many different disciplines. A specific research problem that is often of interest is to predict maximum-probability sequences of state transitions given initial or boundary conditions. This work shows how to solve this problem exactly through an efficient dynamic programming algorithm. We demonstrate our approach through two different applications: ranking mutational pathways of HIV by their probabilities, and determining the most probable failure sequences in complex fault-tolerant engineering systems. Even though CTMCs have been used extensively to realistically model many types of complex processes, it is standard practice to eventually simplify the model in order to perform the state evolution analysis. As we show here, such simplifying approaches can lead to inaccurate and often misleading solutions. We therefore expect our algorithm to find a wide range of applications across different domains.
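The dynamic-programming idea can be sketched over the embedded jump chain of a CTMC, where P[i][j] is the probability that the next jump from state i goes to state j. This toy version ignores holding times, which the thesis's exact algorithm accounts for; the chain below is an assumed example.

```python
def most_probable_jump_sequence(P, start, n_jumps):
    """Viterbi-style DP over the embedded jump chain: best[s] holds the
    highest-probability path of the current length ending in state s."""
    best = {start: (1.0, [start])}
    for _ in range(n_jumps):
        nxt = {}
        for s, (prob, path) in best.items():
            for t, p_st in enumerate(P[s]):
                if p_st == 0.0:
                    continue
                cand = prob * p_st
                # keep only the best path into each endpoint state
                if t not in nxt or cand > nxt[t][0]:
                    nxt[t] = (cand, path + [t])
        best = nxt
    end = max(best, key=lambda s: best[s][0])
    return best[end]

# Jump probabilities of a small 3-state chain (assumed values)
P = [[0.0, 0.7, 0.3],
     [0.0, 0.0, 1.0],
     [0.5, 0.5, 0.0]]
prob, path = most_probable_jump_sequence(P, start=0, n_jumps=2)
```

Keeping only the best path per endpoint is safe because path probability factors over transitions, the same optimal-substructure property the exact algorithm exploits.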
9

CONTRAST: A conceptual reliability growth approach for comparison of launch vehicle architectures

Zwack, Mathew R. 12 January 2015 (has links)
In 2004, the NASA Astronaut Office produced a memo regarding the safety of next-generation launch vehicles. The memo requested that these vehicles have a probability of loss of crew (LOC) of at most 1 in 1000 flights, which represents nearly an order of magnitude decrease from current vehicles. The goal of a LOC probability of 1 in 1000 flights has since been adopted by the launch vehicle design community as a requirement for the safety of future vehicles. This research addresses the gap between current vehicles and future goals by improving the capture of vehicle architecture effects on reliability and safety. Vehicle architecture pertains to the physical description of the vehicle itself, including manned or unmanned, number of stages, number of engines per stage, engine cycle types, redundancy, etc. During the operations phase of the vehicle life-cycle, each of these parameters clearly has an inherent effect on the reliability and safety of the vehicle. However, the vehicle architecture is typically determined during the early conceptual design phase, when a baseline vehicle is selected. Unless a great amount of money and effort is spent, the architecture will remain relatively constant from conceptual design through operations. Because the vehicle architecture is essentially “locked-in” during early design, much of the vehicle's reliability potential is expected to be locked-in as well. This observation leads to the conclusion that improvement of vehicle reliability and safety at the architecture level must be completed during early design. The effects of different architecture decisions must be evaluated prior to baseline selection, which helps to identify a vehicle that is most likely to meet the reliability and safety requirements when it reaches operations.
Although methods exist for evaluating reliability and safety during early design, they are weak when trying to evaluate all architecture effects simultaneously. The goal of this research was therefore to formulate and implement a method capable of quantitatively evaluating vehicle architecture effects on reliability and safety during early conceptual design. The ConcepTual Reliability Growth Approach for CompariSon of Launch Vehicle ArchiTectures (CONTRAST) was developed to meet this goal. Using the strengths of existing techniques, a hybrid approach was developed that utilizes a reliability growth projection to evaluate the vehicles. The growth models are first applied at the subsystem level, and a vehicle-level projection is then generated using a simple system-level fault tree. This approach allows the capture of all trades of interest at the subsystem level as well as many possible trades at the assembly level. The CONTRAST method is first tested on an example problem, which compares the method's output to actual data from the Space Transportation System (STS). This example problem illustrates the ability of the CONTRAST method to capture the reliability growth trends seen during vehicle operations. It also serves to validate the reliability growth model assumptions for future applications of the method. The final chapter of the thesis applies the CONTRAST method to a relevant launch vehicle currently under development, the Space Launch System (SLS). Within the application problem, the output of the method is first used to check that the primary research objective has been met. Next, the output is compared to a state-of-the-art tool to demonstrate the ability of the CONTRAST method to alleviate one of the primary consequences of using existing techniques. The final section of this chapter presents an analysis of the booster and upper stage block upgrade options for the SLS vehicle.
A study of the upgrade options was carried out because the CONTRAST method is uniquely suited to examining the effects of such strategies. The results from the study of SLS block upgrades yield interesting observations regarding the desired development order and upgrade strategy. Ultimately this application problem demonstrates the merits of applying the CONTRAST method during early design. The approach provides the designer with more information about the expected reliability of the vehicle, ultimately enabling the selection of a vehicle baseline that is most likely to meet the future requirements.
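The hybrid idea, subsystem-level reliability growth rolled up through a top-level fault tree, can be sketched as follows. The power-law growth form and all numbers are assumptions for illustration, not CONTRAST's actual models or SLS data.

```python
def subsystem_failure_prob(p_first_flight, alpha, flight):
    """Assumed power-law (Duane-style) reliability growth: the
    per-flight failure probability decays as flight ** -alpha."""
    return p_first_flight * flight ** (-alpha)

def vehicle_loss_prob(subsystems, flight):
    """Top-level OR gate of a series fault tree: the vehicle is lost
    on a flight if any subsystem fails on that flight."""
    survive = 1.0
    for p0, alpha in subsystems:
        survive *= 1.0 - subsystem_failure_prob(p0, alpha, flight)
    return 1.0 - survive

# Illustrative (initial failure probability, growth exponent) pairs,
# e.g. for a booster and an upper stage
subsystems = [(0.01, 0.5), (0.02, 0.3)]
loss_flight_1 = vehicle_loss_prob(subsystems, 1)     # before any growth
loss_flight_100 = vehicle_loss_prob(subsystems, 100)  # after 100 flights
```

Because each subsystem carries its own growth parameters, architecture trades (adding a stage, swapping an engine cycle) change both the fault-tree structure and the projected growth, which is the effect the CONTRAST method is built to compare.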
