  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.
11

Experiment Design and Reliability Analysis of Accelerated Degradation Test

Zhang, Xiao 22 October 2013 (has links)
No description available.
12

Discrete-Time Bayesian Networks Applied to Reliability of Flexible Coping Strategies of Nuclear Power Plants

Sahin, Elvan 11 June 2021 (has links)
The Fukushima Daiichi accident prompted the nuclear community to find new solutions for reducing risk in nuclear power plants (NPPs) due to beyond-design-basis external events (BDBEEs). An implementation guide for diverse and flexible coping strategies (FLEX) has been presented by the Nuclear Energy Institute (NEI) to manage the challenge of BDBEEs and to enhance reactor safety against extended station blackout (SBO). To assess the effectiveness of FLEX strategies, probabilistic risk assessment (PRA) methods can be used to calculate the reliability of such systems. Because FLEX systems are unique, they can carry dependencies among components that are not commonly modeled in NPPs. Therefore, a suitable method is needed to analyze the reliability of FLEX systems in nuclear reactors. This thesis investigates the effectiveness and applicability of Bayesian networks (BNs) and Discrete-Time Bayesian Networks (DTBNs) in the reliability analysis of FLEX equipment that is used to reduce risk in nuclear power plants. To this end, the thesis compares BNs with two other reliability assessment methods, Fault Trees (FTs) and Markov chains (MCs), and shows that both can be transformed into BNs to perform the reliability analysis of FLEX systems. The comparison of the three reliability methods is shown and discussed in three different applications. The results show that BNs are not only a powerful method for modeling FLEX strategies but also an effective technique for performing reliability analysis of FLEX equipment in nuclear power plants. / Master of Science / Some external events, such as earthquakes, flooding, and severe wind, may damage nuclear reactors. To reduce the consequences of such damage, the Nuclear Energy Institute (NEI) has proposed mitigating strategies known as FLEX (Diverse and Flexible Coping Strategies). 
After the implementation of FLEX in nuclear power plants, the failure or success probability of these engineering systems must be analyzed through one of the existing methods. However, the existing methods are limited in their ability to analyze dependencies among components in complex systems. Bayesian networks (BNs) are a graphical and quantitative technique for modeling dependencies among events. This thesis shows the effectiveness and applicability of BNs in the reliability analysis of FLEX strategies by comparing them with two other reliability analysis tools, Fault Tree Analysis and Markov Chains. According to the reliability analysis results, BNs are a powerful and promising method for modeling and analyzing FLEX strategies.
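The abstract above notes that a fault tree can be transformed into a Bayesian network for reliability analysis. As a minimal sketch of that idea (the FLEX configuration, component names, and failure probabilities below are invented for illustration), the fault-tree top-event logic becomes a deterministic node in a small discrete network, and the top-event probability follows from enumerating the joint distribution:

```python
from itertools import product

# Hypothetical component failure probabilities (illustrative only).
p_fail = {"diesel_generator": 0.05, "flex_pump_a": 0.02, "flex_pump_b": 0.02}

# Fault-tree top-event logic expressed as a deterministic BN node:
# coping capability is lost if the generator fails AND both portable
# FLEX pumps fail.
def system_fails(gen, pa, pb):
    return gen and pa and pb

# Exact inference by enumerating the joint distribution, which is what
# a BN engine effectively does for a small discrete network.
p_top = 0.0
for gen, pa, pb in product([True, False], repeat=3):
    p = (p_fail["diesel_generator"] if gen else 1 - p_fail["diesel_generator"]) \
        * (p_fail["flex_pump_a"] if pa else 1 - p_fail["flex_pump_a"]) \
        * (p_fail["flex_pump_b"] if pb else 1 - p_fail["flex_pump_b"])
    if system_fails(gen, pa, pb):
        p_top += p

print(p_top)  # equals 0.05 * 0.02 * 0.02
```

A real FLEX model would add the common-cause and timing dependencies the thesis targets; a DTBN additionally discretizes time to capture when each failure occurs.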
13

A Bayesian network based study on determining the relationship between job stress and safety climate factors in occurrence of accidents.

Khoshakhlagh, A.H., Yazdanirad, S., Kashani, M.M., Khatooni, E., Hatamnegad, Y., Kabir, Sohag 06 April 2022 (has links)
Yes / Job stress and safety climate have been recognized as two crucial factors that can increase the risk of occupational accidents. This study was performed to determine the relationship between job stress and safety climate factors in the occurrence of accidents using a Bayesian network model. This cross-sectional study was performed on 1530 male workers of the Asaluyeh petrochemical company in Iran. The participants were asked to complete questionnaires covering demographic information and accident history, the NIOSH generic job stress questionnaire, and the Nordic safety climate questionnaire. Work experience and accident history data were also obtained from the petrochemical health unit. Finally, the relationships between the variables were investigated using the Bayesian network model. A high job stress condition could decrease the probability of a high safety climate from 53% to 37% and increase the probability of accident occurrence from 72% to 94%. Moreover, a low safety climate condition could increase the accident occurrence probability from 72% to 93%, and concurrent high job stress and low safety climate could likewise raise it from 72% to 93%. Among the associations between the job stress factor and the safety climate dimensions, that between job stress and workers' safety priority and risk non-acceptance (0.19) had the highest mean influence value. The adverse effect of high job stress on accident occurrence is twofold: it can directly increase the accident occurrence probability, and it can indirectly increase it by lowering the safety climate.
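The chain of effects reported above can be illustrated with a toy three-node network (job stress, safety climate, accident). The conditional probabilities below are loosely borrowed from the abstract's figures for illustration only; the study's fitted network also includes direct edges and several safety climate dimensions:

```python
# Toy chain: job stress -> safety climate -> accident.
# Numbers loosely echo the abstract; they are not the study's fitted CPTs.
p_climate_high = {"high": 0.37, "low": 0.53}   # P(high safety climate | stress)
p_accident = {True: 0.72, False: 0.93}         # P(accident | climate high?)

def p_accident_given_stress(stress):
    # Marginalize out the safety climate node.
    pc = p_climate_high[stress]
    return pc * p_accident[True] + (1 - pc) * p_accident[False]

print(round(p_accident_given_stress("low"), 4))   # baseline stress level
print(round(p_accident_given_stress("high"), 4))  # elevated stress level
```

Even in this reduced chain, raising job stress lowers the chance of a high safety climate and thereby raises the accident probability, mirroring the indirect pathway the study describes.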
14

Safety of Flight Prediction for Small Unmanned Aerial Vehicles Using Dynamic Bayesian Networks

Burns, Meghan Colleen 23 May 2018 (has links)
This thesis compares three variations of the Bayesian network as an aid for decision-making under uncertainty. After reviewing the basic theory underlying probabilistic graphical models and Bayesian estimation, the thesis presents a user-defined static Bayesian network, a static Bayesian network in which the parameter values are learned from data, and a dynamic Bayesian network with learning. As a basis for the comparison, these models are used to provide a prior assessment of the safety of flight of a small unmanned aircraft, taking into consideration the state of the aircraft and the weather. The results of the analysis indicate that the dynamic Bayesian network is more effective than the static networks at predicting safety of flight. / Master of Science / This thesis uses probabilities to aid decision-making with uncertain information. It presents three models in the form of networks that use probabilities to aid the assessment of flight safety for a small unmanned aircraft. All three are forms of Bayesian networks: graphs that map causal relationships between random variables. Each network models the flight conditions and the state of the aircraft; two of the networks are static and one varies with time. The results of the analysis indicate that the dynamic Bayesian network is more effective than the static networks at predicting safety of flight.
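A dynamic Bayesian network links identical node sets across time slices, which lets the belief about a hidden state be updated recursively as observations arrive. A minimal sketch of one such filtering step follows; the two safety states, the transition numbers, and the weather observation model are all hypothetical, not the thesis's model:

```python
# One filtering step of a two-slice dynamic BN (here equivalent to a
# hidden Markov model): hidden flight-safety state {safe, unsafe} with
# a noisy bad-weather observation. All numbers are hypothetical.
p_trans = {"safe": {"safe": 0.9, "unsafe": 0.1},
           "unsafe": {"safe": 0.3, "unsafe": 0.7}}
p_obs_bad = {"safe": 0.2, "unsafe": 0.8}   # P(bad-weather obs | state)

def filter_step(belief, obs_bad):
    # Predict: push the belief through the transition model.
    pred = {s: sum(belief[t] * p_trans[t][s] for t in belief) for s in belief}
    # Update: weight by the observation likelihood, then normalize.
    like = {s: (p_obs_bad[s] if obs_bad else 1 - p_obs_bad[s]) for s in pred}
    z = sum(pred[s] * like[s] for s in pred)
    return {s: pred[s] * like[s] / z for s in pred}

belief = {"safe": 0.5, "unsafe": 0.5}
belief = filter_step(belief, obs_bad=True)
print(round(belief["unsafe"], 4))
```

A static network would score the same evidence once; the dynamic version carries the updated belief forward into the next time slice, which is what gives it an edge for prediction over time.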
15

Inner Ensembles: Using Ensemble Methods in Learning Step

Abbasian, Houman 16 May 2014 (has links)
A pivotal moment in machine learning research was the creation of an important new research area known as Ensemble Learning. In this work, we argue that ensembles are a very general concept, and though they have been widely used, they can be applied in more situations than they have been to date. Rather than using them only to combine the output of an algorithm, we can apply them to decisions made inside the algorithm itself, during the learning step. We call this approach Inner Ensembles. The motivation for developing Inner Ensembles was the opportunity to produce models with advantages similar to those of regular ensembles, such as accuracy and stability, plus additional advantages such as comprehensibility, simplicity, rapid classification, and a small memory footprint. The main contribution of this work is to demonstrate how broadly this idea can be applied and to highlight its potential impact on all types of algorithms. To support our claim, we first provide a general guideline for applying Inner Ensembles to different algorithms. Then, using this framework, we apply them to two categories of learning methods, supervised and unsupervised: for the former we chose Bayesian networks, and for the latter K-Means clustering. Our results show that (1) the overall performance of Inner Ensembles is significantly better than that of the original methods, and (2) Inner Ensembles provide performance improvements similar to those of regular ensembles.
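The Inner Ensembles idea (ensembling a decision made inside the learning step rather than the final model outputs) can be sketched on the simplest possible internal decision: a parameter estimate. The data, resample count, and use of a plain mean below are illustrative assumptions, not the thesis's actual Bayesian network or K-Means experiments:

```python
import random

# Inner-ensemble sketch: ensemble a decision *inside* learning.
# Here the internal decision is a simple mean estimate; each ensemble
# member sees a bootstrap resample and the learner commits to the
# members' average. Data and sizes are arbitrary.
random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(30)]

def plain_estimate(xs):
    return sum(xs) / len(xs)

def inner_ensemble_estimate(xs, n_members=25):
    estimates = []
    for _ in range(n_members):
        resample = [random.choice(xs) for _ in xs]
        estimates.append(plain_estimate(resample))
    return sum(estimates) / n_members

print(round(plain_estimate(data), 3))
print(round(inner_ensemble_estimate(data), 3))
```

The result is still a single model (one number here), which is why Inner Ensembles keep the small footprint and fast classification of the base learner while gaining some of the stability of a regular ensemble.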
17

An extended Bayesian network approach for analyzing supply chain disruptions

Donaldson Soberanis, Ivy Elizabeth 01 January 2010 (has links)
Supply chain management (SCM) is the oversight of materials, information, and finances as they move from supplier to manufacturer to wholesaler to retailer to consumer. It involves coordinating and integrating these flows both within and among companies as efficiently as possible. The supply chain consists of interconnected components that can be complex and dynamic in nature, so an interruption in one subnetwork of the system may have an adverse effect on other subnetworks, resulting in a supply chain disruption. Disruptions from an event or series of events can have costly and widespread ramifications. When a disruption occurs, the speed at which the problem is discovered becomes critical, so there is an urgent need for efficient monitoring of the supply chain. By examining the vulnerability of the supply chain network, supply chain managers can mitigate risk and develop quick-response strategies to reduce supply chain disruption. However, modeling these complex supply chain systems is a challenging research task. This research is concerned with developing an extended Bayesian network approach to analyze supply chain disruptions. The aim is to develop strategies that can reduce the adverse effects of disruptions and hence improve overall system reliability. Supply chain disruptions are modeled using Bayesian networks, a method of modeling the causes of current and future events that can handle the large number of variables in a supply chain and has proven to be a powerful tool under conditions of uncertainty. Two impact factors are defined: the Bayesian Impact Factor (BIF) and the Node Failure Impact Factor (NFIF). An industrial example illustrates how the proposed approach can make the supply chain more reliable.
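The abstract names a Node Failure Impact Factor (NFIF) without giving its formula. One generic way to read such a factor is the rise in downstream disruption probability when a node is pinned to its failed state; the three-node chain and all probabilities below are hypothetical, not the dissertation's definition:

```python
from itertools import product

# Hypothetical supplier -> manufacturer -> retailer chain.
p_supplier_fail = 0.10                     # P(supplier node fails)
p_manu_fail = {True: 0.80, False: 0.05}    # P(manufacturer disrupted | supplier)
p_retail_fail = {True: 0.90, False: 0.02}  # P(retailer stockout | manufacturer)

def p_stockout(force_supplier=None):
    total = 0.0
    for sup, man in product([True, False], repeat=2):
        if force_supplier is not None and sup != force_supplier:
            continue  # evidence: the supplier node is pinned to one state
        ps = 1.0 if force_supplier is not None else (
            p_supplier_fail if sup else 1 - p_supplier_fail)
        pm = p_manu_fail[sup] if man else 1 - p_manu_fail[sup]
        total += ps * pm * p_retail_fail[man]
    return total

baseline = p_stockout()
forced = p_stockout(force_supplier=True)
print(round(baseline, 4), round(forced, 4), round(forced - baseline, 4))
```

Ranking nodes by such a difference points monitoring effort at the subnetworks whose failure propagates most strongly, which is the monitoring goal the abstract describes.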
18

The role of classifiers in feature selection: number vs nature

Chrysostomou, Kyriacos January 2008 (has links)
Wrapper feature selection approaches are widely used to select a small subset of relevant features from a dataset. However, wrappers suffer from the fact that they use only a single classifier when selecting the features. The problem with using a single classifier is that each classifier is of a different nature and has its own biases, so each classifier will select different feature subsets. To address this problem, this thesis investigates the effects of using different classifiers for wrapper feature selection; more specifically, the effects of using different numbers of classifiers and classifiers of different natures. This aim is achieved by proposing a new data mining method called Wrapper-based Decision Trees (WDT). The WDT method can combine multiple classifiers from four different families, including Bayesian networks, decision trees, nearest neighbour, and support vector machines, to select relevant features and visualise the relationships among the selected features using decision trees. Specifically, the WDT method is applied to investigate three research questions of this thesis: (1) the effects of the number of classifiers on feature selection results; (2) the effects of the nature of classifiers on feature selection results; and (3) which of the two (i.e., number or nature of classifiers) has more of an effect on feature selection results. Two types of user preference datasets derived from Human-Computer Interaction (HCI) are used with WDT to help answer these three research questions. The investigation revealed that both the number and the nature of classifiers greatly affect feature selection results. In terms of the number of classifiers, the results showed that few classifiers selected many relevant features whereas many classifiers selected few relevant features; in addition, using three classifiers resulted in highly accurate feature subsets.
In terms of the nature of classifiers, it was shown that decision tree, Bayesian network, and nearest neighbour classifiers caused significant differences in both the number of features selected and the accuracy levels of the features. A comparison of the results regarding the number and nature of classifiers revealed that the former has more of an effect on feature selection than the latter. The thesis makes contributions to three communities: data mining, feature selection, and HCI. For the data mining community, this thesis proposes a new method, WDT, which integrates the use of multiple classifiers for feature selection and decision trees to effectively select and visualise the most relevant features within a dataset. For the feature selection community, the results show that the number and nature of classifiers can truly affect the feature selection process, and the suggestions based on these results provide useful insight about classifiers when performing feature selection. For the HCI community, this thesis shows the usefulness of feature selection for identifying a small number of highly relevant features for determining the preferences of different users.
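A wrapper, as discussed above, scores candidate feature subsets with the classifier itself. A minimal greedy forward-selection sketch follows, using a leave-one-out 1-nearest-neighbour scorer on synthetic data as a stand-in for the thesis's classifier families; the data generator and stopping rule are assumptions for illustration:

```python
import random

# Greedy forward wrapper selection with a leave-one-out 1-NN scorer.
# Features 0 and 1 are informative; features 2 and 3 are pure noise.
random.seed(1)

def make_row(label):
    x = [label + random.gauss(0, 0.3), -label + random.gauss(0, 0.3),
         random.gauss(0, 1), random.gauss(0, 1)]
    return (x, label)

data = [make_row(label) for label in (0, 1) for _ in range(20)]

def loo_1nn_accuracy(features):
    # Wrapper evaluation: the classifier itself scores the subset.
    if not features:
        return 0.0
    correct = 0
    for i, (xi, yi) in enumerate(data):
        best_d, pred = None, None
        for j, (xj, yj) in enumerate(data):
            if i == j:
                continue
            d = sum((xi[f] - xj[f]) ** 2 for f in features)
            if best_d is None or d < best_d:
                best_d, pred = d, yj
        correct += (pred == yi)
    return correct / len(data)

selected, remaining = [], [0, 1, 2, 3]
while remaining:
    score, f = max((loo_1nn_accuracy(selected + [f]), f) for f in remaining)
    if score <= loo_1nn_accuracy(selected):
        break  # no candidate improves the wrapped classifier
    selected.append(f)
    remaining.remove(f)

print(selected)
```

Swapping the scorer for a different classifier family changes which subset survives, which is exactly the single-classifier bias the thesis's multi-classifier WDT method is designed to counter.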
19

Causal modeling and prediction over event streams

Acharya, Saurav 01 January 2014 (has links)
In recent years, there has been a growing need for causal analysis in many modern stream applications such as web page click monitoring, patient health care monitoring, stock market prediction, electric grid monitoring, and network intrusion detection systems. The detection and prediction of causal relationships help in monitoring, planning, decision making, and the prevention of unwanted consequences. An event stream is a continuous unbounded sequence of event instances. The availability of a large amount of continuous data along with high data throughput poses new challenges for causal modeling over event streams, such as (1) the need for incremental causal inference for the unbounded data, (2) the need for fast causal inference for the high-throughput data, and (3) the need for real-time prediction of effects from the events seen so far in the continuous event streams. This dissertation addresses these three problems by utilizing temporal precedence information, which is readily available in event streams: (1) an incremental causal model that updates the causal network with each new batch of events, instead of storing the complete set of events seen so far and rebuilding the causal network from scratch; (2) a fast causal model that speeds up the causal network inference time; and (3) a real-time top-k predictive query processing mechanism to find the k most probable effects, using a run-time causal inference mechanism that handles cyclic causal relationships. For each of these three problems, the dissertation presents the motivation, related work, proposed approaches, and results.
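The dissertation's reliance on temporal precedence, counting which event types tend to precede which, lends itself to a simple incremental sketch: per-batch updates to running precedence counts, with no need to store all events seen so far. The event names, window size, and edge score below are invented for illustration:

```python
from collections import defaultdict

# Running counts of "event type a precedes event type b within a window".
precedence = defaultdict(int)

def update(batch, window=2):
    # batch is a time-ordered list of event types; only the batch itself
    # is scanned, so the model grows incrementally.
    for i, a in enumerate(batch):
        for b in batch[i + 1 : i + 1 + window]:
            if a != b:
                precedence[(a, b)] += 1

update(["login", "click", "purchase", "login", "click"])
update(["login", "click", "click", "purchase"])

def edge_score(a, b):
    # A pair that strongly dominates its reverse direction is a
    # candidate causal edge (a made-up asymmetry score in [-1, 1]).
    fwd, rev = precedence[(a, b)], precedence[(b, a)]
    return (fwd - rev) / (fwd + rev) if fwd + rev else 0.0

print(edge_score("login", "click"))
```

Because the counts are additive, each new batch costs time proportional to its own size, which is the incremental property the dissertation needs for unbounded streams.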
20

Statistical models for prediction of mechanical property and manufacturing process parameters for gas pipeline steels

January 2018 (has links)
Pipeline infrastructure forms a vital part of the United States economy and standard of living. A majority of the current pipeline systems were installed in the early 1900s and often lack a reliable database reporting their mechanical properties and information about manufacturing and installation, raising concern for their safety and integrity. Estimating the strength and toughness of aging pipes without interrupting transmission and operations thus becomes important. State-of-the-art techniques tend to focus on single-modality, deterministic estimation of pipe strength and do not account for inhomogeneity and uncertainty, while many others rely on destructive means. These gaps provide an impetus for novel methods to better characterize pipe material properties. The focus of this study is the design of a Bayesian network information fusion model for the prediction of accurate probabilistic pipe strength and, consequently, the maximum allowable operating pressure. A multimodal diagnosis is performed by assessing the mechanical property variation within the pipe in terms of material property measurements, such as microstructure, composition, hardness, and other mechanical properties, through experimental analysis; these measurements are then integrated with the Bayesian network model, which uses a Markov chain Monte Carlo (MCMC) algorithm. Prototype testing is carried out for model verification, validation, and demonstration, and data training of the model is employed to obtain a more accurate measure of the probabilistic pipe strength. With a view to providing a holistic measure of material performance in service, the fatigue properties of the pipe steel are investigated. The variation in the fatigue crack growth rate (da/dN) along the pipe wall thickness is studied in relation to the microstructure, and the material constants for crack growth are reported.
A combination of imaging and composition analysis is used to study the fracture surfaces of the fatigue specimens. Finally, some well-known statistical inference models are employed for the prediction of manufacturing process parameters for steel pipelines. The suitability of the small datasets for accurate prediction is discussed, and the models are compared for their performance. / Dissertation/Thesis / Doctoral Dissertation Materials Science and Engineering 2018
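A minimal sketch of the MCMC step mentioned above: Metropolis sampling of the posterior mean of a strength-like quantity given a few noisy measurements. The prior, noise level, and data are invented for illustration; the actual model fuses multiple measurement modalities in a full Bayesian network:

```python
import math
import random

# Metropolis MCMC for the posterior mean of a strength-like quantity.
# Prior, measurement noise, and data are hypothetical.
random.seed(42)
data = [52.1, 49.8, 51.5, 50.7]            # invented strength measurements
noise_sd, prior_mean, prior_sd = 1.5, 48.0, 5.0

def log_post(mu):
    # Log prior (normal) plus log likelihood (normal), up to a constant.
    lp = -((mu - prior_mean) ** 2) / (2 * prior_sd ** 2)
    lp += sum(-((x - mu) ** 2) / (2 * noise_sd ** 2) for x in data)
    return lp

mu, samples = 50.0, []
for step in range(5000):
    prop = mu + random.gauss(0, 0.5)       # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop                           # accept the move
    if step >= 1000:                        # discard burn-in
        samples.append(mu)

print(round(sum(samples) / len(samples), 2))
```

The spread of the retained samples, not just their mean, is what yields the probabilistic strength estimate and hence a defensible maximum allowable operating pressure.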
