About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

A knowledge-driven model to assess inherent safety in process infrastructure

Gholamizadeh, K., Zarei, E., Kabir, Sohag, Mamudu, A., Aala, Y., Mohammadfam, I. 09 August 2023 (has links)
Process safety has drawn increasing attention in recent years and has been investigated from different perspectives, such as quantitative risk analysis, consequence modeling, and regulations. However, few attempts have been made to focus on inherent safety design assessment, despite its being the most cost-effective safety tactic and its vital role in the sustainable development and safe operation of process infrastructure. Accordingly, the present research proposes a knowledge-driven model to assess inherent safety in process infrastructure under uncertainty. We first developed a holistic taxonomy of factors contributing to inherent safety design, covering the chemical, reaction, process, equipment, human-factors, and organizational concerns associated with process plants. We then used subject-matter experts, the content validity ratio (CVR), and the content validity index (CVI) to validate the taxonomy and the data collection tools, and employed a fuzzy inference system and the Extent Analysis (EA) method for knowledge acquisition under uncertainty. We tested the proposed model on a steam methane reforming plant that produces hydrogen as an energy carrier. The findings revealed the factors and indicators that contribute most to improving the inherent safety design of the studied plant, and they effectively support the decision-making process for assigning proper safety countermeasures.
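The CVR and CVI used above for validating the taxonomy are standard content-validity statistics. As a hedged illustration, the sketch below follows Lawshe's common convention (the thesis may use a different variant):

```python
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR: (n_e - N/2) / (N/2), ranging from -1 to +1."""
    half = n_panelists / 2
    return (n_essential - half) / half

def content_validity_index(item_cvrs: list[float]) -> float:
    """CVI taken here as the mean CVR over retained items (one common convention)."""
    return sum(item_cvrs) / len(item_cvrs)

# Example: 9 of 10 experts rate an item "essential"
cvr = content_validity_ratio(9, 10)   # -> 0.8
```

Items whose CVR falls below a tabulated critical value (which depends on panel size) are typically discarded before the CVI is computed.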
22

Comparing Probabilistic and Fuzzy Set Approaches for Designing in the Presence of Uncertainty

Chen, Qinghong 18 September 2000 (has links)
Probabilistic models and fuzzy set models describe different aspects of uncertainty. Probabilistic models primarily describe random variability in parameters; in engineering system safety, examples are variability in material properties, geometrical dimensions, or wind loads. In contrast, fuzzy set models of uncertainty primarily describe vagueness, such as vagueness in the definition of safety. When there is only limited information about variability, it is possible to use probabilistic models by making suitable assumptions about the statistics of the variability. However, it has been repeatedly shown that this can entail serious errors. Fuzzy set models, which require little data, appear to be well suited to design under uncertainty when little is known about that uncertainty. Several studies have compared fuzzy set and probabilistic methods in analyses of system safety under uncertainty. However, no study has compared the two approaches systematically as a function of the amount of available information. Such a comparison, in the context of design against failure, is the objective of this dissertation. First, the theoretical foundations of probability and possibility theories are compared. We show that a major difference between probability and possibility lies in the axioms about the union of events. Because of this difference, the probability and possibility calculi are fundamentally different, and one cannot simulate possibility calculus using probabilistic models. We also show that possibility-based methods tend to be more conservative than probability-based methods in systems that fail only if many unfavorable events occur simultaneously. Based on these theoretical observations, two design problems are formulated to demonstrate the strengths and weaknesses of probabilistic and fuzzy set methods: the design of a tuned damper system and the design and construction of domino stacks.
These problems contain narrow failure zones in their uncertain variables and are tailored to demonstrate the pitfalls of probabilistic methods when little information is available about the uncertain variables. Using these design problems, we demonstrate that probabilistic methods are better than possibility-based methods if sufficient information is available. Just as importantly, we show that possibility-based methods can be better if little information is available. Our conclusion is that when little information is available about the uncertainties, a hybrid method should be used to ensure a safe design. / Ph. D.
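The axiomatic difference stressed in this abstract (probability is additive over unions, possibility is maxitive) can be sketched numerically; the numbers below are illustrative, not taken from the dissertation:

```python
# Probability is additive over unions:
#   P(A or B) = P(A) + P(B) - P(A and B)
# Possibility is maxitive:
#   Poss(A or B) = max(Poss(A), Poss(B))

def prob_union(p_a: float, p_b: float, p_ab: float) -> float:
    return p_a + p_b - p_ab

def poss_union(poss_a: float, poss_b: float) -> float:
    return max(poss_a, poss_b)

# A system that fails only when five unfavorable events coincide:
# under independence, the joint-failure probability shrinks multiplicatively,
p_joint = 1.0
for p in [0.1] * 5:
    p_joint *= p          # about 1e-5

# while the joint possibility is bounded only by the least possible event,
poss_joint = min([0.1] * 5)   # 0.1 -- hence the more conservative verdict
```

This mirrors the abstract's observation that possibility-based methods are more conservative precisely for systems that fail only when many unfavorable events occur simultaneously.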
23

Mathematical Modeling for Data Envelopment Analysis with Fuzzy Restrictions on Weights

Kabnurkar, Amit 01 May 2001 (has links)
Data envelopment analysis (DEA) is a relative technical efficiency measurement tool that uses operations research techniques to automatically calculate the weights assigned to the inputs and outputs of the production units being assessed. The actual input/output data values are then multiplied by the calculated weights to determine the efficiency scores. Recent variants of the DEA model impose upper and lower bounds on the weights to eliminate certain drawbacks associated with unrestricted weights; these variants are called weight restriction DEA models. Most weight restriction DEA models suffer from the drawback that the weight bound values are uncertain, because they are determined from either incomplete information or the subjective opinion of the decision-makers. Since the efficiency scores calculated by the DEA model are sensitive to the values of the bounds, the uncertainty of the bounds is passed on to the efficiency scores. This uncertainty becomes unacceptable when we consider that DEA results are used for making important decisions such as allocating funds and taking action against inefficient units. In order to minimize the effect of uncertain bound values on the decision-making process, we propose to incorporate the uncertainty explicitly into the modeling process using concepts from fuzzy set theory. Modeling the imprecision involves replacing the bound values with fuzzy numbers, because fuzzy numbers capture the intuitive conception of approximate numbers very well. Amongst the numerous types of weight restriction DEA models developed in the literature, two are more commonly used in real-life applications than the others; therefore, in this research, we focus on these two types of models for modeling the uncertainty in bound values. These are the absolute weight restriction DEA models and the Assurance Region (AR) DEA models.
After developing the fuzzy models, we provide implementation roadmaps for illustrating the development and solution methodology of those models. We apply the fuzzy weight restriction models to the same data sets as those used by the corresponding crisp weight restriction models in the literature and compare the results using the two-sample paired t-test for means. We also use the fuzzy AR model developed in the research to measure the performance of a newspaper preprint insertion line. / Master of Science
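The idea of replacing a crisp weight bound with an "approximate number" can be sketched with a triangular fuzzy number and its alpha-cuts; the class and values below are illustrative assumptions, not the thesis's formulation:

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    """Triangular fuzzy number (a, m, b): support [a, b], peak at m."""
    a: float
    m: float
    b: float

    def alpha_cut(self, alpha: float) -> tuple:
        """Interval of values with membership >= alpha."""
        lo = self.a + alpha * (self.m - self.a)
        hi = self.b - alpha * (self.b - self.m)
        return lo, hi

# An "approximately 0.5" upper bound on a DEA weight:
bound = TriangularFuzzyNumber(0.4, 0.5, 0.6)
crisp_interval = bound.alpha_cut(0.8)   # roughly (0.48, 0.52)
```

Solving the weight-restricted DEA program at several alpha levels then yields efficiency scores that reflect the decision-makers' confidence in the bounds rather than a single arbitrary crisp value.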
24

Multiple Attributes Group Decision Making by Type-2 Fuzzy Sets and Systems

Jaffal, Hussein, Tao, Cheng January 2011 (has links)
We are living in a world full of uncertainty and ambiguity, and we often ask ourselves questions whose answers we are uncertain about. Is it going to rain tomorrow? What will be the exchange rate of the euro next month? Why, where, and how should I invest? Type-1 fuzzy sets are characterized by a membership function whose value for a given element x, called the grade of membership, lies in the interval [0, 1]; they have limited capabilities for dealing with uncertainty. In our thesis, we study another concept for the fuzzy description of uncertainty, called type-2 fuzzy sets. Under this concept, for any given element x, we cannot speak of an unambiguously specified value of the membership function; type-2 fuzzy sets thus constitute a powerful tool for handling uncertainty. The aim of our thesis is to examine the potential of type-2 fuzzy sets, especially in decision making. We present basic definitions concerning type-2 fuzzy sets and discuss operations on these sets. Then, type-2 fuzzy relations and methods for transforming type-2 fuzzy sets are introduced, and the theory of type-2 fuzzy sets serves for the construction of a fuzzy inference system. Finally, we utilize interval type-2 fuzzy sets in an application to Multiple Attributes Group Decision Making using the TOPSIS method.
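As a hedged sketch of the TOPSIS step mentioned at the end, the version below uses crisp scores and made-up data rather than the thesis's interval type-2 fuzzy sets:

```python
import math

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: rank alternatives by closeness to the ideal solution.
    matrix[i][j] = score of alternative i on criterion j;
    benefit[j] = True if criterion j is to be maximised."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalise each column, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points, per criterion direction
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) if benefit[j]
            else max(v[i][j] for i in range(m)) for j in range(n)]
    # Relative closeness: d(anti) / (d(ideal) + d(anti))
    scores = []
    for i in range(m):
        d_plus = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_minus = math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# Three alternatives scored on two benefit criteria (illustrative data)
scores = topsis([[7, 9], [8, 6], [6, 8]], weights=[0.6, 0.4], benefit=[True, True])
```

In the interval type-2 extension studied in the thesis, the crisp matrix entries become fuzzy memberships and the distance computations operate on their bounds, but the normalise/ideal/closeness structure is the same.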
25

Type-2 fuzzy logic : circumventing the defuzzification bottleneck

Greenfield, Sarah January 2012 (has links)
Type-2 fuzzy inferencing for generalised, discretised type-2 fuzzy sets has been impeded by the computational complexity of the defuzzification stage of the fuzzy inferencing system. Indeed this stage is so complex computationally that it has come to be known as the defuzzification bottleneck. The computational complexity derives from the enormous number of embedded sets that have to be individually processed in order to effect defuzzification. Two new approaches to type-2 defuzzification are presented, the sampling method and the Greenfield-Chiclana Collapsing Defuzzifier. The sampling method and its variant, elite sampling, are techniques for the defuzzification of generalised type-2 fuzzy sets. In these methods a relatively small sample of the totality of embedded sets is randomly selected and processed. The small sample size drastically reduces the computational complexity of the defuzzification process, so that it may be speedily accomplished. The Greenfield-Chiclana Collapsing Defuzzifier relies upon the concept of the representative embedded set, which is an embedded set having the same defuzzified value as the type-2 fuzzy set that is to be defuzzified. By a process termed collapsing the type-2 fuzzy set is converted into a type-1 fuzzy set which, as an approximation to the representative embedded set, is known as the representative embedded set approximation. This type-1 fuzzy set is easily defuzzified to give the defuzzified value of the original type-2 fuzzy set. By this method the computational complexity of type-2 defuzzification is reduced enormously, since the representative embedded set approximation replaces the entire collection of embedded sets. The strategy was conceived as a generalised method, but so far only the interval version has been derived mathematically. The grid method of discretisation for type-2 fuzzy sets is also introduced in this thesis. Work on the defuzzification of type-2 fuzzy sets began around the turn of the millennium. 
Since that time a number of investigators have contributed methods in this area. These different approaches are surveyed, and the major methods are implemented in code prior to their experimental evaluation. In these comparative experiments the grid method of discretisation is employed. The experimental results show beyond doubt that the collapsing method performs best among the interval alternatives. However, though the sampling method performs well experimentally, the results do not demonstrate it to be the best-performing generalised technique.
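Both the sampling method and the collapsing defuzzifier ultimately reduce the problem to defuzzifying a discretised type-1 (or embedded) set by its centroid. A minimal centroid sketch on illustrative values, not code from the thesis:

```python
def centroid_defuzzify(xs, mus):
    """Centroid of a discretised type-1 fuzzy set:
    sum(x * mu(x)) / sum(mu(x))."""
    num = sum(x * mu for x, mu in zip(xs, mus))
    den = sum(mus)
    return num / den

# A symmetric triangular set discretised over five points
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
mus = [0.0, 0.5, 1.0, 0.5, 0.0]
value = centroid_defuzzify(xs, mus)   # -> 2.0 by symmetry
```

The bottleneck described above arises because a generalised type-2 set contains an enormous number of embedded sets, each needing such a computation; sampling processes only a random subset of them, while collapsing replaces them all with a single representative type-1 approximation.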
26

An Information Security Control Assessment Methodology for Organizations

Otero, Angel Rafael 01 January 2014 (has links)
In an era where the use of, and dependence on, information systems is significantly high, the threat of information security incidents that could jeopardize the information held by organizations is increasingly serious. Alarming facts within the literature point to inadequacies in information security practices, particularly in the evaluation of information security controls (ISC) in organizations. Research efforts have produced various methodologies for the information security controls assessment problem, but a closer look at these traditional methodologies highlights various weaknesses that can prevent an effective assessment. This dissertation develops a methodology that addresses such weaknesses when evaluating information security controls in organizations. The methodology, implemented using the Fuzzy Logic Toolbox of MATLAB, is based on fuzzy set theory and fuzzy logic, which allow for a more accurate assessment of imprecise criteria than traditional methodologies. It is argued and evidenced that evaluating information security controls using fuzzy set theory addresses the weaknesses found in the literature for traditional evaluation methodologies and thus leads to a more thorough and precise assessment. This, in turn, results in a more effective selection of information security controls and enhanced information security in organizations. The main contribution of this research to the information security literature is the development of a fuzzy set theory-based assessment methodology that provides a thorough evaluation of ISC in organizations and addresses the weaknesses and limitations identified in existing assessment methodologies.
The methodology can also be implemented in a spreadsheet or software tool, promoting its use in practical scenarios where highly complex methodologies for ISC selection are impractical. Moreover, the methodology fuses multiple evaluation criteria to provide a holistic view of the overall quality of information security controls, and it is easily extended to include additional evaluation criteria not considered within this dissertation; this is one of its most meaningful contributions. Finally, the methodology provides a mechanism to evaluate the quality of information security controls across various domains. Overall, the methodology presented in this dissertation proved to be a feasible technique for evaluating information security controls in organizations.
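A hedged sketch of the kind of fuzzy evaluation involved: an assessor's crisp rating of a control is fuzzified against linguistic terms. The terms, scale, and numbers are invented for illustration; the dissertation's MATLAB Fuzzy Logic Toolbox model is not reproduced here:

```python
def triangular(x: float, a: float, m: float, b: float) -> float:
    """Membership of x in the triangular fuzzy set (a, m, b)."""
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return (x - a) / (m - a)
    return (b - x) / (b - m)

# Hypothetical linguistic terms for control effectiveness on a 0-10 scale
terms = {
    "weak":     (0.0, 2.0, 5.0),
    "adequate": (3.0, 5.0, 7.0),
    "strong":   (5.0, 8.0, 10.0),
}

rating = 6.0   # an assessor's crisp score for one control
memberships = {name: triangular(rating, *abc) for name, abc in terms.items()}
# the control is partly "adequate" and partly "strong", not just one or the other
```

This partial membership in several terms is what lets a fuzzy assessment represent imprecise criteria more faithfully than a crisp pass/fail evaluation.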
27

Data envelopment analysis with sparse data

Gullipalli, Deep Kumar January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / David H. Ben-Arieh / The quest for continuous improvement among organizations, and the issue of missing data in data analysis, are never-ending. This thesis brings these two topics under one roof: evaluating the productivity of organizations with sparse data. This study uses Data Envelopment Analysis (DEA) to determine the efficiency of 41 member clinics of the Kansas Association for the Medically Underserved (KAMU) in the presence of missing data. The primary focus of this thesis is to develop new, reliable methods to determine the missing values and to execute DEA. DEA is a linear programming methodology for evaluating the relative technical efficiency of homogeneous Decision Making Units using multiple inputs and outputs. The effectiveness of DEA depends on the quality and quantity of the data being used; DEA outcomes are susceptible to missing data, creating a need to supplement sparse data in a reliable manner. Determining missing values more precisely improves the robustness of the DEA methodology. Three methods to determine the missing values, based on three different platforms, are proposed in this thesis. The first, named the Average Ratio Method (ARM), uses the average of all ratios between two variables. The second is based on a modified Fuzzy C-Means clustering algorithm that can handle missing data; the issues associated with this clustering algorithm are resolved to improve its effectiveness. The third is based on an interval approach: missing values are replaced by interval ranges estimated by experts, and crisp efficiency scores are identified along the same lines by which DEA determines efficiency scores using the best set of weights. There exists no unique way to evaluate the effectiveness of these methods, so their effectiveness is tested by choosing a complete dataset and assuming varying levels of data to be missing.
The best sets of recovered missing values, based on the above methods, serve as a source to execute DEA. Results show that the DEA efficiency scores generated with recovered values are in close proximity to the actual efficiency scores that would be generated with the complete data. In summary, this thesis provides an effective and practical approach for replacing the missing values needed for DEA.
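The Average Ratio Method described above can be sketched as follows; this is a guess at the idea from the abstract alone, and the data and function names are invented:

```python
def average_ratio_impute(known_pairs, known_x):
    """Hypothetical sketch of the Average Ratio Method (ARM):
    estimate a missing y from x using the mean of observed y/x ratios."""
    ratios = [y / x for x, y in known_pairs]
    avg_ratio = sum(ratios) / len(ratios)
    return known_x * avg_ratio

# Clinics with both staff count (x) and visits (y) observed:
observed = [(10, 200), (20, 380), (15, 310)]
# Estimate visits for a clinic with 12 staff but a missing visits value
estimate = average_ratio_impute(observed, known_x=12)
```

Whatever the exact formulation in the thesis, the appeal of such a method is that it needs no distributional assumptions, only the pairs of variables already present in the dataset.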
28

Bushing diagnosis using artificial intelligence and dissolved gas analysis

Dhlamini, Sizwe Magiya 20 June 2008 (has links)
This dissertation is a study of artificial intelligence techniques for diagnosing the condition of high-voltage bushings. The techniques include neural networks, genetic algorithms, fuzzy set theory, particle swarm optimisation, multi-classifier systems, factor analysis, principal component analysis, multidimensional scaling, data-fusion techniques, automatic relevance determination, and autoencoders. The classification is done using Dissolved Gas Analysis (DGA) data based on field experience, together with criteria from IEEE C57.104 and IEC 60599. A review of the current literature showed that common methods for the diagnosis of bushings are: partial discharge, DGA, tan-δ (dielectric dissipation factor), water content in oil, dielectric strength of oil, acidity level (neutralisation value), visual analysis of sludge in suspension, colour of the oil, furanic content, degree of polymerisation (DP), strength of the insulating paper, interfacial tension, and oxygen content tests. All of these methods have limitations in terms of time and accuracy in decision making. The high subjectivity of decisions made with each of these methods individually, the huge size of the historical database, and the loss of skills due to the retirement of experienced technical staff together highlight the need for an automated diagnosis tool that integrates information from the many sensors, recalls historical decisions, and learns from new information. Three classifiers are compared in this analysis: radial basis functions (RBF), multilayer perceptrons (MLP), and support vector machines (SVM). In this work 60,699 bushings were classified based on ten criteria, with classification done by majority vote. The work proposes the application of neural networks with particle swarm optimisation (PSO) and genetic algorithms (GA) to compensate for missing data when classifying high-voltage bushings.
The work also proposes the application of fuzzy set theory (FST) to diagnose the condition of high voltage bushings. The relevance and redundancy detection methods were able to prune the redundant measured variables and accurately diagnose the condition of the bushing with fewer variables. Experimental results from bushings that were evaluated in the field verified the simulations. The results of this work can help to develop real-time monitoring and decision making tools that combine information from chemical, electrical and mechanical measurements taken from bushings.
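The majority-vote combination of the three classifiers can be sketched simply; the labels below are illustrative, not the dissertation's code:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine classifier outputs (e.g. from RBF, MLP, and SVM models)
    by majority vote; ties go to the most common first-seen label."""
    return Counter(predictions).most_common(1)[0][0]

# Three classifiers disagree on one bushing; the majority wins
diagnosis = majority_vote(["faulty", "healthy", "faulty"])   # -> "faulty"
```

With ten criteria per bushing, the same idea extends naturally: each criterion (or each classifier applied to it) casts a vote, and the condition label with the most votes is assigned.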
29

Proposta de um processo sistemático baseado em métricas não-dicotômicas para avaliação de predição de links em redes de coautoria. / Proposal of a systematic process based on non-dichotomic metrics for evaluation of link prediction in co-authorship networks.

Silva, Elisandra Aparecida Alves da 17 March 2011 (has links)
Link prediction is an important research area in the context of Social Network Analysis, since predicting the evolution of such networks is a useful mechanism to improve and encourage communication among users. In co-authorship networks, it can be used for recommending users with common research interests. This work proposes a systematic process based on non-dichotomic metrics for the evaluation of link prediction in co-authorship networks, defining methods for the following identified tasks: data selection, new-link determination, and result evaluation. For data selection, a fuzzy sensor based on node attributes is adopted. Fuzzy compositions are used to determine new weighted links between two authors, adopting not only node attributes but also combinations of attributes of other observed links. The link weight, called "relation quality", is obtained by using structural properties of the network. For result evaluation, a fuzzy ROC curve is proposed, which allows the link weights to be exploited beyond merely ordering the examples.
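One common fuzzy composition that could produce such a weighted link is the sup-min (max-min) composition; the interpretation and numbers below are assumptions for illustration, not the thesis's exact operator:

```python
def max_min_composition(memberships_a, memberships_b):
    """Sup-min composition of two fuzzy relations over a shared middle index:
    mu(x, z) = max over y of min(mu_a(x, y), mu_b(y, z))."""
    return max(min(a, b) for a, b in zip(memberships_a, memberships_b))

# Hypothetical: author u's fuzzy similarity to three shared venues, and
# those venues' fuzzy similarity to author v, composed into a link weight
weight = max_min_composition([0.9, 0.4, 0.7], [0.6, 0.8, 0.5])   # -> 0.6
```

Producing graded weights rather than 0/1 predictions is exactly what makes a non-dichotomic evaluation metric such as a fuzzy ROC curve necessary.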
