11

A Family of Dominance Filters for Multiple Criteria Decision Making: Choosing the Right Filter for a Decision Situation

Iyer, Naresh Sundaram 17 December 2001 (has links)
No description available.
12

A Sequential Design for Approximating the Pareto Front using the Expected Pareto Improvement Function

Bautista, Dianne Carrol Tan 26 June 2009 (has links)
No description available.
13

Interpretation, identification and reuse of models: theory and algorithms with applications in predictive toxicology

Palczewska, Anna Maria January 2014 (has links)
This thesis is concerned with developing methodologies that enable existing models to be effectively reused. The results are presented in the framework of Quantitative Structure-Activity Relationship (QSAR) models, but their application is much more general. QSAR models relate chemical structures to their biological, chemical or environmental activity. There are many applications that offer an environment to build and store predictive models. Unfortunately, they do not provide advanced functionalities that allow for efficient model selection and for interpretation of model predictions for new data. This thesis aims to address these issues and proposes methodologies for dealing with three research problems: model governance (management), model identification (selection), and interpretation of model predictions. The combination of these methodologies can be employed to build more efficient systems for model reuse in QSAR modelling and other areas.

The first part of this study investigates toxicity data and model formats and reviews some of the existing toxicity systems in the context of model development and reuse. Based on the findings of this review and the principles of data governance, a novel concept of model governance is defined. Model governance comprises model representation and model governance processes. These processes are designed and presented in the context of model management. As an application, minimum information requirements and an XML representation for QSAR models are proposed.

Once a collection of validated, accepted and well-annotated models is available within a model governance framework, they can be applied to new data. It may happen that more than one model is available for the same endpoint. Which one to choose? The second part of this thesis proposes a theoretical framework and algorithms that enable automated identification of the most reliable model for new data from the collection of existing models. The main idea is based on partitioning the search space into groups and assigning a single model to each group. The construction of this partitioning is difficult because it is a bi-criteria problem. The main contribution in this part is the application of Pareto points to the search space partition. The proposed methodology is applied to three endpoints in chemoinformatics and predictive toxicology.

After having identified a model for the new data, we would like to know how the model obtained its prediction and how trustworthy it is. An interpretation of model predictions is straightforward for linear models thanks to the availability of model parameters and their statistical significance. For non-linear models this information can be hidden inside the model structure. This thesis proposes an approach for interpreting a random forest classification model. The approach allows the influence (called the feature contribution) of each variable on the model prediction for an individual data point to be determined. Three methods are proposed for analysing feature contributions. Such analysis might lead to the discovery of new patterns that represent a standard behaviour of the model and allow additional assessment of the model reliability for new data. The application of these methods to two standard benchmark datasets from the UCI machine learning repository shows the great potential of this methodology.
The algorithm for calculating feature contributions has been implemented and is available as an R package called rfFC.
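The feature-contribution technique summarised above can be sketched compactly. The thesis's own implementation is the R package rfFC; the Python code below is only an illustrative reconstruction of the general idea — walking each tree's decision path for one instance and crediting the change in predicted class probability at every split to the feature used there — assuming a fitted scikit-learn RandomForestClassifier, not the rfFC algorithm itself.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def feature_contributions(forest, x, class_index=1):
        """Split one instance's predicted probability into a bias (root) term
        plus an additive contribution per feature, averaged over all trees."""
        contrib = np.zeros(forest.n_features_in_)
        bias = 0.0
        for est in forest.estimators_:
            tree = est.tree_

            def prob(node):
                # class probability at a node = class count / total count
                counts = tree.value[node][0]
                return counts[class_index] / counts.sum()

            node = 0
            bias += prob(node)
            while tree.children_left[node] != -1:          # until a leaf is reached
                f = tree.feature[node]
                child = (tree.children_left[node]
                         if x[f] <= tree.threshold[node]
                         else tree.children_right[node])
                contrib[f] += prob(child) - prob(node)     # credit the split's feature
                node = child
        n = len(forest.estimators_)
        return bias / n, contrib / n    # bias + contrib.sum() ≈ forest probability

By construction, the bias plus the summed contributions approximately reproduces the forest's predicted probability for the instance, which is what makes the decomposition useful for interpreting individual predictions.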
14

Or Best Offer: A Privacy Policy Negotiation Protocol

Walker, Daniel David 12 July 2007 (has links) (PDF)
Users today are concerned about how their information is collected, stored, and used by Internet sites. Privacy policy languages, such as the Platform for Privacy Preferences (P3P), allow websites to publish their privacy practices and policies in machine-readable form. Currently, software agents designed to protect users' privacy follow a "take it or leave it" approach when evaluating these privacy policies. This approach is inflexible and gives the server ultimate control over the privacy of web transactions. Privacy policy negotiation is one approach to leveling the playing field by allowing a client to negotiate with a server to determine how that server collects and uses the client's data. We present a privacy policy negotiation protocol, "Or Best Offer", that includes a formal model for specifying privacy preferences and reasoning about privacy policies. The protocol is guaranteed to terminate within three rounds of negotiation while producing policies that are Pareto-optimal, and thus fair to both the client and the server.
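For concreteness, the Pareto-optimality property claimed for the negotiated policies can be stated in a few lines of code. This is only a sketch of the dominance check under assumed numeric utility functions for the two parties, not the "Or Best Offer" protocol itself.

    def pareto_optimal(policy, alternatives, client_utility, server_utility):
        """True if no alternative policy makes one party better off without
        making the other party worse off (both utilities to be maximised)."""
        cu0, su0 = client_utility(policy), server_utility(policy)
        for alt in alternatives:
            cu, su = client_utility(alt), server_utility(alt)
            if cu >= cu0 and su >= su0 and (cu > cu0 or su > su0):
                return False    # alt dominates the negotiated policy
        return True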
15

Using Pareto points for model identification in predictive toxicology

Palczewska, Anna Maria, Neagu, Daniel, Ridley, Mick J. January 2013 (has links)
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of the toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design and food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature on the best practice of model generation and data integration, but the management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show great potential for automated model identification methods in predictive toxicology.
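A rough sketch of such a Pareto-based identification step is given below. The two criteria used here — a model's local error on training compounds near the query and the query's distance from the model's training domain — are illustrative assumptions (as are the local_error and domain_distance helpers), not necessarily the criteria used in the paper; the sketch only shows how non-dominated models are kept and one of them returned.

    import numpy as np

    def pareto_front(points):
        """Indices of points that are not dominated (both criteria minimised)."""
        front = []
        for i, p in enumerate(points):
            dominated = any(
                q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
                for j, q in enumerate(points) if j != i
            )
            if not dominated:
                front.append(i)
        return front

    def identify_model(models, query_descriptor):
        # hypothetical model wrappers exposing the two criteria for a query compound
        scores = [(m.local_error(query_descriptor),
                   m.domain_distance(query_descriptor)) for m in models]
        candidates = pareto_front(scores)
        # simple tie-break among Pareto-optimal models: smallest Euclidean norm
        best = min(candidates, key=lambda i: np.hypot(*scores[i]))
        return models[best]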
16

Interpretation, Identification and Reuse of Models. Theory and algorithms with applications in predictive toxicology.

Palczewska, Anna Maria January 2014 (has links)
This thesis is concerned with developing methodologies that enable existing models to be effectively reused. The results are presented in the framework of Quantitative Structure-Activity Relationship (QSAR) models, but their application is much more general. QSAR models relate chemical structures to their biological, chemical or environmental activity. There are many applications that offer an environment to build and store predictive models. Unfortunately, they do not provide advanced functionalities that allow for efficient model selection and for interpretation of model predictions for new data. This thesis aims to address these issues and proposes methodologies for dealing with three research problems: model governance (management), model identification (selection), and interpretation of model predictions. The combination of these methodologies can be employed to build more efficient systems for model reuse in QSAR modelling and other areas.

The first part of this study investigates toxicity data and model formats and reviews some of the existing toxicity systems in the context of model development and reuse. Based on the findings of this review and the principles of data governance, a novel concept of model governance is defined. Model governance comprises model representation and model governance processes. These processes are designed and presented in the context of model management. As an application, minimum information requirements and an XML representation for QSAR models are proposed.

Once a collection of validated, accepted and well-annotated models is available within a model governance framework, they can be applied to new data. It may happen that more than one model is available for the same endpoint. Which one to choose? The second part of this thesis proposes a theoretical framework and algorithms that enable automated identification of the most reliable model for new data from the collection of existing models. The main idea is based on partitioning the search space into groups and assigning a single model to each group. The construction of this partitioning is difficult because it is a bi-criteria problem. The main contribution in this part is the application of Pareto points to the search space partition. The proposed methodology is applied to three endpoints in chemoinformatics and predictive toxicology.

After having identified a model for the new data, we would like to know how the model obtained its prediction and how trustworthy it is. An interpretation of model predictions is straightforward for linear models thanks to the availability of model parameters and their statistical significance. For non-linear models this information can be hidden inside the model structure. This thesis proposes an approach for interpreting a random forest classification model. The approach allows the influence (called the feature contribution) of each variable on the model prediction for an individual data point to be determined. Three methods are proposed for analysing feature contributions. Such analysis might lead to the discovery of new patterns that represent a standard behaviour of the model and allow additional assessment of the model reliability for new data. The application of these methods to two standard benchmark datasets from the UCI machine learning repository shows the great potential of this methodology.
The algorithm for calculating feature contributions has been implemented and is available as an R package called rfFC. / BBSRC and Syngenta (International Research Centre at Jealott’s Hill, Bracknell, UK).
17

Advances in aircraft design: multiobjective optimization and a markup language

Deshpande, Shubhangi Govind 23 January 2014 (has links)
Today's modern aerospace systems exhibit strong interdisciplinary coupling and require a multidisciplinary, collaborative approach. Analysis methods that were once considered feasible only for advanced and detailed design are now available and even practical at the conceptual design stage. This changing philosophy for conducting conceptual design poses additional challenges beyond those encountered in low-fidelity aircraft design. This thesis takes some steps towards bridging the gaps in existing technologies and advancing the state of the art in aircraft design. The first part of the thesis proposes a new Pareto front approximation method for multiobjective optimization problems. The method employs a hybrid optimization approach using two derivative-free direct search techniques, and is intended for solving blackbox simulation-based multiobjective optimization problems with possibly nonsmooth functions, where the analytical form of the objectives is not known and/or the evaluation of the objective function(s) is very expensive (very common in multidisciplinary design optimization). A new adaptive weighting scheme is proposed to convert a multiobjective optimization problem into a single-objective optimization problem. Results show that the method achieves an arbitrarily close approximation to the Pareto front with a good collection of well-distributed nondominated points.

The second part deals with the interdisciplinary data communication issues involved in a collaborative multidisciplinary aircraft design environment. Efficient transfer, sharing, and manipulation of design and analysis data in a collaborative environment demand a formal structured representation of data. XML, a W3C recommendation, is one such standard, accompanied by a number of powerful capabilities that alleviate interoperability issues. A compact, generic, and comprehensive XML schema for an aircraft design markup language (ADML) is proposed here to provide a common language for data communication and to improve efficiency and productivity within a multidisciplinary, collaborative environment. An important feature of the proposed schema is its set of very expressive and efficient low-level schemata. As a proof of concept, the schema is used to encode an entire Convair B-58. As the complexity of models and the number of disciplines increase, the reduction in effort to exchange data models and analysis results in ADML also increases. / Ph. D.
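The scalarisation step that the adaptive weighting scheme builds on can be illustrated with a generic weighted-sum sweep. The toy objectives and the use of scipy's gradient-based minimize are illustrative only — the thesis itself uses derivative-free direct search and its own adaptive weight-update rule, neither of which is reproduced here.

    import numpy as np
    from scipy.optimize import minimize

    def f1(x):  # toy objective 1
        return (x[0] - 1.0) ** 2 + x[1] ** 2

    def f2(x):  # toy objective 2
        return x[0] ** 2 + (x[1] - 1.0) ** 2

    def scalarised_solve(w, x0=np.zeros(2)):
        """Minimise the single objective w*f1 + (1-w)*f2 for one weight w in [0, 1]."""
        res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x), x0)
        return res.x, (f1(res.x), f2(res.x))

    # sweeping the weight traces out points approximating the Pareto front of (f1, f2)
    front = [scalarised_solve(w)[1] for w in np.linspace(0.0, 1.0, 11)]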
18

A decision support system for multi-objective programming problems

Rangoaga, Moeti Joseph 11 1900 (has links)
Many concrete problems may be cast in a multi-objective optimisation framework. The redundancy of existing methods for solving multi-objective programming problems, which are susceptible to inconsistencies, coupled with the necessity of making inherent assumptions before using a given method, makes it hard for a nonspecialist to choose a method that fits the situation at hand well. Moreover, using a method blindly, as suggested by the hammer principle (when the only tool you have is a hammer, everything looks like a nail), is an awkward approach at best and a caricatural one at worst. This brings challenges to the design, development, implementation and deployment of a Decision Support System able to choose a method that is appropriate for a given problem and to apply the chosen method to solve the problem under consideration. The choice of method should be made according to the structure of the problem and the decision maker's opinion. The aim here is to embed a sample of methods representing the main multi-objective programming techniques and to help the decision maker find the most appropriate method for his problem. / Decision Sciences / M. Sc. (Operations Research)
19

Advances in simulation: validity and efficiency

Lee, Judy S. 08 June 2015 (has links)
In this thesis, we present and analyze three algorithms that are designed to make computer simulation more efficient, valid, and/or applicable.

The first algorithm uses simulation cloning to enhance efficiency in transient simulation. Traditional simulation cloning is a technique that shares some parts of the simulation results when simulating different scenarios. We apply this idea to transient simulation, where multiple replications are required to achieve statistical validity. Computational savings are achieved by sharing some parts of the simulation results among several replications. We improve the algorithm by inducing negative correlation to compensate for the (undesirable) positive correlation introduced by sharing some parts of the simulation. We then identify how many replications should share the same data, and provide numerical results to analyze the performance of our approach.

The second algorithm chooses a set of best systems when there are multiple candidate systems and multiple objectives. We provide three different formulations of correct selection of the Pareto-optimal set, where a system is Pareto optimal if it is not inferior in all objectives compared to other competing systems. We then present our Pareto selection algorithm and prove its validity for all three formulations. Finally, we provide numerical results aimed at understanding how well our algorithm performs in various settings.

The third algorithm addresses the estimation of input distributions when theoretical distributions do not provide a good fit to existing data. Our approach is to use a quasi-empirical distribution, which is a mixture of an empirical distribution and a distribution for the right tail. We describe an existing approach that involves an exponential tail distribution, and adapt it to incorporate a Pareto tail distribution and to use a different cutoff point between the empirical and tail distributions. To measure the impact, we simulate a stable M/G/1 queue with a known inter-arrival time distribution and an unknown service time distribution, and estimate the mean and tail probabilities of the waiting time in queue using the different approaches. The results suggest that if we know that the system is stable, and suspect that the tail of the service time distribution is not exponential, then a quasi-empirical distribution with a Pareto tail works well, provided a lower bound is imposed on the tail index.
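The quasi-empirical construction described in the last part can be sketched briefly: draw from the empirical data below a cutoff and from a fitted Pareto tail above it. The tail fraction, the Hill-style tail-index estimate and the lower bound on the index are illustrative choices, not the exact procedure analysed in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_quasi_empirical_sampler(data, tail_fraction=0.1, min_tail_index=1.05):
        data = np.sort(np.asarray(data, dtype=float))
        k = max(int(tail_fraction * len(data)), 2)      # number of tail observations
        u = data[-(k + 1)]                              # cutoff between body and tail
        # Hill estimator of the Pareto tail index, floored so the mean stays finite
        alpha = max(k / np.sum(np.log(data[-k:] / u)), min_tail_index)
        body = data[data <= u]
        p_tail = k / len(data)

        def sample(n):
            in_tail = rng.random(n) < p_tail
            out = np.empty(n)
            out[~in_tail] = rng.choice(body, size=(~in_tail).sum())      # empirical body
            # inverse-CDF draw from the Pareto tail: u * U**(-1/alpha)
            out[in_tail] = u * rng.random(in_tail.sum()) ** (-1.0 / alpha)
            return out

        return sample

Feeding such a sampler into an M/G/1 simulation is then a drop-in replacement for sampling service times directly from the empirical distribution.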
20

Risk Measures Constituting Risk Metrics for Decision Making in the Chemical Process Industry

Prem, Katherine 2010 December 1900 (has links)
Catastrophic incidents in the process industry leave a marked legacy of staggering economic and societal losses incurred by the company, the government and society. The work described herein proposes a novel approach to help predict and mitigate potential catastrophes and to understand the stakes at risk for better risk-informed decision making. The methodology includes societal impact as a risk measure along with the monetization of tangible asset damage. Predicting incidents through leading metrics is pivotal to improving plant processes and to individual and societal safety in the vicinity of the plant (portfolio).

From this study it can be concluded that a comprehensive judgment of all the risks and losses should entail the analysis of the overall results of all possible incident scenarios. Value-at-Risk (VaR) is most suitable as an overall measure for many scenarios and a large number of portfolio assets. FN-curves and F$-curves can be correlated, which is very beneficial for understanding the trends of historical incidents in the U.S. chemical process industry. Analyzing historical databases can provide valuable information on incident occurrences and their consequences as lagging metrics (or lagging indicators) for the mitigation of portfolio risks. The study also shows a strong statistical relationship between the different consequence tiers of the safety pyramid, and that Heinrich's safety pyramid is comparable to data mined from the HSEES database.

Furthermore, any chemical plant operation is robust only when a strategic balance is struck between optimal plant operations and maintaining health, safety and a sustainable environment. The balance emerges from choosing the best option amidst several conflicting parameters. Strategies for normative decision making should be utilized for making choices under uncertainty. Hence, decision theory is utilized here to lay the framework for choosing the optimum portfolio option among several competing portfolios. To capture the strategic interactions of the different contributing representative sets that play a key role in determining the most preferred action for optimum production and safety, the concepts of game theory are utilized and a framework is provided as a novel application to the chemical process industry.
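As a small illustration of the Value-at-Risk measure mentioned above: once incident scenarios have been simulated and their monetised losses aggregated, VaR at a given confidence level is simply the corresponding loss quantile. The lognormal losses below are placeholders, not data from the thesis.

    import numpy as np

    def value_at_risk(losses, confidence=0.95):
        """Loss threshold exceeded with probability (1 - confidence)."""
        return float(np.quantile(losses, confidence))

    # placeholder portfolio losses per simulated incident scenario (arbitrary units)
    losses = np.random.default_rng(1).lognormal(mean=10.0, sigma=1.5, size=10_000)
    print(value_at_risk(losses, 0.95))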
