1

Modeling, Stability and Dynamics of Reactive Distillation

Miao, Peizhi January 2000
No description available.
2

Contributions to ensembles of models for predictive toxicology applications : on the representation, comparison and combination of models in ensembles

Makhtar, Mokhairi January 2012
The increasing variety of data mining tools offers a large palette of types and representation formats for predictive models. Managing these models, reusing them, and keeping model and data repositories consistent then become major challenges, and sustainable access to the models and assessment of their quality become difficult for researchers. A Data and Model Governance (DMG) approach makes it easier to process and support complex solutions. This thesis contributes to ensembles of models, with a focus on model representation, comparison and usage. Predictive toxicology was chosen as the application field for demonstrating the proposed approach of representing predictive models linked to data for DMG. Methods for further analysis, such as comparing and combining predictive models in order to reuse models from a collection, were also studied. The thesis proposes an original structure for the pool of models, the Predictive Toxicology Markup Language (PTML), which offers a representation scheme for predictive toxicology data and for the models generated by data mining tools. The proposed representation makes it possible to compare models and to select the relevant ones based on different performance measures, using the proposed similarity measuring techniques. Relevant models are selected with a proposed cost function that is a composite of performance measures such as Accuracy (Acc), False Negative Rate (FNR) and False Positive Rate (FPR); this cost function ensures that only quality models are selected as candidates for an ensemble. The proposed algorithm for optimising and combining the Acc, FNR and FPR of ensemble models, using the double-fault measure as the diversity measure, improves Acc by 0.01 to 0.30 on all toxicology data sets compared to other ensemble methods such as Bagging, Stacking, Bayes and Boosting. The largest Acc improvements were for the Bee (0.30), Oral Quail (0.13) and Daphnia (0.10) data sets; a small improvement (about 0.01) was achieved for Dietary Quail and Trout. Combining all three performance measures also reduced the distance between FNR and FPR by about 0.17 to 0.28 for the Bee, Daphnia, Oral Quail and Trout data sets. For the Dietary Quail data set the improvement was only about 0.01, but this data set is well known as a difficult learning exercise. On five UCI data sets, similar results were achieved, with Acc improvements between 0.10 and 0.11 and a further narrowing of the gap between FNR and FPR. In conclusion, the results show that by combining the performance measures (Acc, FNR and FPR) as proposed in this thesis, Acc increases and the distance between FNR and FPR decreases.
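The abstract names the ingredients of the selection step, a composite cost over Acc, FNR and FPR and the double-fault diversity measure, without giving formulas. Below is a minimal Python sketch of how such a selection could look; the equal weights and the greedy pairing rule are assumptions, not the thesis's actual optimisation algorithm.

```python
import numpy as np

def cost(acc, fnr, fpr, weights=(1.0, 1.0, 1.0)):
    # Composite cost of a candidate model; lower is better. The equal
    # weights are an assumption: the abstract does not specify them.
    w_acc, w_fnr, w_fpr = weights
    return w_acc * (1.0 - acc) + w_fnr * fnr + w_fpr * fpr

def double_fault(pred_a, pred_b, y):
    # Double-fault diversity: fraction of samples both models misclassify.
    # Lower values indicate a more diverse (complementary) pair.
    return np.mean((pred_a != y) & (pred_b != y))

def select_ensemble(models, preds, y, k=3):
    # Greedy candidate selection: start from the lowest-cost model, then
    # repeatedly add the model with the lowest mean double fault against
    # the current ensemble. A sketch, not the thesis's algorithm.
    scored = sorted(models, key=lambda m: cost(m["acc"], m["fnr"], m["fpr"]))
    chosen = [scored[0]]
    for _ in range(k - 1):
        rest = [m for m in scored if m not in chosen]
        best = min(rest, key=lambda m: np.mean(
            [double_fault(preds[m["name"]], preds[c["name"]], y) for c in chosen]))
        chosen.append(best)
    return [m["name"] for m in chosen]
```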
3

Surrogate model construction for aircraft performance computation

Bondouy, Manon 08 February 2016
The objective of this thesis is to provide a methodology and the associated tools to harmonize the process of building performance and handling-quality models. To this end, model reduction techniques were developed to satisfy the conflicting industrial objectives of memory size, accuracy and computation time. After establishing a methodology for constructing surrogate models and carrying out a critical state of the art, Neural Networks and the High Dimensional Model Representation were selected, then adapted and validated on low-dimensional functions. For problems of higher dimension, a reduction method based on the optimal selection of surrogate sub-models was developed, which satisfies the requirements on speed, accuracy and memory size. The efficiency of this method was finally demonstrated on an aircraft performance model intended to be embedded in avionics systems.
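The abstract describes the reduction method only as an optimal selection of surrogate sub-models under accuracy, computation-time and memory requirements. Below is a minimal sketch of that trade-off as a constrained choice among candidate surrogates; the attribute names, limits and the accuracy-first tie-break are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Surrogate:
    name: str
    error: float      # validation error against the reference model
    memory_kb: float  # storage footprint once embedded
    eval_ms: float    # mean evaluation time

def select_surrogate(candidates, max_error, max_memory_kb, max_eval_ms):
    # Discard candidates violating any requirement, then keep the most
    # accurate survivor. Returns None if nothing is feasible.
    feasible = [s for s in candidates
                if s.error <= max_error
                and s.memory_kb <= max_memory_kb
                and s.eval_ms <= max_eval_ms]
    return min(feasible, key=lambda s: s.error, default=None)

# Hypothetical candidates: a neural network and an HDMR expansion.
pool = [Surrogate("neural_net", error=0.8, memory_kb=120.0, eval_ms=0.05),
        Surrogate("hdmr", error=0.3, memory_kb=450.0, eval_ms=0.20)]
print(select_surrogate(pool, max_error=1.0, max_memory_kb=200.0, max_eval_ms=0.10))
```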
4

Efficient Computational Methods for Structural Reliability and Global Sensitivity Analyses

Zhang, Xufang 25 April 2013
Uncertainty analysis of a system response is an important part of engineering probabilistic analysis. It includes: (a) evaluating moments of the response; (b) evaluating the reliability of the system; (c) assessing the complete probability distribution of the response; and (d) conducting parametric sensitivity analysis of the output. The actual model of a system response is usually a high-dimensional function of the input variables. Although Monte Carlo simulation is a quite general approach for these tasks, it may require an inordinate amount of resources to achieve an acceptable level of accuracy, so the development of computationally efficient methods is of great importance. First, the study proposed a moment method for uncertainty quantification of structural systems. A key departure is the use of fractional moments of the response function, as opposed to the integer moments used so far in the literature. The advantage of fractional moments over integer moments was illustrated through the relation of one fractional moment to a couple of integer moments. With a small number of samples to compute the fractional moments, the output distribution was estimated with the principle of maximum entropy (MaxEnt) in conjunction with constraints specified in terms of fractional moments. Compared to classical MaxEnt, a novel feature of the proposed method is that the fractional exponents of the MaxEnt distribution are determined through the entropy maximization process, instead of being assigned by the analyst a priori. To further reduce the computational cost of the simulation-based entropy method, a multiplicative dimensional reduction method (M-DRM) was proposed to compute the fractional (and integer) moments of a generic function of multiple input variables. The M-DRM accurately approximates a high-dimensional function as the product of a series of low-dimensional functions. Together with the principle of maximum entropy, this yields a novel computational approach for assessing the complete probability distribution of a system output. The accuracy and efficiency of the proposed method for structural reliability analysis were verified against crude Monte Carlo simulation on several examples. The application of the M-DRM was further extended to variance-based global sensitivity analysis. Compared to local sensitivity analysis, the variance-based sensitivity index provides information about the significance of an input random variable. Since each component variance is defined as a conditional expectation with respect to the system model function, the separable nature of the M-DRM approximation simplifies the high-dimensional integrations arising in sensitivity analysis. Several examples illustrate the numerical accuracy and efficiency of the proposed method in comparison with Monte Carlo simulation. The last contribution of the study is a computationally efficient method for the polynomial chaos expansion (PCE) of a system's response; the PCE model can later be used for uncertainty analysis. Evaluating the coefficients of a PCE meta-model is a computationally demanding task because of the high-dimensional integrations involved. With the proposed M-DRM, this cost can be reduced remarkably compared to the classical methods in the literature (simulation or tensor Gauss quadrature). The accuracy and efficiency of the proposed method for polynomial chaos expansion were verified on several practical examples.
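The central computational idea, the M-DRM, approximates a high-dimensional function multiplicatively around a cut point c, g(x) ≈ g(c)^(1-n) · ∏_i g(c_1, ..., x_i, ..., c_n), so a fractional moment E[g(X)^α] factors into n one-dimensional expectations. A minimal sketch, assuming independent normal inputs and Gauss-Hermite quadrature (the thesis treats more general input distributions):

```python
import numpy as np

def mdrm_fractional_moment(g, means, sds, alpha, n_quad=7):
    # M-DRM: g(x) ~ g(c)^(1-n) * prod_i g(c_1,..,x_i,..,c_n) around the
    # cut point c (taken here as the vector of input means), so that
    # E[g(X)^alpha] factors into one-dimensional expectations.
    means = np.asarray(means, dtype=float)
    sds = np.asarray(sds, dtype=float)
    n = len(means)
    g_c = g(means)
    # Probabilists' Gauss-Hermite rule; normalized weights turn the sum
    # into an expectation under the standard normal distribution.
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)
    weights = weights / weights.sum()
    moment = g_c ** (alpha * (1 - n))
    for i in range(n):
        x = means.copy()
        vals = np.empty(n_quad)
        for j, z in enumerate(nodes):
            x[i] = means[i] + sds[i] * z  # map standard-normal node to X_i
            vals[j] = g(x) ** alpha
        moment *= weights @ vals
    return moment

# Toy check: for a multiplicative function the M-DRM factorization is exact.
g = lambda x: np.exp(0.1 * x.sum())
print(mdrm_fractional_moment(g, means=[1.0, 2.0, 3.0], sds=[0.5, 0.5, 0.5], alpha=0.5))
```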
5

Ontology based model framework for conceptual design of treatment flow sheets

Koegst, Thilo 09 April 2014
The primary objective of wastewater treatment is the removal of pollutants to meet given legal effluent standards. To further reduce operating costs, industrial and municipal wastewater treatment additionally aims to recover resources and energy. The objective in the early planning stage of treatment facilities therefore lies in identifying and evaluating promising configurations of treatment units, a stage best supported by software tools able to handle a variety of different treatment configurations. In chemical process engineering, various design tools are available that automatically identify feasible process configurations for obtaining desired products from given educts. In contrast, adapting these design tools to the automatic generation of treatment unit configurations (process chains) that achieve preset effluent standards is hampered for three reasons. First, pollutants in wastewater are usually not defined as chemical substances but by compound parameters grouping constituents with equal properties (e.g. all particulate matter). Consequently, varying a single compound parameter changes related parameters (e.g. the relation between Chemical Oxygen Demand and Total Suspended Solids). Furthermore, mathematical models of treatment processes are tailored to fractions of compound parameters, which hampers the generic representation of these process models, a representation that is essential for the automatic identification of treatment configurations. Second, wastewater treatment technologies rely on a variety of chemical, biological and physical phenomena. Approaches to describing these phenomena mathematically cover a wide range of modeling techniques, including stochastic, conceptual and deterministic approaches, and moreover differ in the temporal and spatial resolutions they consider. This again hampers a generic representation of process models. Third, the automatic identification of treatment configurations may be achieved either by design rules or by permutation of all possible combinations of units stored in a database of treatment units. The first approach depends on past experience translated into design rules, so no innovative new treatment configurations can be identified. The second approach collapses under the extremely high number of possible treatment configurations, the phenomenon of combinatorial explosion. It follows that an appropriate planning algorithm should function without additional design rules and should directly identify feasible configurations while discarding impractical ones. This work presents a planning tool for the identification and evaluation of treatment configurations that tackles these problems. The planning tool comprises two major parts: an external declarative knowledge base and the actual planning tool, which includes a goal-oriented planning algorithm. The knowledge base describes parameters for wastewater characterization (the material model) and a set of treatment units represented by process models (the process model); it is formalized in the Web Ontology Language (OWL). The developed data model, the organizational structure of the knowledge base, describes relations between wastewater parameters and process models to enable a generic representation of process models. In this way, parameters for wastewater characterization as well as treatment units can be altered or added to the knowledge base without having to synchronize already included parameter representations or process models. The knowledge base furthermore describes relations between parameters and properties of water constituents, which makes it possible to track the changes of all wastewater parameters that result from modeling the removal efficiency of the applied treatment units. So far, two generic treatment units have been represented within the knowledge base: separation and conversion units. These two basic types have been used to represent different kinds of clarifiers and biological treatment units. The developed planning algorithm is based on Means-Ends Analysis (MEA), a goal-oriented search algorithm that derives goals from the wastewater state and the limit value restrictions in order to select only those treatment units likely to solve the treatment problem. To this end, all treatment units are qualified by postconditions describing the effect of each unit, and are further characterized by preconditions stating their application range. The planning algorithm also allows the identification of simple cycles to account for moving bed reactor systems (e.g. the functional unit of an aeration tank and a clarifier). Identified treatment configurations are evaluated by their total estimated cost. The planning tool has been tested on five use cases, some of which contained multiple sources and sinks, demonstrating its ability to identify water reuse opportunities as well as solutions that go beyond end-of-pipe treatment. Beyond its original area of application, the planning tool may be used for more advanced questions: the knowledge base and planning algorithm may be further developed to identify configurations for any type of material and energy recovery.
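As a rough illustration of the planning idea, the sketch below encodes treatment units by a precondition (application range) and a postcondition (effect) and searches depth-first for a chain of units that meets the effluent limits. The parameter names, removal factors and the simple search strategy are illustrative assumptions; the thesis's MEA-based algorithm additionally handles cycles and evaluates configurations by total estimated cost.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Unit:
    name: str
    applicable: Callable[[Dict[str, float]], bool]           # precondition
    effect: Callable[[Dict[str, float]], Dict[str, float]]   # postcondition

def plan(state, limits, units, chain=(), depth=6):
    # Goal-directed depth-first search: succeed once every effluent limit
    # is met; otherwise try each applicable unit in turn. Returns the
    # first feasible chain found, not necessarily the cheapest one.
    if all(state[p] <= lim for p, lim in limits.items()):
        return chain
    if depth == 0:
        return None
    for u in units:
        if u.applicable(state):
            found = plan(u.effect(dict(state)), limits, units,
                         chain + (u.name,), depth - 1)
            if found is not None:
                return found
    return None

# Hypothetical units with made-up removal factors.
clarifier = Unit("clarifier",
                 applicable=lambda s: s["TSS"] > 35.0,
                 effect=lambda s: {**s, "TSS": s["TSS"] * 0.4, "COD": s["COD"] * 0.8})
activated_sludge = Unit("activated sludge",
                        applicable=lambda s: s["COD"] > 125.0,
                        effect=lambda s: {**s, "COD": s["COD"] * 0.2})
print(plan({"COD": 600.0, "TSS": 250.0}, {"COD": 125.0, "TSS": 35.0},
           [clarifier, activated_sludge]))
```

Note how the hypothetical clarifier changes COD and TSS jointly, mirroring the linkage between compound parameters that the abstract identifies as the first obstacle to generic process models.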
