21

Approximation Spaces in the Numerical Analysis of Cauchy Singular Integral Equations

Luther, Uwe 01 August 2005 (has links) (PDF)
The paper is devoted to the foundation of approximation methods for integral equations of the form (aI+SbI+K)f=g, where S is the Cauchy singular integral operator on (-1,1) and K is a weakly singular integral operator. Here a, b, g are given functions on (-1,1), and the unknown function f on (-1,1) is sought. It is assumed that a and b are real-valued Hölder continuous functions on [-1,1] without common zeros and that g belongs to some weighted space of Hölder continuous functions; in particular, g may have a finite number of singularities. Based on known spectral properties of Cauchy singular integral operators, approximation methods for the numerical solution of the above equation are constructed, taking into account both theoretical convergence and numerical practicability. The weighted uniform convergence of these methods is studied using a general approach based on the theory of approximation spaces. This approach makes it possible to prove simultaneously the stability, the convergence and the order of convergence of the approximation methods under consideration.
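The "known spectral properties" of Cauchy singular integral operators that such constructions rest on include, for example, the classical mapping behaviour on weighted Chebyshev polynomials; the identity below is a standard textbook fact recalled only as background, not a result specific to this paper.

```latex
% Mapping property of the Cauchy principal value integral on (-1,1),
% with the Chebyshev weight (1 - t^2)^{-1/2}:
\[
  \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-1}^{1}\frac{T_n(t)}{(t-x)\sqrt{1-t^2}}\,dt
  \;=\;
  \begin{cases}
    U_{n-1}(x), & n \ge 1,\\[2pt]
    0, & n = 0,
  \end{cases}
  \qquad x \in (-1,1).
\]
% Up to the chosen normalisation of S, the dominant singular integral thus maps
% weighted Chebyshev polynomials of the first kind to Chebyshev polynomials of
% the second kind, which is what makes weighted polynomial trial spaces natural.
```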
22

Approximation Spaces in the Numerical Analysis of Cauchy Singular Integral Equations

Luther, Uwe 16 June 2005 (has links)
The paper is devoted to the foundation of approximation methods for integral equations of the form (aI+SbI+K)f=g, where S is the Cauchy singular integral operator on (-1,1) and K is a weakly singular integral operator. Here a, b, g are given functions on (-1,1), and the unknown function f on (-1,1) is sought. It is assumed that a and b are real-valued Hölder continuous functions on [-1,1] without common zeros and that g belongs to some weighted space of Hölder continuous functions; in particular, g may have a finite number of singularities. Based on known spectral properties of Cauchy singular integral operators, approximation methods for the numerical solution of the above equation are constructed, taking into account both theoretical convergence and numerical practicability. The weighted uniform convergence of these methods is studied using a general approach based on the theory of approximation spaces. This approach makes it possible to prove simultaneously the stability, the convergence and the order of convergence of the approximation methods under consideration.
23

Approximation Methods for Two Classes of Singular Integral Equations

Rogozhin, Alexander 13 December 2002 (has links)
The dissertation consists of two parts. In the first part, approximate methods for multidimensional weakly singular integral operators with operator-valued kernels are investigated; convergence results and error estimates are given, an application of these methods to radiation transfer problems is considered, and numerical results are presented. In the second part we consider a polynomial collocation method for the numerical solution of a singular integral equation over the interval. More precisely, the operator of our integral equation is supposed to be of the form $aI + b\mu^{-1}S\mu I$ with $S$ the Cauchy singular integral operator, with piecewise continuous coefficients $a$ and $b$, and with a Jacobi weight $\mu$. To this equation we apply a collocation method whose collocation points are the Chebyshev nodes of the first kind and whose trial space is the space of polynomials multiplied by another Jacobi weight. For the stability and convergence of this collocation method in weighted $L^2$ spaces, we derive necessary and sufficient conditions. Moreover, the extension of these results to an algebra generated by the sequences of the collocation method applied to the mentioned singular integral operators is discussed, and the behaviour of the singular values of the discretized operators is investigated. / The dissertation as a whole is concerned with the numerical analysis of singular integral equations, but consists of two independent parts. The first part treats discretization methods for multidimensional weakly singular integral equations with operator-valued kernels. In addition, the application of these general results to a radiation transfer problem is discussed, and numerical results are presented. In the second part we consider a collocation method for the numerical solution of Cauchy singular integral equations on intervals. The operator of the integral equation has the form $aI + b\mu^{-1}S\mu I$ with the Cauchy singular integral operator $S$, piecewise continuous coefficients $a$ and $b$, and a classical Jacobi weight $\mu$. The collocation points are the zeros of the n-th Chebyshev polynomial of the first kind, and the trial functions form a system of weighted Chebyshev polynomials of the second kind that is orthonormal in a suitable Hilbert space. We obtain necessary and sufficient conditions for the stability and convergence of this collocation method. Moreover, the stability criterion is extended to all sequences in the algebra generated by the sequences of the collocation method. These results yield statements about the asymptotic behaviour of the singular values of the sequence of discretized operators.
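To make the setting concrete, the following is a minimal sketch of a polynomial collocation scheme of this kind, simplified to the dominant ("airfoil") case a = 0, where the Chebyshev weight (a particular Jacobi weight) is the natural one; the right-hand side g and the normalisation value are invented for illustration, and this is not the thesis's method for piecewise continuous coefficients and general Jacobi weights.

```python
import numpy as np

# Sketch: collocation for the dominant ("airfoil") equation
#   (1/pi) p.v. \int_{-1}^{1} u(t) / (t - x) dt = g(x),  x in (-1, 1),
# in the class of solutions unbounded at both endpoints,
#   u(t) = (1 - t^2)^(-1/2) * sum_{n=0}^{N} c_n T_n(t).
g = lambda x: 1.0 + x ** 2              # illustrative right-hand side
circulation = 0.0                       # prescribes int_{-1}^{1} u(t) dt = pi * c_0

N = 32
k = np.arange(1, N + 1)
theta = (2.0 * k - 1.0) * np.pi / (2.0 * N)
x = np.cos(theta)                       # Chebyshev nodes of the first kind (zeros of T_N)

# Identity: (1/pi) p.v. \int T_n(t)/((t - x) sqrt(1 - t^2)) dt = U_{n-1}(x) for n >= 1
# (and 0 for n = 0), so collocation reduces to sum_{n>=1} c_n U_{n-1}(x_k) = g(x_k).
M = np.zeros((N, N))
for n in range(1, N + 1):
    M[:, n - 1] = np.sin(n * theta) / np.sin(theta)   # U_{n-1} at the nodes

c = np.zeros(N + 1)
c[0] = circulation / np.pi              # c_0 is fixed by the extra normalisation condition
c[1:] = np.linalg.solve(M, g(x))        # remaining coefficients from the collocation system

u = lambda t: np.polynomial.chebyshev.chebval(t, c) / np.sqrt(1.0 - t ** 2)
```

Because the singular part is applied exactly through the Chebyshev identity, no quadrature of the principal value integral is needed; the stability and convergence questions the thesis addresses concern exactly such discretizations when a, b and the weights are general.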
24

Pseudospectral Collocation Method Based Energy Management Scheme for a Parallel P2 Hybrid Electric Vehicle

Multani, Sahib Singh 06 October 2020 (has links)
No description available.
25

Integrated Sinc Method for Composite and Hybrid Structures

Slemp, Wesley Campbell Hop 07 July 2010 (has links)
Composite and hybrid materials, such as fiber-metal laminates and functionally graded materials, are increasingly common in aerospace structures. However, adhesive bonding of dissimilar materials makes these materials susceptible to delamination. The use of integrated Sinc methods for predicting interlaminar failure in laminated composites and hybrid material systems was examined. Because the Sinc methods first approximate the highest-order derivative in the governing equation, the in-plane derivatives of in-plane strain needed to obtain interlaminar stresses by integration of the equilibrium equations of 3D elasticity are known without post-processing. Interlaminar stresses obtained with the Sinc method based on Interpolation of Highest Derivative were compared for the first-order and third-order shear deformable theories, the refined zigzag beam theory and the higher-order shear and normal deformable beam theory. The results indicate that the interlaminar stresses by the zigzag theory compare well with those obtained by a 3D finite element analysis, while the traditional equivalent single-layer theories perform well for some laminates. The philosophy of the Sinc method based on Interpolation of Highest Derivative was extended to create a novel weak-form-based approach called the Integrated Local Petrov-Galerkin Sinc Method. The Integrated Local Petrov-Galerkin Sinc Method is easily applied to boundary-value problems on non-rectangular domains, as demonstrated for the analysis of elastic and elastic-plastic plane-stress panels with elliptical notches. The numerical results showed excellent accuracy compared to similar results obtained with the finite element method. The Integrated Local Petrov-Galerkin Sinc Method was used to analyze interlaminar debonding of composite and fiber-metal laminated beams. A double-cantilever beam and a fixed-ratio mixed-mode beam were analyzed using the Integrated Local Petrov-Galerkin Sinc Method, and the results were shown to correlate well with those by the finite element method. An adaptive Sinc point distribution technique was implemented for the delamination analysis, which significantly improved the method's accuracy for the present problem. Delamination of a GLARE, plane-strain specimen was also analyzed using the Integrated Local Petrov-Galerkin Sinc Method. The results correlate well with a 2D, plane-strain analysis by the finite element method, including interlaminar stresses obtained by through-the-thickness integration of the equilibrium equations of 3D elasticity. / Ph. D.
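As background for the Sinc-based methods described above, the following is a minimal Whittaker cardinal (Sinc) interpolation sketch on a uniform grid; it only illustrates the basis the methods build on, with an invented step size and test function, and is not the Integrated Local Petrov-Galerkin formulation or the adaptive point distribution developed in the dissertation.

```python
import numpy as np

# Minimal Sinc (Whittaker cardinal) interpolation on a uniform grid.
def sinc_interpolate(samples, h, x):
    """Evaluate sum_k f(k h) * sinc((x - k h) / h) at the query points x."""
    n = len(samples)
    k = np.arange(-(n // 2), n - n // 2)       # symmetric integer indices
    nodes = k * h
    # np.sinc is the normalized sinc, sin(pi t) / (pi t)
    basis = np.sinc((x[:, None] - nodes[None, :]) / h)
    return basis @ samples

h = 0.25
k = np.arange(-20, 21)
samples = np.exp(-(k * h) ** 2)                # sample a rapidly decaying test function
x = np.linspace(-2.0, 2.0, 101)
approx = sinc_interpolate(samples, h, x)
max_error = np.max(np.abs(approx - np.exp(-x ** 2)))  # small for smooth, decaying f
```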
26

Commande prédictive non-linéaire. Application à la production d'énergie. / Nonlinear predictive control. Application to power generation

Fouquet, Manon 30 March 2016 (has links)
This thesis deals with hybrid optimal control and Model Predictive Control (MPC) of power plants by use of physical models. Models of the facilities are developed in Modelica, an equation-based language tailored for modelling multi-physics systems. Modelling of physical systems with Modelica is introduced in a first part, as well as some of the symbolic processing done by Modelica compilers that transforms the original model into a form suited for optimization. Then, a method for solving optimal control problems on hybrid systems (such as power plants) is presented. This method provides an optimal trajectory for the power plant over a long horizon; the computed trajectory includes the trajectories of the continuous inputs as well as switching decisions for components in the plant. The optimization algorithm combines the collocation method with a method named Sum Up Rounding (SUR) to deal with the switches. Finally, a Model Predictive Controller is developed in order to follow this optimal trajectory in real time and to cope with disturbances on the actual system and modelling errors. The proposed MPC uses tangent linear models of the plant that are derived automatically from the nonlinear model.
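As an illustration of the Sum Up Rounding step mentioned above, here is a minimal sketch for a single relaxed on/off decision on a time grid; the interval lengths and relaxed values are invented, and the coupling with collocation and the Modelica tool chain used in the thesis is not reproduced.

```python
import numpy as np

# Sum Up Rounding (SUR) for one relaxed on/off control: round alpha in [0,1]
# to {0,1} while keeping the accumulated (integrated) deviation below one step.
def sum_up_rounding(alpha, dt):
    b = np.zeros_like(alpha)
    accumulated = 0.0
    for i in range(len(alpha)):
        accumulated += alpha[i] * dt[i]
        # switch on if the rounded control is lagging behind the relaxed one
        b[i] = 1.0 if accumulated - np.sum(b[:i] * dt[:i]) >= 0.5 * dt[i] else 0.0
    return b

alpha = np.array([0.2, 0.6, 0.9, 0.4, 0.1])   # relaxed trajectory from the optimizer
dt = np.full(5, 60.0)                          # hypothetical 60 s intervals
switches = sum_up_rounding(alpha, dt)          # gives array([0., 1., 1., 0., 0.])
```

The rounding keeps the time integral of the binary control close to that of the relaxed control, which is why the relaxed optimum remains a good approximation once the switching decisions are imposed.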
27

Stochastic analysis of flow and transport in porous media

Vasylkivska, Veronika S. 06 September 2012 (has links)
Random fields are frequently used in computational simulations of real-life processes. In particular, in this work they are used in modeling of flow and transport in porous media. Porous media as they arise in geological formations are intrinsically deterministic, but there is significant uncertainty involved in the determination of their properties such as permeability, porosity and diffusivity. In many situations, the description of the properties of the porous media is aided by a limited number of observations at fixed points. These observations constrain the randomness of the field and lead to conditional simulations. In this work we propose a method of simulating random fields that respect the observed data. An advantage of our method is that, if additional data become available, they can easily be incorporated into subsequent representations. The proposed method is based on infinite series representations of random fields. We provide truncation error estimates that bound the discrepancy between the truncated series and the random field. We additionally provide expansions for some processes that have not yet appeared in the literature. There are several approaches to efficient numerical computations for partial differential equations with random parameters. In this work we compare the solutions of flow and transport equations obtained by conditional simulations with Monte Carlo (MC) and stochastic collocation (SC) methods. Due to its simplicity, the MC method is one of the most popular methods used for the solution of stochastic equations. However, it is computationally expensive. The SC method is functionally similar to the MC method, but it provides faster convergence of the statistical moments of the solutions through the use of carefully chosen collocation points at which the flow and transport equations are solved. We show that for both methods conditioning on measurements helps to reduce the uncertainty of the solutions of the flow and transport equations, especially in the neighborhood of the conditioning points. Conditioning reduces the variances of the solutions, helping to quantify the uncertainty in the output of the flow and transport equations. / Graduation date: 2013
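For orientation, the sketch below shows one common way to condition a Gaussian random field on point observations (conditioning the mean and covariance, then sampling); the exponential covariance, grid and measurement values are invented, and the thesis itself works with truncated series expansions rather than dense covariance factorizations.

```python
import numpy as np

# Condition a 1D Gaussian field on point observations and draw samples.
def exp_cov(x, y, sigma2=1.0, ell=0.3):
    """Exponential covariance kernel on pairs of points (illustrative model)."""
    return sigma2 * np.exp(-np.abs(x[:, None] - y[None, :]) / ell)

grid = np.linspace(0.0, 1.0, 200)          # simulation grid
obs_x = np.array([0.2, 0.5, 0.8])          # hypothetical measurement locations
obs_v = np.array([0.3, -0.1, 0.7])         # hypothetical measured values (e.g. log-permeability)

C_gg = exp_cov(grid, grid)
C_go = exp_cov(grid, obs_x)
C_oo = exp_cov(obs_x, obs_x) + 1e-8 * np.eye(len(obs_x))

mean_c = C_go @ np.linalg.solve(C_oo, obs_v)             # conditional mean
cov_c = C_gg - C_go @ np.linalg.solve(C_oo, C_go.T)      # conditional covariance

rng = np.random.default_rng(0)
L = np.linalg.cholesky(cov_c + 1e-8 * np.eye(len(grid)))
samples = mean_c[:, None] + L @ rng.standard_normal((len(grid), 100))
# every column passes (almost) exactly through the observed values near obs_x,
# so the sample variance collapses in the neighborhood of the conditioning points
```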
28

Information Theoretic Approach To Extractive Text Summarization

Ravindra, G 08 1900 (has links)
Automatic text summarization techniques, which can reduce a source text to a summary text by content generalization or selection, have assumed significance in recent times due to the ever expanding information explosion created by the World Wide Web. Summaries generated by generalization of information are called abstracts, and those generated by selection of portions of text (sentences, phrases etc.) are called extracts. Further, summaries could be generated for each document separately, or multiple documents could be summarized together to produce a single summary. The challenges in making machines generate extracts or abstracts are primarily due to the lack of understanding of human cognitive processes. Summaries generated by humans seem to be influenced by their moral, emotional and ethical stance on the subject and their background knowledge of the content being summarized. These characteristics are hardly understood and difficult to model mathematically. Further, automatic summarization is very much handicapped by limitations of existing computing resources and the lack of good mathematical models of cognition. In view of these, the role of rigorous mathematical theory in summarization has been limited hitherto. The research reported in this thesis is a contribution towards bringing the power of well-established concepts of information theory to the field of summarization.

Contributions of the Thesis: The specific focus of this thesis is on extractive summarization. Its domain spans multi-document summarization as well as single document summarization. Throughout the thesis the words "summarization" and "summary" imply extract generation and sentence extracts, respectively. In this thesis, two new and novel summarizers referred to as ESCI (Extractive Summarization using Collocation Information) and De-ESCI (Dictionary enhanced ESCI) have been proposed. In addition, an automatic summary evaluation technique called DeFuSE (Dictionary enhanced Fuzzy Summary Evaluator) has also been introduced. The mathematical basis for the evolution of the scoring scheme proposed in this thesis and its relationship with other well-known summarization algorithms such as Latent Semantic Indexing (LSI) is also derived. The work detailed in this thesis is specific to the domain of extractive summarization of unstructured text, without taking into account data set characteristics such as the positional importance of sentences. This is to ensure that the summarizer works well for a broad class of documents and to keep the proposed models as generic as possible. Central to the proposed work is the concept of the "Collocation Information of a word", its quantification and its application to summarization. "Collocation Information" (CI) is the amount of information (Shannon's measure) that a word and its collocations together contribute to the total information in the document(s) being summarized. The CI of a word has been computed using Shannon's measure for information on a joint probability distribution. Further, a base value of CI called the "Discrimination Threshold" (DT) has also been derived. To determine DT, sentences from a large collection of documents covering various topics, including the topic covered by the document(s) being summarized, were broken down into sequences of word collocations. The number of possible neighbors for a word within a specified collocation window was determined. This number has been called the "cardinality of the collocating set" and is represented as |ℵ(w)|. It is proved that if |ℵ(w)| determined from this large document collection for any word w is fixed, then the maximum value of the CI for a word w is proportional to |ℵ(w)|. This constrained maximum is the "Discrimination Threshold" and is used as the base value of CI. Experimental evidence detailed in this thesis shows that sentences containing words with CI greater than DT are most likely to be useful in an extract. Words in every sentence of the document(s) being summarized have been assigned scores based on the difference between their current value of CI and their respective DT. Individual word scores have been summed to derive a score for every sentence. Sentences are ranked according to their scores, and the first few sentences in the rank order have been selected as the extract summary. Redundant and semantically similar sentences have been excluded from the selection process using a simple similarity detection algorithm. This novel method for extraction has been called ESCI in this thesis.

In the second part of the thesis, the advantages of tagging words as nouns, verbs, adjectives and adverbs without the use of sense disambiguation have been explored. A hierarchical model for abstraction of knowledge has been proposed, and those cases where such a model can improve summarization accuracy have been explained. Knowledge abstraction has been achieved by converting collocations into their hypernymous versions. The number of levels of abstraction varies based on the sense tag given to each word in the collocation being abstracted. Once abstractions have been determined, the Expectation-Maximization algorithm is used to determine the probability value of each collocation at every level of abstraction. A combination of abstracted collocations from various levels is then chosen, and sentences are assigned scores based on the collocation information of these abstractions. This summarization scheme has been referred to as De-ESCI (Dictionary enhanced ESCI). It has been observed in many human summary data sets that the factual attribute of the human determines the choice of noun and verb pairs; similarly, the emotional attribute of the human determines the choice of the number of noun and adjective pairs. In order to bring these attributes into the machine-generated summaries, two variants of De-ESCI have been proposed. The summarizer with the factual attribute has been called De-ESCI-F, while the summarizer with the emotional attribute has been called De-ESCI-E in this thesis. Both create summaries having two parts. The first part of the summary created by De-ESCI-F is obtained by scoring and selecting only those sentences in which a fixed number of nouns and verbs occur. The second part of De-ESCI-F is obtained by ranking and selecting those sentences which do not qualify for the selection process in the first part. Assigning sentence scores and selecting sentences for the second part of the summary is exactly as in ESCI. Similarly, the first part of De-ESCI-E is generated by scoring and selecting only those sentences in which a fixed number of nouns and adjectives occur. The second part of the summary produced by De-ESCI-E is exactly like the second part in De-ESCI-F. As the model summary generated by human summarizers may or may not contain sentences with preference given to qualifiers (adjectives), the automatic summarizer does not know a priori whether to choose sentences with qualifiers over those without qualifiers. As there are two versions of the summary produced by De-ESCI-F and De-ESCI-E, one of them should be closer to the human summarizer's point of view (in terms of the importance given to qualifiers). This technique of choosing the best candidate summary has been referred to as De-ESCI-F/E.

Performance Metrics: The focus of this thesis is to propose new models and sentence ranking techniques aimed at improving the accuracy of the extract in terms of the sentences selected, rather than the readability of the summary. As a result, the order of sentences in the summary is not given importance during evaluation. Automatic evaluation metrics have been used, and the performance of the automatic summarizer has been evaluated in terms of the precision, recall and f-scores obtained by comparing its output with model human-generated extract summaries. A novel summary evaluator called DeFuSE has been proposed in this thesis, and its scores are used along with the scores given by a standard evaluator called ROUGE. DeFuSE evaluates an extract in terms of precision, recall and f-score, relying on the WordNet hypernymy structure to identify semantically similar sentences in a document. It also uses fuzzy set theory to compute the extent to which a sentence from the machine-generated extract belongs to the model summary. Performance of candidate summarizers has been discussed in terms of percentage improvement in f-score relative to the baselines. The average of the ROUGE and DeFuSE f-scores for every summary is computed, and the mean value of these scores is used to compare performance improvement.

Performance: For illustrative purposes, the DUC 2002 and DUC 2003 multi-document data sets have been used. From these data sets, only the 400-word summaries of DUC 2002 and the track-4 (novelty track) summaries of DUC 2003 are useful for the evaluation of sentence extracts, and hence only these have been used. The f-score has been chosen as the measure of performance. Standard baselines such as coverage, size and lead, as well as probabilistic baselines, have been used to measure the percentage improvement in f-score of the candidate summarizers relative to these baselines. Further, summaries generated by MEAD using centroid and length as features for ranking (MEAD-CL), MEAD using positional, centroid and length features for ranking (MEAD-CLP), the Microsoft Word automatic summarizer (MS-Word) and a Latent Semantic Indexing (LSI) based summarizer were used to compare the performance of the proposed summarization schemes.
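The scoring pipeline described above (collocation-window counts, an information score per word, a threshold, and sentence ranking) can be sketched as follows; the formulas here are simplified stand-ins for the thesis's CI and DT definitions, and the window size, threshold and tokenization are assumptions made only for illustration.

```python
import numpy as np
from collections import Counter

# Simplified, ESCI-spirit extractive summarizer: score words by the Shannon
# information their within-window collocations carry, subtract a threshold,
# sum per sentence, and keep the top-ranked sentences.
def extract_summary(sentences, window=4, top_k=2):
    tokenized = [s.lower().split() for s in sentences]
    pair_counts = Counter()
    for words in tokenized:
        for i, w in enumerate(words):
            for v in words[i + 1:i + 1 + window]:      # collocations within the window
                pair_counts[(w, v)] += 1
                pair_counts[(v, w)] += 1
    total = sum(pair_counts.values())
    word_score = Counter()
    for (w, v), count in pair_counts.items():
        p = count / total
        word_score[w] += -p * np.log2(p)               # information of this collocation
    threshold = np.mean(list(word_score.values()))     # crude stand-in for the DT
    scores = [sum(word_score[w] - threshold for w in words) for words in tokenized]
    ranked = np.argsort(scores)[::-1][:top_k]
    return [sentences[i] for i in sorted(ranked)]      # keep original sentence order

docs = [
    "collocation information scores words by their neighbours",
    "sentences are ranked by summed word scores",
    "the weather was pleasant on the day of the talk",
]
print(extract_summary(docs, top_k=2))
```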
29

Quantendynamik von SN2-Reaktionen / Quantum Dynamics of SN2 Reactions

Hennig, Carsten 01 November 2006 (has links)
No description available.
