61

Estimation à erreurs bornées et guidage pilotage des aéronefs autonomes en milieu perturbé. / Bounded-error estimation and design of guidance and control laws for small UAVs in the presence of atmospheric perturbations

Achour, Walid 20 June 2011
The principal objective of this thesis is to enhance the flight safety of small UAVs in the presence of atmospheric perturbations. The approach suggested here consists in coupling a bounded-error estimation method with a new guidance strategy. Bounded-error estimation is used to estimate the states of the dynamical system corrupted by perturbations and measurement noises, both assumed to remain bounded. The method is first used to detect the occurrence of a wind gust, and afterwards to characterize the amplitude and direction of the wind acting on the vehicle. Experiments in the B20 gust generator in Lille are also presented to validate these approaches and evaluate their performance. The developed guidance strategy provides the vehicle with a direction that takes into account the atmospheric perturbation and the next waypoint position. The guidance law is based on proportional navigation guidance, adapted to take the perturbations into account. The results presented in this thesis show that it is possible to improve flight safety in a perturbed environment by combining the two methods and modifying the trajectory online.
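For readers unfamiliar with proportional navigation, the sketch below illustrates the baseline idea the thesis adapts: the commanded lateral acceleration is proportional to the line-of-sight rate toward the next waypoint, here computed from the ground velocity (air velocity plus an estimated wind). The gain N, the point-mass dynamics and the way the wind estimate enters are illustrative assumptions, not the thesis's actual law.

```python
import numpy as np

def pn_step(pos, v_air, waypoint, wind_est, N=3.0, dt=0.05):
    """One 2D point-mass guidance step; returns updated (pos, v_air)."""
    v_gnd = v_air + wind_est                  # ground velocity
    R = waypoint - pos                        # line-of-sight vector
    # LOS rate toward a fixed waypoint: d/dt of atan2(R[1], R[0])
    lam_dot = (R[1] * v_gnd[0] - R[0] * v_gnd[1]) / (R @ R)
    a_lat = N * np.linalg.norm(v_gnd) * lam_dot   # PN command
    # Apply the command perpendicular to the current air velocity.
    n_hat = np.array([-v_air[1], v_air[0]]) / (np.linalg.norm(v_air) + 1e-12)
    v_air = v_air + a_lat * n_hat * dt
    pos = pos + v_gnd * dt
    return pos, v_air

# A crosswind pushes the vehicle sideways; computing the LOS rate from
# the *ground* velocity makes PN steer the track back to the waypoint.
pos, v_air = np.array([0.0, 0.0]), np.array([15.0, 0.0])
wind = np.array([0.0, 3.0])                   # estimated 3 m/s crosswind
for _ in range(400):
    pos, v_air = pn_step(pos, v_air, np.array([500.0, 0.0]), wind)
print(pos)                                    # small residual cross-track
```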
62

Formulação h-adaptativa do método dos elementos de contorno para elasticidade bidimensional com ênfase na propagação da fratura / H-adaptive formulation of the boundary element method for two-dimensional elasticity with emphasis on fracture propagation

Oscar Bayardo Ramos Lovón 09 June 2006
In this work, an adaptive formulation of the boundary element method (BEM) is developed to analyze linear elastic fracture problems. The collocation point method is used to formulate the integral equations for displacements and tractions. To discretize the integral equations, linear elements are used, which allow exact (analytically integrated) expressions of the integrals over boundary and fracture elements. The linear system of equations is assembled using displacement equations only, traction equations only, or both written for opposite nodes of the fracture, the last option leading to the dual boundary element method usually employed in fracture analysis. A special procedure is developed for the crack-growth process, aiming at the correct determination of the growth direction. The stress intensity factors, needed to compute the crack-growth angle, are calculated through the well-known displacement correlation technique, which relates the displacements on the crack faces. Once the stress intensity factors are determined, the maximum circumferential stress theory is used to obtain the propagation angle. The adaptive model employed is of the h-type, in which only the subdivision of elements is performed, based on error estimates. The error estimates considered in this work are based on the following norms: the approximate variation of the displacements, the variation of the tractions, and the variation of the strain energy of the system, the last computed by integration over the boundary. Numerical examples are presented to demonstrate the efficiency of the proposed procedures.
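As a concrete illustration of the propagation-angle step, the sketch below evaluates the maximum circumferential (hoop) stress criterion: given the stress intensity factors K_I and K_II (obtained, in the thesis, by displacement correlation), the crack kinks at the angle where the hoop stress is maximal, i.e. the root of K_I·sin θ + K_II·(3 cos θ − 1) = 0. The formula is the standard closed-form solution; the numerical values are examples only.

```python
import math

def crack_growth_angle(KI, KII):
    """Propagation angle (radians) from the maximum circumferential
    stress criterion: the hoop stress is extremal where
    KI*sin(t) + KII*(3*cos(t) - 1) = 0, whose closed-form root is
    tan(t/2) = (KI - sqrt(KI**2 + 8*KII**2)) / (4*KII)."""
    if abs(KII) < 1e-12 * max(abs(KI), 1.0):
        return 0.0                       # pure mode I: straight growth
    t_half = (KI - math.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII)
    return 2.0 * math.atan(t_half)

print(math.degrees(crack_growth_angle(1.0, 0.0)))   #   0.0
print(math.degrees(crack_growth_angle(1.0, 0.5)))   # ~ -40.2
print(math.degrees(crack_growth_angle(0.0, 1.0)))   # ~ -70.5 (pure mode II)
```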
63

Residual Error Estimation And Adaptive Algorithms For Fluid Flows

Ganesh, N 05 1900
The thesis deals with the development of a new residual error estimator, and of adaptive algorithms based on it, for steady and unsteady fluid flows in a finite volume framework. The a posteriori residual error estimator, referred to as the R-parameter, is a measure of the local truncation error and is derived from the imbalance arising from the use of an exact operator on the numerical solution of conservation laws. A detailed and systematic study of the R-parameter on linear and non-linear hyperbolic problems, involving continuous flows and discontinuities, is performed. Simple theoretical analysis and extensive numerical experiments establish that the R-parameter is a valid estimator in limiter-free continuous flow regions, but is rendered inconsistent at discontinuities and with limiting. The R-parameter is demonstrated to work equally well on different mesh topologies and detects the sources of error, making it an ideal choice to drive adaptive strategies. The theory of the error estimation is also extended to unsteady flows, both on static and moving meshes. The R-parameter can be computed with low computational overhead and is easily incorporated into existing finite volume codes with minimal effort. Adaptive refinement algorithms for steady flows are devised employing the residual error estimator. For continuous flows devoid of limiters, a purely R-parameter based adaptive algorithm is designed: a threshold length scale derived from the estimator determines the refinement/derefinement criterion, leading to a self-evolving adaptive algorithm devoid of heuristic parameters. On the other hand, for compressible flows involving discontinuities and limiting, a hybrid adaptive algorithm is proposed, in which error indicators flag regions for refinement while regions for derefinement are detected using the R-parameter. Two variants of these algorithms, which differ in the computation of the threshold length scale, are proposed. The disparate behaviour of the R-parameter for continuous and discontinuous flows is exploited to design a simple and effective discontinuity detector for compressible flows. For time-dependent flow problems, a two-step methodology is proposed for adaptive grid refinement. The first step determines the "best" mesh at any given time instant. The second step predicts the evolution of flow phenomena over a period of time and refines the regions into which the flow features will progress. The latter step is implemented using a geometry-based "Refinement Level Projection" strategy, which guarantees that the flow features remain in adapted zones between successive adaptive cycles and hence uniform solution accuracy. Several numerical experiments involving inviscid and viscous flows on different grid topologies illustrate the success of the proposed adaptive algorithms.

Appendix 1: Candidate's response to the comments/queries of the examiners. The author would like to thank the reviewers for their appreciation of the work embodied in the thesis and for their comments. The clarifications to the comments and queries posed in the reviews are summarized below.

Referee 1. Q: The example of mesh refinement for a RANS solution with a shock was performed with an isotropic mesh, while the author claims that anisotropic meshing is appropriate. If this is the case, why did he not demonstrate that?
As the author knows well, in the case of a full 3-D configuration, isotropic adaptation will lead to a substantial number of grid points. The large mesh will hamper timely turnaround of the simulation. Therefore it would be a significant contribution to the aero community if this point were investigated at a later date. Response: The author is of the view that for most practical situations, a pragmatic approach to mesh adaptation for RANS computations would merely involve generating a viscous padding of adequate fineness around the body and allowing for grid adaptation only in the outer potential region. Of course, this method would allow for grid adaptation in the outer layers of the viscous padding only to the extent that the smoothness criterion is satisfied while adapting the grids in the potential region. This completely obviates point addition on the wall (CAD surface) and thereby avoids all complexities (such as loss of automation) resulting from interaction with the surface modeler while adding points on the wall. This method is expected to do well for attached and mildly separated flows. It is expected to do well even for problems involving shock/boundary-layer interaction, owing to the fact that the shock is normal to the wall surface (recall that a flow-aligned grid is ideal for capturing such shocks), as long as the interaction does not result in massive separation. This approach has already been demonstrated in Section 4.5.3, wherein adaptive high-lift computations have been performed. Isotropic adaptation retains the goodness of the zero-level grid, and therefore the robustness of the solver does not suffer through successive levels of grid adaptation. This procedure may result in a large number of volumes. On the other hand, anisotropic refinement may result in significantly fewer volumes, but the mesh quality may badly degenerate during successive levels of adaptation, leading to difficulties in convergence. Therefore, the choice of either strategy is effectively dictated by requirements on grid quality and grid size. Also, it is generally understood that building tools for anisotropic adaptation is more complicated than for isotropic adaptation, although anisotropic refinement may not require point addition on the wall. Considering these facts, in the view of the author this issue remains open, and his personal preference would be to use isotropic refinement or a hybrid strategy employing a combination of these methodologies, particularly considering aspects of solution quality. Finally, in both the examples cited by the reviewer (Sections 6.4.5 and 6.4.6), the objective was to demonstrate the efficacy of the new adaptive algorithm (using error indicators and the residual estimator), rather than to evaluate the pros and cons of isotropic and anisotropic refinement strategies. In the sections cited above, the author has merely highlighted the advantages of the refinement strategies in the specific context of the problem considered, and these statements need not be considered general. Referee 2. Q: For convection problems, a good error estimator must be able to distinguish between locally generated error and convected error. The thesis says the residual error estimator is able to do this and some numerical evidence is presented, but can the candidate comment on how the estimator achieves this? Response: The ultimate aim of any AMR strategy is to reduce the global error.
The residual error estimator proposed in this work measures the local truncation error. It has been shown, in the context of a linear convection equation, that the global error in a cell consists of two parts: the locally generated error in the cell (which is the R-parameter) and the error transported from other cells in the domain. Both parts depend on the local error itself, and any algorithm that reduces the local truncation error (the source of error) will reduce the global error in the domain. This conclusion is supported by the test case of isentropic flow past an airfoil (Chapter 3, C, p. 79), where refinement based on the R-parameter leads to lower global error levels than refinement based on the global error itself. Q: While analysing the R-parameter in Section 3.3, the operator δ2 is missing. Response: The analysis in Section 3.3 is based on Eq. (3.3) (p. 58), which provides the local truncation error. As can be seen from Eq. (3.14), the LHS represents the discrete operator acting on the numerical solution (which is zero), and the first term on the RHS is the exact operator acting on the numerical solution (which is I[u]). Consequently, the truncation terms T1 and T2 contribute to the truncation error R1. However, from the viewpoint of computing the error estimate on a discretised domain, we need to replace the exact operator I by a higher-order discrete operator δ2. This gives the R-parameter, which has contributions from R1 as well as from the discretisation errors of the higher-order operator, R2. When the latter is negligible compared to the former, the R-parameter is an estimate of the local truncation error. The truncation error depends on the accuracy of the reconstruction procedure used in obtaining the numerical solution and hence on the discrete operator δ1. Along very similar lines, it can be shown that the operator δ2 leads to formal second-order accuracy; this operator is required only in computing the residual error estimate. Q: What does the phrase "exact derivatives of the numerical solution" mean? Response: This statement reflects the fact that the numerical solution is the exact solution of a modified partial differential equation, and that the truncation terms T1 and T2 that constitute the R-parameter are functions of the derivatives of this numerical solution. Q: For the operator δ2, quadratic reconstruction is employed. Is the exact or the numerical flux function used? Response: The operator δ2 is a higher-order discrete approximation to the exact operator I. Therefore, a quadratic polynomial with a three-point Gauss quadrature has been used in the error estimation procedure. Error estimation does not involve the convergence issues associated with the flow solver, and therefore an exact flux function has been employed with the δ2 operator. Nevertheless, it is also possible to use the same numerical flux function as employed in the flow solver. Q: The same stencil of grid points is used for the solution update and the error estimation. Does this not lead to an increased stencil size? Response: In comparison to reconstruction using higher-degree polynomials such as cubic and quartic reconstruction, quadratic reconstruction involves only a small stencil of points consisting of the node-sharing neighbours of a cell.
The use of such a support stencil is sufficient for linear reconstruction as well and adds to the robustness of the flow solver, although a linear reconstruction can, in principle, work with a smaller support stencil. A possible alternative to quadratic reconstruction (and hence a slightly larger stencil) is to adopt a defect correction strategy to obtain derivatives to higher-order accuracy; this needs to be explored in detail. Q: How is the R-parameter computed for viscous flows? Response: The computation of the R-parameter for viscous flows proceeds along the same lines as for inviscid flows. The gradients needed for the viscous flux computation at the face centers are obtained using quadratic reconstruction. The procedure for calculating the R-parameter for steady flows (both inviscid and viscous) is the step-by-step algorithm in Section 3.5. Q: In some cases, regions ahead of the shock show no coarsening. Response: The adaptive algorithm proposed in this work does not allow for coarsening of the initial mesh, and regions ahead of the shock remain unaffected (because of uniform flow) at all levels of refinement. Q: Do the adaptation strategies terminate automatically, at least for steady flows? Response: The adaptation strategies (RAS and HAS) must, in principle, by virtue of the construction of the algorithm, terminate automatically for steady flows. In the HAS algorithms, though, there are certain heuristic criteria for termination of refinement, especially at shocks and turbulent boundary layers. In this work, a maximum of four cycles of refinement/derefinement were carried out, and therefore automatic termination of the adaptive strategies was not studied. Q: How do residual-based adaptive strategies compare and contrast with adjoint-based approaches, which are now becoming popular for goal-oriented adaptation? Response: Adjoint-based methods involve solving the adjoint problem in addition to the primal problem, which represents a substantial computational cost. A timing study for a typical 3D problem [2] indicates that the solution of the adjoint problem (which needs the computation of the Jacobian and of the sensitivities of the functional) can require as much as one half of the total time needed to compute the flow solution. By contrast, R-parameter based refinement involves no information beyond that required by the flow solver and costs roughly the equivalent of one explicit iteration of the flow solver (Section 3.5.1). For practical 3-D applications, adjoint-based approaches will lead to a prohibitively high cost, and more so for dynamic adaptation. This is also reflected in the fact that there have been only a few recent works on 3D adaptive computations based on adjoint error estimation (which consider only inviscid flows) [1,2]. Goal-oriented adaptation involves reducing the error in some functional of interest. This can be achieved within the framework of R-parameter based adaptation by introducing additional termination criteria based on integrated quantities. Within an automated adaptation loop, such an algorithm would terminate when the integrated quantities no longer change appreciably with refinement level. This is in contrast to the adjoint-based approach, which strives to reduce the error in the functional below a certain threshold.
Considering that reducing the residual reduces the global error itself, the R-parameter based adaptive algorithm would also lead to accurate estimates of the integrated quantities (which depend on the numerical solution). This is also reflected in the fact that R-parameter based adaptation for the three-element NHLP configuration predicts the lift and drag coefficients to reasonable accuracy, as shown in Section 4.5.3. The author believes that the R-parameter based adaptive algorithm holds great promise for adaptive simulations of flow past complex geometries, both in terms of computational cost and solution accuracy. This is exemplified by successful adaptive simulations of inviscid flow past the ONERA M6 wing as well as a conventional missile configuration [3]. A more concrete comparison of the R-parameter based and adjoint-based approaches would involve systematically solving a set of problems by both approaches and has not been considered in this thesis.

[1] Nemec and Aftosmis, "Adjoint error estimation and adaptive refinement for embedded-boundary Cartesian meshes", AIAA Paper 2007-4187, 2007.
[2] Wintzer, Nemec and Aftosmis, "Adjoint-based adaptive mesh refinement for sonic boom prediction", AIAA Paper 2008-6593, 2008.
[3] Nikhil Shende, "A general purpose flow solver for Euler equations", Ph.D. Thesis, Dept. of Aerospace Engg., Indian Institute of Science, 2005.
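To make the R-parameter discussed above concrete, the toy sketch below reproduces its defining recipe on a 1D model problem: converge a low-order discretization, then apply a higher-order discrete approximation of the exact operator (the role played by δ2) to that numerical solution; the resulting cell imbalance tracks the local truncation error. The model equation and the operators are illustrative stand-ins for the finite volume setting of the thesis.

```python
import numpy as np

# Toy setting: du/dx = cos(x) on (0, 2*pi], u(0) = 0, discretized with
# first-order upwind (the role of delta_1). The estimator then applies
# a second-order central operator (the role of delta_2) to the
# converged numerical solution; the cell imbalance R_i tracks the
# local truncation error, which for this scheme scales like (h/2)|u''|.
n = 64
h = 2.0 * np.pi / n
x = h * np.arange(1, n + 1)

u = np.zeros(n)
u_prev = 0.0
for i in range(n):                     # (u_i - u_{i-1})/h = cos(x_i)
    u[i] = u_prev + h * np.cos(x[i])
    u_prev = u[i]

R = np.empty(n)                        # R-parameter analogue
R[1:-1] = (u[2:] - u[:-2]) / (2.0 * h) - np.cos(x[1:-1])
R[0], R[-1] = R[1], R[-2]              # crude copies at the boundary

true_err = np.abs(u - np.sin(x))       # exact solution, comparison only
print("max |R|:", np.abs(R).max(), "  max true error:", true_err.max())
# Cells with large |R| would be flagged for refinement; a threshold
# length scale can be derived from an assumed decay |R| ~ C * h**p.
```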
64

Structured Neural Networks For Modeling And Identification Of Nonlinear Mechanical Systems

Kilic, Ergin 01 September 2012
Most engineering systems are highly nonlinear in nature, which makes it difficult to develop efficient mathematical models for them. Artificial neural networks, used for estimation, filtering, identification and control in the technical literature, are regarded as universal modeling and function approximation tools. Unfortunately, developing a well-trained monolithic neural network (with many free parameters/weights) is known to be a daunting task, since loading a specific pattern (functional relationship) onto a generic neural network is a proven NP-complete problem. This implies that if training is conducted on a deterministic computer, the time required for the training process grows exponentially with the size of the free parameter space (and, correspondingly, of the training data). As an alternative modeling technique for nonlinear dynamic systems, this thesis proposes a general methodology for structured neural network topologies, and corresponding applications are realized. The main idea behind this (rather classic) divide-and-conquer approach is to employ a priori information on the process to divide the problem into its fundamental components. Hence, a number of smaller neural networks can be designed to tackle these elementary mapping problems. All these networks are then combined to yield a tailored structured neural network that accurately models the dynamic system under study. Finally, implementations of the devised networks are considered, and the efficiency of the proposed methodology is tested on four different types of mechanical systems.
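A minimal sketch of the divide-and-conquer topology described above: several small networks, each fed only the physically relevant inputs suggested by a priori knowledge, are summed into one structured model. The decomposition into inertia/friction/elasticity terms, the network sizes, and the names are hypothetical examples, not the thesis's specific systems.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMLP:
    """Small one-hidden-layer network (weights untrained here)."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)
    def __call__(self, x):
        return self.W2 @ np.tanh(self.W1 @ x + self.b1) + self.b2

# A priori knowledge of a 1-DOF mechanism suggests the actuator force
# splits into an inertial part f(q_ddot), a friction part f(q_dot) and
# an elastic part f(q); each elementary mapping gets its own small net,
# which can be trained separately on its sub-problem.
inertia_net, friction_net, elastic_net = (TinyMLP(1, 8, 1) for _ in range(3))

def structured_model(q, q_dot, q_ddot):
    """Structured prediction: the sum of the specialised subnetworks."""
    return (inertia_net(np.array([q_ddot]))
            + friction_net(np.array([q_dot]))
            + elastic_net(np.array([q])))[0]

print(structured_model(0.1, -0.4, 2.0))
```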
65

Interval Based Parameter Identification for System Biology / Intervallbaserad parameteridentifiering för systembiologi

Alami, Mohsen January 2012
This master's thesis studies the problem of parameter identification for systems biology. Two methods have been studied. The method of interval analysis uses subpavings as the class of objects to manipulate and to store inner and outer approximations of compact sets. This method works well with a model given as a system of differential equations, but has its limitations, since the analytical expression for the solution of the ODE, which is needed for constructing the inclusion function, is not always obtainable. The other method studied is SDP relaxation of a nonlinear and non-convex feasibility problem. This method, implemented in the toolbox bio.SDP, works with systems of difference equations obtained using the Euler discretization method. The discretization is not exact, raising the need to bound the discretization error. Several methods for bounding this error have been studied. The method of ∞-norm optimization, also called worst-case ∞-norm, is applied to the one-step error estimation method. The methods are illustrated by solving two systems biology problems, and the resulting accepted parameter sets (SCP) are compared.
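The subpaving idea can be sketched in a few lines: boxes of parameter space are kept, discarded, or bisected according to whether an inclusion function maps them inside, outside, or across the measurement intervals (a SIVIA-type algorithm). The exponential-decay model, the data, and the error bound below are toy assumptions, not the thesis's case studies.

```python
import numpy as np

t_data = np.array([0.0, 1.0, 2.0])
y_data = np.array([1.0, 0.60, 0.37])          # measurements
eps = 0.08                                    # bounded-error tolerance

def image(p1, p2, t):
    """Inclusion function for y = p1*exp(-p2*t) on the box [p1]x[p2];
    the model is monotone in both parameters (for p1, p2, t >= 0),
    so evaluating at the box corners gives the exact range."""
    return p1[0] * np.exp(-p2[1] * t), p1[1] * np.exp(-p2[0] * t)

def sivia(box, depth=0, max_depth=12):
    p1, p2 = box
    inside_all = True
    for t, y in zip(t_data, y_data):
        lo, hi = image(p1, p2, t)
        if hi < y - eps or lo > y + eps:
            return []                         # provably inconsistent box
        if not (lo >= y - eps and hi <= y + eps):
            inside_all = False
    if inside_all or depth >= max_depth:
        return [box]                          # accepted (or undecided leaf)
    w1, w2 = p1[1] - p1[0], p2[1] - p2[0]     # bisect the widest side
    if w1 >= w2:
        m = 0.5 * (p1[0] + p1[1])
        halves = (((p1[0], m), p2), ((m, p1[1]), p2))
    else:
        m = 0.5 * (p2[0] + p2[1])
        halves = ((p1, (p2[0], m)), (p1, (m, p2[1])))
    return sivia(halves[0], depth + 1, max_depth) + \
           sivia(halves[1], depth + 1, max_depth)

paving = sivia(((0.0, 2.0), (0.0, 2.0)))
print(len(paving), "boxes retained in the subpaving")
```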
66

Adaptive numerical techniques for the solution of electromagnetic integral equations

Saeed, Usman 07 July 2011
Various error estimation and adaptive refinement techniques for the solution of electromagnetic integral equations were developed. Residual-based error estimators and h-refinement implementations were developed for the Method of Moments (MoM) solution of electromagnetic integral equations for a number of different problems. Due to the high computational cost associated with the MoM, a cheaper solution technique known as the Locally-Corrected Nyström (LCN) method was explored. Several explicit and implicit techniques for error estimation in the LCN solution of electromagnetic integral equations were proposed and implemented for different geometries, successfully identifying high-error regions. A simple p-refinement algorithm was developed and implemented for a number of prototype problems using the proposed estimators; the numerical error was found to reduce significantly in the high-error regions after refinement. A simple computational cost analysis was also presented for the proposed error estimation schemes, and various cost-accuracy trade-offs and problem-specific limitations of the different techniques were discussed. Finally, the important problem of a slope mismatch between the global error rates of the solution and of the residual was identified, and methods to compensate for the mismatch using scale factors based on matrix norms were developed.
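The residual-based idea is easy to illustrate on a toy second-kind integral equation solved by point collocation (a crude stand-in for a MoM discretization): after solving, the equation residual is sampled at off-collocation points, and elements with large residuals are flagged for refinement. The kernel, right-hand side and discretization below are assumptions for the example only.

```python
import numpy as np

N = 16
edges = np.linspace(0.0, 1.0, N + 1)
h = 1.0 / N
xc = 0.5 * (edges[:-1] + edges[1:])            # collocation points
mom = 0.5 * (edges[1:]**2 - edges[:-1]**2)     # integral of y over cell j

# Toy equation u(x) + int_0^1 x*y*u(y) dy = f(x), with manufactured
# solution u(x) = x**2, hence f(x) = x**2 + x/4.
f = lambda s: s**2 + s / 4.0

Z = np.eye(N) + np.outer(xc, mom)              # collocation system matrix
u = np.linalg.solve(Z, f(xc))                  # piecewise-constant DOFs

def residual(s):
    """Residual of the discrete solution in the continuous equation."""
    return u[min(int(s / h), N - 1)] + s * np.dot(u, mom) - f(s)

probe = edges[:-1] + 0.25 * h                  # off-collocation samples
r = np.abs([residual(s) for s in probe])
print("elements flagged for refinement:", np.argsort(r)[-3:])
```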
67

A posteriori error estimation for non-linear eigenvalue problems for differential operators of second order with focus on 3D vertex singularities

Pester, Cornelia 07 May 2006
This thesis is concerned with the finite element analysis and the a posteriori error estimation for eigenvalue problems for general operator pencils on two-dimensional manifolds. A specific application of the presented theory is the computation of corner singularities. Engineers use the knowledge of the so-called singularity exponents to predict the onset and the propagation of cracks. All results of this thesis are explained for two model problems, the Laplace and the linear elasticity problem, and verified by numerous numerical results.
68

Active evaluation of predictive models

Sawade, Christoph January 2012
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs. 
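The core mechanism of active evaluation can be sketched as importance-weighted risk estimation: instances are drawn from an instrumental distribution q and their losses reweighted by p/q, so the estimate stays consistent while the variance can drop. The uncertainty-proportional q below is only a plausible illustration; the optimal, measure-specific distributions derived in the thesis differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pool = 10_000
margin = rng.uniform(0.0, 1.0, n_pool)        # classifier confidence
wrong = rng.uniform(0.0, 1.0, n_pool) < 0.5 * (1.0 - margin)
true_risk = wrong.mean()                      # errors cluster at low margin

budget = 200                                  # labels we can afford
q = (1.0 - margin) + 1e-3                     # instrumental distribution
q /= q.sum()
idx = rng.choice(n_pool, size=budget, replace=True, p=q)
w = (1.0 / n_pool) / q[idx]                   # importance weights p/q
active = np.sum(w * wrong[idx]) / np.sum(w)   # self-normalized estimate

passive = wrong[rng.choice(n_pool, budget)].mean()   # uniform baseline
print(f"true {true_risk:.4f}  active {active:.4f}  passive {passive:.4f}")
```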
69

Apports du couplage non-intrusif en mécanique non-linéaire des structures / Contributions of non-intrusive coupling in nonlinear structural mechanics

Duval, Mickaël 08 July 2016
This PhD thesis, part of the ANR ICARE project, aims at developing methods for the analysis of complex, large-scale structures. The scientific challenge is to investigate very localised areas that are nonetheless potentially critical to the overall mechanical strength of the structure. Classically, representation models, discretizations, mechanical behaviour models and numerical tools are used at both the global and local scales, graded in complexity according to the simulation needs. The global problem is handled by a generic code under topological (plate formulation, geometric simplification) and behavioural (homogenization) idealizations, while the local analysis requires specialized tools (routines, dedicated codes) for an accurate representation of the geometry and behaviour.
The main goal of this thesis is to develop an efficient non-intrusive coupling tool for multi-scale and multi-model structural analysis. The non-intrusiveness constraints mean that the stiffness operator, connectivity and solver of the global model are never modified, which allows working in a closed-source software environment. First, we provide a detailed study of the global/local non-intrusive coupling algorithm. Making use of several relevant examples (cracking, elastic-plastic behaviour, contact...), we show the efficiency and flexibility of such a coupling method. A comparative analysis of several optimisation tools is also carried out, and the case of multiple interacting patches is handled. Then, non-intrusive coupling is extended to globally non-linear cases, and a domain decomposition method with non-linear relocalization is proposed. These methods allowed us to run parallel computations using only a sequential code, on a high-performance computing cluster. Finally, we apply the coupling algorithm to mesh refinement with patches of finite elements. We develop an explicit residual-based error estimator suitable for the multi-scale solutions arising from non-intrusive coupling, and apply it inside an error-driven local mesh refinement procedure. Through this work, a software tool for non-intrusive coupling was developed, based on data exchange between codes (Message Passing Interface protocol). The developments are integrated into a Python wrapper whose role is to couple several instances of Code_Aster, the structural analysis code developed by EDF R&D, which is used throughout the presented work.
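A minimal sketch of the global/local fixed-point iteration on a 1D bar: the global model is solved unchanged, a local patch model with the true (here stiffer) properties is driven by the global interface displacements, and the interface reaction mismatch is reinjected as a corrective load. The fixed relaxation parameter is an illustrative choice (with a stiffer patch the bare iteration diverges; Aitken-type acceleration is common in practice), and the whole setup is a toy analogue, not the thesis's implementation.

```python
import numpy as np

def bar_stiffness(k_elem):
    """Assemble the stiffness matrix of a 1D spring/bar chain."""
    n = len(k_elem) + 1
    K = np.zeros((n, n))
    for e, k in enumerate(k_elem):
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

n_el = 10
patch = slice(4, 7)                        # elements 4..6 = local zone
k_glob = np.ones(n_el)                     # global model: homogeneous
k_true = k_glob.copy(); k_true[patch] = 5.0   # local reality: stiffer

f = np.zeros(n_el + 1); f[-1] = 1.0        # end load, node 0 clamped
KG = bar_stiffness(k_glob)
KL = bar_stiffness(k_true[patch])          # local model (nodes 4..7)
KGp = bar_stiffness(k_glob[patch])         # global patch, for reactions

p = np.zeros(n_el + 1)                     # interface corrective load
omega = 0.25                               # relaxation (illustrative)
for it in range(100):
    u = np.zeros(n_el + 1)
    u[1:] = np.linalg.solve(KG[1:, 1:], (f + p)[1:])  # K_G never modified
    uL = np.array([u[4], 0.0, 0.0, u[7]])  # Dirichlet data from global
    uL[1:3] = np.linalg.solve(KL[1:3, 1:3],
                              -KL[np.ix_([1, 2], [0, 3])] @ uL[[0, 3]])
    rL = (KL @ uL)[[0, 3]]                 # local interface reactions
    rG = (KGp @ u[4:8])[[0, 3]]            # fictitious global reactions
    p_target = np.zeros(n_el + 1)
    p_target[[4, 7]] = rG - rL             # load that 'swaps in' the patch
    if np.abs(p_target - p).max() < 1e-10:
        break
    p = (1.0 - omega) * p + omega * p_target

u_ref = np.zeros(n_el + 1)                 # monolithic reference solution
u_ref[1:] = np.linalg.solve(bar_stiffness(k_true)[1:, 1:], f[1:])
keep = np.r_[0:5, 7:n_el + 1]              # outside the patch interior
print(it, np.abs(u - u_ref)[keep].max())   # tiny: coupled == reference
```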
70

A posteriori error estimations for the generalized finite element method and modified versions / Estimativas de erro a-posteriori para o método dos elementos finitos generalizados e versões modificadas

Rafael Marques Lins 07 August 2015
This thesis investigates two a posteriori error estimators, based on gradient recovery, aiming to fill the gap in error estimation for the Generalized FEM (GFEM) and, mainly, its modified versions called the Corrected XFEM (C-XFEM) and the Stable GFEM (SGFEM). To this end, brief reviews of the GFEM and its modified versions are first presented, highlighting the main advantages attributed to each numerical method. Then, some important concepts related to the study of the error are presented, and some contributions involving a posteriori error estimation for the GFEM are briefly described. Afterwards, the two error estimators proposed here are addressed, focusing on linear elastic fracture mechanics problems. The first estimator was originally proposed for the C-XFEM and is hereby extended to the SGFEM framework. The second is based on splitting the recovered stress field into two distinct parts: singular and smooth. The singular part is computed with the help of the J-integral, whereas the smooth part is calculated from a combination of the Superconvergent Patch Recovery (SPR) and Singular Value Decomposition (SVD) techniques. Finally, various numerical examples are selected to assess the robustness of the error estimators, considering different enrichment types, versions of the GFEM, loading modes and element types. Relevant aspects such as effectivity indexes, error distribution and convergence rates are used to describe the error estimators. The main contributions of this thesis are: the development of two efficient a posteriori error estimators for the GFEM and its modified versions; a comparison between the GFEM and its modified versions; the identification of the positive features of each error estimator; and a detailed study of the blending element issues.
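The recovery-based estimation principle behind both estimators can be sketched on a 1D model problem: a smoothed (recovered) flux is built by patch-averaging the element fluxes, and the element indicator is the L2 mismatch between recovered and finite element flux (the 1D degenerate case of SPR). The GFEM enrichment and the singular/smooth splitting of the thesis are beyond this toy example.

```python
import numpy as np

n = 16
x = np.linspace(0.0, 1.0, n + 1)
h = np.diff(x)
f = lambda s: np.pi**2 * np.sin(np.pi * s)     # exact u = sin(pi*x)

# Linear FEM for -u'' = f with homogeneous Dirichlet conditions.
K = np.zeros((n + 1, n + 1)); b = np.zeros(n + 1)
for e in range(n):
    K[e:e + 2, e:e + 2] += (1.0 / h[e]) * np.array([[1, -1], [-1, 1]])
    b[e:e + 2] += 0.5 * h[e] * f(0.5 * (x[e] + x[e + 1]))  # midpoint rule
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])

sig = np.diff(u) / h                     # FE flux: constant per element
sig_star = np.empty(n + 1)               # recovered nodal flux
sig_star[1:-1] = 0.5 * (sig[:-1] + sig[1:])   # patch average at nodes
sig_star[0] = 2 * sig[0] - sig_star[1]        # extrapolate to the ends
sig_star[-1] = 2 * sig[-1] - sig_star[-2]

# Element indicator: L2 norm of (linear sig_star - constant sig).
a, c = sig_star[:-1] - sig, sig_star[1:] - sig
eta2 = h * (a**2 + a * c + c**2) / 3.0

# Compare with the true energy error via 2-point Gauss quadrature;
# the effectivity index should be close to 1 for this smooth problem.
g = (1.0 - 1.0 / np.sqrt(3.0)) / 2.0
xq = np.stack([x[:-1] + g * h, x[:-1] + (1.0 - g) * h])
err2 = (h / 2.0) * ((np.pi * np.cos(np.pi * xq) - sig)**2).sum(axis=0)
print("effectivity index:", np.sqrt(eta2.sum() / err2.sum()))
```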
