361

Parametric Dynamical Systems: Transient Analysis and Data Driven Modeling

Grimm, Alexander Rudolf 02 July 2018 (has links)
Dynamical systems are a commonly used and studied tool for simulation, optimization and design. In many applications such as inverse problems, optimal control, shape optimization and uncertainty quantification, these systems typically depend on a parameter. The need for high fidelity in the modeling stage leads to large-scale parametric dynamical systems. Since these models need to be simulated for a variety of parameter values, the computational burden they incur becomes increasingly prohibitive. To address these issues, parametric reduced models have gained popularity in recent years. We are interested in constructing parametric reduced models that represent the full-order system accurately over a range of parameters. First, we define a global joint error measure in the frequency and parameter domain to assess the accuracy of the reduced model. Then, by assuming a rational form for the reduced model with poles both in the frequency and parameter domain, we derive necessary conditions for an optimal parametric reduced model in this joint error measure. Similar to the nonparametric case, Hermite interpolation conditions at the reflected images of the poles characterize the optimal parametric approximant. This result extends the well-known interpolatory H2 optimality conditions by Meier and Luenberger to the parametric case. We also develop a numerical algorithm to construct locally optimal reduced models. The theory and algorithm are data-driven, in the sense that only function evaluations of the parametric transfer function are required, not access to the internal dynamics of the full model. While this first framework operates on the continuous function level, assuming repeated transfer function evaluations are available, in some cases merely frequency samples might be given without an option to re-evaluate the transfer function at desired points; in other words, the function samples in parameter and frequency are fixed. In this case, we construct a parametric reduced model that minimizes a discretized least-squares error on the finite set of measurements. Towards this goal, we extend Vector Fitting (VF) to the parametric case, solving a global least-squares problem in both frequency and parameter. The output of this approach might be a reduced model of moderate size. In this case, we perform a post-processing step to reduce the output of the parametric VF approach using H2 optimal model reduction for a special parametrization. The final model inherits the parametric dependence of the intermediate model, but is of smaller order. A special case of a parameter in a dynamical system is a delay in the model equation, e.g., arising from a feedback loop, reaction time, delayed response and various other physical phenomena. Modeling such a delay comes with several challenges for the mathematical formulation, analysis, and solution. We address the issue of transient behavior for scalar delay equations. Besides the choice of an appropriate measure, we analyze the impact of the coefficients of the delay equation on the finite-time growth, which can be arbitrarily large purely through the influence of the delay. / Ph. D. / Mathematical models play an increasingly important role in the sciences for experimental design, optimization and control. These high-fidelity models are often computationally expensive and may require large resources, especially for repeated evaluation.
Parametric model reduction offers a remedy by constructing models that are accurate over a range of parameters, and yet are much cheaper to evaluate. An appropriate choice of quality measure and form of the reduced model enables us to characterize these high-quality reduced models. Our first contribution is a characterization of optimal parametric reduced models and an efficient implementation to construct them. While this first framework assumes we have access to repeated evaluations of the full model, in some cases merely measurement data might be available. In this case, we construct a parametric model that fits the measurements in a least-squares sense. The output of this approach might be a reduced model of moderate size, which we address with a post-processing step that reduces the model size while maintaining important properties. A special case of a parameter is a delay in the model equation, e.g., arising from a feedback loop, reaction time, delayed response and various other physical phenomena. While asymptotically stable solutions eventually vanish, they might grow large before asymptotic behavior takes over; this leads to the notion of transient behavior, which is our main focus for a simple class of delay equations. Besides the choice of an appropriate measure, we analyze the impact of the structure of the delay equation on the transient growth, which can be arbitrarily large purely through the influence of the delay.
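The transient-analysis part of this abstract lends itself to a small numerical illustration. The sketch below is a minimal experiment, not the thesis's method: the coefficients, delay, and constant initial history are hypothetical choices. It integrates the scalar delay equation x'(t) = a·x(t) + b·x(t − τ) with a fixed-step Euler scheme and reports the peak amplification of the solution relative to its value at t = 0, one simple way to quantify finite-time growth.

```python
import numpy as np

def delay_transient(a=-1.0, b=-1.5, tau=1.0, t_end=20.0, dt=1e-3):
    """Euler integration of x'(t) = a*x(t) + b*x(t - tau) with the constant
    initial history x(t) = 1 for t <= 0 (toy coefficients and history, chosen
    only for illustration); returns times, the solution, and the peak
    amplification max_t |x(t)| / |x(0)|."""
    n_delay = int(round(tau / dt))            # length of the history buffer
    n_steps = int(round(t_end / dt))
    x = np.ones(n_delay + n_steps + 1)        # first n_delay+1 entries: history
    for k in range(n_delay, n_delay + n_steps):
        x[k + 1] = x[k] + dt * (a * x[k] + b * x[k - n_delay])
    t = np.linspace(-tau, t_end, x.size)
    return t, x, np.max(np.abs(x[n_delay:])) / abs(x[n_delay])

t, x, growth = delay_transient()
print(f"peak finite-time amplification: {growth:.3f}")
```

Varying a, b, τ, or the initial history in this sketch shows how strongly the delay term alone can shape the transient peak before asymptotic behavior takes over.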
362

Interpretable Approximation of High-Dimensional Data based on the ANOVA Decomposition

Schmischke, Michael 08 July 2022 (has links)
The thesis is dedicated to the approximation of high-dimensional functions from scattered data nodes. Many methods in this area lack the property of interpretability in the context of explainable artificial intelligence. The idea is to address this shortcoming by proposing a new method that is intrinsically designed around interpretability. The multivariate analysis of variance (ANOVA) decomposition is the main tool to achieve this purpose. We study the connection between the ANOVA decomposition and orthonormal bases to obtain a powerful basis representation. Moreover, we focus on functions that are mostly explained by low-order interactions to circumvent the curse of dimensionality in its exponential form. Through the connection with grouped index sets, we can propose a least-squares approximation idea via iterative LSQR. Here, the proposed grouped transformations provide fast algorithms for multiplication with the appearing matrices. Through global sensitivity indices we are then able to analyze the approximation, which can then be improved further. The method is also well-suited for the approximation of real data sets, where the sparsity-of-effects principle ensures a low-dimensional structure. We demonstrate the applicability of the method in multiple numerical experiments with real and synthetic data.: 1 Introduction 2 The Classical ANOVA Decomposition 3 Fast Multiplication with Grouped Transformations 4 High-Dimensional Explainable ANOVA Approximation 5 Numerical Experiments with Synthetic Data 6 Numerical Experiments with Real Data 7 Conclusion Bibliography / The thesis is devoted to the approximation of high-dimensional functions from scattered data points. Many methods in this area suffer from a lack of interpretability, which is of particular importance in the context of explainable artificial intelligence. To address this problem, we propose a new method built around the concept of interpretability. Our main tool for this is the analysis of variance (ANOVA) decomposition. In particular, we consider the connection of the ANOVA decomposition to orthonormal bases and obtain an important series representation. In addition, we focus on functions that are mainly explained by low-dimensional variable interactions, which helps us overcome the curse of dimensionality in its exponential form. Via the connection to grouped index sets, we then propose a least-squares approximation with the iterative LSQR algorithm, where the proposed grouped transformations provide fast multiplication with the corresponding matrices. With the help of global sensitivity indices we can analyze and further improve the approximation. The method is also well suited to approximating real data sets, where the sparsity-of-effects principle ensures that we work with low-dimensional structures. We demonstrate the applicability of the method in various numerical experiments with real and synthetic data.: 1 Introduction 2 The Classical ANOVA Decomposition 3 Fast Multiplication with Grouped Transformations 4 High-Dimensional Explainable ANOVA Approximation 5 Numerical Experiments with Synthetic Data 6 Numerical Experiments with Real Data 7 Conclusion Bibliography
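As a rough illustration of the approximation pipeline described above, the following sketch builds a least-squares fit from first-order terms in the half-period cosine basis plus a few second-order interaction terms, solves it with the iterative LSQR solver, and reports variance-based sensitivity indices per ANOVA term. The toy target function, truncation, and grouping are assumptions for illustration; this is not the thesis's grouped-transformation implementation, which supplies fast matrix-vector products instead of an explicit matrix.

```python
import numpy as np
from itertools import combinations
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Toy data: a 5-dimensional function dominated by low-order interactions.
d, n = 5, 2000
X = rng.uniform(size=(n, d))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.standard_normal(n)

def features(X, degree=3):
    """Design matrix with columns grouped by ANOVA term: a constant column,
    univariate half-period cosine modes, and pairwise products of the first
    cosine mode (order-2 interaction terms). Truncation is illustrative."""
    cols, groups = [np.ones(len(X))], [()]
    for j in range(X.shape[1]):                       # first-order terms
        for k in range(1, degree + 1):
            cols.append(np.sqrt(2) * np.cos(np.pi * k * X[:, j]))
            groups.append((j,))
    for i, j in combinations(range(X.shape[1]), 2):   # second-order terms
        cols.append(2 * np.cos(np.pi * X[:, i]) * np.cos(np.pi * X[:, j]))
        groups.append((i, j))
    return np.column_stack(cols), groups

A, groups = features(X)
coef = lsqr(A, y)[0]                                   # iterative least squares

# Global sensitivity indices: share of explained variance per ANOVA term
# (sums of squared coefficients, since the basis is orthonormal on [0, 1]^d).
var = {}
for g, c in zip(groups, coef):
    if g:                                              # skip the constant term
        var[g] = var.get(g, 0.0) + c ** 2
total = sum(var.values())
for g, v in sorted(var.items(), key=lambda kv: -kv[1])[:5]:
    print(g, round(v / total, 3))
```

The printed ranking points to the variables and pairs that carry most of the variance, which is the interpretability angle the abstract emphasizes.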
363

Extended approach to correlations beyond mean-field in atomic nuclei

Sieja, Kamila 26 February 2007 (has links) (PDF)
Recently, with the new possibilities for experimental studies of proton-rich exotic nuclei, interest in the problem of proton-neutron pairing correlations has revived. The aim of this work is the study of correlations beyond the mean field, in particular isoscalar and isovector proton-neutron pairing for different Germanium isotopes with N ~ Z. We first treated the classical BCS approach with the Lipkin-Nogami (LN) approximation for projection onto the correct particle number, using a contact-type residual interaction. Then, in an approach called the Higher Tamm-Dancoff Approximation (HTDA), proton-neutron correlations were treated while explicitly conserving the particle number. In both cases, we developed the corresponding numerical codes to handle proton-neutron couplings. The results of numerical applications for a few nuclei are discussed and compared between the two approaches, BCS(LN) and HTDA, with isoscalar and isovector pairing. We showed that the two approaches give a similar description of ground-state correlations, but that the HTDA method is more efficient in the weak-pairing regime. We highlighted the crucial role of particle-number conservation for the description of proton-neutron pairing correlations. Taking T = 0 pairing into account generates additional binding energy for N = Z nuclei, contributing to the Wigner energy term.
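For readers unfamiliar with pairing calculations, the toy sketch below solves a schematic BCS gap equation for a constant pairing strength on equidistant single-particle levels by fixed-point iteration. It only illustrates the kind of self-consistency involved; the level scheme, pairing strength, and chemical potential are hypothetical, and the thesis's isoscalar/isovector proton-neutron BCS(LN) and HTDA treatments are far more elaborate.

```python
import numpy as np

def bcs_gap(levels, G, lam, tol=1e-10, max_iter=1000):
    """Solve the schematic BCS gap equation
        Delta = (G / 2) * sum_k Delta / E_k,  E_k = sqrt((eps_k - lam)^2 + Delta^2),
    for a constant pairing strength G and a fixed chemical potential lam,
    by simple fixed-point iteration (toy model, not the thesis's code)."""
    delta = 1.0                                   # initial guess for the gap
    for _ in range(max_iter):
        E = np.sqrt((levels - lam) ** 2 + delta ** 2)
        new = 0.5 * G * np.sum(delta / E)
        if abs(new - delta) < tol:
            return new
        delta = new
    return delta

# Equidistant single-particle levels around the Fermi energy (toy units).
eps = np.linspace(-5.0, 5.0, 11)
print(f"pairing gap: {bcs_gap(eps, G=0.4, lam=0.0):.4f}")
```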
364

Approximation diffuse Hermite et ses applications

Savignat, Jean-Michel 06 October 2000 (has links) (PDF)
Many meshless techniques for solving partial differential equations have been developed over the last decade, offering an attractive alternative when finite elements reach their limits. Our work focuses on the study of the diffuse approximation and its applications to smoothing and to the solution of differential equations: the diffuse element method. The proposed solutions, however, also apply to other methods, and numerous numerical results illustrate each theoretical development. First, we study meshless approximation techniques and compare them by means of a hybrid method. The "myopic" approximation is constructed by modifying the construction criterion of splines and thereby exposes the difference in nature between moving least squares and radial interpolators (kriging, splines). We then develop the Hermite diffuse approximation, whose application to the computation of the curvature of triangulated surfaces leads to a high-resolution technique. A reverse-engineering algorithm for CAD models builds on this new technique and demonstrates its potential. Numerical integration is an essential ingredient of any meshless method. The analysis of the patch test leads us to define a robust technique that ensures good convergence rates. We apply it to the Hermite approximation to build a 3D beam model for oil drilling. An original rock-structure contact algorithm is proposed for this problem.
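A minimal one-dimensional sketch of the diffuse (moving least squares) approximation idea follows: at each evaluation point, a Gaussian-weighted local quadratic fit yields both a smoothed value and a derivative estimate, which is the spirit of using the diffuse approximation for smoothing and curvature recovery. The weight function, window radius, and test data are illustrative assumptions, not the thesis's Hermite formulation.

```python
import numpy as np

def diffuse_approx(x_eval, x_data, y_data, radius=0.15):
    """Moving (diffuse) least squares in 1D: at each evaluation point x0,
    fit the weighted quadratic p(x) = c0 + c1*(x - x0) + c2*(x - x0)**2 and
    return the smoothed value c0 and the derivative estimate c1.
    The Gaussian window radius is an illustrative choice."""
    vals, ders = [], []
    for x0 in x_eval:
        dx = x_data - x0
        sw = np.exp(-0.5 * (dx / radius) ** 2)     # sqrt of a Gaussian weight
        P = np.column_stack([np.ones_like(dx), dx, dx ** 2])
        # Solve min_c sum_i w_i * (p(x_i) - y_i)^2 via sqrt-weighted lstsq.
        c, *_ = np.linalg.lstsq(P * sw[:, None], sw * y_data, rcond=None)
        vals.append(c[0])
        ders.append(c[1])
    return np.array(vals), np.array(ders)

# Noisy samples of sin(2*pi*x) on [0, 1].
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(size=200))
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)
xe = np.linspace(0.1, 0.9, 5)
v, dv = diffuse_approx(xe, x, y)
print(np.round(v, 3))       # close to sin(2*pi*xe)
print(np.round(dv, 3))      # close to 2*pi*cos(2*pi*xe)
```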
365

Efficiently Approximating Query Optimizer Diagrams

Dey, Atreyee 08 1900 (has links)
Modern database systems use a query optimizer to identify the most efficient strategy, called a "query execution plan", to execute declarative SQL queries. The role of the query optimizer is especially critical for the complex decision-support queries featured in current data warehousing and data mining applications. Given an SQL query template that is parametrized on the selectivities of the participating base relations and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. Complementary to the plan diagrams are cost and cardinality diagrams, which graphically plot the estimated execution costs and cardinalities, respectively, over the query parameter space. These diagrams are collectively known as optimizer diagrams. Optimizer diagrams have proved to be a powerful tool for the analysis and redesign of modern optimizers, and are gaining interest in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this thesis, we investigate strategies for efficiently producing close approximations to complex optimizer diagrams. Our techniques are customized for different classes of optimizers, ranging from the generic Class I optimizers that provide only the optimal plan for a query, to Class II optimizers that also support costing of sub-optimal plans, and Class III optimizers which offer enumerated rank-ordered lists of plans in addition to both the former features. For approximating plan diagrams for Class I optimizers, we first present database-oblivious techniques based on classical random sampling in conjunction with a nearest-neighbor (NN) inference scheme. Next we propose grid sampling algorithms which consider database-specific knowledge such as (a) the structural differences between the operator trees of plans on the grid locations and (b) the parametric query optimization principle. These algorithms become more efficient when modified to exploit the sub-optimal plan costing feature available with Class II optimizers. The final algorithm, developed for Class III optimizers, assumes plan cost monotonicity and utilizes the rank-ordered lists of plans to efficiently generate completely accurate optimizer diagrams. Subsequently, we provide a relaxed variant, which trades quality of approximation for a reduction in diagram generation overhead. Our proposed algorithms are capable of terminating according to a user-given error bound for plan diagram approximation. For approximating cost diagrams, our strategy is based on linear least-squares regression performed on a mathematical model of plan cost behavior over the parameter space, in conjunction with interpolation techniques. Game-theoretic and linear programming approaches have been employed to further reduce the error in cost approximation. For approximating cardinality diagrams, we propose a novel parametrized mathematical model as a function of selectivities for characterizing query cardinality behavior. The complete cardinality model is constructed by clustering the data points according to their cardinality values and subsequently fitting the model through a linear least-squares regression technique separately for each cluster.
For non-sampled data points the cardinality values are estimated by first determining the cluster they belong to and then interpolating the cardinality value according to the suitable model. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate optimizer diagrams while incurring no more than 20% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero-error optimizer diagrams, which usually require less than 10% overheads. Our results show that (a) the approximation is materially faithful to the features of the exact optimizer diagram, with the errors thinly spread across the picture and largely confined to the plan transition boundaries, and (b) the cost increase at the non-sampled points due to the assignment of sub-optimal plans is also limited. These approximation techniques have been implemented in the publicly available Picasso optimizer visualizer tool. We have also modified PostgreSQL's optimizer to incorporate costing of sub-optimal plans and enumeration of rank-ordered lists of plans. In addition, we have designed estimators for predicting the time overhead involved in approximating optimizer diagrams with regard to user-given error bounds. In summary, this thesis demonstrates that accurate approximations to exact optimizer diagrams can indeed be obtained cheaply and consistently, with typical overheads being an order of magnitude lower than the brute-force approach. We hope that our results will encourage database vendors to incorporate the foreign-plan-costing and plan-rank-list features in their optimizer APIs.
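The database-oblivious strategy for Class I optimizers (random sampling plus nearest-neighbor inference) can be sketched in a few lines. Here `toy_plan` is a hypothetical stand-in for a real optimizer call that returns a plan identifier for a pair of selectivities; the grid resolution and sampling fraction are illustrative choices, not values from the thesis.

```python
import numpy as np

def approx_plan_diagram(optimizer_plan, resolution=60, sample_frac=0.05, seed=0):
    """Approximate a 2-D plan diagram: invoke the optimizer only at randomly
    sampled grid points, then label every remaining grid point with the plan
    of its nearest sampled neighbour (the NN-inference step)."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, resolution)
    pts = np.array([(s1, s2) for s1 in grid for s2 in grid])
    idx = rng.choice(len(pts), size=int(sample_frac * len(pts)), replace=False)
    sampled = pts[idx]
    plans = np.array([optimizer_plan(s1, s2) for s1, s2 in sampled])
    # Brute-force nearest-neighbour assignment for all grid points.
    d2 = ((pts[:, None, :] - sampled[None, :, :]) ** 2).sum(axis=2)
    return pts, plans[np.argmin(d2, axis=1)]

# Hypothetical stand-in for an optimizer call: the chosen plan depends on the
# two selectivities in a simple piecewise fashion.
def toy_plan(s1, s2):
    return 0 if s1 + s2 < 0.5 else (1 if s1 > s2 else 2)

pts, labels = approx_plan_diagram(toy_plan)
print(np.bincount(labels))   # area of the diagram covered by each plan
```

The same skeleton extends to the grid-sampling and Class II/III refinements by replacing the uniform sample and the NN labeling rule.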
366

Uncalibrated robotic visual servo tracking for large residual problems

Munnae, Jomkwun 17 November 2010 (has links)
In visually guided control of a robot, a large residual problem occurs when the robot configuration is not in the neighborhood of the target acquisition configuration. Most existing uncalibrated visual servoing algorithms use quasi-Gauss-Newton methods, which are effective for small residual problems. The solution used in this study switches between a full quasi-Newton method for the large residual case and the quasi-Gauss-Newton method for the small residual case. Visual servoing to handle large residual problems for tracking a moving target has not previously appeared in the literature. For large residual problems, various Hessian approximations are introduced, including an approximation of the entire Hessian matrix, the dynamic BFGS (DBFGS) algorithm, and two distinct approximations of the residual term, the modified BFGS (MBFGS) algorithm and the dynamic full Newton method with BFGS (DFN-BFGS) algorithm. Because the quasi-Gauss-Newton method has the advantage of fast convergence, the quasi-Gauss-Newton step is used once the iteration is sufficiently near the desired solution. A switching algorithm combines a full quasi-Newton method and a quasi-Gauss-Newton method. Switching occurs if the image error norm is less than the switching criterion, which is heuristically selected. An adaptive forgetting factor called the dynamic adaptive forgetting factor (DAFF) is presented. The DAFF method is a heuristic scheme to determine the forgetting factor value based on the image error norm. Compared to other existing adaptive forgetting factor schemes, the DAFF method yields the best performance for both convergence time and RMS error. Simulation results verify the validity of the proposed switching algorithms with the DAFF method for large residual problems. The switching MBFGS algorithm with the DAFF method significantly improves tracking performance in the presence of noise. This work is the first successfully developed model-independent, vision-guided control for large residual problems with the capability to stably track a moving target with a robot.
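A toy sketch of the switching idea on a small nonlinear least-squares problem: while the residual norm exceeds a heuristic threshold, a full quasi-Newton method (here SciPy's BFGS applied to 0.5·||r||², which implicitly carries the second-order residual information) is used, and once the residual is small the iteration switches to plain Gauss-Newton for fast local convergence. The threshold, test problem, and use of SciPy's generic BFGS are assumptions for illustration; this is not the thesis's DBFGS/MBFGS/DFN-BFGS machinery or its visual-servoing Jacobian.

```python
import numpy as np
from scipy.optimize import minimize

def gauss_newton_step(residual, jac, x):
    """One Gauss-Newton step: linearize the residual at x and solve the
    resulting linear least-squares problem for the update."""
    J, r = jac(x), residual(x)
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return x + step

def switching_solver(residual, jac, x0, switch_tol=0.5, gn_iters=10):
    """Switching scheme sketch: while the residual norm is above a heuristic
    threshold, run a few BFGS iterations on 0.5*||r||^2 (a full quasi-Newton
    method); once the residual is small, switch to Gauss-Newton steps."""
    obj = lambda p: 0.5 * residual(p) @ residual(p)
    grad = lambda p: jac(p).T @ residual(p)
    x = np.asarray(x0, dtype=float)
    for _ in range(50):                               # large-residual phase
        if np.linalg.norm(residual(x)) <= switch_tol:
            break
        x = minimize(obj, x, jac=grad, method="BFGS",
                     options={"maxiter": 5}).x
    for _ in range(gn_iters):                         # small-residual phase
        x = gauss_newton_step(residual, jac, x)
    return x

# Toy problem: recover (a, b) in y = exp(a*t) + b from samples.
t = np.linspace(0.0, 1.0, 30)
y = np.exp(0.7 * t) + 0.3
residual = lambda p: np.exp(p[0] * t) + p[1] - y
jac = lambda p: np.column_stack([t * np.exp(p[0] * t), np.ones_like(t)])
print(np.round(switching_solver(residual, jac, [0.0, 0.0]), 3))  # ~ [0.7, 0.3]
```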
367

Stochastic Newton Methods With Enhanced Hessian Estimation

Reddy, Danda Sai Koti January 2017 (has links) (PDF)
Optimization problems involving uncertainties are common in a variety of engineering disciplines such as transportation systems, manufacturing, communication networks, healthcare and finance. The large number of input variables and the lack of a system model prohibit a precise analytical solution, and a viable alternative is to employ simulation-based optimization. The idea here is to simulate the stochastic system under consideration a few times while updating the system parameters until a good enough solution is obtained. Formally, given only noise-corrupted measurements of an objective function, we wish to find a parameter which minimises the objective function. Iterative algorithms using statistical methods search the feasible region to improve upon the candidate parameter. Stochastic approximation algorithms are the best suited, most studied and most applied algorithms for finding solutions when the feasible region is a continuously valued set. One can use information on the gradient/Hessian of the objective to aid the search process. However, due to lack of knowledge of the noise distribution, one needs to estimate the gradient/Hessian from noisy samples of the cost function obtained from simulation. Simple gradient search schemes take many iterations to converge to a local minimum and are heavily dependent on the choice of step-sizes. Stochastic Newton methods, on the other hand, can counter the ill-conditioning of the objective function as they incorporate second-order information into the stochastic updates. Stochastic Newton methods are often more accurate than simple gradient search schemes. We propose enhancements to the Hessian estimation scheme used in two recently proposed stochastic Newton methods, based on the ideas of random directions stochastic approximation (2RDSA) [21] and simultaneous perturbation stochastic approximation (2SPSA-3) [6], respectively. The proposed scheme, inspired by [29], reduces the error in the Hessian estimate by (i) incorporating a zero-mean feedback term; and (ii) optimizing the step-sizes used in the Hessian recursion. We prove that both 2RDSA and 2SPSA-3 with our Hessian improvement scheme converge asymptotically to the true Hessian. The key advantage of 2RDSA and 2SPSA-3 is that they require only 75% of the per-iteration simulation cost of 2SPSA with improved Hessian estimation (2SPSA-IH) [29]. Numerical experiments show that 2RDSA-IH outperforms both 2SPSA-IH and 2RDSA without the improved Hessian estimation scheme.
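The simultaneous perturbation idea underlying these methods is easy to sketch at first order: two noisy function evaluations per iteration give a gradient estimate along a random ±1 direction. The sketch below implements plain first-order SPSA on a noisy quadratic; the gains, iteration count, and test objective are illustrative assumptions, and the thesis's 2RDSA/2SPSA variants additionally build Hessian estimates on top of such perturbations.

```python
import numpy as np

def spsa_minimize(noisy_f, theta0, a=0.1, c=0.1, n_iter=2000, seed=0):
    """Plain first-order SPSA: per iteration, two noisy function evaluations
    along a random +/-1 perturbation give a gradient estimate, followed by a
    gradient step with decaying gains (standard 0.602 / 0.101 exponents).
    Gain constants here are illustrative, not tuned values from the thesis."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # g_hat_i = (y(theta + ck*delta) - y(theta - ck*delta)) / (2*ck*delta_i)
        g_hat = (noisy_f(theta + ck * delta) -
                 noisy_f(theta - ck * delta)) / (2.0 * ck) / delta
        theta = theta - ak * g_hat
    return theta

# Noisy quadratic objective with minimiser at (1, -2).
obj_rng = np.random.default_rng(1)
def noisy_f(theta):
    return (theta[0] - 1.0) ** 2 + (theta[1] + 2.0) ** 2 \
        + 0.01 * obj_rng.standard_normal()

print(np.round(spsa_minimize(noisy_f, [5.0, 5.0]), 2))   # close to [1, -2]
```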
368

Optimal Supply Chain Configuration for the Additive Manufacturing of Biomedical Implants

Emelogu, Adindu Ahurueze 09 December 2016 (has links)
In this dissertation, we study two important problems related to additive manufacturing (AM). In the first part, we investigate the economic feasibility of using AM to fabricate biomedical implants at the sites of hospitals versus traditional manufacturing (TM). We propose a cost model to quantify the supply-chain-level costs associated with the production of biomedical implants using AM technology, and formulate the problem as a two-stage stochastic programming model, which determines the number of AM facilities to be established and the volume of product flow between manufacturing facilities and hospitals at a minimum cost. We use the sample average approximation (SAA) approach to obtain solutions to the problem for a real-world case study of hospitals in the state of Mississippi. We find that the ratio between the unit production costs of AM and TM (ATR), demand and product lead time are the key cost parameters that determine the economic feasibility of AM. In the second part, we investigate the AM facility deployment approaches, which affect both the supply chain network cost and the extent of benefits derived from AM. We formulate the supply chain network cost as a continuous approximation model and use optimization algorithms to determine how centralized or distributed the AM facilities should be and how much raw material these facilities should order so that the total network cost is minimized. We apply the cost model to a real-world case study of hospitals in 12 states of the southeastern USA. We find that the demand for biomedical implants in the region, the fixed investment cost of AM machines, the personnel cost of operating the machines and the transportation cost are the major factors that determine the optimal AM facility deployment configuration. In the last part, we propose an enhanced sample average approximation (eSAA) technique that improves the basic SAA method. The eSAA technique uses clustering and statistical techniques to overcome the sample size issue inherent in the basic SAA. Our results from extensive numerical experiments indicate that the eSAA can perform up to 699% faster than the basic SAA, thereby making it a competitive solution approach for large-scale stochastic optimization problems.
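The sample average approximation step can be illustrated on the textbook newsvendor problem: the expectation in the objective is replaced by an average over sampled demand scenarios, and the resulting deterministic problem is solved directly (here in closed form as an empirical quantile). The cost and price figures and the demand distribution below are hypothetical and unrelated to the dissertation's implant supply-chain model.

```python
import numpy as np

def saa_newsvendor(demand_samples, unit_cost=5.0, price=9.0):
    """Sample average approximation of the newsvendor problem: choose the
    order quantity q minimising the sample-average cost
    c*q - p*mean(min(q, D)).  The SAA optimum is the critical-ratio quantile
    of the empirical demand distribution.  Cost and price are toy values."""
    critical_ratio = (price - unit_cost) / price     # underage vs. overage
    return np.quantile(np.asarray(demand_samples, dtype=float), critical_ratio)

def sample_average_cost(q, demand_samples, unit_cost=5.0, price=9.0):
    d = np.asarray(demand_samples, dtype=float)
    return unit_cost * q - price * np.mean(np.minimum(q, d))

rng = np.random.default_rng(0)
scenarios = rng.lognormal(mean=4.0, sigma=0.5, size=5000)  # sampled demands
q_star = saa_newsvendor(scenarios)
print(round(q_star, 1), round(sample_average_cost(q_star, scenarios), 1))
```

Larger scenario sets make the sample-average objective a better stand-in for the true expectation, which is exactly the sample-size issue the eSAA enhancement targets.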
369

Least Squares in Sampling Complexity and Statistical Learning

Bartel, Felix 19 January 2024 (has links)
Data gathering is a constant in human history, with ever-increasing amounts in quantity and dimensionality. To get a feel for the data, make it interpretable, or find underlying laws, it is necessary to fit a function to the finite and possibly noisy data. In this thesis we focus on a method achieving this, namely least squares approximation. Its discovery dates back to around 1800 and it has since then proven to be an indispensable tool which is efficient and has the capability to achieve optimal error when used correctly. Crucial for the least squares method are the ansatz functions and the sampling points. To discuss them, we gather tools from probability theory, frame subsampling, and $L_2$-Marcinkiewicz-Zygmund inequalities. With these we give results in the worst-case or minimax setting, where a set of points is sought for approximating a class of functions, which we model as a generic reproducing kernel Hilbert space. Further, we give error bounds in the statistical learning setting for approximating individual functions from possibly noisy samples. Here, we include the covariate-shift setting as a subfield of transfer learning. In a natural way, a parameter choice question arises for balancing over- and underfitting effects. We tackle this by using the cross-validation score, for which we show a fast way of computing it as well as prove its quality.:1 Introduction 2 Least squares approximation 3 Reproducing kernel Hilbert spaces (RKHS) 4 Concentration inequalities 5 Subsampling of finite frames 6 L2 -Marcinkiewicz-Zygmund (MZ) inequalities 7 Least squares in the worst-case setting 8 Least squares in statistical learning 9 Cross-validation 10 Outlook
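One well-known instance of a fast cross-validation computation for least squares is the closed-form leave-one-out residual e_i/(1 − h_ii), which avoids refitting the model n times. The sketch below applies it to ridge-regularized polynomial least squares to pick a regularization parameter; the basis, data, and regularization path are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

def loocv_score(A, y, lam):
    """Leave-one-out cross-validation score for ridge-regularized least
    squares without refitting: the LOO residual at sample i equals
    e_i / (1 - h_ii), where H = A (A^T A + lam*I)^{-1} A^T is the hat matrix
    and e = y - H y is the ordinary residual."""
    m = A.shape[1]
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(m), A.T)
    residual = y - H @ y
    loo_residual = residual / (1.0 - np.diag(H))
    return np.mean(loo_residual ** 2)

# Toy setup: noisy samples of a smooth function, high-degree polynomial basis.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = np.cos(np.pi * x) + 0.1 * rng.standard_normal(x.size)
A = np.vander(x, N=15, increasing=True)              # degree-14 polynomials

lams = np.logspace(-8, 1, 20)
scores = [loocv_score(A, y, lam) for lam in lams]
print(f"selected regularization: {lams[int(np.argmin(scores))]:.1e}")
```

Scanning the score over a grid of regularization strengths is one simple way to balance over- and underfitting, the parameter choice question raised in the abstract.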
370

Modélisation mathématique et courbes de croissance

Mir, Youness January 2015 (has links)
Mathematical modeling is a tool widely used in many disciplines of the applied sciences. In hydrology, biology and economics, as well as in other areas of the natural, social and human sciences, resorting to mathematical modeling is an increasingly common practice. For example, in hydrology, several mathematical models are designed to describe or predict the relation between water levels and river discharges. In this thesis we are interested in developing new models for growth phenomena that require the presence of an increasing linear or curvilinear asymptote. To reach this objective, the basic idea was to take some of the models most widely used in practice and to modify them judiciously (and simply) so as to introduce either a linear or a curvilinear asymptote, while preserving their unique inflection point. The modification we introduce also preserves the simple and continuous character of these models as well as the smooth and increasing shape of their curves. We thus obtain models that meet the needs of modeling when the standard models fail.
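One illustrative way to give a classical growth model an increasing linear asymptote (an assumed form for illustration, not necessarily one of the thesis's models) is to replace the constant plateau of a logistic curve by a straight line, f(t) = (a + b·t)/(1 + e^{−c(t−d)}), which approaches a + b·t for large t. The sketch below fits this form to synthetic data with SciPy's curve_fit; all parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_linear_asymptote(t, a, b, c, d):
    """Logistic-type growth curve whose upper plateau is replaced by the
    increasing straight line a + b*t, approached as t becomes large.
    Illustrative form only, not a model taken from the thesis."""
    return (a + b * t) / (1.0 + np.exp(-c * (t - d)))

# Synthetic growth data generated from such a curve, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 80)
y = logistic_linear_asymptote(t, 2.0, 0.5, 0.8, 6.0) + 0.2 * rng.standard_normal(t.size)

popt, _ = curve_fit(logistic_linear_asymptote, t, y, p0=[1.0, 1.0, 1.0, 5.0])
print(np.round(popt, 2))   # parameters recovered near (2.0, 0.5, 0.8, 6.0)
```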
