321

Improved frequency domain measurement techniques for characterizing power amplifier and multipath environments

McKinley, Michael Dean 19 August 2008 (has links)
This work focuses on correcting the measurement inaccuracies to which models and figures of merit are susceptible in two wireless communication environments: power amplifier and multipath. To emulate or rate the performance of these environments, models and figures of merit, respectively, are often used. The usefulness of a model depends on how accurately and efficiently it emulates its real-world counterpart; the usefulness of a figure of merit depends on how accurately it represents system behavior. Discussions of the challenges and trade-offs faced in modeling nearly always focus on the complexity of the device or channel of interest and the resultant difficulty in describing it. Similarly, figures of merit are meant only to summarize the performance of the device or channel. At some point, however, either in the generation or the verification of a model or figure of merit, there is a dependence on measured data. Though the complexity and performance of the device or channel are challenges in themselves, other significant sources of distortion must be minimized to avoid errors in the measured data. This work considers the distortion unique to power amplifier and multipath environments and then eliminates the measurement errors that would obscure it. Specifically, three measurement issues are addressed: 1) identifying measurement setup artifacts, 2) achieving consistent measurement results, and 3) reducing variations in the environment. This work contributes to increasing the accuracy of microwave measurements used in modeling nonlinear high-power amplifiers and in figures of merit for power amplifiers and multipath channels.
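A commonly used power-amplifier figure of merit of the kind discussed in this abstract is the error vector magnitude (EVM). The sketch below is a minimal illustration only, not code from the thesis; the function name and the rms-reference normalization are assumptions.

```python
import numpy as np

def evm_percent(measured, reference):
    """Root-mean-square error vector magnitude between measured and ideal
    complex baseband symbols, in percent. Normalizing by the rms reference
    power is one common convention (assumed here); peak power is another."""
    err = measured - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))

# Toy usage: QPSK symbols distorted by additive noise.
rng = np.random.default_rng(1)
ref = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
meas = ref + 0.05 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
print(evm_percent(meas, ref))
```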
322

Stochastic modelling of financial time series with memory and multifractal scaling

Snguanyat, Ongorn January 2009 (has links)
Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still a subject of debate. These difficulties pose challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected in stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory, and heavy tails are pronounced in their probability densities. The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution results. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for models of this type is performed via least squares, and the models are applied to the AMEX stock prices, which were established in Part I to possess short memory. By selecting the kernel of the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. Equations of this type are used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis applies the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for the five states of Australia. Comparisons with the results obtained from R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
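As a rough sketch of the MF-DFA procedure described in this abstract (standard textbook steps, not the author's code; the scale and q grids below are arbitrary choices):

```python
import numpy as np

def mf_dfa(x, scales, qs, order=1):
    """Estimate generalized Hurst exponents h(q) from the scaling law
    F_q(s) ~ s^h(q). With order=1 and q=2 this reduces to standard DFA."""
    y = np.cumsum(x - x.mean())                     # profile of the series
    logF = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s).T    # one segment per column
        V = np.vander(np.arange(s), order + 1)      # polynomial detrending basis
        coef, *_ = np.linalg.lstsq(V, segs, rcond=None)
        f2 = ((segs - V @ coef) ** 2).mean(axis=0)  # per-segment residual variance
        for i, q in enumerate(qs):
            # q-th order fluctuation function; q = 0 uses a logarithmic average
            logF[i, j] = (0.5 * np.log(f2).mean() if q == 0
                          else np.log((f2 ** (q / 2)).mean()) / q)
    # h(q) is the slope of log F_q(s) against log s
    return [np.polyfit(np.log(scales), row, 1)[0] for row in logF]

# Sanity check: uncorrelated noise should give h(q) close to 0.5 for all q.
rng = np.random.default_rng(0)
print(mf_dfa(rng.standard_normal(10_000), scales=[16, 32, 64, 128, 256], qs=[-2, 0, 2]))
```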
323

3-D inversion of helicopter-borne electromagnetic data

Scheunert, Mathias 19 January 2016 (has links) (PDF)
In an effort to improve the accuracy of common 1-D analysis of frequency-domain helicopter-borne electromagnetic data at reasonable computing cost, a 3-D inversion approach is developed. The strategy is based on first localizing those parts of an entire helicopter-borne electromagnetic survey that are actually affected by expected local 3-D anomalies, and then inverting those sections of the survey separately (cut-&-paste strategy); the 3-D results can afterwards be reintegrated into the conventional 1-D analysis of the complementary full data set. The discrete forward problem, derived from the complete Helmholtz equation, is formulated in terms of the secondary electric field using the finite difference method. The analytical primary field calculation incorporates an interpolation strategy that makes it possible to handle the enormous number of transmitters effectively. For solving the inverse problem, a straightforward Gauss-Newton method and a Tikhonov-type regularization scheme are applied. In addition, different strategies for restricting the domain on which the inverse problem is solved serve as an implicit regularization. The resulting linear least squares problem is solved with Krylov-subspace methods, such as the LSQR algorithm, which are able to deal with the inherent ill-conditioning. As the helicopter-borne electromagnetic problem is characterized by a unique transmitter-receiver relation, an explicit representation of the Jacobian matrix is used; it is shown that this ansatz is the crucial component of the 3-D HEM inversion. Furthermore, a tensor-based formulation is introduced that provides a fast update of the linear system of the forward problem and an effective handling of the algebraic quantities related to the sensitivities. Based on a synthetic data set for a predefined model problem, different application examples demonstrate the principal functionality of the presented algorithm. Finally, the algorithm is applied to a data set obtained from a real field survey in the Northern German Lowlands.
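The regularized Gauss-Newton update described above maps naturally onto LSQR's built-in damping. The following is a minimal sketch under generic assumptions (forward() and jac() are hypothetical stand-ins for the secondary-field forward solver and the explicit Jacobian), not the thesis implementation:

```python
from scipy.sparse.linalg import lsqr

def gauss_newton_step(J, r, damp):
    """One Tikhonov-regularized Gauss-Newton step: solve
    min ||J dm - r||^2 + damp^2 ||dm||^2 for the model update dm.
    LSQR copes with the ill-conditioning of J without forming J^T J;
    its `damp` argument adds exactly this Tikhonov term."""
    return lsqr(J, r, damp=damp)[0]

# Hypothetical inversion loop:
# m = m0
# for _ in range(n_iter):
#     r = data - forward(m)          # misfit of the secondary-field solution
#     m = m + gauss_newton_step(jac(m), r, damp=0.1)
```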
324

Exact spacetimes in modified theories of gravity

Karamazov, Michal January 2017 (has links)
In the review part of the thesis we summarize various modified theories of gravity, especially those characterized by additional curvature invariants in the Lagrangian density. Further, we review non-twisting geometries, especially their Kundt subclass. Finally, from the principle of least action we derive the field equations for a Lagrangian density given by an arbitrary function of the curvature invariants. In the original part of the thesis we explicitly express particular components of the field equations for non-gyratonic Kundt geometry in generic quadratic gravity in arbitrary dimension. We then discuss how these, in general fourth-order, field equations restrict the Kundt metric in selected geometrically privileged situations. We also analyse the special case of Gauss-Bonnet theory.
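For orientation, a standard form of the quadratic-gravity action whose variation yields such fourth-order field equations is the following (the conventions here are generic and may differ from those of the thesis):

```latex
% Generic quadratic-gravity action in n dimensions (assumed conventions):
S = \int \mathrm{d}^n x \,\sqrt{-g}\,
    \left( \gamma\, R + a\, R^2 + b\, R_{ab} R^{ab} + c\, R_{abcd} R^{abcd} \right),
% Gauss--Bonnet theory is the special case (a, b, c) \propto (1, -4, 1):
\mathcal{G} \equiv R^2 - 4 R_{ab} R^{ab} + R_{abcd} R^{abcd}.
```

In four dimensions the Gauss-Bonnet invariant is purely topological and does not contribute to the field equations, which is one reason the Gauss-Bonnet case in arbitrary dimension merits the separate analysis mentioned above.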
325

Singular beam shaping from spin-orbit flat optics

Rafayelyan, Mushegh 03 May 2017 (has links)
It is well known that paraxial coherent electromagnetic fields can be completely characterized in terms of their radial and azimuthal spatial degrees of freedom in the transverse plane, which add to the polarization degree of freedom and the wavelength. In this work we address two main issues of paraxial beam shaping, modality and polychromaticity, in the context of flat optics, by introducing novel concepts of spin-orbit optical elements: the 'modal q-plate' and the 'Bragg-Berry q-plate'. On the one hand, the modal q-plate converts an incident fundamental Gaussian beam into a Laguerre-Gaussian beam of given radial and azimuthal indices, hence going beyond the capabilities of conventional q-plates, which only control the azimuthal degree of freedom, i.e. the orbital angular momentum content of light. Towards the experimental realization of modal q-plates, two approaches are developed: one based on artificially nanostructured glasses and another based on naturally self-organized liquid crystal topological defects. On the other hand, the Bragg-Berry q-plate consists of a mirror-backed inhomogeneous thin film of chiral liquid crystal (cholesteric) that provides fully efficient spin-orbit beam shaping over a broad spectral range of the incident beam, in contrast to conventional q-plates, which are designed for a single wavelength. Furthermore, ultra-broadband spin-orbit beam shaping is achieved by inducing an extra modulation of the supramolecular twisted structure of the cholesteric liquid crystal along the propagation direction. We also show that the presence of a back-mirror allows a powerful spatio-temporal control of the vectorial polarization properties of the light fields generated by the Bragg-Berry q-plate.
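The azimuthal shaping performed by conventional q-plates, which the modal q-plate generalizes, can be summarized by the standard spin-orbit relation below (a textbook form; conventions may differ from the thesis). A half-wave q-plate with topological charge q and offset angle α₀ flips the circular polarization and imprints a helical phase:

```latex
% Action of a half-wave q-plate on circular polarization unit vectors,
% in the paraxial regime (\varphi is the azimuthal angle):
\mathbf{e}_{\pm} \;\longmapsto\; e^{\pm i\,(2q\varphi + 2\alpha_0)}\,\mathbf{e}_{\mp}
```

Each photon thus acquires orbital angular momentum of ±2qħ while its spin is reversed.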
326

Transformation model selection by multiple hypotheses testing

Lehmann, Rüdiger 17 October 2016 (has links) (PDF)
Transformations between different geodetic reference frames are often performed by first determining the transformation parameters from control points. If we do not know in advance which of the numerous transformation models is appropriate, we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance to the case where constraints on such parameters are also tested, which provides more flexibility when setting up such a test. One can formulate a general model with a maximum number of transformation parameters and specialize it by adding constraints on those parameters that need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier; these are shown to outperform the more intuitive test statistics derived from misclosures. It is shown how model selection by multiple hypotheses testing relates to the use of information criteria like AICc and Mallows' Cp, which are based on an information-theoretic approach. Nevertheless, wherever the two approaches are comparable, the results of an exemplary computation almost coincide.
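For comparison with the information-criteria route mentioned above, AICc can be computed directly from a least-squares fit. This is a generic sketch using the standard formula for Gaussian residuals; the parameter counts and rss values are placeholders, not results from the paper:

```python
import numpy as np

def aicc(rss, n, k):
    """Corrected Akaike information criterion for a Gaussian least-squares
    model with n observations, k estimated parameters and residual sum of
    squares rss. Lower is better; the correction term penalizes
    overfitting when n/k is small."""
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical comparison of transformation models of increasing complexity
# (e.g. 3-, 4- and 7-parameter transformations) on n = 30 control points.
for k, rss in [(3, 4.2e-4), (4, 3.9e-4), (7, 3.7e-4)]:
    print(f"{k} parameters: AICc = {aicc(rss, n=30, k=k):.2f}")
```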
329

Quasi second-order methods for PDE-constrained forward and inverse problems

Zehnder, Jonas 05 1900 (has links)
Computer-aided design (CAD), visual effects, robotics and many other fields, such as computational biology and aerospace engineering, rely on the solution of mathematical problems. In most cases, computational methods are used to solve these problems. The choice and construction of the computational method have a large impact on the results and on computational efficiency. The structure of the problem can be used to create methods that are faster and produce qualitatively better results than methods that do not use the structure.
This thesis presents three articles with three new computational methods tackling partial differential equation (PDE) constrained simulation and optimization problems. In the first article, we tackle the problem of energy dissipation in common fluid solvers for visual effects. Fluid solvers are ubiquitously used to create effects in animated shorts and feature films. We present a time integration scheme for incompressible fluid dynamics which preserves energy better than many previous methods. The presented method has low overhead and can be integrated into a wide range of existing methods. The improved energy conservation leads to noticeably more dynamic animations. We then move on to computational design, whose goal is to harness computational techniques for the design process. Specifically, we look at sensitivity analysis, which computes the sensitivities of the simulation result with respect to the design parameters in order to automatically optimize the design. In this context, we present an efficient way to compute the Gauss-Newton search direction, leveraging modern sparse direct linear solvers. Our method greatly reduces the computational cost of the optimization process for a certain class of inverse design problems. Finally, we look at topology optimization using machine learning techniques. We ask two questions: Can we do mesh-free topology optimization, and can we learn a space of topology optimization solutions? We apply implicit neural representations and obtain structurally sensible results for mesh-free topology optimization by guiding the neural network during the optimization process and adapting methods from finite-element-based topology optimization. Our method produces a continuous representation of the density field. Additionally, we present learned solution spaces using the implicit neural representation.
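The sensitivity-based Gauss-Newton idea from the second article can be sketched generically. This is an illustrative direct-sensitivity scheme under assumed names (A, B, forward model), not the article's exact algorithm: for an equilibrium constraint A(p) x = b, a single sparse factorization of A is reused to build the sensitivity matrix, after which the search direction comes from a small dense solve.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

def gauss_newton_direction(A, B, r, reg=1e-8):
    """Given the sparse system matrix A of the simulation constraint
    A(p) x = b and the dense matrix B = d(A x - b)/dp, form the
    sensitivities S = dx/dp = -A^{-1} B with one reused LU factorization,
    then solve the Gauss-Newton normal system (S^T S) d = -S^T r.
    `reg` guards against rank deficiency."""
    lu = splu(csc_matrix(A))
    S = -lu.solve(B)                        # one solve per design parameter
    H = S.T @ S + reg * np.eye(S.shape[1])  # small dense Hessian approximation
    return np.linalg.solve(H, -S.T @ r)
```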
330

Computational modeling and design of nonlinear mechanical systems and materials

Tang, Pengbin 03 1900 (has links)
Nonlinear mechanical systems and materials are broadly used in diverse fields. However, their modeling and design are nontrivial, as they require a complete understanding of their internal nonlinearities and other phenomena. To enable efficient design, we must first introduce computational models that accurately characterize their complex behavior. Furthermore, new inverse design techniques are also required to capture how the behavior changes when we change the design parameters of nonlinear mechanical systems and materials. Therefore, in this thesis, we introduce three novel methods for the computational modeling and design of nonlinear mechanical systems and materials.
In the first article, we address the design problem of nonlinear mechanical systems exhibiting stable periodic motions in response to a periodic force. We present a computational method that utilizes a frequency-domain approach for dynamical simulation and powerful sensitivity analysis for design optimization in order to design compliant mechanical systems with large-amplitude oscillations. Our method is versatile and can be applied to various types of compliant mechanical systems. We validate its effectiveness by fabricating and evaluating several physical prototypes. Next, we focus on the computational modeling and mechanical characterization of contact-dominated nonlinear materials, particularly Discrete Interlocking Materials (DIM), which are generalized chainmail fabrics made of quasi-rigid interlocking elements. Unlike conventional elastic materials, for which deformation and restoring forces are directly coupled, the mechanics of DIM are governed by contacts between individual elements that give rise to anisotropic kinematic deformation constraints. To replicate the biphasic behavior of DIM without simulating expensive microscale structures, we introduce an efficient anisotropic strain-limiting method based on second-order cone programming (SOCP). Additionally, to comprehensively characterize the strong anisotropy, complex coupling, and other nonlinear phenomena of DIM, we introduce a novel homogenization approach for distilling macroscale deformation limits from microscale simulations and develop a data-driven macromechanical model for simulating DIM with homogenized deformation constraints.
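The biphasic "free until it locks" behavior targeted by strain limiting can be illustrated with a much simpler projection than the SOCP formulation used in the thesis. The sketch below clamps principal stretches isotropically and is only a crude stand-in for the anisotropic cone constraints described above:

```python
import numpy as np

def limit_stretch(F, smin=0.95, smax=1.05):
    """Project a deformation gradient F (2x2 or 3x3) so that its singular
    values, the principal stretches, stay inside [smin, smax]. Unlike the
    thesis' SOCP method, this ignores anisotropy and coupling."""
    U, s, Vt = np.linalg.svd(F)
    return U @ np.diag(np.clip(s, smin, smax)) @ Vt

# Example: a 10% uniaxial stretch is clamped back to the 5% limit.
print(limit_stretch(np.diag([1.10, 1.00])))
```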
