21

Accumulation des biens, croissance et monnaie / Accumulation of goods, growth and money

Cayemitte, Jean-Marie 17 January 2014 (has links)
This thesis constructs a theoretical model that renews the traditional approach to market equilibrium. By introducing the principle of preference for quantity into the neoclassical paradigm, it optimally generates inventories within a competitive market. The results are important because they explain both the emergence of unsold goods and the existence of economic cycles. In addition, the thesis studies the optimal behavior of a monopolist whose market power depends not only on the quantity of displayed goods but also on the quantity of goods purchased. Contrary to the traditional assumption that the monopolist chooses the price or quantity that maximizes its profit, the monopolist here attracts demand through both the price and the quantity of displayed goods, via a generalized Lerner index (GLI). Whatever the market structure, the phenomenon of inventory accumulation appears in the economy. Furthermore, the model has the advantage of explicitly explaining impulse purchases, which economic theory has not yet addressed. To check the robustness of the results, the theoretical model is fitted to U.S. data. Because of the model's nonlinearity, the Gauss-Newton method is appropriate for analyzing the impact of consumers' preference for quantity on the production and accumulation of goods, and consequently on GDP forecasts. Finally, this thesis builds a two-country overlapping generations (OLG) model which extends the dynamic OLG equilibrium to a frictionless dynamic OLG gamma-equilibrium. Based on a cash-in-advance constraint, it derives the conditions for over-accumulation of capital and the welfare implications of capital mobility in a context where stocks of unsold goods accumulate.
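Since the Gauss-Newton method is the workhorse of the empirical part of this thesis, a minimal sketch may help fix ideas. The following Python example is purely illustrative: the exponential model, the parameter values, and the synthetic data are hypothetical stand-ins, not the U.S. series or the production-accumulation specification actually estimated in the thesis.

```python
import numpy as np

def gauss_newton(f, jac, theta0, x, y, tol=1e-8, max_iter=50):
    """Fit y ~ f(x, theta) by Gauss-Newton: at each step solve the
    linearized least-squares problem min ||J d - r|| for the update d,
    where r is the current residual and J the Jacobian of f in theta."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r = y - f(x, theta)                 # residual vector
        J = jac(x, theta)                   # Jacobian w.r.t. theta
        d, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta += d
        if np.linalg.norm(d) < tol:
            break
    return theta

# Hypothetical exponential-growth model, a stand-in for the thesis's equations.
f = lambda x, th: th[0] * np.exp(th[1] * x)
jac = lambda x, th: np.column_stack([np.exp(th[1] * x),
                                     th[0] * x * np.exp(th[1] * x)])

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 40)
y = 2.0 * np.exp(0.5 * x) + 0.1 * rng.standard_normal(x.size)
print(gauss_newton(f, jac, [1.0, 1.0], x, y))  # recovers roughly [2.0, 0.5]
```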
22

Optimisation of Active Microstrip Patch Antennas

Jacmenovic, Dennis, dennis_jacman@yahoo.com.au January 2004 (has links)
This thesis presents a study of impedance optimisation of active microstrip patch antennas at multiple frequency points. A single-layer aperture-coupled microstrip patch antenna has been optimised to match the source reflection coefficient of a transistor in designing an active antenna. The active aperture-coupled microstrip patch antenna was optimised to satisfy Global Positioning System (GPS) frequency specifications. A rudimentary aperture-coupled microstrip patch antenna consists of a rectangular antenna element etched on the top surface of two dielectric substrates. The substrates are separated by a ground plane, and a microstrip feed is etched on the bottom surface. A rectangular aperture in the ground plane provides coupling between the feed and the antenna element. This type of antenna, which conveniently isolates any circuit at the feed from the antenna element, is suitable for integrated circuit design and is simple to fabricate. An active antenna design couples an antenna directly to an active device, saving real estate and power. This thesis focuses on designing an aperture-coupled patch antenna directly coupled to a low-noise amplifier as part of the front end of a GPS receiver. In this work an in-house software package, dubbed ACP by its creator Dr Rod Waterhouse, for calculating aperture-coupled microstrip patch antenna performance parameters was linked to HP-EEsof, a microwave computer-aided design and simulation package by Hewlett-Packard. An ANSI C module in HP-EEsof was written to bind the two packages. This arrangement affords the user both the powerful analysis tools of HP-EEsof and the fast analysis of ACP for seamless system design. Moreover, the optimisation algorithms in HP-EEsof were employed to investigate which algorithms are best suited to optimising patch antennas. The active antenna design presented in this study dispenses with an input matching network, which is accomplished by designing the antenna to present the desired source termination to the transistor. It has been demonstrated that a dual-band microstrip patch antenna can be successfully designed to match the source reflection coefficient, avoiding the need to insert a matching network. Maximum power transfer in electrical circuits is accomplished by matching the impedance between entities, which is generally achieved with the use of a matching network. Passive matching networks employed in amplifier design generally consist of discrete components up to the low-GHz frequency range, or of distributed elements at higher frequencies. The source termination of a low-noise amplifier greatly influences its noise, gain and linearity, which are controlled by designing a suitable input matching network. Ten diverse search methods offered in HP-EEsof were used to optimise an active aperture-coupled microstrip patch antenna. This study has shown that algorithms based on randomised search techniques and the genetic algorithm provide the most robust performance. The optimisation results were used to design an active dual-band antenna.
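To illustrate the kind of multi-frequency reflection-coefficient objective such an optimisation minimises, here is a hedged Python sketch. The series-RLC input-impedance model, the target reflection coefficient, and the parameter bounds are all hypothetical stand-ins; the thesis used the ACP/HP-EEsof toolchain and real antenna models rather than this toy. The randomised search mirrors, in spirit only, the class of algorithms the study found most robust.

```python
import numpy as np

Z0 = 50.0                                      # reference impedance, ohms
gps_freqs = np.array([1.22760e9, 1.57542e9])   # GPS L2 and L1 carriers, Hz

def input_impedance(f, R, L, C):
    """Toy series-RLC stand-in for the patch antenna input impedance."""
    w = 2.0 * np.pi * f
    return R + 1j * (w * L - 1.0 / (w * C))

def objective(params, gamma_target):
    """Sum of |Gamma - Gamma_target|^2 over the frequency points, where
    Gamma = (Z - Z0) / (Z + Z0) is the input reflection coefficient."""
    R, L, C = params
    Z = input_impedance(gps_freqs, R, L, C)
    gamma = (Z - Z0) / (Z + Z0)
    return np.sum(np.abs(gamma - gamma_target) ** 2)

# Simple randomised search over (R, L, C); bounds are illustrative only.
rng = np.random.default_rng(1)
target = 0.3 + 0.2j          # hypothetical desired source reflection coefficient
best, best_cost = None, np.inf
for _ in range(20000):
    p = rng.uniform([1.0, 1e-9, 1e-13], [100.0, 50e-9, 10e-12])
    c = objective(p, target)
    if c < best_cost:
        best, best_cost = p, c
print(best, best_cost)
```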
23

Development Of Deterministic And Stochastic Algorithms For Inverse Problems Of Optical Tomography

Gupta, Saurabh 07 1900 (has links) (PDF)
Stable and computationally efficient reconstruction methodologies are developed to solve two important medical imaging problems which use near-infrared (NIR) light as the source of interrogation, namely diffuse optical tomography (DOT) and one of its variations, ultrasound-modulated optical tomography (UMOT). Since in both imaging modalities the system matrices are ill-conditioned owing to insufficient and noisy data, the emphasis in this work is on developing robust stochastic filtering algorithms which can handle measurement noise and also account for inaccuracies in the forward models through an appropriate assignment of process noise. However, we start by demonstrating the speed-up of a Gauss-Newton (GN) algorithm for DOT so that video-rate reconstruction from data recorded on a CCD camera is rendered feasible. Towards this, a computationally efficient linear iterative scheme is proposed to invert the normal equation of a Gauss-Newton scheme in the context of recovering the absorption coefficient distribution from DOT data, which involves the singular value decomposition (SVD) of the Jacobian matrix appearing in the update equation. This has sufficiently sped up the inversion that video-rate recovery of a time-evolving absorption coefficient distribution is demonstrated from experimental data. The SVD-based algorithm reduces the number of operations in image reconstruction from O(N³) to O(N²). The rest of the algorithms are based on different forms of stochastic filtering, wherein we arrive at a mean-square estimate of the parameters by computing their joint probability distributions conditioned on the measurements up to the current instant. Under this, the first algorithm developed uses a bootstrap particle filter which incorporates a quasi-Newton direction. Since keeping track of the Newton direction necessitates repetitive computation of the Jacobian, for all particle locations and for all time steps, we devised a faster update of the Jacobian to make the recovery computationally feasible. It is demonstrated, through analytical reasoning and numerical simulations, that the proposed scheme not only accelerates convergence but also yields substantially reduced sample variance in the estimates vis-à-vis the conventional bootstrap filter. Both accelerated convergence and reduced sample variance in the estimates are demonstrated in DOT optical parameter recovery using simulated and experimental data. In the next demonstration, a derivative-free variant of the pseudo-dynamic ensemble Kalman filter (PD-EnKF) is developed for DOT, wherein the size of the unknown parameter is reduced by representing the inhomogeneities through simple geometrical shapes. The optical parameter fields within the inhomogeneities are approximated via an expansion based on circular harmonics (CH, i.e. Fourier basis functions). The EnKF is then used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the PD-EnKF yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion.
Using fictitious, low-intensity Wiener noise processes in suitably constructed 'measurement' equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. In our numerical simulations we have considered both elliptical inclusions (two inhomogeneities) and inclusions with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross-sections of a cylinder, with the background absorption and (reduced) scattering coefficients chosen as μa = 0.01 mm⁻¹ and μs′ = 1.0 mm⁻¹, respectively. We also assume μa = 0.02 mm⁻¹ within the inhomogeneity (for the single-inhomogeneity case) and μa = 0.02 and 0.03 mm⁻¹ (for the two-inhomogeneity case). The reconstruction results from the PD-EnKF are shown to be consistently superior to those from a deterministic and explicitly regularized Gauss-Newton algorithm. We have also estimated the unknown parameters from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed data. The superiority of a modified version of the PD-EnKF, which uses an ensemble square-root filter, is also demonstrated in the context of UMOT by recovering the distribution of the mean-squared amplitude of vibration, related to the Young's modulus, in the ultrasound focal volume. Since the ability of a coherent light probe to pick up the overall optical path-length change is limited to modulo an optical wavelength, the individual displacements caused by the ultrasound forcing should be very small, say within a few angstroms. The sensitivity of the modulation depth to changes in these small displacements can be very small, especially when the region of interest is far removed from the source and detector. The contrast recovery of the unknown distribution in such cases can be seriously impaired when using a quasi-Newton scheme (e.g. the GN scheme), which crucially makes use of derivative information. The derivative-free gain-based Monte Carlo filter not only remedies this deficiency, but also provides a regularization-insensitive and computationally competitive alternative to the GN scheme. The inherent ability of a stochastic filter to accommodate the model error arising from a diffusion approximation of correlation transport may be cited as an added advantage in the context of the UMOT inverse problem. Finally, to speed up the forward solve of the partial differential equation (PDE) modeling photon transport in the context of UMOT, for which the PDE has time as a parameter, a spectral decomposition of the PDE operator is demonstrated. This allows the computation of the time-dependent forward solution in terms of the eigenfunctions of the PDE operator, which has sped up the forward solve and in turn rendered the UMOT parameter recovery computationally efficient.
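As an illustration of the SVD-based inversion of the Gauss-Newton normal equations described above, the following Python sketch applies Tikhonov filter factors through the factors of the Jacobian. The dense random Jacobian and the damping value are synthetic stand-ins; the thesis's actual Jacobian comes from the DOT diffusion forward model.

```python
import numpy as np

def gn_update_svd(J, residual, lam):
    """Gauss-Newton update solving (J^T J + lam I) dx = J^T r via the SVD
    J = U S V^T, which gives dx = V diag(s / (s^2 + lam)) U^T r. Reusing
    U, s, V across damping values is what makes repeated solves cheap."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    filt = s / (s ** 2 + lam)            # Tikhonov filter factors
    return Vt.T @ (filt * (U.T @ residual))

# Tiny synthetic test: recover x from y = J x with an ill-conditioned J.
rng = np.random.default_rng(2)
J = rng.standard_normal((60, 40)) @ np.diag(np.logspace(0, -6, 40))
x_true = rng.standard_normal(40)
y = J @ x_true
print(np.linalg.norm(gn_update_svd(J, y, 1e-8) - x_true))
```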
24

3-D inversion of helicopter-borne electromagnetic data

Scheunert, Mathias 19 January 2016 (has links) (PDF)
In an effort to improve the accuracy of common 1-D analysis for frequency-domain helicopter-borne electromagnetic data at reasonable computing cost, a 3-D inversion approach is developed. The strategy is based on first localizing those parts of an entire helicopter-borne electromagnetic survey that are actually affected by expected local 3-D anomalies, and then inverting those sections of the survey separately (cut-&-paste strategy). The discrete forward problem, adapted from the complete Helmholtz equation, is formulated in terms of the secondary electric field employing the finite difference method. The analytical primary field calculation incorporates an interpolation strategy that makes it possible to handle the enormous number of transmitters effectively. For solving the inverse problem, a straightforward Gauss-Newton method and a Tikhonov-type regularization scheme are applied. In addition, different strategies for restricting the domain on which the inverse problem is solved are used as an implicit regularization. The resulting linear least-squares problem is solved with Krylov-subspace methods, such as the LSQR algorithm, that are able to deal with the inherent ill-conditioning. As the helicopter-borne electromagnetic problem is characterized by a unique transmitter-receiver relation, an explicit representation of the Jacobian matrix is used; it is shown that this ansatz is the crucial component of the 3-D HEM inversion. Furthermore, a tensor-based formulation is introduced that provides a fast update of the linear system of the forward problem and an effective handling of the sensitivity-related algebraic quantities. Based on a synthetic data set for a predefined model problem, different application examples demonstrate the principal functionality of the presented algorithm. Finally, the algorithm is applied to a data set obtained from a real field survey in the Northern German Lowlands.
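A minimal sketch of the Tikhonov-regularized Gauss-Newton step solved with LSQR, as described above. The sparse random matrix is only a stand-in for the explicitly assembled HEM Jacobian, and the damping value is arbitrary; LSQR's built-in damping corresponds to the Tikhonov term without ever forming the normal equations.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def gn_step_lsqr(J, residual, beta):
    """One regularized Gauss-Newton step: solve
    min ||J dm - r||^2 + beta ||dm||^2 with LSQR, which copes with the
    inherent ill-conditioning without forming J^T J explicitly."""
    result = lsqr(J, residual, damp=np.sqrt(beta))
    return result[0]

# Synthetic stand-in for an explicit HEM Jacobian (n_data x n_cells).
rng = np.random.default_rng(3)
J = sp.random(500, 2000, density=0.01, random_state=3, format="csr")
m_true = rng.standard_normal(2000)
r = J @ m_true
dm = gn_step_lsqr(J, r, beta=1e-6)
print(np.linalg.norm(J @ dm - r) / np.linalg.norm(r))  # relative data misfit
```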
26

Quasi second-order methods for PDE-constrained forward and inverse problems

Zehnder, Jonas 05 1900 (has links)
Computer-aided design (CAD), visual effects, robotics and many other fields, such as computational biology and aerospace engineering, rely on the solution of mathematical problems. In most cases, computational methods are used to solve these problems. The choice and construction of the computational method have a large impact on the results and on computational efficiency. The structure of the problem can be used to create methods that are faster and produce qualitatively better results than methods that ignore that structure.
This thesis presents three articles with three new computational methods tackling partial differential equation (PDE) constrained simulation and optimization problems. In the first article, we tackle the problem of energy dissipation in common fluid solvers for visual effects. Fluid solvers are used ubiquitously to create effects in animated shorts and feature films. We present a time integration scheme for incompressible fluid dynamics which preserves energy better than many previous methods. The presented method has low overhead and can be integrated into a wide range of existing methods. The improved energy conservation leads to noticeably more dynamic animations. We then move on to computational design, whose goal is to harness computational techniques for the design process. Specifically, we look at sensitivity analysis, which computes the sensitivities of the simulation result with respect to the design parameters in order to optimize the design automatically. In this context, we present an efficient way to compute the Gauss-Newton search direction, leveraging modern sparse direct linear solvers; our method greatly reduces the computational cost of the optimization process for a certain class of inverse design problems. Finally, we look at topology optimization using machine learning techniques. We ask two questions: can we do mesh-free topology optimization, and can we learn a space of topology optimization solutions? We apply implicit neural representations and obtain structurally sensible results for mesh-free topology optimization by guiding the neural network during the optimization process and adapting methods from finite-element-based topology optimization. Our method produces a continuous representation of the density field. Additionally, we present learned solution spaces using the implicit neural representation.
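The Gauss-Newton search direction of the second article can be sketched as follows, assuming an equilibrium constraint f(x, p) = 0 with a sparse system matrix H = ∂f/∂x and an objective 0.5‖x(p) − x_target‖². All matrices here are synthetic stand-ins, and scipy's SuperLU plays the role of the modern sparse direct solvers the article leverages; the key point is that a single factorization of H is reused for every parameter column.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def gauss_newton_direction(H, f_p, r):
    """Gauss-Newton direction for an equilibrium-constrained objective
    0.5 ||x(p) - x_target||^2. By the implicit function theorem the
    sensitivity is dx/dp = -H^{-1} f_p; one sparse LU factorization of H
    is reused for all m parameter columns, then the small m x m
    Gauss-Newton system (dx/dp)^T (dx/dp) dp = -(dx/dp)^T r is solved."""
    lu = splu(sp.csc_matrix(H))
    dxdp = -lu.solve(f_p)                    # dense n x m sensitivity matrix
    return np.linalg.solve(dxdp.T @ dxdp, -dxdp.T @ r)

# Tiny synthetic example: sparse SPD "stiffness" H, 3 design parameters.
rng = np.random.default_rng(4)
n, m = 200, 3
A = sp.random(n, n, density=0.02, random_state=4)
H = (A @ A.T + sp.identity(n) * n).tocsc()   # well-conditioned sparse matrix
f_p = rng.standard_normal((n, m))            # derivative of f w.r.t. parameters
r = rng.standard_normal(n)                   # residual x(p) - x_target
print(gauss_newton_direction(H, f_p, r))
```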
27

Computational modeling and design of nonlinear mechanical systems and materials

Tang, Pengbin 03 1900 (has links)
Nonlinear mechanical systems and materials are broadly used in diverse fields. However, their modeling and design are nontrivial, as they require a complete understanding of their internal nonlinearities and other phenomena. To enable efficient design, we must first introduce computational models to characterize their complex behavior accurately. Furthermore, new inverse design techniques are also required to capture how the behavior changes when we change the design parameters of nonlinear mechanical systems and materials. Therefore, in this thesis, we introduce three novel methods for the computational modeling and design of nonlinear mechanical systems and materials.
In the first article, we address the design problem of nonlinear mechanical systems exhibiting stable periodic motions in response to a periodic force. We present a computational method that utilizes a frequency-domain approach for dynamical simulation and powerful sensitivity analysis for design optimization, in order to design compliant mechanical systems with large-amplitude oscillations. Our method is versatile and can be applied to various types of compliant mechanical systems. We validate its effectiveness by fabricating and evaluating several physical prototypes. Next, we focus on the computational modeling and mechanical characterization of contact-dominated nonlinear materials, particularly Discrete Interlocking Materials (DIM), which are generalized chainmail fabrics made of quasi-rigid interlocking elements. Unlike conventional elastic materials, for which deformation and restoring forces are directly coupled, the mechanics of DIM are governed by contacts between individual elements that give rise to anisotropic kinematic deformation constraints. To replicate the biphasic behavior of DIM without simulating expensive microscale structures, we introduce an efficient anisotropic strain-limiting method based on second-order cone programming (SOCP). Additionally, to comprehensively characterize the strong anisotropy, complex coupling, and other nonlinear phenomena of DIM, we introduce a novel homogenization approach for distilling macroscale deformation limits from microscale simulations and develop a data-driven macromechanical model for simulating DIM with homogenized deformation constraints.
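In the spirit of the SOCP-based anisotropic strain limiting described above, here is a toy Python/cvxpy sketch for a single 2-D triangle with hypothetical material directions and stretch limits. The thesis's formulation operates on full meshes of interlocking elements, not this reduced setting; the sketch only shows how a stretch bound along a material direction becomes a second-order cone constraint.

```python
import numpy as np
import cvxpy as cp

# Rest-shape and deformed vertices of one triangle (deformation violates limits).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])      # rest vertices
x_def = np.array([[0.0, 0.0], [1.8, 0.1], [0.1, 0.6]])  # over-stretched vertices

Dm = (X[1:] - X[0]).T            # rest edge matrix (2x2)
Dm_inv = np.linalg.inv(Dm)

x = cp.Variable((3, 2))
# Deformation gradient F = Ds Dm^{-1}, with Ds the deformed edge matrix.
F = (x[1:] - cp.vstack([x[0], x[0]])).T @ Dm_inv

u = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # material directions (toy)
s = [1.2, 1.05]                                    # anisotropic stretch limits

# ||F u_k|| <= s_k is a second-order cone constraint for each direction.
constraints = [cp.norm(F @ d, 2) <= lim for d, lim in zip(u, s)]
# Project the violating configuration onto the feasible set.
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - x_def)), constraints)
prob.solve()
print(np.round(x.value, 3))
```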
