11

Numerical simulation of fracture in unreinforced masonry

Chaimoon, Krit, Civil & Environmental Engineering, Faculty of Engineering, UNSW January 2007 (has links)
The aims of this thesis are to study the fracture behaviour of unreinforced masonry, to carry out a limited experimental program on three-point bending (TPB) masonry panels and to develop a time-dependent fracture formulation for the study of mode I fracture in quasi-brittle materials. A micro-model for fracture in unreinforced masonry is developed using the concept of the discrete crack approach. All basic masonry failure modes are taken into account. To capture brick diagonal tensile cracking and masonry crushing, a linear compression cap is proposed, together with a criterion for defining it. The failure surfaces for bricks and brick-mortar interfaces are modelled using a Mohr-Coulomb failure surface with a tension cut-off and a linear compression cap. The fracture formulation, in nonholonomic rate form within a quasi-prescribed displacement approach, is based on a piecewise-linear constitutive law and takes the form of a so-called "linear complementarity problem" (LCP). The proposed model has been applied to simulating fracture in masonry shear walls and masonry TPB panels. An experimental program was undertaken to investigate the failure behaviour of masonry panels under TPB with relatively low-strength mortar. The basic material parameters were obtained from compression, TPB and shear tests on bricks, mortar and brick-mortar interfaces. The experimental results showed that the failure of masonry TPB panels is governed by both tensile and shear failure rather than tensile failure alone. The simulation of the masonry TPB tests compared well with the experimental results. In addition, the LCP fracture formulation is extended to study time-dependent mode I fracture in quasi-brittle materials. Two main sources of time dependence, the viscoelasticity of the bulk material and the rate-dependent crack opening, are taken into account. A simplified crack rate model is proposed to include the rate-dependent crack opening. The model is applied to predicting time-dependent crack growth in plain concrete beams under sustained loading. It captures the essential features, including the observed strength increase with loading rate, the load-deflection and load-CMOD responses, the deflection-time and CMOD-time curves, the predicted time to failure, and the stress distributions in the fracture zone.
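Each load step of such a piecewise-linear fracture formulation reduces to an LCP: find z >= 0 with w = Mz + q >= 0 and z^T w = 0. As a minimal illustration of the problem class (a projected Gauss-Seidel sketch on a toy matrix, not the thesis's nonholonomic rate formulation; M and q below are placeholders):

```python
import numpy as np

def pgs_lcp(M, q, iters=200, tol=1e-10):
    """Projected Gauss-Seidel for the LCP: z >= 0, w = M z + q >= 0, z^T w = 0.
    Converges e.g. for symmetric positive definite M; a sketch of the problem
    class, not the thesis's fracture solver."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        z_old = z.copy()
        for i in range(n):
            # residual of row i excluding the diagonal contribution
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
        if np.linalg.norm(z - z_old) < tol:
            break
    return z

# Toy example: M symmetric positive definite.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
z = pgs_lcp(M, q)
print(z, M @ z + q)  # z >= 0, w >= 0, componentwise complementary
```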
12

Modélisation dynamique d'un assemblage de floes rigides / Dynamics of an assembly of rigid ice floes

Rabatel, Matthias 23 November 2015 (has links)
In this thesis, we present a model describing the dynamics of a population of ice floes with arbitrary shapes and sizes, exposed to atmospheric and oceanic skin drag. The granular model is based on simplified momentum equations for ice floe motion between collisions and on the resolution of linear complementarity problems to deal with ice floe collisions. Between collisions, the motion of an individual ice floe satisfies the linear and angular momentum conservation equations, with classical formulations applied to account for atmospheric and oceanic skin drag. To deal with collisions before they lead to interpenetration, we solve a linear complementarity problem based on the Signorini condition and Coulomb's law. The nature of the contact is described through a constant coefficient of friction, as well as a coefficient of restitution describing the loss of kinetic energy during the collision; in the present version of the model, this coefficient is fixed. The model was validated using data obtained from the motion of interacting artificial wood floes in a test basin, as well as by comparing the behaviour of simulated floes with the expected behaviour in classical ice-drift and rigid-body collision scenarios. The results of simulations comprising a few hundred ice floes of various shapes and sizes, exposed to different forcing scenarios and under different configurations, are also discussed. They show that the progressive clustering of ice floes resulting from kinetic energy dissipation during collisions is well captured, and suggest a collisional regime of floe dispersion at small scales, different from the large-scale regime essentially driven by wind forcing.
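To make the collision treatment concrete: for a frictionless contact between two rigid floes, the instantaneous impulse implied by a restitution coefficient e has a closed form. A simplified sketch (no friction or rotation, so no LCP is needed here; the model above resolves the general frictional case through the Signorini/Coulomb complementarity problem):

```python
import numpy as np

def collision_impulse(m1, m2, v1, v2, n, e):
    """Frictionless normal impulse between two rigid bodies.
    n: unit contact normal from body 1 to body 2; e: restitution in [0, 1].
    A simplified sketch; the full model handles friction via an LCP."""
    v_rel = np.dot(v2 - v1, n)      # relative normal velocity (< 0 if approaching)
    if v_rel >= 0.0:
        return v1, v2               # already separating: no impulse
    j = -(1.0 + e) * v_rel / (1.0 / m1 + 1.0 / m2)
    return v1 - (j / m1) * n, v2 + (j / m2) * n

# Head-on collision of two equal-mass floes with e = 0.5
v1, v2 = collision_impulse(1.0, 1.0,
                           np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
                           np.array([1.0, 0.0]), e=0.5)
print(v1, v2)  # velocities exchange and shrink by the restitution factor
```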
13

Colourful linear programming / Programmation linéaire colorée

Sarrabezolles, Pauline 06 July 2015 (has links)
The colorful Carathéodory theorem, proved by Bárány in 1982, states the following. Given d+1 sets of points S1, ..., Sd+1 ⊆ R^d, each of them containing 0 in its convex hull, there exists a colorful set T containing 0 in its convex hull, i.e. a set T ⊆ S1 ∪ ... ∪ Sd+1 such that |T ∩ Si| ≤ 1 for all i and such that 0 ∈ conv(T). This result gave birth to several questions, some algorithmic and some more combinatorial. This thesis provides answers on both aspects.

The algorithmic questions raised by the colorful Carathéodory theorem concern, among other things, the complexity of finding a colorful set under the condition of the theorem, and more generally of deciding whether such a colorful set exists when the condition is not satisfied. In 1997, Bárány and Onn defined colorful linear programming as the set of algorithmic questions related to the colorful Carathéodory theorem; the two questions just mentioned come under colorful linear programming. This thesis aims at determining which cases of colorful linear programming are polynomial and which are harder. New complexity results are obtained, refining the set of undetermined cases. In particular, we discuss some combinatorial versions of the colorful Carathéodory theorem from an algorithmic point of view. Furthermore, we show that computing a Nash equilibrium in a bimatrix game is polynomially reducible to a colorful linear programming problem. Along the way, we found a new way to prove that a complementarity problem belongs to the PPAD class with the help of Sperner's lemma. Finally, we present a variant of the Bárány-Onn algorithm, which computes a colorful set T containing 0 in its convex hull, whose existence is ensured by the colorful Carathéodory theorem. Our algorithm makes a clear connection with the simplex algorithm, and after a slight modification it also coincides with the Lemke method, which computes a Nash equilibrium in a bimatrix game. The combinatorial question raised by the colorful Carathéodory theorem concerns the number of positively dependent colorful sets. Deza, Huang, Stephen, and Terlaky (Colourful simplicial depth, Discrete Comput. Geom., 35, 597-604 (2006)) conjectured that, when |Si| = d+1 for all i ∈ {1, ..., d+1}, there are always at least d²+1 colourful sets containing 0 in their convex hulls. We prove this conjecture with the help of combinatorial objects known as octahedral systems, of which we provide a thorough study.
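As a concrete illustration of the decision problem: testing whether 0 lies in the convex hull of a candidate colorful set is a linear feasibility problem, so a brute-force search over colorful sets can be written in a few lines. The sketch below uses scipy and is exponential in the number of colors; it is not the Bárány-Onn algorithm, and the example point sets are made up:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def zero_in_hull(points):
    """Check 0 in conv(points) via LP feasibility: find lambda >= 0 with
    sum(lambda) = 1 and sum(lambda_i * p_i) = 0."""
    pts = np.asarray(points, dtype=float)       # shape (k, d)
    k, d = pts.shape
    A_eq = np.vstack([pts.T, np.ones((1, k))])  # d equality rows + normalization
    b_eq = np.append(np.zeros(d), 1.0)
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * k, method="highs")
    return res.status == 0

def find_colorful_set(colors):
    """Brute force over one-point-per-color selections; exponential in the
    number of colors -- the complexity of doing better is exactly the
    question studied in the thesis."""
    for T in product(*colors):
        if zero_in_hull(T):
            return T
    return None

# d = 2, three color classes (general decision setting; the Carathéodory
# condition need not hold)
colors = [[(1, 0), (2, 1)], [(-1, 1), (0, 2)], [(-1, -1), (1, -2)]]
print(find_colorful_set(colors))
```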
14

Modification, development, application and computational experiments of some selected network, distribution and resource allocation models in operations research

Nyamugure, Philimon January 2017 (has links)
Thesis (Ph.D. (Statistics)) -- University of Limpopo, 2017 / Operations Research (OR) is a scientific method for developing quantitatively well-grounded recommendations for decision making. While it is true that it uses a variety of mathematical techniques, OR has a much broader scope. It is in fact a systematic approach to solving problems, which uses one or more analytical tools in the process of analysis. Over the years, OR has evolved through different stages. This study is motivated by new real-world challenges that demand efficiency and innovation, in line with the aims and objectives of OR – the science of better, as described by the OR Society of the United Kingdom. New real-world challenges are encountered on a daily basis from problems arising in the fields of water, energy, agriculture, mining, tourism, IT development, natural phenomena, transport, climate change, and other economic and societal requirements. To counter all these challenges, new techniques ought to be developed. The growth of global markets and the resulting increase in competition have highlighted the need for OR techniques to be improved. These developments, among other reasons, are an indication that new techniques are needed to improve the day-to-day running of organisations, regardless of size, type and location. The principal aim of this study is to modify and develop new OR techniques that can be used to solve emerging problems encountered in the areas of linear programming, integer programming, mixed integer programming, network routing and travelling salesman problems. Distribution models, resource allocation models, the travelling salesman problem, general linear mixed integer programming and other network problems that occur in real life have been modelled mathematically in this thesis. Most of these models belong to the NP-hard (non-deterministic polynomial) class of difficult problems; in other words, they are not known to be solvable in polynomial time (P), and no general-purpose algorithm for them is known. The thesis is divided into two major areas, namely: (1) network models and (2) resource allocation and distribution models. Under network models, five new techniques have been developed: the minimum weight algorithm for a non-directed network, the maximum reliability route in both non-directed and directed acyclic networks, the minimum spanning tree with index less than two, routing through 'k' specified nodes, and a new heuristic for the travelling salesman problem. Under the resource allocation and distribution models section, four new models have been developed: a unified approach to solve transportation and assignment problems, a transportation branch and bound algorithm for the generalised assignment problem, a new hybrid search method over the extreme points for solving a large-scale LP model with non-negative coefficients, and a heuristic for a mixed integer program using the characteristic equation approach. In most of the nine approaches developed in the thesis, efforts were made to compare the effectiveness of the new approaches with existing techniques, and improvements in problem solving were noted. However, it was difficult to compare some of the new techniques with existing ones because computational packages for the new techniques need to be developed first. This aspect will be the subject of future research on developing these techniques further.
It was concluded, with strong evidence, that the development of new OR techniques is a must if we are to counter the emerging problems faced by the world today. Key words: NP-hard problem, Network models, Reliability, Heuristic, Large-scale LP, Characteristic equation, Algorithm.
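For illustration only, here is the classical nearest-neighbour construction for the travelling salesman problem, a standard baseline of the kind new TSP heuristics are compared against; it is not the new heuristic developed in the thesis:

```python
import numpy as np

def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP construction: repeatedly visit the closest unvisited city.
    A classical baseline heuristic, not the method proposed in the thesis."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Random symmetric Euclidean instance
rng = np.random.default_rng(0)
pts = rng.random((6, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour = nearest_neighbour_tour(dist)
length = sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
print(tour, round(length, 3))
```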
15

Nonnegative matrix and tensor factorizations, least squares problems, and applications

Kim, Jingu 14 November 2011 (has links)
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite this success, NMF and NTF have been actively developed only in the last decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms for NMF and NTF are investigated based on a connection from the NMF and NTF problems to the nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe the typical structure of the NLS problems arising in NMF and NTF computation and to design a fast algorithm utilizing that structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable to rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations. Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. The l1-regularized linear regression, also known as the Lasso, has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by the observation that features or data items belonging to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented, along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
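The NMF/NLS connection described above is easy to sketch: alternating nonnegative least squares (ANLS) fixes one factor and solves an NLS problem for each column of the other. The baseline below uses scipy's classical active-set NLS solver; the thesis's contribution is a faster block principal pivoting solver for exactly these subproblems:

```python
import numpy as np
from scipy.optimize import nnls

def anls_nmf(X, r, iters=50, seed=0):
    """Alternating nonnegative least squares for X ~= W @ H with W, H >= 0.
    Each column/row subproblem is an NLS solve; a baseline sketch, not the
    accelerated block principal pivoting method of the thesis."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = np.zeros((r, n))
    for _ in range(iters):
        for j in range(n):                 # H: min ||W h - X[:, j]||, h >= 0
            H[:, j], _ = nnls(W, X[:, j])
        for i in range(m):                 # W: min ||H^T w - X[i, :]||, w >= 0
            W[i, :], _ = nnls(H.T, X[i, :])
    return W, H

X = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = anls_nmf(X, r=3)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # relative residual
```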
16

Exploiting contacts for interactive control of animated human characters

Jain, Sumit 30 June 2011 (has links)
One of the common research goals in disciplines such as computer graphics and robotics is to understand the subtleties of human motion and develop tools for recreating natural and meaningful motion. Physical simulation of virtual human characters is a promising approach since it provides a testbed for developing and testing control strategies required to execute various human behaviors. Designing generic control algorithms for simulating a wide range of human activities, which can robustly adapt to varying physical environments, has remained a primary challenge. This dissertation introduces methods for generic and robust control of virtual characters in an interactive physical environment. Our approach is to use the information of the physical contacts between the character and her environment in the control design. We leverage high-level knowledge of the kinematics goals and the interaction with the surroundings to develop active control strategies that robustly adapt to variations in the physical scene. For synthesizing intentional motion requiring long-term planning, we exploit properties of the physical model for creating efficient and robust controllers in an interactive framework. The control design leverages the reference motion capture data and the contact information with the environment for interactive long-term planning. Finally, we propose a compact soft contact model for handling contacts for rigid body virtual characters. This model aims at improving the robustness of existing control methods without adding any complexity to the control design and opens up possibilities for new control algorithms to synthesize agile human motion.
17

Convergence Analysis of Modulus Based Methods for Linear Complementarity Problems / Analiza konvergencije modulus metoda za probleme linearne komplementarnosti

Saeed Aboglida Saeed Abear 18 March 2019 (has links)
Linear complementarity problems (LCP) arise from linear or quadratic programming, and from a variety of other application problems, such as boundary problems, network equilibrium problems, contact problems, market equilibria problems, bimatrix games, etc. Recently, many researchers have focused on solving LCPs whose matrix has some special property, for example when it is an H+-matrix, since this property is a sufficient condition for the existence and uniqueness of the solution of the LCP. Generally speaking, solving an LCP can be approached from two essentially different perspectives. One of them includes the use of so-called direct methods, in the literature also known under the name pivoting methods. The other, and from our perspective the more interesting one, which we focus on in this thesis, is the iterative approach. Among the vast collection of iterative solvers, our choice was one particular class of modulus-based iterative methods. Since the class of modulus-based methods is itself diverse, it can be specialized further through the introduction and use of matrix splittings. The main goal of this thesis is to use the theory of H-matrices to prove convergence of the modulus-based multisplitting methods, and to use this new technique to analyze some important properties of iterative methods once convergence has been guaranteed.
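The modulus idea itself fits in a few lines: substituting z = |x| + x and w = Ω(|x| - x) turns the LCP into an absolute-value fixed-point equation. The sketch below uses the simplest choices (trivial splitting M = A, N = 0, and Ω = I); the splitting and multisplitting variants, and the H+-matrix convergence theory, are the subject of the thesis:

```python
import numpy as np

def modulus_lcp(A, q, iters=100, tol=1e-12):
    """Modulus fixed-point iteration for the LCP: z >= 0, w = A z + q >= 0,
    z^T w = 0.  With z = |x| + x and w = |x| - x (Omega = I), a fixed point
    of (A + I) x = (I - A)|x| - q solves the LCP.  A sketch of the method
    family, not the multisplitting variants analyzed in the thesis."""
    n = len(q)
    x = np.zeros(n)
    I = np.eye(n)
    for _ in range(iters):
        x_new = np.linalg.solve(A + I, (I - A) @ np.abs(x) - q)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    z = np.abs(x) + x
    return z, A @ z + q

A = np.array([[4.0, -1.0], [-1.0, 4.0]])   # an H+-matrix
q = np.array([-2.0, 1.0])
z, w = modulus_lcp(A, q)
print(z, w)  # z >= 0, w >= 0, z * w ~= 0 componentwise
```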
18

Ghosts and machines : regularized variational methods for interactive simulations of multibodies with dry frictional contacts

Lacoursière, Claude January 2007 (has links)
A time-discrete formulation of the variational principle of mechanics is used to provide a consistent theoretical framework for the construction and analysis of low-order integration methods. These are applied to mechanical systems subject to mixed constraints and dry frictional contacts and impacts---machines. The framework includes physics-motivated constraint regularization and stabilization schemes. This is done by adding potential energy and Rayleigh dissipation terms in the Lagrangian formulation used throughout. These terms explicitly depend on the value of the Lagrange multipliers enforcing constraints. Having finite energy, the multipliers are thus massless ghost particles. The main numerical stepping method produced with the framework is called SPOOK.

Variational integrators preserve physical invariants globally, exactly in some cases, approximately but within fixed global bounds for others. This makes it possible to produce realistic physical trajectories even with the low-order methods. These are needed in the solution of nonsmooth problems such as dry frictional contacts, and in addition they are computationally inexpensive. The combination of strong stability, low order, and the global preservation of invariants allows for large integration time steps without losing accuracy on the important and visible physical quantities. SPOOK is thus well-suited for interactive simulations, such as those commonly used in virtual environment applications, because it is fast, stable, and faithful to the physics.

New results include a stable discretization of highly oscillatory terms of constraint regularization; a linearly stable constraint stabilization scheme based on ghost potential and Rayleigh dissipation terms; a single-step, strictly dissipative, approximate impact model; a quasi-linear complementarity formulation of dry friction that is isotropic and solvable for any nonnegative value of friction coefficients; an analysis of a splitting scheme to solve frictional contact complementarity problems; a stable, quaternion-based rigid body stepping scheme and a stable linear approximation thereof. SPOOK includes all these elements. It is linearly implicit and linearly stable; it requires the solution of either one linear system of equations or one mixed linear complementarity problem per regular time step, and two of the same when an impact condition is detected. The changes in energy caused by constraints, impacts, and dry friction are all shown to be strictly dissipative in comparison with the free system. Since all regularization and stabilization parameters are introduced in the physics, they map directly onto physical properties and thus allow modeling of a variety of phenomena, such as constraint compliance, for instance.

Tutorial material is included for continuous and discrete-time analytic mechanics, quaternion algebra, complementarity problems, rigid body dynamics, constraint kinematics, and special topics in numerical linear algebra needed in the solution of the stepping equations of SPOOK.

The qualitative and quantitative aspects of SPOOK are demonstrated by comparison with a variety of standard techniques on well-known test cases, which are analyzed in detail. SPOOK compares favorably for all these examples. In particular, it handles ill-posed and degenerate problems seamlessly and systematically. An implementation suitable for large scale performance and accuracy testing is left for future work.
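The regularization idea has a familiar small-scale analogue: replace a hard constraint g(q) = 0 by a stiff potential g²/(2k) plus Rayleigh damping, and step with a low-order integrator. A toy sketch for a point mass on a regularized distance constraint (symplectic Euler; the compliance k and damping d are illustrative parameters, not SPOOK's stepping scheme):

```python
import numpy as np

def step(q, v, h, k=1e-4, d=1e-2, L=1.0, grav=9.81):
    """One symplectic-Euler step for a 2D unit point mass on a regularized
    distance constraint g(q) = |q| - L = 0.  The constraint force is
    -grad(g) * (g/k + d * gdot): a stiff spring plus Rayleigh dissipation,
    a toy analogue of ghost-potential regularization, not SPOOK itself."""
    r = np.linalg.norm(q)
    n = q / r                        # gradient of g(q) = |q| - L
    gval = r - L
    gdot = np.dot(n, v)
    f = -n * (gval / k + d * gdot)   # regularization + damping force
    v = v + h * (f + np.array([0.0, -grav]))  # velocity update first,
    q = q + h * v                             # then position: symplectic Euler
    return q, v

q, v = np.array([1.0, 0.0]), np.array([0.0, 0.0])
for _ in range(1000):
    q, v = step(q, v, h=0.001)
print(q, np.linalg.norm(q))  # the mass swings while staying near |q| = 1
```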
