431

A study on the acoustic performance of tramway low-height noise barriers : gradient-based numerical optimization and experimental approaches / Étude de la performance acoustique des écrans antibruit de faible hauteur pour le tramway : optimisation numérique par méthode de gradient et approches expérimentales

Jolibois, Alexandre 25 November 2013 (has links)
Noise has become a major nuisance in urban areas, to the point that, according to the World Health Organization, 40% of the European population is exposed to excessive noise levels, mainly due to ground transportation. There is therefore a need to find new ways to mitigate noise in urban areas. In this work, a possible device to achieve this goal is studied: a low-height noise barrier. It consists of a barrier, typically less than one meter high, placed close to a source and designed to decrease the noise level for nearby pedestrians and cyclists. This type of device is studied both numerically and experimentally. Tramway noise barriers are studied in particular, since in this case the noise sources are very close to the ground and can therefore be attenuated efficiently. The shape and the surface treatment of the barrier are optimized using a gradient-based method coupled to a 2D boundary element method (BEM). The optimization variables are the node coordinates of a control mesh and the parameters describing the surface impedance. Sensitivities are calculated efficiently using the adjoint-state approach. Numerical results show that the shapes generated by the optimization algorithm tend to be quite irregular but provide a significant improvement of more than 5 dB(A) compared to simpler shapes. Using an absorbing treatment on the source side of the barrier is shown to be effective as well; this second point has been confirmed by scale-model measurements. In addition, a full-scale low-height noise barrier prototype has been built and tested in situ along a tramway track in Grenoble. Measurements show that the device provides more than 10 dB(A) of attenuation for a close receiver located at the typical height of human ears. These results therefore seem to confirm the applicability of such devices to efficiently decrease noise exposure in urban areas.
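The adjoint-state gradient computation described above can be illustrated with a small, self-contained analogue: a linear "state" system whose operator depends on a few design variables, an objective evaluated from the state, and a gradient obtained from one extra adjoint solve. The matrices, objective, and design parameters below are illustrative stand-ins, not the thesis's BEM model or its insertion-loss functional.

```python
# Minimal sketch of adjoint-state sensitivities for a design-dependent linear
# system; the matrices, objective, and design variables are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
A0 = np.eye(n) * 5 + rng.standard_normal((n, n)) * 0.1
Ai = [rng.standard_normal((n, n)) * 0.1 for _ in range(p)]  # dA/dx_i
f = rng.standard_normal(n)
d = rng.standard_normal(n)

def assemble(x):
    """State operator A(x) = A0 + sum_i x_i * A_i (linear in the design x)."""
    return A0 + sum(xi * Aim for xi, Aim in zip(x, Ai))

def objective_and_gradient(x):
    """J(x) = 0.5 ||u(x) - d||^2 with A(x) u = f; gradient from one adjoint solve."""
    A = assemble(x)
    u = np.linalg.solve(A, f)            # forward (state) solve
    r = u - d
    J = 0.5 * r @ r
    lam = np.linalg.solve(A.T, r)        # adjoint solve: A^T lam = dJ/du
    grad = np.array([-(lam @ (Aim @ u)) for Aim in Ai])  # dJ/dx_i = -lam^T (dA/dx_i) u
    return J, grad

# Plain gradient descent on the design variables.
x = np.zeros(p)
for _ in range(100):
    J, g = objective_and_gradient(x)
    x -= 0.5 * g
print("final objective:", J)
```

The point of the adjoint construction is that the cost of the gradient is one extra linear solve, independent of the number of design variables; in the thesis this is what makes optimizing many control-node coordinates and impedance parameters affordable.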
432

Eigenvalue Algorithms for Symmetric Hierarchical Matrices / Eigenwert-Algorithmen für Symmetrische Hierarchische Matrizen

Mach, Thomas 05 April 2012 (has links) (PDF)
This thesis is on the numerical computation of eigenvalues of symmetric hierarchical matrices. The numerical algorithms used for this computation are derivations of the LR Cholesky algorithm, the preconditioned inverse iteration, and a bisection method based on LDLT factorizations. The investigation of QR decompositions for H-matrices leads to a new QR decomposition. It has some properties that are superior to the existing ones, which is shown by experiments. Using the H-QR decomposition to build a QR (eigenvalue) algorithm for H-matrices does not, however, lead to a more efficient algorithm than the LR Cholesky algorithm. The implementation of the LR Cholesky algorithm for hierarchical matrices, together with deflation and shift strategies, yields an algorithm that requires O(n) iterations to find all eigenvalues. Unfortunately, the local ranks of the iterates show a strong growth in the first steps. These H-fill-ins make the computation expensive, so that O(n³) flops and O(n²) storage are required. Theorem 4.3.1 explains this behavior and shows that the LR Cholesky algorithm is efficient for the simply structured Hl-matrices. There is an exact LDLT factorization for Hl-matrices and an approximate LDLT factorization for H-matrices, both of linear-polylogarithmic complexity. These factorizations can be used to compute the inertia of an H-matrix. With the knowledge of the inertia for arbitrary shifts, one can compute an eigenvalue by bisection. The slicing-the-spectrum algorithm can compute all eigenvalues of an Hl-matrix in linear-polylogarithmic complexity; a single eigenvalue can be computed in O(k²n log⁴ n). Since the LDLT factorization for general H-matrices is only approximate, the accuracy of the LDLT slicing algorithm is limited. The local ranks of the LDLT factorization for indefinite matrices are generally unknown, so that no statement on the complexity of the algorithm can be made beyond the numerical results in Table 5.7. The preconditioned inverse iteration computes the smallest eigenvalue and the corresponding eigenvector. This method is efficient, since the number of iterations is independent of the matrix dimension. If eigenvalues other than the smallest are sought, preconditioned inverse iteration cannot simply be applied to the shifted matrix, since positive definiteness is required; the squared and shifted matrix (M − μI)² is positive definite, however, so inner eigenvalues can be computed by combining the folded spectrum method with PINVIT. Numerical experiments show that the approximate inversion of (M − μI)² is more expensive than the approximate inversion of M, so that the computation of inner eigenvalues is more expensive. We compare the different eigenvalue algorithms. The preconditioned inverse iteration for hierarchical matrices is better than the LDLT slicing algorithm for the computation of the smallest eigenvalues, especially if the inverse is already available. The computation of inner eigenvalues with the folded spectrum method and preconditioned inverse iteration is more expensive; the LDLT slicing algorithm is competitive with H-PINVIT for the computation of inner eigenvalues. In the case of large, sparse matrices, specially tailored algorithms for sparse matrices, like the MATLAB function eigs, are more efficient. If one wants to compute all eigenvalues, the LDLT slicing algorithm seems to be better than the LR Cholesky algorithm. If the matrix is small enough to be handled in dense arithmetic (and is not an Hl(1)-matrix), then dense eigensolvers, like the LAPACK function dsyev, are superior. H-PINVIT and the LDLT slicing algorithm require only an almost linear amount of storage, so they can handle larger matrices than eigenvalue algorithms for dense matrices. For Hl-matrices of local rank 1, the LDLT slicing algorithm and the LR Cholesky algorithm need almost the same time for the computation of all eigenvalues; for large matrices, both algorithms are faster than the dense LAPACK function dsyev.
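The slicing-the-spectrum idea, counting eigenvalues below a shift from the inertia of an LDLT factorization and then bisecting on the shift, can be sketched in a few lines for a dense symmetric matrix; a hierarchical-matrix implementation would replace the dense factorization with the (approximate) H-LDLT factorization of linear-polylogarithmic cost. The test matrix and tolerance below are illustrative.

```python
# Dense sketch of the LDL^T slicing (bisection) eigenvalue algorithm described
# above; a real H-matrix implementation would use a hierarchical LDL^T
# factorization instead of scipy's dense one. The test matrix is illustrative.
import numpy as np
from scipy.linalg import ldl

def count_eigs_below(M, mu):
    """nu(mu): number of eigenvalues of symmetric M below the shift mu,
    read off from the inertia of the LDL^T factorization of M - mu*I
    (Sylvester's law of inertia)."""
    _, D, _ = ldl(M - mu * np.eye(M.shape[0]))
    # D is block diagonal with 1x1 and 2x2 blocks; its eigenvalues carry the inertia.
    return int(np.sum(np.linalg.eigvalsh(D) < 0))

def kth_eigenvalue(M, k, tol=1e-10):
    """Approximate the k-th smallest eigenvalue (k = 1, 2, ...) by bisection."""
    # Gershgorin bounds give a start interval [a, b] containing the spectrum.
    radii = np.sum(np.abs(M), axis=1) - np.abs(np.diag(M))
    a, b = np.min(np.diag(M) - radii), np.max(np.diag(M) + radii)
    while b - a > tol:
        mid = 0.5 * (a + b)
        if count_eigs_below(M, mid) >= k:
            b = mid          # at least k eigenvalues lie below mid
        else:
            a = mid
    return 0.5 * (a + b)

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
M = 0.5 * (A + A.T)          # symmetric test matrix
print(kth_eigenvalue(M, 3), np.sort(np.linalg.eigvalsh(M))[2])
```

Each shift costs one factorization, and the eigenvalues can be sliced independently of each other, which is what makes the method easy to parallelize.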
433

Fast algorithms for frequency domain wave propagation

Tsuji, Paul Hikaru 22 February 2013 (has links)
High-frequency wave phenomena are observed in many physical settings, most notably in acoustics, electromagnetics, and elasticity. In all of these fields, numerical simulation and modeling of the forward propagation problem is important to the design and analysis of many systems; a few examples which rely on these computations are the development of metamaterial technologies and geophysical prospecting for natural resources. There are two modes of modeling the forward problem: the frequency domain and the time domain. As the title states, this work is concerned with the former regime. The difficulty of solving the high-frequency wave propagation problem accurately lies in the large number of degrees of freedom required. Conventional wisdom in the computational electromagnetics community suggests that about 10 degrees of freedom per wavelength be used in each coordinate direction to resolve each oscillation. If K is the width of the domain in wavelengths, the number of unknowns N grows at least as O(K²) for surface discretizations and O(K³) for volume discretizations in 3D. The memory requirements and asymptotic complexity estimates of direct algorithms such as the multifrontal method are too costly for such problems; thus, iterative solvers must be used. In this dissertation, I will present fast algorithms which, in conjunction with GMRES, allow the solution of the forward problem in O(N) or O(N log N) time.
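To make the scaling concrete, a short back-of-the-envelope computation (with an illustrative domain size, not one taken from the dissertation) shows how quickly the unknown count outgrows direct solvers:

```python
# Back-of-the-envelope unknown counts for the 10-points-per-wavelength rule
# quoted above; the domain size K is an illustrative choice.
K = 100                      # domain width in wavelengths
ppw = 10                     # degrees of freedom per wavelength per direction
N_surface = (ppw * K) ** 2   # surface discretization in 3D: O(K^2)
N_volume = (ppw * K) ** 3    # volume discretization in 3D: O(K^3)
print(f"surface unknowns ~ {N_surface:.1e}")   # ~1e6
print(f"volume  unknowns ~ {N_volume:.1e}")    # ~1e9
# A dense direct solve at O(N^3) flops for the volume case would need ~1e27
# operations, which is why O(N) or O(N log N) iterative methods are needed.
print(f"dense solve flops (volume) ~ {float(N_volume) ** 3:.1e}")
```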
434

Conventional and Reciprocal Approaches to the Forward and Inverse Problems of Electroencephalography

Finke, Stefan 03 1900 (has links)
The inverse problem of electroencephalography (EEG) is the localization of current sources within the brain using surface potentials on the scalp generated by these sources. An inverse solution typically involves multiple calculations of scalp surface potentials, i.e., the EEG forward problem. To solve the forward problem, models are needed for both the underlying source configuration, the source model, and the surrounding tissues, the head model. This thesis treats two distinct approaches for the resolution of the EEG forward and inverse problems using the boundary-element method (BEM): the conventional approach and the reciprocal approach. The conventional approach to the forward problem entails calculating the surface potentials starting from source current dipoles. The reciprocal approach, on the other hand, first solves for the electric field at the source dipole locations when the surface electrodes are reciprocally energized with a unit current. A scalar product of this electric field with the source dipoles then yields the surface potentials. The reciprocal approach promises a number of advantages over the conventional approach, including the possibility of increased surface potential accuracy and decreased computational requirements for inverse solutions. In this thesis, the BEM equations for the conventional and reciprocal approaches are developed using a common weighted-residual formulation. The numerical implementation of both approaches to the forward problem is described for a single-dipole source model. A three-concentric-spheres head model is used for which analytic solutions are available. Scalp potentials are calculated at either the centroids or the vertices of the BEM discretization elements used. The performance of the conventional and reciprocal approaches to the forward problem is evaluated for radial and tangential dipoles of varying eccentricities and two widely different skull conductivities. We then determine whether the potential advantages of the reciprocal approach suggested by forward problem simulations can be exploited to yield more accurate inverse solutions. Single-dipole inverse solutions are obtained using simplex minimization for both the conventional and reciprocal approaches, each with centroid and vertex options. Again, numerical simulations are performed on a three-concentric-spheres model for radial and tangential dipoles of varying eccentricities. The inverse solution accuracy of both approaches is compared for the two different skull conductivities, and their relative sensitivity to skull conductivity errors and noise is assessed. While the conventional vertex approach yields the most accurate forward solutions for a presumably more realistic skull conductivity value, both conventional and reciprocal approaches exhibit large errors in scalp potentials for highly eccentric dipoles. The reciprocal approaches produce the least variation in forward solution accuracy for different skull conductivity values. In terms of single-dipole inverse solutions, conventional and reciprocal approaches demonstrate comparable accuracy. Localization errors are low, even for the highly eccentric dipoles that produce large errors in scalp potentials, owing to the nonlinear nature of the single-dipole inverse solution. Both approaches are also found to be equally robust to skull conductivity errors in the presence of noise. Finally, a more realistic head model is obtained using magnetic resonance imaging (MRI), from which the scalp, skull, and brain/cerebrospinal fluid (CSF) surfaces are extracted. The two approaches are validated on this type of model using actual somatosensory evoked potentials (SEPs) recorded following median nerve stimulation in healthy subjects. The inverse solution accuracy of the conventional and reciprocal approaches and their variants, when compared to known anatomical landmarks on MRI, is again evaluated for the two different skull conductivities. Their respective advantages and disadvantages, including computational requirements, are also assessed. Once again, conventional and reciprocal approaches produce similarly small dipole position errors. Indeed, position errors for single-dipole inverse solutions are inherently robust to inaccuracies in forward solutions, but dependent on the overlapping activity of other neural sources. Against expectations, the reciprocal approaches do not improve dipole position accuracy when compared to the conventional approaches. However, significantly smaller time and storage requirements are the principal advantages of the reciprocal approaches. This type of localization is potentially useful in the planning of neurosurgical interventions, for example, in patients with refractory focal epilepsy in whom EEG and MRI are often already performed.
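Once the reciprocal electric field at a dipole site is available, the reciprocal forward step reduces to a scalar product per electrode pair; a minimal sketch with made-up numbers (the BEM computation of the field itself is not shown) is:

```python
# Minimal sketch of the reciprocal forward step described above: scalp
# potential differences obtained as the scalar product of the dipole moment
# with the electric field produced at the dipole site when each electrode
# pair is energized with a unit current. Values are illustrative; computing
# E_rec itself requires the BEM head model.
import numpy as np

I_rec = 1.0                                   # reciprocally injected current (A)
# Reciprocal electric field at the dipole location, one 3-vector per
# electrode pair (hypothetical values standing in for a BEM solution).
E_rec = np.array([[0.12, -0.03, 0.05],
                  [0.02,  0.10, -0.01],
                  [-0.07, 0.04,  0.09]])      # V/m per injected ampere
p = np.array([5e-9, 0.0, 2e-9])               # dipole moment (A·m)

# Potential difference seen by each electrode pair (up to the sign convention
# adopted in the reciprocity relation).
V_pairs = E_rec @ p / I_rec
print(V_pairs)
```

Because the reciprocal fields depend only on the electrode montage and head model, they can be precomputed once and reused for every trial dipole during an inverse search, which is where the reduced computational requirements come from.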
435

Boundary integral equation methods in eigenvalue problems of elastodynamics and thin plates /

Kitahara, Michihiro. January 1985 (has links)
Thesis (Ph. D.)--Kyoto University, 1984. / Includes bibliographical references and indexes.
436

Réponse élastodynamique d'une plaque stratifiée anisotrope : approches comparées. : Vers le développement de méthodes hybrides. / Elastodynamic response of a layered anisotropic plate : comparative approaches. : Towards the development of hybrid methods

Mora, Pierric 17 December 2015 (has links)
This work addresses the direct problem of the propagation of an elastodynamic field radiated by a source in an anisotropic layered medium. Applications concern the non-destructive evaluation of composite plates by ultrasonic guided waves. At low frequencies, these materials can be modeled as homogeneous, anisotropic and dissipative media. Two causal approaches are studied and developed to solve the wave equation, and their interest is discussed with respect to the widely used harmonic modal method. The first method is modal and is formulated directly in the time domain. It handles anisotropy easily, even in 3D; however, it also suffers from classical shortcomings such as the high cost of the unestablished regime and the difficulty of dealing with open waveguides. The second method is a formulation of the so-called partial-waves method in the Laplace domain. Its attractiveness lies in its versatility and in the fact that its computational cost can be kept very reasonable. We then consider using both methods to solve problems of diffraction by defects. A boundary element method based on the partial-waves approach is developed and solves the case of a planar defect very efficiently. The possibility of treating more general defects is briefly discussed.
437

Studies On The Viability Of The Boundary Element Method For The Real-Time Simulation Of Biological Organs

Kirana Kumara, P 22 August 2016 (has links) (PDF)
Realistic and real-time computational simulation of biological organs (e.g., human kidneys, human liver) is a necessity when one tries to build a quality surgical simulator that can simulate surgical procedures involving these organs. Currently, deformable models, spring-mass models, or finite element models are widely used to achieve realistic simulations and/or real-time performance. It is widely agreed that continuum-mechanics-based numerical techniques are preferred over deformable models or spring-mass models, but those techniques are computationally expensive, and hence the higher accuracy they offer comes at the expense of speed. Hence there is a need to study the speed of different numerical techniques while keeping an eye on the accuracy they offer. Such studies are available for the Finite Element Method (FEM) but are rarely available for the Boundary Element Method (BEM). Hence the present work aims to conduct a study on the viability of BEM for the real-time simulation of biological organs; the study is justified by the fact that BEM is considered to be inherently efficient when compared to mesh-based techniques like FEM, and a significant portion of the literature on the real-time simulation of biological organs suggests the use of BEM to achieve better simulations. When one talks about the simulation of biological organs, one needs the geometry of the organ in hand. The geometry of biological organs of interest is often not readily available, and hence there is a need to extract the three-dimensional (3D) geometry of biological organs from a stack of two-dimensional (2D) scanned images. Software packages that can readily reconstruct 3D geometry of biological organs from 2D images are expensive. Hence, a novel procedure that requires only a few free software packages to obtain the geometry of biological organs from 2D image sequences is presented; the geometry of a pig liver is extracted from CT scan images to illustrate the procedure. Next, the three-dimensional geometry of the human kidney (left and right kidneys of a male, and left and right kidneys of a female) is obtained from the Visible Human Dataset (VHD). The procedure presented in this work can be used to obtain patient-specific organ geometry from patient-specific images, without requiring any of the many commercial software packages that can readily do the job. To carry out studies on the speed and accuracy of BEM, a source code for BEM is needed. Since a BEM code for 3D elasticity is not readily available, a BEM code that can solve 3D linear elastostatic problems without accounting for body forces is developed from scratch. The code comes in three varieties: a MATLAB version, a sequential Fortran version, and a parallelized Fortran version. This is the first free and open-source BEM code for 3D elasticity. The developed code is used to carry out studies on the viability of BEM for the real-time simulation of biological organs, and a few representative problems involving kidneys and liver are found to give accurate solutions. The present work demonstrates that it is possible to simulate linear elastostatic behaviour in real time using BEM without resorting to any type of precomputation, by fully parallelizing the simulations on a computer cluster and running them on different numbers of processors and with different block sizes. Since a complete solution can be obtained in real time, there is no need to separately prove that every type of cutting, suturing, etc. can be simulated in real time. Future work could involve incorporating nonlinearities into the simulations. Finally, a BEM-based simulator may be built, after taking into account details like rendering.
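As one possible realization of the image-stack-to-geometry step described above, the following sketch uses the free scikit-image package on a synthetic segmented volume; it is only an illustration of the kind of surface extraction involved, not the thesis's own procedure or software choices.

```python
# A possible minimal pipeline for extracting a 3D surface from a stack of
# segmented 2D images, using free Python tools (scikit-image). The synthetic
# volume stands in for a segmented CT stack; this is not the thesis's method.
import numpy as np
from skimage import measure

# Stand-in for a stack of segmented 2D slices: a boolean 3D array in which
# True marks organ voxels (here, a synthetic ellipsoid).
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = ((z - 32) / 20) ** 2 + ((y - 32) / 25) ** 2 + ((x - 32) / 30) ** 2 < 1.0

# Extract a triangulated surface with marching cubes; verts are in voxel
# coordinates (scale by slice spacing / pixel size for real data).
verts, faces, normals, values = measure.marching_cubes(volume.astype(float), level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")

# The (verts, faces) mesh could then be exported (e.g., to STL) and used as
# the boundary mesh for a BEM computation.
```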
438

Méthodes d'accéleration pour la résolution numérique en électrolocation et en chimie quantique / Acceleration methods for numerical solving in electrolocation and quantum chemistry

Laurent, Philippe 26 October 2015 (has links)
This thesis tackles two different topics. We first design and analyze algorithms related to the electrical sense, for applications in robotics. We consider in particular the method of reflections, which, like the Schwarz method, allows linear problems to be solved through simpler sub-problems obtained by decomposing the boundaries of the original problem. We give convergence proofs and applications. In order to implement a simulator of the direct electrolocation problem in an autonomous robot, we also build a reduced-basis method devoted to electrolocation problems, so as to obtain algorithms that satisfy the constraints of limited memory and computation time. The second topic is an inverse problem in quantum chemistry, in which we want to determine some features of a quantum system. To this end, the system is illuminated by a known, fixed laser field; in this framework, the data of the inverse problem are the states before and after the illumination. A local existence result is given, together with numerical solution methods.
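A discrete analogue of the method of reflections can be sketched as an iteration over the sub-problems of a block linear system, each solved in turn with the other's current influence treated as data; the block operators below are illustrative, not the electrolocation operators of the thesis.

```python
# Algebraic sketch of a reflection-type iteration on a 2x2 block system
# [[A11, A12], [A21, A22]] [x1; x2] = [b1; b2]: each sub-problem is solved
# with the other unknown frozen, mimicking how the method of reflections
# corrects boundary data piece by piece. The blocks are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 5
A11 = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.2   # "self" operators
A22 = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.2
A12 = rng.standard_normal((n, n)) * 0.3                    # coupling ("reflections")
A21 = rng.standard_normal((n, n)) * 0.3
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

x1, x2 = np.zeros(n), np.zeros(n)
for _ in range(50):
    x1 = np.linalg.solve(A11, b1 - A12 @ x2)   # sub-problem 1 with x2 frozen
    x2 = np.linalg.solve(A22, b2 - A21 @ x1)   # sub-problem 2 with updated x1

# Compare against a monolithic solve of the full coupled system.
A = np.block([[A11, A12], [A21, A22]])
x_ref = np.linalg.solve(A, np.concatenate([b1, b2]))
print(np.linalg.norm(np.concatenate([x1, x2]) - x_ref))
```

The attraction, as in the continuous setting, is that only the simpler "single-boundary" sub-problems ever need to be solved; convergence requires the coupling between sub-problems to be weak enough.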
439

Eigenvalue Algorithms for Symmetric Hierarchical Matrices

Mach, Thomas 20 February 2012 (has links)
This thesis is on the numerical computation of eigenvalues of symmetric hierarchical matrices. The numerical algorithms used for this computation are derivations of the LR Cholesky algorithm, the preconditioned inverse iteration, and a bisection method based on LDLT factorizations. The investigation of QR decompositions for H-matrices leads to a new QR decomposition. It has some properties that are superior to the existing ones, which is shown by experiments. Using the H-QR decomposition to build a QR (eigenvalue) algorithm for H-matrices does not, however, lead to a more efficient algorithm than the LR Cholesky algorithm. The implementation of the LR Cholesky algorithm for hierarchical matrices, together with deflation and shift strategies, yields an algorithm that requires O(n) iterations to find all eigenvalues. Unfortunately, the local ranks of the iterates show a strong growth in the first steps. These H-fill-ins make the computation expensive, so that O(n³) flops and O(n²) storage are required. Theorem 4.3.1 explains this behavior and shows that the LR Cholesky algorithm is efficient for the simply structured Hl-matrices. There is an exact LDLT factorization for Hl-matrices and an approximate LDLT factorization for H-matrices, both of linear-polylogarithmic complexity. These factorizations can be used to compute the inertia of an H-matrix. With the knowledge of the inertia for arbitrary shifts, one can compute an eigenvalue by bisection. The slicing-the-spectrum algorithm can compute all eigenvalues of an Hl-matrix in linear-polylogarithmic complexity; a single eigenvalue can be computed in O(k²n log⁴ n). Since the LDLT factorization for general H-matrices is only approximate, the accuracy of the LDLT slicing algorithm is limited. The local ranks of the LDLT factorization for indefinite matrices are generally unknown, so that no statement on the complexity of the algorithm can be made beyond the numerical results in Table 5.7. The preconditioned inverse iteration computes the smallest eigenvalue and the corresponding eigenvector. This method is efficient, since the number of iterations is independent of the matrix dimension. If eigenvalues other than the smallest are sought, preconditioned inverse iteration cannot simply be applied to the shifted matrix, since positive definiteness is required; the squared and shifted matrix (M − μI)² is positive definite, however, so inner eigenvalues can be computed by combining the folded spectrum method with PINVIT. Numerical experiments show that the approximate inversion of (M − μI)² is more expensive than the approximate inversion of M, so that the computation of inner eigenvalues is more expensive. We compare the different eigenvalue algorithms. The preconditioned inverse iteration for hierarchical matrices is better than the LDLT slicing algorithm for the computation of the smallest eigenvalues, especially if the inverse is already available. The computation of inner eigenvalues with the folded spectrum method and preconditioned inverse iteration is more expensive; the LDLT slicing algorithm is competitive with H-PINVIT for the computation of inner eigenvalues. In the case of large, sparse matrices, specially tailored algorithms for sparse matrices, like the MATLAB function eigs, are more efficient. If one wants to compute all eigenvalues, the LDLT slicing algorithm seems to be better than the LR Cholesky algorithm. If the matrix is small enough to be handled in dense arithmetic (and is not an Hl(1)-matrix), then dense eigensolvers, like the LAPACK function dsyev, are superior. H-PINVIT and the LDLT slicing algorithm require only an almost linear amount of storage, so they can handle larger matrices than eigenvalue algorithms for dense matrices. For Hl-matrices of local rank 1, the LDLT slicing algorithm and the LR Cholesky algorithm need almost the same time for the computation of all eigenvalues; for large matrices, both algorithms are faster than the dense LAPACK function dsyev.
Contents: List of Figures xi; List of Tables xiii; List of Algorithms xv; List of Acronyms xvii; List of Symbols xix; Publications xxi.
1 Introduction 1 (1.1 Notation 2; 1.2 Structure of this Thesis 3).
2 Basics 5 (2.1 Linear Algebra and Eigenvalues 6; 2.1.1 The Eigenvalue Problem 7; 2.1.2 Dense Matrix Algorithms 9; 2.2 Integral Operators and Integral Equations 14; 2.2.1 Definitions 14; 2.2.2 Example - BEM 16; 2.3 Introduction to Hierarchical Arithmetic 17; 2.3.1 Main Idea 17; 2.3.2 Definitions 19; 2.3.3 Hierarchical Arithmetic 24; 2.3.4 Simple Hierarchical Matrices (Hl-Matrices) 30; 2.4 Examples 33; 2.4.1 FEM Example 33; 2.4.2 BEM Example 36; 2.4.3 Randomly Generated Examples 37; 2.4.4 Application Based Examples 38; 2.4.5 One-Dimensional Integral Equation 38; 2.5 Related Matrix Formats 39; 2.5.1 H2-Matrices 40; 2.5.2 Diagonal plus Semiseparable Matrices 40; 2.5.3 Hierarchically Semiseparable Matrices 42; 2.6 Review of Existing Eigenvalue Algorithms 44; 2.6.1 Projection Method 44; 2.6.2 Divide-and-Conquer for Hl(1)-Matrices 45; 2.6.3 Transforming Hierarchical into Semiseparable Matrices 46; 2.7 Compute Cluster Otto 47).
3 QR Decomposition of Hierarchical Matrices 49 (3.1 Introduction 49; 3.2 Review of Known QR Decompositions for H-Matrices 50; 3.2.1 Lintner's H-QR Decomposition 50; 3.2.2 Bebendorf's H-QR Decomposition 52; 3.3 A New Method for Computing the H-QR Decomposition 54; 3.3.1 Leaf Block-Column 54; 3.3.2 Non-Leaf Block Column 56; 3.3.3 Complexity 57; 3.3.4 Orthogonality 60; 3.3.5 Comparison to QR Decompositions for Sparse Matrices 61; 3.4 Numerical Results 62; 3.4.1 Lintner's H-QR Decomposition 62; 3.4.2 Bebendorf's H-QR Decomposition 66; 3.4.3 The New H-QR Decomposition 66; 3.5 Conclusions 67).
4 QR-like Algorithms for Hierarchical Matrices 69 (4.1 Introduction 70; 4.1.1 LR Cholesky Algorithm 70; 4.1.2 QR Algorithm 70; 4.1.3 Complexity 71; 4.2 LR Cholesky Algorithm for Hierarchical Matrices 72; 4.2.1 Algorithm 72; 4.2.2 Shift Strategy 72; 4.2.3 Deflation 73; 4.2.4 Numerical Results 73; 4.3 LR Cholesky Algorithm for Diagonal plus Semiseparable Matrices 75; 4.3.1 Theorem 75; 4.3.2 Application to Tridiagonal and Band Matrices 79; 4.3.3 Application to Matrices with Rank Structure 79; 4.3.4 Application to H-Matrices 80; 4.3.5 Application to Hl-Matrices 82; 4.3.6 Application to H2-Matrices 83; 4.4 Numerical Examples 84; 4.5 The Unsymmetric Case 84; 4.6 Conclusions 88).
5 Slicing the Spectrum of Hierarchical Matrices 89 (5.1 Introduction 89; 5.2 Slicing the Spectrum by LDLT Factorization 91; 5.2.1 The Function ν(M − μI) 91; 5.2.2 LDLT Factorization of Hl-Matrices 92; 5.2.3 Start-Interval [a, b] 96; 5.2.4 Complexity 96; 5.3 Numerical Results 97; 5.4 Possible Extensions 100; 5.4.1 LDLT Slicing Algorithm for HSS Matrices 103; 5.4.2 LDLT Slicing Algorithm for H-Matrices 103; 5.4.3 Parallelization 105; 5.4.4 Eigenvectors 107; 5.5 Conclusions 107).
6 Computing Eigenvalues by Vector Iterations 109 (6.1 Power Iteration 109; 6.1.1 Power Iteration for Hierarchical Matrices 110; 6.1.2 Inverse Iteration 111; 6.2 Preconditioned Inverse Iteration for Hierarchical Matrices 111; 6.2.1 Preconditioned Inverse Iteration 113; 6.2.2 The Approximate Inverse of an H-Matrix 115; 6.2.3 The Approximate Cholesky Decomposition of an H-Matrix 116; 6.2.4 PINVIT for H-Matrices 117; 6.2.5 The Interior of the Spectrum 120; 6.2.6 Numerical Results 123; 6.2.7 Conclusions 130).
7 Comparison of the Algorithms and Numerical Results 133 (7.1 Theoretical Comparison 133; 7.2 Numerical Comparison 135).
8 Conclusions 141. Theses 143. Bibliography 145. Index 153.
440

Theoretical and experimental study of non-spherical microparticle dynamics in viscoelastic fluid flows

Cheng-Wei Tai (12198344) 06 June 2022 (has links)
Particle suspensions in viscoelastic fluids (e.g., polymeric fluids, liquid crystalline solutions, gels) are ubiquitous in industrial processes and in biology. In such fluids, particles often acquire lift forces that push them to preferential streamlines in the flow domain. This lift force depends greatly on the fluid's rheology and plays a vital role in many applications such as particle separation in microfluidic devices, particle rinsing on silicon wafers, and particle resuspension in enhanced oil recovery. Previous studies have provided understanding of how fluid rheology affects the motion of spherical particles in simple viscoelastic fluid flows such as shear flows. However, the combined effect of more complex flow profiles and particle shape is still under-explored. The main contribution of this thesis is to (a) provide understanding of the migration and rotation dynamics of an arbitrarily shaped particle in complex flows of a viscoelastic fluid, and (b) develop guidelines for designing such suspensions for general applications.

In the first part of the thesis, we develop theories based on the second-order fluid (SOF) constitutive model to provide solutions for the polymeric force and torque on an arbitrarily shaped solid particle under a general quadratic flow field. When the first and second normal stress coefficients satisfy Ψ₁ = −2Ψ₂ (the corotational limit), the fluid viscoelasticity modifies only the fluid pressure, and we provide exact solutions for the polymer force and torque on the particle. For a general SOF with Ψ₁ ≠ −2Ψ₂, fluid viscoelasticity modifies the shear stresses, and we provide a procedure for numerical solutions. General scaling laws are also identified to quantify the polymeric lift for different particle shapes and orientations. We find that the particle migration speed is directly proportional to the length the particle spans in the shear-gradient direction (L_sg), and that polymeric torques lead to unique orientation behavior under flow.

Secondly, we investigate the migration and rotational behavior of prolate and oblate spheroids in various viscoelastic, pressure-driven flows. In a 2D slit flow, fluid viscoelasticity causes prolate particles to transition to a log-rolling motion in which the particles orient perpendicular to the flow-flow gradient plane. This behavior leads to a slower overall migration speed (i.e., lift) of prolate particles towards the flow centerline compared to spherical particles of the same volume. In a circular tube flow, prolate particles align their long axis along the flow direction due to the extra polymer torque generated by the velocity curvature in all radial directions. Again, this effect causes prolate particles to migrate to the flow centerline more slowly than spheres of the same volume. For oblate particles, we quantify their long-time orientation and find that they migrate more slowly than spheres of the same volume, but exhibit larger migration speeds than prolate particles. Lastly, we examine the effect of the normal stress ratio α = Ψ₂/Ψ₁ on the particle motion and find that this parameter only quantitatively impacts the particle migration velocity and has a negligible effect on the rotational dynamics. We can therefore use the exact solution derived in the corotational limit (α = −1/2) for a quick and reasonable prediction of the particle dynamics.

We next experimentally investigate the migration behavior of spheroidal particles in microfluidic systems and draw comparisons to our theoretical predictions. A dilute suspension of prolate/oblate microparticles in a density-matched 8% aqueous polyvinylpyrrolidone (PVP) solution is used as the model suspension system. Using brightfield microscopy, we qualitatively confirm our theoretical predictions for flow Deborah numbers 0 < De < 0.1, i.e., that spherical particles show faster migration than prolate and oblate particles of the same volume in tube flows.

We finally design a holographic imaging method to capture the 3D position and orientation of dynamic microparticles in microfluidic flow. We adopt an in-line holography setup and propose a straightforward hologram reconstruction method to extract the 3D position and orientation of a non-spherical particle. The method uses image moments to locate the particle and localize the detection region. We detect the particle position in the depth direction by quantifying the image sharpness at different depth positions, and use principal component analysis (PCA) to detect the orientation of the particle. For a semi-transparent particle that produces complex diffraction patterns, a mask based on the image-moment information can be used during the image-sharpness step to better resolve the particle position.

In the last part of this thesis, we conclude our work and discuss future research perspectives. We also comment on possible applications of the current work to various fields of research and industrial processes.
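The in-plane localization and orientation step of the holographic method (image moments for position, PCA for orientation) can be sketched on a synthetic binary image; hologram reconstruction and the depth-sharpness search are not shown, and the image below is made up.

```python
# Sketch of the in-plane localization/orientation step described above:
# image moments give the particle centroid, and PCA of the particle pixel
# coordinates gives its in-plane orientation. The binary image is synthetic;
# hologram reconstruction and the depth (sharpness) search are not shown.
import numpy as np

# Synthetic binary image of a tilted elongated particle.
yy, xx = np.mgrid[0:128, 0:128]
theta_true = np.deg2rad(30.0)
u = (xx - 70) * np.cos(theta_true) + (yy - 50) * np.sin(theta_true)
v = -(xx - 70) * np.sin(theta_true) + (yy - 50) * np.cos(theta_true)
img = ((u / 25) ** 2 + (v / 8) ** 2 < 1.0).astype(float)

# Raw image moments -> centroid (in-plane position).
m00 = img.sum()
cx = (xx * img).sum() / m00
cy = (yy * img).sum() / m00

# PCA of the particle pixel coordinates -> principal (long) axis orientation.
coords = np.column_stack([xx[img > 0] - cx, yy[img > 0] - cy])
cov = coords.T @ coords / coords.shape[0]
eigvals, eigvecs = np.linalg.eigh(cov)
long_axis = eigvecs[:, np.argmax(eigvals)]
theta_est = np.degrees(np.arctan2(long_axis[1], long_axis[0])) % 180

print(f"centroid: ({cx:.1f}, {cy:.1f}), orientation: {theta_est:.1f} deg")
```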
