  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Compression de maillages de grande taille / Efficient compression of large meshes

Courbet, Clément 05 January 2011 (has links)
Il y a une décennie, le contenu numérique virtuel était limité à quelques applications – majoritairement les jeux vidéos, les films en 3D et la simulation numérique. Aujourd'hui, grâce à l'apparition de cartes graphiques performantes et bon marché, les objets 3D sont utilisés dans de nombreuses applications. À peu près tous les terminaux possédant des capacités d'affichage – des clusters de visualisation haute performance jusqu'aux smartphones – intègrent maintenant une puce graphique qui leur permet de faire du rendu 3D. Ainsi, les applications 3D sont bien plus variées qu'il y a quelques années. On citera par exemple la réalité virtuelle et augmentée en temps réel ou les mondes virtuels 3D. Dans ce contexte, le besoin de méthodes efficaces pour la transmission et la visualisation des données 3D est toujours plus pressant. De plus, la taille des maillages 3D ne cesse de s'accroître avec la précision de la représentation. Par exemple, les scanners 3D actuels sont capables de numériser des objets du monde réel avec une précision de seulement quelques micromètres, et génèrent des maillages contenant plusieurs centaines de millions d'éléments. D'un autre côté, une précision accrue en simulation numérique requiert des maillages plus fins, et les méthodes massivement parallèles actuelles sont capables de travailler avec des milliards de mailles. Dans ce contexte, la compression de ces données – en particulier la compression de maillages – est un enjeu important. Durant la décennie passée, de nombreuses méthodes ont été développées pour coder les maillages polygonaux. Néanmoins, ces techniques ne sont plus adaptées au contexte actuel, car elles supposent que la compression et la décompression sont des processus symétriques qui ont lieu sur un matériel similaire.
Dans le cadre actuel, au contraire, le contenu 3D se trouve créé, compressé et distribué par des machines de hautes performances, tandis que l'exploitation des données – par exemple, la visualisation – est effectuée à distance sur des périphériques de capacité plus modeste – éventuellement mobiles – qui ne peuvent traiter les maillages de grande taille dans leur intégralité. Ceci fait de la compression de maillage un processus intrinsèquement asymétrique. Dans cette thèse, notre objectif est d'étudier et de proposer des méthodes pour la compression de maillages de grande taille. Nous nous intéressons plus particulièrement aux méthodes d'accès aléatoire, qui voient la compression comme un problème intrinsèquement asymétrique. Dans ce modèle, le codeur a accès à des ressources informatiques importantes, tandis que la décompression est un processus temps réel (souple) qui se fait avec du matériel de plus faible puissance. Nous décrivons un algorithme de ce type et l'appliquons au cas de la visualisation interactive. Nous proposons aussi un algorithme streaming pour compresser des maillages hexaédriques de très grande taille utilisés dans le contexte de la simulation numérique. Nous sommes ainsi capables de compresser des maillages comportant de l'ordre de 50 millions de mailles en moins de deux minutes, et en n'utilisant que quelques mégaoctets de mémoire vive. Enfin, nous proposons, indépendamment de ces deux algorithmes, un cadre théorique général pour améliorer la compression de géométrie. Cet algorithme peut être utilisé pour développer des méthodes de prédiction pour n'importe quel algorithme basé sur un paradigme prédictif – ce qui est le cas de la majorité des méthodes existantes. Nous dérivons ainsi des schémas de prédiction compatibles avec plusieurs méthodes de la littérature. Ces schémas augmentent les taux de compression de 9 % en moyenne.
Sous des hypothèses usuelles, nous utilisons aussi ces résultats pour prouver l'optimalité de certains algorithmes existants. / A decade ago, 3D content was restricted to a few applications – mainly games, 3D graphics and scientific simulations. Nowadays, thanks to the development of cheap and efficient specialized rendering devices, 3D objects are ubiquitous. Virtually all devices with a display – from large visualization clusters to smart phones – now integrate 3D rendering capabilities. Therefore, 3D applications are now far more diverse than a few years ago, and include for example real-time virtual and augmented reality, as well as 3D virtual worlds. In this context, there is an ever increasing need for efficient tools to transmit and visualize 3D content. In addition, the size of 3D meshes always increases with the accuracy of representation. On the one hand, recent 3D scanners are able to digitize real-world objects with a precision of a few micrometers, and generate meshes with several hundred million elements. On the other hand, numerical simulations always require finer meshes for better accuracy, and massively parallel simulation methods now generate meshes with billions of elements. In this context, 3D data compression – in particular 3D mesh compression – is of strategic importance. The previous decade has seen the development of many efficient methods for encoding polygonal meshes. However, these techniques are no longer adapted to the current context, because they suppose that encoding and decoding are symmetric processes that take place on the same kind of hardware. In contrast, remote 3D content will typically be created, compressed and served by high-performance machines, while exploitation (e.g. visualization) will be carried out remotely on smaller – possibly handheld – devices that cannot handle large meshes as a whole.
This makes mesh compression an intrinsically asymmetric process. Our objective in this dissertation is to address the compression of these large meshes. In particular, we study random-accessible compression schemes, which consider mesh compression as an asymmetric problem where the compressor is an off-line process with access to a large amount of resources, while decompression is a time-critical process with limited resources. We design such a compression scheme and apply it to interactive visualization. In addition, we propose a streaming compression algorithm that targets the very large hexahedral meshes that are common in the context of scientific numerical simulation. Using this scheme, we are able to compress meshes of 50 million hexahedra in less than two minutes using a few megabytes of memory. Independently from these two specific algorithms, we develop a generic theoretical framework to address mesh geometry compression. This framework can be used to derive geometry compression schemes for any mesh compression algorithm based on a predictive paradigm – which is the case for the large majority of compression schemes. Using this framework, we derive new geometry compression schemes that are compatible with existing mesh compression algorithms but improve compression ratios – by approximately 9% on average. We also prove the optimality of some other schemes under usual smoothness assumptions.
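The predictive paradigm this abstract builds on can be illustrated with the classic parallelogram predictor, a standard baseline in mesh geometry coding (shown as background, not the thesis' improved schemes): the decoder predicts each new vertex from an adjacent, already-decoded triangle, and only the small residual needs to be entropy coded.

```python
def parallelogram_predict(a, b, c):
    """Parallelogram rule: predict the vertex d completing the parallelogram
    over the already-decoded triangle (a, b, c), i.e. d_pred = b + c - a."""
    return tuple(bi + ci - ai for ai, bi, ci in zip(a, b, c))

# Encoder side: compute and store only the residual for the new vertex.
a, b, c = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
d_actual = (1.05, 0.98, 0.01)                       # nearly flat mesh -> small residual
d_pred = parallelogram_predict(a, b, c)
residual = tuple(x - y for x, y in zip(d_actual, d_pred))

# Decoder side: the same prediction plus the stored residual restores d.
d_decoded = tuple(x + y for x, y in
                  zip(parallelogram_predict(a, b, c), residual))
```

On smooth surfaces the residuals cluster near zero, which is what makes the subsequent entropy coding effective; the thesis' contribution is a framework for deriving better predictors of this kind.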
12

Hexahedral Mesh Refinement Using an Error Sizing Function

Paudel, Gaurab 01 June 2011 (has links) (PDF)
The ability to effectively adapt a mesh is a very important feature of high-fidelity finite element modeling. In a finite element analysis, a relatively high node density is desired in areas of the model where the initial analysis produces high error estimates. Providing a higher node density in such areas improves the accuracy of the model and reduces the computational time compared to having a high node density over the entire model. Node densities can be determined for any model using sizing functions based either on the geometry of the model or on the error estimates from the finite element analysis. Robust methods for mesh adaptation using sizing functions are available for refining triangular, tetrahedral, and quadrilateral elements. However, little work has been published on adaptively refining all-hexahedral meshes using sizing functions. This thesis describes a new approach that drives hexahedral refinement using an error sizing function and a mechanism to compare node sizes after refinement.
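The idea of an error sizing function can be sketched with a common a-posteriori heuristic (an assumption for illustration, not necessarily the thesis' exact formula): for a discretization of convergence order p, the error scales like h^p, so each element's target size follows directly from its error estimate.

```python
def target_size(h_old, error, error_target, p=1.0):
    """Map an element's a-posteriori error estimate to a new target size.

    Assumes error ~ h**p for a method of order p, so
    h_new = h_old * (error_target / error) ** (1 / p).
    High-error elements shrink (refinement); low-error elements may grow."""
    return h_old * (error_target / error) ** (1.0 / p)

# For a second-order method, elements at 4x the target error are halved,
# on-target elements keep their size, and low-error elements coarsen.
sizes = [target_size(1.0, e, error_target=1.0, p=2.0) for e in (4.0, 1.0, 0.25)]
```

A refinement driver would then flag for subdivision every element whose current size exceeds its computed target size.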
13

Surface Splicing: A Method of Quadrilateral Mesh Generation and Modification for Surfaces by Dual Creation and Manipulation

Grover, Benjamin Todd 01 April 2002 (has links) (PDF)
The effective generation of high-quality quadrilateral surface meshes is an area of important research and development for the finite element community. Quadrilateral elements generally lead to more efficient and accurate finite element results. In addition, some all-hexahedral volume meshing algorithms are based on an initial quadrilateral surface mesh that has specific connectivity requirements. This thesis presents a new and unique procedure named "Surface Splicing". Surface Splicing allows for the generation of all-quadrilateral surface meshes as well as the ability to edit these meshes via the dual. The dual contains the same data as the mesh but, unlike the mesh, directly allows the visualization of how surface and volume elements interrelate and connect with one another. The dual also provides mesh connectivity information that is crucial in forming an all-quadrilateral surface mesh that can form the basis of an all-hexahedral volume mesh.
14

Conformal Refinement of All-Hexahedral Finite Element Meshes

Harris, Nathan 01 August 2004 (has links) (PDF)
Mesh adaptation techniques are used to modify complex finite element meshes to reduce analysis time and improve accuracy. Modification of all-hexahedral meshes has proven difficult due to the unique connectivity constraints they exhibit. This thesis presents an automated tool for local, conformal refinement of all-hexahedral meshes based on the insertion of multi-directional twist planes into the spatial twist continuum. The contributions of this thesis are (1) the ability to conformally refine all entities of an all-hexahedral element mesh, and (2) the simplification of template insertion for multi-directional refinement. The refinement algorithm is divided into single hex sheet operations, where individual refinement steps are performed completely within a single hex sheet, and parallel sheet operations, where each refinement step occurs within two parallel hex sheets. Combining these two procedures facilitates the refinement of any mesh feature. Refinement is accomplished by replacing original mesh elements with one or more of six base templates, selected by the number of nodes flagged for refinement on the element. The refinement procedures are covered in detail with representative graphics and examples that illustrate the application of the techniques and the results of the refinement.
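The template-selection step described above can be sketched as follows; the template names and the purely count-based lookup are illustrative only (the actual algorithm works with node configurations within hex sheets, not just counts):

```python
# Hypothetical lookup from flagged-node count to one of six base templates.
BASE_TEMPLATES = {0: "unrefined", 1: "corner", 2: "edge", 3: "three-node",
                  4: "face", 8: "fully-refined"}

def select_template(element_nodes, flagged_nodes):
    """Pick a refinement template by how many of the hex's 8 nodes are
    flagged for refinement; unlisted counts fall back to a transition case."""
    count = sum(node in flagged_nodes for node in element_nodes)
    return BASE_TEMPLATES.get(count, "transition")

# A hex with two flagged nodes would receive the edge-style template here.
hex_nodes = (10, 11, 12, 13, 20, 21, 22, 23)
template = select_template(hex_nodes, flagged_nodes={11, 12})
```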
15

High throughput patient-specific orthopaedic analysis: development of interactive tools and application to graft placement in anterior cruciate ligament reconstruction

Ramme, Austin Jedidiah 01 May 2012 (has links)
Medical imaging technologies have allowed for in vivo evaluation of the human musculoskeletal system. With advances in both medical imaging and computing, patient-specific model development of anatomic structures is becoming a reality. Three-dimensional surface models are useful for patient-specific measurements and finite element studies. Orthopaedics is closely tied to engineering in the analysis of injury mechanisms, the design of implantable medical devices, and potentially in the prediction of injury. However, a disconnect exists between medical imaging and orthopaedic analysis: generating three-dimensional models from an imaging dataset is difficult, which has kept these methods from being applied to large patient populations. We have compiled image processing, image segmentation, and surface generation tools in a single software package catered specifically to image-based orthopaedic analysis. We have also optimized an automated segmentation technique to allow for high-throughput bone segmentation and developed algorithms that help to automate the cumbersome process of mesh generation in finite element analysis. We apply these tools to evaluate graft placement in anterior cruciate ligament reconstruction in a multicenter study that aims to improve the outcomes of patients who undergo this procedure.
16

Parallel simulation of coupled flow and geomechanics in porous media

Wang, Bin, 1984- 16 January 2015 (has links)
In this research we consider developing a reservoir simulator capable of simulating complex coupled poromechanical processes on massively parallel computers. A variety of problems arising from petroleum and environmental engineering inherently necessitate the understanding of interactions between fluid flow and solid mechanics. Examples in petroleum engineering include reservoir compaction, wellbore collapse, sand production, and hydraulic fracturing. In environmental engineering, surface subsidence, carbon sequestration, and waste disposal are also coupled poromechanical processes. These economically and environmentally important problems motivate the active pursuit of robust, efficient, and accurate simulation tools for coupled poromechanical problems. Three coupling approaches are currently employed in the reservoir simulation community to solve the poromechanics system, namely, the fully implicit coupling (FIM), the explicit coupling, and the iterative coupling. The choice of the coupling scheme significantly affects the efficiency of the simulator and the accuracy of the solution. We adopt the fixed-stress iterative coupling scheme to solve the coupled system due to its advantages over the other two. Unlike the explicit coupling, the fixed-stress split has been theoretically proven to converge to the FIM for the linear poroelasticity model. In addition, it is more efficient and easier to implement than the FIM. Our computational results indicate that this approach is also valid for multiphase flow. We discretize the quasi-static linear elasticity model for geomechanics in space using the continuous Galerkin (CG) finite element method (FEM) on general hexahedral grids.
Fluid flow models are discretized by locally mass conservative schemes, specifically, the mixed finite element method (MFE) for the equation of state compositional flow on Cartesian grids and the multipoint flux mixed finite element method (MFMFE) for the single phase and two-phase flows on general hexahedral grids. While both the MFE and the MFMFE generate cell-centered stencils for pressure, the MFMFE has advantages in handling full tensor permeabilities and general geometry and boundary conditions. The MFMFE also obtains accurate fluxes at cell interfaces. These characteristics enable the simulation of more practical problems. For many reservoir simulation applications, for instance, the carbon sequestration simulation, we need to account for thermal effects on the compositional flow phase behavior and the solid structure stress evolution. We explicitly couple the poromechanics equations to a simplified energy conservation equation. A time-split scheme is used to solve heat convection and conduction successively. For the convection equation, a higher order Godunov method is employed to capture the sharp temperature front; for the conduction equation, the MFE is utilized. Simulations of coupled poromechanical or thermoporomechanical processes in field scales with high resolution usually require parallel computing capabilities. The flow models, the geomechanics model, and the thermodynamics model are modularized in the Integrated Parallel Accurate Reservoir Simulator (IPARS) which has been developed at the Center for Subsurface Modeling at the University of Texas at Austin. The IPARS framework handles structured (logically rectangular) grids and was originally designed for element-based data communication, such as the pressure data in the flow models. To parallelize the node-based geomechanics model, we enhance the capabilities of the IPARS framework for node-based data communication. 
Because the geomechanics linear system is more costly to solve than those of the flow and thermodynamics models, the performance of linear solvers for the geomechanics model largely dictates the speed and scalability of the coupled simulator. We use the generalized minimal residual (GMRES) solver with the BoomerAMG preconditioner from the hypre library and the geometric multigrid (GMG) solver from the UG4 software toolbox to solve the geomechanics linear system. Additionally, the multilevel k-way mesh partitioning algorithm from METIS is used to generate high quality mesh partitionings to improve solver performance. Numerical examples of coupled poromechanics and thermoporomechanics simulations are presented to show the capabilities of the coupled simulator in solving practical problems accurately and efficiently. These examples include a real carbon sequestration field case with stress-dependent permeability, a synthetic thermoporoelastic reservoir simulation, poroelasticity simulations on highly distorted hexahedral grids, and parallel scalability tests on a massively parallel computer.
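The fixed-stress iterative coupling described above can be sketched as a simple fixed-point loop: solve flow with the volumetric mean stress held fixed, feed the updated pressure to the mechanics solve, and repeat until the pressure update stagnates. The scalar "solvers" below are toy stand-ins for the actual flow and mechanics PDE solves.

```python
def fixed_stress_step(solve_flow, solve_mechanics, p, u, tol=1e-8, max_iter=200):
    """One time step of a fixed-stress split, reduced to scalars.

    solve_flow(p, u)      -> updated pressure (volumetric stress held fixed)
    solve_mechanics(p, u) -> updated displacement for the new pressure
    Iterates until successive pressure iterates differ by less than tol."""
    for _ in range(max_iter):
        p_new = solve_flow(p, u)
        u = solve_mechanics(p_new, u)
        if abs(p_new - p) < tol:
            return p_new, u
        p = p_new
    raise RuntimeError("fixed-stress iteration did not converge")

# Toy linear 'models' (pure illustration): the coupled fixed point is
# p = 4, u = 2, and the iteration contracts toward it.
flow = lambda p, u: 0.5 * p + 0.5 * u + 1.0
mech = lambda p, u: 0.5 * p
p, u = fixed_stress_step(flow, mech, p=0.0, u=0.0)
```

In the real simulator each lambda is a full PDE solve, which is why the scheme's per-iteration cost and convergence rate matter so much for overall performance.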
17

Two-Refinement by Pillowing for Structured Hexahedral Meshes

Malone, J. Bruce 06 December 2012 (has links) (PDF)
A number of methods for adapting existing all-hexahedral grids by localized refinement have been developed; however, none ideally fits all refinement needs. This thesis presents the structure of a two-refinement method developed for conformal, structured, all-hexahedral grids that offers flexibility beyond what has been available to date. The method is fundamentally based on pillowing pairs of hex sheets. This thesis also suggests an implementation of the method, shows the results of examples refined using it, and compares these results to results from applying three-refinement to the same examples.
18

Three-dimensional modeling of rigid pavement

Beegle, David J. January 1998 (has links)
No description available.
19

Hybrid particle-element method for a general hexahedral mesh

Hernandez, Roque Julio 02 November 2009 (has links)
The development of improved numerical methods for computer simulation of high velocity impact dynamics is important in a variety of science and engineering fields. The growth of computing capabilities has created a demand for improved parallel algorithms for high velocity impact modeling. In addition, there are impact applications where experimentation is very costly, or even impossible (e.g. in certain bioimpact or space debris problems). This dissertation significantly extends the class of problems where particle-element based impact simulation techniques may be effectively applied in engineering design. It develops a hybrid particle-finite element method for a general hexahedral mesh. This work includes the formulation of a numerical algorithm for the generation of an ellipsoidal particle set for an unstructured hex mesh, and a new interpolation kernel for the density. The discrete model is constructed using thermomechanical Lagrange equations. The formulation is validated via simulation of published impact experiments.
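The role of a density interpolation kernel in such particle methods can be illustrated with the classic cubic-spline SPH kernel (a representative example only; the dissertation's new kernel for ellipsoidal particles differs):

```python
import math

def cubic_spline_kernel(r, h):
    """Classic 3D cubic-spline smoothing kernel W(r, h) with compact
    support of radius 2h; sigma is the 3D normalization constant."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0  # particles farther than 2h contribute nothing

def density(x, particle_positions, masses, h):
    """Kernel-interpolated density: rho(x) = sum_j m_j * W(|x - x_j|, h)."""
    return sum(m * cubic_spline_kernel(abs(x - xj), h)
               for xj, m in zip(particle_positions, masses))
```

The density at any point is thus a smooth, locally supported sum over nearby particles, which is the property a particle-element hybrid exploits when exchanging field data with the mesh.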
20

Sculpting: An Improved Inside-out Scheme for All Hexahedral Meshing

Walton, Kirk S. 01 April 2003 (has links) (PDF)
Generating all-hexahedral meshes on arbitrary geometries has been an area of important research in recent history. Hexahedral meshes have advantages over tetrahedral meshes in structural mechanics because they provide more accurate results with fewer degrees of freedom. Many different approaches have been used to create all-hexahedral meshes. Grid-based, inside-out, and superposition meshing all refer to a similar, very common mesh generation technique. Grid-based algorithms generate all-hexahedral meshes by introducing a structured mesh that bounds the complete body being modeled, marking hexahedra to define an interior and exterior mesh, manipulating the boundary region between the interior and exterior regions of the structured mesh to fit the specific boundary of the body, and finally discarding the exterior hexahedra. Such algorithms generally provide high-quality meshes on the interior of the body yet distort elements at the boundary in order to fill voids and match surfaces along these regions. The sculpting algorithm presented here addresses the difficulty of forming quality elements near boundary regions in two ways. First, the algorithm finds more intelligent methods to define a structured mesh that conforms to the body, lessening large distortions of the boundary elements. Second, the algorithm uses collapsing templates to adjust the position of boundary elements to mimic the topology of the body prior to capturing the geometric boundary.
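The first steps of the grid-based (inside-out) scheme described above, overlaying a structured grid on the body and keeping only interior cells, can be sketched as follows. This is a minimal sketch assuming an arbitrary point-membership predicate; the boundary fitting and sculpting steps would follow.

```python
def grid_based_cells(inside, bounds, n):
    """Overlay an n x n x n structured grid on the body's bounding box and
    keep the cells whose centers the `inside` predicate marks as interior;
    exterior cells are discarded, as in inside-out meshing."""
    (x0, y0, z0), (x1, y1, z1) = bounds
    dx, dy, dz = (x1 - x0) / n, (y1 - y0) / n, (z1 - z0) / n
    kept = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                center = (x0 + (i + 0.5) * dx,
                          y0 + (j + 0.5) * dy,
                          z0 + (k + 0.5) * dz)
                if inside(center):
                    kept.append((i, j, k))
    return kept

# Example body: the unit sphere. Cells near the center survive; the
# bounding box's corner cells are classified as exterior and dropped.
sphere = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2 <= 1.0
cells = grid_based_cells(sphere, ((-1, -1, -1), (1, 1, 1)), n=8)
```

The quality problem the thesis targets arises at exactly this stage's output: the kept cells form a stair-stepped boundary that must then be deformed to capture the true geometry.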
