About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Distribution-based Exploration and Visualization of Large-scale Vector and Multivariate Fields

Lu, Kewei 08 August 2017 (has links)
No description available.
272

Numerical simulations of unsteady flows in a pulse detonation engine by the conservation element and solution element method

He, Hao 13 March 2006 (has links)
No description available.
273

Load-Balancing Spatially Located Computations using Rectangular Partitions

Bas, Erdeniz Ozgun 29 July 2011 (has links)
No description available.
274

Hardware-based Parallel Computing for Real-time Simulation of Soft-object Deformation

Mafi, Ramin 06 1900 (has links)
In the last two decades there has been an increasing interest in the field of haptics science. Real-time simulation of haptic interaction with non-rigid deformable objects/tissues is computationally demanding. The computational bottleneck in finite-element (FE) modeling of deformable objects is solving a large but sparse linear system of equations at each time step of the simulation. Depending on the mechanical properties of the object, high-fidelity stable haptic simulations require an update rate on the order of 100-1000 Hz. Direct software-based implementations that use conventional computers are fairly limited in the size of the model that they can process at such high rates. In this thesis, a new hardware-based parallel implementation of the iterative Conjugate Gradient (CG) algorithm for solving linear systems of equations is proposed. Sparse matrix-vector multiplication (SpMxV) is the main computational kernel in iterative solution methods such as the CG algorithm. Modern microprocessors exhibit poor performance in executing memory-bound tasks such as SpMxV. In the proposed hardware architecture, a novel organization of on-chip memory resources enables concurrent utilization of a large number of fixed-point computing units on an FPGA device for performing the calculations. The result is a powerful parallel computing platform that can iteratively solve the system of equations arising from the FE models of object deformation within the timing constraint of real-time haptics applications. Numerical accuracy of the fixed-point implementation, the hardware architecture design, and issues pertaining to the degree of parallelism and scalability of the solution are discussed in detail. The proposed computing platform is successfully employed in a set of haptic interaction experiments using static and dynamic linear FE-based models. / Master of Applied Science (MASc)
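The CG iteration at the heart of this abstract is standard. As a rough software-only sketch (not the thesis's fixed-point FPGA design), the following Python illustrates the algorithm with the SpMxV kernel marked; the tridiagonal test matrix is an illustrative assumption, not an FE model from the thesis.

```python
import numpy as np
import scipy.sparse as sp

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a sparse symmetric positive-definite A.

    The dominant cost per iteration is the sparse matrix-vector
    product A @ p (SpMxV) -- the memory-bound kernel the thesis
    parallelizes in hardware.
    """
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p         # SpMxV: the bottleneck kernel
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy stiffness-like system: a tridiagonal SPD matrix (assumption)
n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = conjugate_gradient(A, b)
```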
275

Bayesian Modeling of Complex High-Dimensional Data

Huo, Shuning 07 December 2020 (has links)
With the rapid development of modern high-throughput technologies, scientists can now collect high-dimensional complex data in different forms, such as medical images and genomics measurements. However, acquisition of more data does not automatically lead to better knowledge discovery. One needs efficient and reliable analytical tools to extract useful information from complex datasets. The main objective of this dissertation is to develop innovative Bayesian methodologies to enable effective and efficient knowledge discovery from complex high-dimensional data. It contains two parts: the development of computationally efficient functional mixed models and the modeling of data heterogeneity via Dirichlet Diffusion Trees. The first part focuses on tackling the computational bottleneck in Bayesian functional mixed models. We propose a computational framework called the variational functional mixed model (VFMM). This new method facilitates efficient data compression and high-performance computing in basis space. We also propose a new multiple testing procedure in basis space, which can be used to detect significant local regions. The effectiveness of the proposed model is demonstrated through two datasets: a mass spectrometry dataset from a cancer study and a neuroimaging dataset from an Alzheimer's disease study. The second part is about modeling data heterogeneity using Dirichlet Diffusion Trees. We propose a Bayesian latent tree model that incorporates covariates of subjects to characterize the heterogeneity and uncover the latent tree structure underlying the data. This innovative model may reveal the hierarchical evolution process through branch structures and estimate systematic differences between groups of samples. We demonstrate the effectiveness of the model through a simulation study and real brain tumor data. / Doctor of Philosophy / With the rapid development of modern high-throughput technologies, scientists can now collect high-dimensional data in different forms, such as engineering signals, medical images, and genomics measurements. However, acquisition of such data does not automatically lead to efficient knowledge discovery. The main objective of this dissertation is to develop novel Bayesian methods to extract useful knowledge from complex high-dimensional data. It has two parts: the development of an ultra-fast functional mixed model and the modeling of data heterogeneity via Dirichlet Diffusion Trees. The first part focuses on developing approximate Bayesian methods in functional mixed models to estimate parameters and detect significant regions. Two datasets demonstrate the effectiveness of the proposed method: a mass spectrometry dataset from a cancer study and a neuroimaging dataset from an Alzheimer's disease study. The second part focuses on modeling data heterogeneity via Dirichlet Diffusion Trees. The method helps uncover the underlying hierarchical tree structures and estimate systematic differences between groups of samples. We demonstrate the effectiveness of the method through brain tumor imaging data.
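As a loose, hypothetical illustration of the basis-space compression the abstract mentions (not the VFMM itself), the sketch below projects observed curves onto a cosine basis, works in the reduced coefficient space, and reconstructs; the basis choice and all dimensions are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dct, idct

def to_basis(curves, n_coeffs):
    """Project each curve onto a cosine basis and keep the first
    n_coeffs coefficients -- the 'basis space' compression that
    makes functional models computationally tractable."""
    coeffs = dct(curves, axis=1, norm="ortho")
    return coeffs[:, :n_coeffs]

def from_basis(coeffs, n_points):
    """Map basis coefficients back to curves on the original grid."""
    full = np.zeros((coeffs.shape[0], n_points))
    full[:, :coeffs.shape[1]] = coeffs
    return idct(full, axis=1, norm="ortho")

# 100 noisy curves sampled at 256 points, compressed to 20 coefficients
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
curves = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(100, 256))
coeffs = to_basis(curves, 20)       # model/test in this 100 x 20 space
smoothed = from_basis(coeffs, 256)  # reconstruct after inference
```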
276

Robust Online Trajectory Prediction for Non-cooperative Small Unmanned Aerial Vehicles

Badve, Prathamesh Mahesh 21 January 2022 (has links)
In recent years, unmanned aerial vehicles (UAVs) have seen a boost in their applications in civilian areas like aerial photography, agriculture, and communication. An increasing research effort is being devoted to developing sophisticated trajectory prediction methods for UAVs for collision detection and trajectory planning. Existing techniques suffer from problems such as inadequate uncertainty quantification of predicted trajectories. This work adopts particle filters together with the Löwner-John ellipsoid to approximate the highest posterior density region for trajectory prediction and uncertainty quantification. The particle filter is tuned and tested on real-world and simulated data sets and compared with the Kalman filter. A parallel computing approach for the particle filter is further proposed. This parallel implementation makes the particle filter faster and more suitable for real-time online applications. / Master of Science / In recent years, unmanned aerial vehicles (UAVs) have seen a boost in their applications in civilian areas like aerial photography, agriculture, and communication. Over the coming years, the number of UAVs will increase rapidly. As a result, the risk of mid-air collisions grows, leading to property damage and possible loss of life if a UAV collides with manned aircraft. An increasing research effort has been made to develop sophisticated trajectory prediction methods for UAVs for collision detection and trajectory planning. Existing techniques suffer from problems such as inadequate uncertainty quantification of predicted trajectories. This work adopts particle filters, a Bayesian inference technique, for trajectory prediction. The use of the minimum-volume enclosing ellipsoid to approximate the highest posterior density region for prediction uncertainty quantification is also investigated. The particle filter is tuned and tested on real-world and simulated data sets and compared with the Kalman filter. A parallel computing approach for the particle filter is further proposed. This parallel implementation makes the particle filter faster and more suitable for real-time online applications.
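A minimal bootstrap particle filter sketch, showing the predict-weight-resample cycle the abstract builds on; the 1-D random-walk motion model and noise levels are assumptions for illustration, not the thesis's UAV dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(observations, n_particles=500,
                              process_std=0.5, obs_std=1.0):
    """Minimal bootstrap particle filter for a 1-D random-walk state.

    Each step: propagate particles through the motion model,
    weight them by the observation likelihood, then resample.
    """
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        # Predict: propagate through the (assumed) motion model
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Update: weight by the Gaussian observation likelihood
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample (systematic resampling would reduce variance)
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# Synthetic trajectory observed with noise
true_path = np.cumsum(rng.normal(0, 0.5, 50))
zs = true_path + rng.normal(0, 1.0, 50)
est = bootstrap_particle_filter(zs)
```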
277

Coupled-Cluster Methods for Large Molecular Systems Through Massive Parallelism and Reduced-Scaling Approaches

Peng, Chong 02 May 2018 (has links)
Accurate correlated electronic structure methods involve a significant amount of computation and can only be employed for small molecular systems. For example, the coupled-cluster singles, doubles, and perturbative triples model (CCSD(T)), known as the "gold standard" of quantum chemistry for its accuracy, can usually treat molecules with 20-30 atoms. To extend the reach of accurate correlated electronic structure methods to larger molecular systems, we work in two directions: parallel computing and reduced-cost/scaling approaches. Parallel computing can utilize more computational resources to handle systems that demand more substantial computational efforts. Reduced-cost/scaling approaches, which introduce approximations to existing electronic structure methods, can significantly reduce the amount of computation and the storage requirements. In this work, we introduce a new distributed-memory massively parallel implementation of standard and explicitly correlated (F12) coupled-cluster singles and doubles (CCSD) with canonical O(N^6) computational complexity (C. Peng, J. A. Calvin, F. Pavošević, J. Zhang, and E. F. Valeev, J. Phys. Chem. A 2016, 120, 10231), based on the TiledArray tensor framework. Excellent strong scaling is demonstrated on a multi-core shared-memory computer, a commodity distributed-memory computer, and a national-scale supercomputer. We also present a distributed-memory implementation of the density-fitting (DF) based CCSD(T) method (C. Peng, J. A. Calvin, and E. F. Valeev, in preparation for submission). An improved parallel DF-CCSD is presented utilizing lazy evaluation for tensors with more than two unoccupied indices, which makes the DF-CCSD storage requirements always smaller than those of the non-iterative triples correction (T). Excellent strong scaling is observed on both shared-memory and distributed-memory computers equipped with conventional Intel Xeon processors and Intel Xeon Phi (Knights Landing) processors. With the new implementation, CCSD(T) energies can be evaluated for systems containing 200 electrons and 1000 basis functions in a few days using a small commodity cluster, with even more massive computations possible on leadership-class computing resources. The inclusion of the F12 correction makes the CCSD(T) method converge to the basis set limit much more rapidly. The large-scale parallel explicitly correlated coupled-cluster program makes accurate estimation of the coupled-cluster basis set limit routine for molecules with 20 or more atoms. Thus, it can be used to rigorously test the emerging reduced-scaling coupled-cluster approaches. Moreover, we extend the pair natural orbital (PNO) approach to excited states through the equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) method (C. Peng, M. C. Clement, and E. F. Valeev, submitted). We simulate the PNO-EOM-CCSD method using an existing massively parallel canonical EOM-CCSD program. We propose the use of state-averaged PNOs, generated from the average of the pair densities of the excited states, to span the PNO space of all the excited states. The doubles amplitudes from the CIS(D) method are used to compute the state-averaged pair density of the excited states. The issue of incorrect states entering the state-averaged pair density, caused by a reordering of excited states by energy between CIS(D) and EOM-CCSD, is resolved by simply computing more states than desired. We find that with a truncation threshold of 10^-7, the truncation error in the excitation energy is already below 0.02 eV for the systems tested, while the average number of PNOs is reduced to 50-70 per pair. The accuracy of the PNO-EOM-CCSD method on local, Rydberg, and charge-transfer states is also investigated. / Ph. D.
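As a quick worked illustration of why the canonical O(N^6) scaling quoted above confines CCSD to small systems (general scaling arithmetic, not taken from the thesis):

```python
def relative_ccsd_cost(n_ratio: float) -> float:
    """Relative cost of canonical CCSD when the number of basis
    functions N grows by n_ratio, given O(N^6) scaling."""
    return n_ratio ** 6

# Doubling the system roughly doubles N, so cost grows ~64x;
# tripling it grows cost ~729x -- hence the need for parallel
# and reduced-scaling approaches to go beyond a few dozen atoms.
print(relative_ccsd_cost(2.0))  # 64.0
print(relative_ccsd_cost(3.0))  # 729.0
```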
278

Utilizing Hierarchical Clusters in the Design of Effective and Efficient Parallel Simulations of 2-D and 3-D Ising Spin Models

Muthukrishnan, Gayathri 28 May 2004 (has links)
In this work, we design parallel Monte Carlo algorithms for the Ising spin model on a hierarchical cluster. A hierarchical cluster can be considered a cluster of homogeneous nodes partitioned into multiple supernodes, such that communication across the homogeneous clusters is represented by a supernode topological network. We consider different data layouts and provide equations for choosing the best data layout under such a network paradigm. We show that data layouts designed for a homogeneous cluster will not yield results as good as layouts designed for a hierarchical cluster. We derive theoretical results on the performance of the algorithms on a modified version of the LogP model that represents such tiered networking, and present simulation results to analyze the utility of the theoretical design and analysis. Furthermore, we consider the 3-D Ising model and design parallel algorithms for sweep spin selection on both homogeneous and hierarchical clusters. We also discuss the simulation of hierarchical clusters on a homogeneous set of machines, and the efficient implementation of the parallel Ising model on such clusters. / Master of Science
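A minimal serial Metropolis sweep for the 2-D Ising model, as background for the parallel algorithms discussed above; the lattice size and inverse temperature are illustrative assumptions, and the comment about checkerboard partitioning reflects common practice rather than this thesis's specific layouts.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_sweep(spins, beta):
    """One Metropolis sweep over a 2-D Ising lattice with periodic
    boundaries. Parallel versions commonly partition the lattice in
    a checkerboard fashion so that non-interacting sites can be
    updated concurrently across nodes."""
    n = spins.shape[0]
    for i in range(n):
        for j in range(n):
            # Sum of the four nearest neighbours (periodic wrap)
            nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
                  spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
            dE = 2.0 * spins[i, j] * nb  # energy change if flipped
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
    return spins

lattice = rng.choice([-1, 1], size=(32, 32))
for _ in range(100):
    metropolis_sweep(lattice, beta=0.44)  # near the 2-D critical point
```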
279

Constructing Covering Arrays using Parallel Computing and Grid Computing

Avila George, Himer 10 September 2012 (has links)
A good strategy for testing a software component involves generating the whole set of cases that participate in its operation. While testing only individual values may not be enough, exhaustive testing of all possible combinations is not always feasible. An alternative technique to accomplish this goal is called combinatorial testing. Combinatorial testing is a method that can reduce cost and increase the effectiveness of software testing for many applications. It is based on constructing functional test suites of economical size, which provide coverage of the most prevalent configurations. Covering arrays are combinatorial objects that have been applied to the functional testing of software components. The use of covering arrays allows all the interactions of a given size among the input parameters to be tested using the minimum number of test cases. For software testing, the fundamental problem is finding a covering array with the minimum possible number of rows, thus reducing the number of tests, the cost, and the time expended on the software testing process. Because of the importance of constructing (near) optimal covering arrays, much research has been carried out on developing effective methods for constructing them. There are several reported methods for constructing these combinatorial models, among them: (1) algebraic methods, (2) recursive methods, (3) greedy methods, and (4) metaheuristic methods. Metaheuristic methods, particularly the application of simulated annealing, have provided the most accurate results in several instances to date. Simulated annealing is a general-purpose stochastic optimization method that has proved to be an effective tool for approximating globally optimal solutions to many optimization problems. However, one of its major drawbacks is the time it requires to obtain good solutions. In this thesis, we propose the development of an improved simulated annealing algorithm. / Avila George, H. (2012). Constructing Covering Arrays using Parallel Computing and Grid Computing [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17027
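A minimal simulated annealing sketch for a strength-2 covering array, illustrating the cost function (uncovered interactions) and the Boltzmann acceptance rule; the parameters and the single-cell neighborhood move are illustrative assumptions, not the improved algorithm proposed in the thesis.

```python
import itertools
import math
import random

random.seed(0)

def uncovered_pairs(array, v):
    """Count (column-pair, symbol-pair) interactions not yet
    covered -- the cost function the annealer minimizes."""
    k = len(array[0])
    missing = 0
    for c1, c2 in itertools.combinations(range(k), 2):
        seen = {(row[c1], row[c2]) for row in array}
        missing += v * v - len(seen)
    return missing

def anneal_covering_array(n_rows=6, k=4, v=2,
                          t0=1.0, cooling=0.995, steps=20000):
    """Simulated annealing search for a strength-2 covering array."""
    array = [[random.randrange(v) for _ in range(k)] for _ in range(n_rows)]
    cost = uncovered_pairs(array, v)
    temp = t0
    for _ in range(steps):
        if cost == 0:
            break
        # Neighborhood move: re-randomize one cell
        r, c = random.randrange(n_rows), random.randrange(k)
        old = array[r][c]
        array[r][c] = random.randrange(v)
        new_cost = uncovered_pairs(array, v)
        # Accept improving moves; accept worse ones with Boltzmann probability
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            array[r][c] = old  # reject: undo the move
        temp *= cooling
    return array, cost

ca, remaining = anneal_covering_array()
print(remaining)  # 0 means every column pair covers all symbol pairs
```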
280

Study, Modelling and Implementation of the Level Set Method Used in Micromachining Processes

Montoliu Álvaro, Carles 09 December 2015 (has links)
The main topic of the present thesis is the improvement of fabrication process simulation by means of the Level Set (LS) method. The LS is a mathematical approach used for evolving fronts according to a motion defined by certain laws. The main advantage of this method is that the front is embedded inside a higher-dimensional function, such that updating this function instead of the front itself enables trivial handling of complex situations like the splitting or coalescing of multiple fronts. In particular, this document is focused on wet and dry etching processes, which are widely used in the micromachining of Micro-Electro-Mechanical Systems (MEMS). A MEMS is a system formed by mechanical elements, sensors, actuators, and electronics. These devices have gained a lot of popularity in recent decades and are employed in several industry fields such as automotive security, motion sensors, and smartphones. The wet etching process consists of selectively removing substrate material (e.g. silicon or quartz) with a liquid solution in order to form a certain structure. This is a complex process, since the result of a particular experiment depends on many factors, such as the crystallographic structure of the material, the etchant solution, or its temperature. Similarly, dry etching processes are used for removing substrate material; however, gaseous substances are employed in the etching stage. In both cases, a simulator capable of accurately predicting the result of a certain experiment would imply a significant reduction of design time and costs. A few LS-based wet etching simulators exist, but they have many limitations and have never been validated against real experiments. On the other hand, atomistic models are currently considered the most advanced simulators. Nevertheless, atomistic simulators present some drawbacks, like the requirement of a prior calibration process in order to use the experimental data. Additionally, a lot of effort must be invested to create an atomistic model for simulating the etching process of substrate materials with different atomistic structures. Furthermore, the final result is always formed by unconnected atoms, which makes proper visualization and understanding of complex structures difficult; thus, an additional visualization technique must usually be employed. For their part, dry etching simulators usually employ an explicit representation technique to evolve the surface being etched according to etching models. This strategy can produce unrealistic results, especially in complex situations like the interaction of multiple surfaces. Although some models that use implicit representation have been published, they have never been directly compared with real experiments, and the computational performance of the implementations has not been properly analysed. These limitations are addressed in the various chapters of the present thesis, producing the following contributions: - An efficient LS implementation that improves the visual representation of atomistic wet etching simulators. This implementation produces continuous surfaces from atomistic results. - Definition of a new LS-based model which can directly use experimental data for many etchant solutions (such as KOH, TMAH, NH4HF2, and IPA and Triton additives) to simulate wet etching processes of various substrate materials (e.g. silicon and quartz). - Validation of the developed wet etching simulator by comparing it to experimental and atomistic simulator results.
- Implementation of an LS-based tool which evolves the surface being etched according to dry etching models, in order to enable the simulation of complex processes. This implementation is also validated experimentally. - Acceleration of the developed wet and dry etching simulators using Graphics Processing Units (GPUs). / Montoliu Álvaro, C. (2015). Study, Modelling and Implementation of the Level Set Method Used in Micromachining Processes [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/58609
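As a generic illustration of the LS front evolution described above (not the thesis's etching models), a first-order upwind sketch that expands a circular front at constant normal speed; the grid size, time step, and speed are assumptions.

```python
import numpy as np

def evolve_level_set(phi, speed, dx, dt, steps):
    """Advance a level set function under a constant normal speed F,
    solving phi_t + F * |grad phi| = 0 with a first-order upwind
    (Godunov) scheme. The zero level set of phi is the front."""
    for _ in range(steps):
        # One-sided differences in each direction (periodic via roll)
        dxm = (phi - np.roll(phi, 1, axis=0)) / dx
        dxp = (np.roll(phi, -1, axis=0) - phi) / dx
        dym = (phi - np.roll(phi, 1, axis=1)) / dx
        dyp = (np.roll(phi, -1, axis=1) - phi) / dx
        # Godunov upwind gradient magnitude, valid for F > 0
        grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                       np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
        phi = phi - dt * speed * grad
    return phi

# Signed distance to a circle: the initial front
n, dx = 128, 1.0 / 128
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi0 = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.2

# The front (phi == 0) expands outward at unit speed
phi = evolve_level_set(phi0, speed=1.0, dx=dx, dt=0.5 * dx, steps=50)
```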
