411

TIME-OF-FLIGHT NEUTRON CT FOR ISOTOPE DENSITY RECONSTRUCTION AND CONE-BEAM CT SEPARABLE MODELS

Thilo Balke (15348532) 26 April 2023 (has links)
There is a great need for accurate image reconstruction in the context of non-destructive evaluation. Major challenges include the ever-increasing necessity for high-resolution reconstruction with limited scan and reconstruction time, and thus fewer and noisier measurements. In this thesis, we leverage advanced Bayesian modeling of the physical measurement process and probabilistic prior information on the image distribution in order to yield higher image quality despite limited measurement time. We demonstrate efficient computational performance in several ways, through the exploitation of more efficient memory access, optimized parametrization of the system model, and multi-pixel parallelization. We demonstrate that by building high-fidelity forward models we can generate quantitatively reliable reconstructions despite very limited measurement data.

In the first chapter, we introduce an algorithm for estimating isotopic densities from neutron time-of-flight imaging data. Energy-resolved neutron imaging (ERNI) is an advanced neutron radiography technique capable of non-destructively extracting spatial isotopic information within a given material. Energy-dependent radiography image sequences can be created by utilizing neutron time-of-flight techniques. In combination with uniquely characteristic isotopic neutron cross-section spectra, isotopic areal densities can be determined on a per-pixel basis, thus resulting in a set of areal density images for each isotope present in the sample. By performing ERNI measurements over several rotational views, an isotope-decomposed 3D computed tomography is possible. We demonstrate a method involving a robust and automated background estimation based on a linear programming formulation. The extremely high noise due to low-count measurements is overcome using a sparse coding approach. It allows for a significant computation time improvement, from weeks to a few hours compared to existing neutron evaluation tools, enabling at the present stage a semi-quantitative, user-friendly routine application.

In the second chapter, we introduce the TRINIDI algorithm, a more refined algorithm for the same problem. Accurate reconstruction of 2D and 3D isotope densities is a desired capability with great potential impact in applications such as evaluation and development of next-generation nuclear fuels. Neutron time-of-flight (TOF) resonance imaging offers a potential approach by exploiting the characteristic neutron absorption spectra of each isotope. However, it is a major challenge to compute quantitatively accurate images due to a variety of confounding effects such as severe Poisson noise, background scatter, beam non-uniformity, absorption non-linearity, and extended source pulse duration. We present the TRINIDI algorithm, which is based on a two-step process in which we first estimate the neutron flux and background counts, and then reconstruct the areal densities of each isotope and pixel. Both components are based on the inversion of a forward model that accounts for the highly non-linear absorption, energy-dependent emission profile, and Poisson noise, while also modeling the substantial spatio-temporal variation of the background and flux. To do this, we formulate the non-linear inverse problem as two optimization problems that are solved in sequence. We demonstrate on both synthetic and measured data that TRINIDI can reconstruct quantitatively accurate 2D views of isotopic areal density that can then be reconstructed into quantitatively accurate 3D volumes of isotopic volumetric density.

In the third chapter, we introduce a separable forward model for cone-beam computed tomography (CT) that enables efficient computation of a Bayesian model-based reconstruction. Cone-beam CT is an attractive tool for many kinds of non-destructive evaluation (NDE). Model-based iterative reconstruction (MBIR) has been shown to improve reconstruction quality and reduce scan time. However, the computational burden and storage of the system matrix are challenging. We present a separable representation of the system matrix that can be completely stored in memory and accessed cache-efficiently. This is done by quantizing the voxel position for one of the separable subproblems. A parallelized algorithm, which we refer to as the zipline update, is presented that speeds up the computation of the solution by about 50 to 100 times on 20 cores by updating groups of voxels together. The quality of the reconstruction and algorithmic scalability are demonstrated on real cone-beam CT data from an NDE application. We show that the reconstruction can be done from a sparse set of projection views while reducing artifacts visible in the conventional filtered back projection (FBP) reconstruction. We present qualitative results using a Markov random field (MRF) prior and a Plug-and-Play denoiser.
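A minimal sketch of the kind of per-pixel forward model this abstract describes: transmission is an exponential of isotopic areal densities weighted by energy-dependent cross sections, and the densities are recovered by inverting that relation. This is an illustrative, linearized Python version assuming a simple non-negative least-squares fit; the thesis instead inverts a full non-linear Poisson model with flux and background estimation, and the array names here are assumptions rather than the author's implementation.

import numpy as np
from scipy.optimize import nnls

def transmission(areal_density, sigma):
    """Beer-Lambert-type transmission spectrum for one pixel.

    areal_density: (num_isotopes,) areal densities z_i
    sigma:         (num_isotopes, num_energies) cross sections sigma_i(E)
    """
    return np.exp(-areal_density @ sigma)

def fit_areal_density(counts, flux, sigma):
    """Estimate per-pixel areal densities from a measured TOF spectrum.

    Simplified linearized fit: -log(counts/flux) ~ sigma^T z, solved with
    non-negative least squares (the thesis uses a more complete model).
    """
    y = -np.log(np.clip(counts / flux, 1e-12, None))  # attenuation spectrum
    z, _ = nnls(sigma.T, y)                           # non-negative areal densities
    return z

For a pixel with a measured count spectrum, an estimated flux spectrum, and a cross-section table, fit_areal_density would return one areal density per isotope, which is the per-pixel quantity the 2D views above are built from.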
412

A Parallel Computing Approach for Identifying Retinitis Pigmentosa Modifiers in Drosophila Using Eye Size and Gene Expression Data

Chawin Metah (15361576) 29 April 2023 (has links)
For many years, researchers have developed ways to diagnose degenerative disease in the retina by utilizing multiple gene analysis techniques. Retinitis pigmentosa (RP) can cause either partial or total blindness in adults. For that reason, it is crucial to find a way to pinpoint the causes in order to develop a proper medication or treatment. One of the common methods is genome-wide analysis (GWA). However, it cannot fully identify the genes that are indirectly related to the changes in eye size. In this research, RNA sequencing (RNA-seq) analysis is used to link the phenotype to the genotype, creating a pool of candidate genes that might be associated with RP. This will support future research in finding a therapy or treatment to cure such diseases in human adults.

Using the Drosophila Genetic Reference Panel (DGRP), a genetic reference panel of fruit fly strains, two types of datasets are involved in this analysis: eye-size data and gene expression data with two replicates for each strain. This allows us to create a phenotype-genotype map. In other words, we are trying to trace the genes (genotype) that exhibit the RP disease guided by comparing their eye size (phenotype). The basic idea of the algorithm is to discover the best replicate combination that maximizes the correlation between gene expression and eye size. Since there are 2^N possible replicate combinations, where N is the number of selected strains, the original sequential implementation was computationally intensive.

The original idea of finding the best replicate combination was proposed by Nguyen et al. (2022). In this research, however, we restructured the algorithms to distribute the tasks of finding the best replicate combination and run them in parallel. The implementation was done in the R programming language, utilizing the doParallel and foreach packages, and is able to execute on a multicore machine. The program was tested on both a laptop and a server, and the experimental results showed an outstanding improvement in execution time. For instance, using 32 processes, the results showed up to a 95% reduction in execution time compared with the sequential version of the code. Furthermore, with the increased computational capability, we were able to explore and analyze more extreme eye-size lines using three eye-size datasets representing different phenotype models. This further improved the accuracy of the results, where the top candidate genes from all cases showed a connection to RP.
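A rough sketch of the brute-force search this abstract describes: enumerate all 2^N replicate combinations and keep the one whose chosen expression values correlate most strongly with eye size. It is written here in Python with multiprocessing purely for illustration, even though the thesis implementation is in R with doParallel/foreach; the data layout and function names are hypothetical.

import itertools
import numpy as np
from multiprocessing import Pool

def correlation_for_combo(args):
    """Correlation between eye size and one replicate combination."""
    combo, expr, eye_size = args
    # expr[strain, replicate]: pick replicate 0 or 1 for each strain.
    chosen = expr[np.arange(len(combo)), list(combo)]
    return combo, np.corrcoef(chosen, eye_size)[0, 1]

def best_replicate_combination(expr, eye_size, processes=4):
    """Exhaustively search all 2^N replicate combinations in parallel."""
    n = expr.shape[0]
    combos = itertools.product((0, 1), repeat=n)
    with Pool(processes) as pool:
        results = pool.map(correlation_for_combo,
                           ((c, expr, eye_size) for c in combos))
    # Combination with the strongest (absolute) correlation wins.
    return max(results, key=lambda r: abs(r[1]))

In practice the search is repeated per gene and the candidate pool is ranked by the resulting correlations; distributing the combinations across worker processes is what produces the reported speedups.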
413

Visual Analytics of Big Data from Molecular Dynamics Simulation

Rajendran, Catherine Jenifer Rajam 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Protein malfunction can cause human diseases, which makes the protein a target in the process of drug discovery. In-depth knowledge of how a protein functions can contribute widely to the understanding of the mechanism of these diseases. Protein functions are determined by protein structures and their dynamic properties. Protein dynamics refers to the constant physical movement of atoms in a protein, which may result in transitions between different conformational states of the protein. These conformational transitions are critically important for proteins to function. Understanding protein dynamics can help to understand and interfere with the conformational states and transitions, and thus with the function of the protein. If we can understand the mechanism of conformational transitions of proteins, we can design molecules to regulate this process and regulate protein functions for new drug discovery. Protein dynamics can be simulated by molecular dynamics (MD) simulations. The MD simulation data generated are spatio-temporal and therefore very high-dimensional. To analyze the data, distinguishing various atomic interactions within a protein by interpreting their 3D coordinate values plays a significant role. Since the data are enormous, the essential step is to find ways to interpret them by generating more efficient algorithms to reduce the dimensionality and developing user-friendly visualization tools to find patterns and trends, which are not usually attainable by traditional methods of data processing. Because of the typically allosteric, long-range nature of the interactions that lead to large conformational transitions, pinpointing the underlying forces and pathways responsible for the global conformational transition at the atomic level is very challenging. To address these problems, various analytical techniques are performed on the simulation data to better understand the mechanism of protein dynamics at the atomic level through a new program called Probing Long-distance Interactions by Tapping into Paired-Distances (PLITIP), which contains a set of new tools based on the analysis of paired distances; these remove the interference of the translation and rotation of the protein itself and can therefore capture the absolute changes within the protein. Firstly, we developed a tool called Decomposition of Paired Distances (DPD). This tool generates a distance matrix of all paired residues from our simulation data. This paired distance matrix is therefore not subject to the interference of the translation or rotation of the protein and can capture the absolute changes within the protein. This matrix is then decomposed by DPD using Principal Component Analysis (PCA) to reduce dimensionality and to capture the largest structural variation. To showcase how DPD works, we analyzed two protein systems, HIV-1 protease and 14-3-3σ, both of which display large structural changes and conformational transitions in their MD simulation trajectories. The largest structural variation and conformational transition were captured by the first principal component in both cases. In addition, structural clustering and ranking of representative frames by their PC1 values revealed the long-distance nature of the conformational transition and identified the key candidate regions that might be responsible for the large conformational transitions.
Secondly, to facilitate further analysis and identification of the long-distance path, a tool called Pearson Coefficient Spiral (PCP) was developed that generates and visualizes Pearson coefficients to measure the linear correlation between any two sets of residue pairs. PCP allows users to fix one residue pair and examine the correlation of its change with other residue pairs. Thirdly, a set of visualization tools that generate paired atomic distances for the shortlisted candidate residues and capture significant interactions among them was developed. The first tool is the Residue Interaction Network Graph for Paired Atomic Distances (NG-PAD), which not only generates paired atomic distances for the shortlisted candidate residues, but also displays significant interactions in a network graph for convenient visualization. Second, the Chord Diagram for Interaction Mapping (CD-IP) was developed to map the interactions to protein secondary structural elements and to further narrow down important interactions. Third, Distance Plotting for Direct Comparison (DP-DC) plots any two paired distances at the user's choice, either at the residue or atomic level, to facilitate identification of similar or opposite patterns of distance change along the simulation time. All the above tools of PLITIP enabled us to identify critical residues contributing to the large conformational transitions in both HIV-1 protease and 14-3-3σ proteins. Besides the above major project, a side project of developing tools to study protein pseudo-symmetry is also reported. It has been proposed that symmetry provides protein stability, opportunities for allosteric regulation, and even functionality. This tool helps us to answer the questions of why there is a deviation from perfect symmetry in proteins and how to quantify it.
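A compact sketch of the paired-distance idea behind the DPD tool described above: build a frames-by-pairs matrix of residue-residue distances, which is invariant to global translation and rotation, and run PCA on it so that the first principal component captures the largest structural variation. This is an illustrative NumPy/SciPy/scikit-learn version under assumed data shapes, not the PLITIP code itself.

import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

def paired_distance_features(trajectory):
    """Build a (frames x pairs) matrix of residue-residue distances.

    trajectory: (num_frames, num_residues, 3) coordinates, e.g. C-alpha
    positions extracted from an MD trajectory. Pairwise distances are
    invariant to global translation/rotation of the protein.
    """
    return np.array([pdist(frame) for frame in trajectory])

def decompose(trajectory, n_components=2):
    """PCA on paired distances; PC1 captures the largest structural variation."""
    features = paired_distance_features(trajectory)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(features)        # per-frame projections
    return scores, pca.explained_variance_ratio_

Ranking frames by their PC1 score, as the abstract describes, then amounts to sorting the first column of the returned projections.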
414

Smith-Waterman Sequence Alignment For Massively Parallel High-Performance Computing Architectures

Steinfadt, Shannon Irene 19 April 2010 (has links)
No description available.
415

COMPARISON OF THE PERFORMANCE OF NVIDIA ACCELERATORS WITH SIMD AND ASSOCIATIVE PROCESSORS ON REAL-TIME APPLICATIONS

Shaker, Alfred M. 27 July 2017 (has links)
No description available.
416

An Efficient Method for Computing Excited State Properties of Extended Molecular Aggregates Based on an Ab-Initio Exciton Model

Morrison, Adrian Franklin January 2017 (has links)
No description available.
417

Parallel Solution of the Subset-sum Problem: An Empirical Study

Bokhari, Saniyah S. 21 July 2011 (has links)
No description available.
418

CFD-DEM Modeling and Parallel Implementation of Three-Dimensional Non-Spherical Particulate Systems

Srinivasan, Vivek 18 July 2019 (has links)
Particulate systems in practical applications such as biomass combustion, blood cellular systems and granular particles in fluidized beds, have often been computationally represented using spherical surfaces, even though the majority of particles in archetypal fluid-solid systems are non-spherical. While spherical particles are more cost-effective to simulate, notable deficiencies of these implementations are their substantial inaccuracies in predicting the dynamics of particle mixtures. Alternatively, modeling dense fluid-particulate systems using non-spherical particles involves increased complexity, with computational cost manifesting as the biggest bottleneck. However, with recent advancements in computer hardware, simulations of three-dimensional particulate systems using irregular shaped particles have garnered significant interest. In this research, a novel Discrete Element Method (DEM) model that incorporates geometry definition, collision detection, and post-collision kinematics has been developed to accurately simulate non-spherical particulate systems. Superellipsoids, which account for 80% of particles commonly found in nature, are used to represent non-spherical shapes. Collisions between these particles are processed using a distance function computation carried out with respect to their surfaces. An event - driven model and a time-driven model have been employed in the current framework to resolve collisions. The collision model's influence on non–spherical particle dynamics is verified by observing the conservation of momentum and total kinetic energy. Furthermore, the non-spherical DEM model is coupled with an in-house fluid flow solver (GenIDLEST). The combined CFD-DEM model's results are validated by comparing to experimental measurements in a fluidized bed. The parallel scalability of the non-spherical DEM model is evaluated in terms of its efficiency and speedup. Major factors affecting wall clock time of simulations are analyzed and an estimate of the model's dependency on these factors is documented. The developed framework allows for a wide range of particle geometries to be simulated in GenIDLEST. / Master of Science / CFD – DEM (Discrete Element Method) is a technique of coupling fluid flow solvers with granular solid particles. CFD – DEM simulations are beneficial in recreating pragmatic applications such as blood cellular flows, fluidized beds and pharmaceutics. Up until recently, particles in these flows have been modeled as spheres as the generation of particle geometry and collision detection algorithms are straightforward. However, in real – life occurrences, most particles are irregular in shape, and approximating them as spheres in computational works leads to a substantial loss of accuracy. On the other hand, non – spherical particles are more complex to generate. When these particles are in motion, they collide and exhibit complex trajectories. Majority of the wall clock time is spent in resolving collisions between these non – spherical particles. Hence, generic algorithms to detect and resolve collisions have to be incorporated. This primary focus of this research work is to develop collision detection and resolution algorithms for non – spherical particles. Collisions are detected using inherent geometrical properties of the class of particles used. Two popular models (event-driven and time-driven) are implemented and utilized to update the trajectories of particles. 
These models are coupled with an in – house fluid solver (GenIDLEST) and the functioning of the DEM model is validated with experimental results from previous research works. Also, since the computational effort required is higher in the case of non – spherical particulate simulations, an estimate of the scalability of the problem and factors influencing time to simulations are presented.
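To illustrate the shape representation mentioned above, here is one common superellipsoid inside-outside function in Python. It only shows how a point can be classified against a superellipsoid surface in the particle's body frame; the thesis's actual collision detection is a distance-function computation between two surfaces, which is not reproduced here, and the parameter names are illustrative.

import numpy as np

def superellipsoid_value(point, semi_axes, eps1, eps2):
    """Inside-outside function of a superellipsoid centred at the origin.

    Returns < 1 for points inside the surface, 1 on it, > 1 outside.
    semi_axes = (a, b, c); eps1 and eps2 control the "blockiness" of the
    shape (eps1 = eps2 = 1 recovers an ordinary ellipsoid).
    """
    x, y, z = np.abs(point) / np.asarray(semi_axes)
    return ((x ** (2.0 / eps2) + y ** (2.0 / eps2)) ** (eps2 / eps1)
            + z ** (2.0 / eps1))

def point_inside_particle(p_world, center, rotation, semi_axes, eps1, eps2):
    """Rough containment check: transform a candidate point into the
    particle's body frame and evaluate the inside-outside function."""
    p_body = rotation.T @ (np.asarray(p_world) - np.asarray(center))
    return superellipsoid_value(p_body, semi_axes, eps1, eps2) < 1.0

A pairwise contact search built on such a function would sample or optimize over candidate points near two particles' surfaces, which is roughly the role the distance-function computation plays in the framework.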
419

Multidisciplinary Optimization and Damage Tolerance of Stiffened Structures

Jrad, Mohamed 13 May 2015 (has links)
The structural optimization of a cantilever aircraft wing with curvilinear spars, ribs, and stiffeners is described. The design concept of reinforcing the wing structure using curvilinear stiffening members has been explored due to the development of novel manufacturing technologies like electron beam freeform fabrication (EBF3). For the optimization of a complex wing, a common strategy is to divide the optimization procedure into two subsystems: the global wing optimization, which optimizes the geometry of spars, ribs, and wing skins; and the local panel optimization, which optimizes the design variables of local panels bordered by spars and ribs. The stiffeners are placed on the local panels to increase the stiffness and buckling resistance. The panel thickness and the size and shape of stiffeners are optimized to minimize the structural weight. The geometry of spars and ribs greatly influences the design of stiffened panels. During the local panel optimization, the stress information is taken from the global model as a displacement boundary condition on the panel edges using the so-called "Global-Local Approach". The aircraft design is characterized by multiple disciplines: structures, aeroelasticity, and buckling. Particle swarm optimization is used in the integration of the global/local optimization to optimize the SpaRibs. The interaction between the global wing optimization and the local panel optimization is usually computationally expensive. A parallel computing framework has been developed in Python to reduce the CPU time. The license cycle-check method and memory self-adjustment method are two approaches that have been applied in the parallel framework in order to optimize the use of resources by reducing the license and memory limitations and making the code robust. The integrated global-local optimization approach has been applied to the subsonic NASA Common Research Model (CRM) wing, which demonstrates that the methodology scales to medium-fidelity FEM analysis. Both the global wing design variables and the local panel design variables are optimized to minimize the wing weight at an acceptable computational cost. The structural weight of the wing has therefore been reduced by 40%, and the parallel implementation allowed a reduction in CPU time of 89%. The aforementioned Global-Local Approach is investigated and applied to a composite panel with a crack at its center. Because of composite laminates' heterogeneity, an accurate analysis of these structures requires substantial computation time and storage space. In the presence of structural discontinuities like cracks, delaminations, cutouts, etc., the computational complexity increases significantly. A possible alternative to reduce the computational complexity is the global-local analysis, which involves an approximate analysis of the whole structure followed by a detailed analysis of a significantly smaller region of interest. We investigate here the performance of the global-local scheme based on the finite element method by comparing it to the traditional finite element method. To do so, we conduct a 2D structural analysis of a composite square plate, with a thin rectangular notch at its center, subjected to a uniform transverse pressure, using the commercial software ABAQUS. We show that the presence of the thin notch affects only the local response of the structure and that the size of the affected area depends on the notch length. We also investigate the effect of the notch shape on the response of the structure.
Stiffeners attached to composite panels may significantly increase the overall buckling load of the resultant stiffened structure. Buckling analysis of a composite panel with attached longitudinal stiffeners under compressive loads is performed using the Ritz method with trigonometric functions. Results are then compared to those from ABAQUS FEA for different shell elements. The cases of composite panels with one, two, and three stiffeners are investigated. The effect of the distance between the stiffeners on the buckling load is also studied. The variation of the buckling load and buckling modes with the stiffeners' height is investigated. It is shown that there is an optimum value of stiffener height beyond which the structural response of the stiffened panel is not improved and the buckling load does not increase. Furthermore, there exist different critical values of stiffener height at which the buckling mode of the structure changes. Next, buckling analysis of a composite panel with two straight stiffeners and a crack at the center is performed. Finally, buckling analysis of a composite panel with curvilinear stiffeners and a crack at the center is also conducted. ABAQUS is used for these two examples, and results show that panels with a larger crack have a reduced buckling load. It is also shown that the buckling load decreases slightly when using higher-order 2D shell FEM elements. A damage tolerance framework, EBF3PanelOpt, has been developed to design and analyze curvilinearly stiffened panels. The framework is written in the scripting language Python and interacts with the commercial software MSC. Patran (for geometry and mesh creation), MSC. Nastran (for finite element analysis), and MSC. Marc (for damage tolerance analysis). The crack location is set to the location of the maximum value of the major principal stress, while its orientation is set normal to the major principal axis direction. The effective stress intensity factor is calculated using the Virtual Crack Closure Technique and compared to the fracture toughness of the material in order to decide whether the crack will expand or not. The ratio of these two quantities is used as a constraint, along with the buckling factor, Kreisselmeier and Steinhauser criteria, and crippling factor. The EBF3PanelOpt framework is integrated within a two-step Particle Swarm Optimization in order to minimize the weight of the panel while satisfying the aforementioned constraints and using all the shape and thickness parameters as design variables. The result of the PSO is then used as an initial guess for the Gradient Based Optimization using only the thickness parameters as design variables. The GBO is applied using the commercial software VisualDOC. / Ph. D.
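A toy Python sketch of the two-step optimization strategy this abstract outlines: a basic particle swarm search over the design variables followed by gradient-based refinement of the best design. It is not the EBF3PanelOpt/VisualDOC framework; the objective, bounds, and swarm coefficients are placeholders, and constraint handling (buckling, crack-growth ratio, crippling) is omitted.

import numpy as np
from scipy.optimize import minimize

def pso_then_gradient(objective, bounds, n_particles=20, iters=50, seed=0):
    """Step 1: simple particle swarm search; Step 2: gradient-based refinement."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    p_best = x.copy()
    p_val = np.array([objective(xi) for xi in x])
    g_best = p_best[np.argmin(p_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(xi) for xi in x])
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[np.argmin(p_val)]
    # Refine the swarm optimum with a local gradient-based optimizer.
    return minimize(objective, g_best, bounds=list(zip(lo, hi)))

In the thesis the first step searches over both shape and thickness parameters and the second step over thickness only; the sketch above keeps a single design vector for brevity.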
420

Ensemble Kalman filtering for hydraulic conductivity characterization: Parallelization and non-Gaussianity

Xu, Teng 03 November 2014 (has links)
Thesis by compendium / The ensemble Kalman filter (EnKF) is nowadays recognized as an excellent inverse method for hydraulic conductivity characterization using transient piezometric head data, and it has been shown that the EnKF is computationally efficient and capable of handling large fields compared to other inverse methods. However, a large ensemble size is needed to obtain a high-quality estimation (Chen and Zhang, 2006), which implies a large amount of computation time. Parallel computing is an efficient alternative for reducing the computation time. Besides, although the EnKF accounts well for the non-linearities of the state equation, it fails when dealing with non-Gaussian distributed fields. Recently, many methods have been developed that try to adapt the EnKF to non-Gaussian distributions (detailed in the History and Present State chapter). Zhou et al. (2011, 2012) proposed a Normal-Score Ensemble Kalman Filter (NS-EnKF) to characterize non-Gaussian distributed conductivity fields, and showed that transient piezometric head data were enough for hydraulic conductivity characterization if a training image for the hydraulic conductivity was available. In this work, we show the performance of the updated ensemble of realizations in the characterization of a non-Gaussian reference field when no such training image is available but enough transient piezometric head information is. In the end, we introduce a new method for parameterizing geostatistical models, coupled with the NS-EnKF, for the characterization of a heterogeneous non-Gaussian hydraulic conductivity field. This doctoral thesis therefore comprises three parts: 1. Parallelized Ensemble Kalman Filter for Hydraulic Conductivity Characterization. 2. The Power of Transient Piezometric Head Data in Inverse Modeling: An Application of the Localized Normal-score EnKF with Covariance Inflation in a Heterogeneous Bimodal Hydraulic Conductivity Field. 3. Parameterizing Geostatistical Models Coupled with the NS-EnKF for Heterogeneous Bimodal Hydraulic Conductivity Characterization. / Xu, T. (2014). Ensemble Kalman filtering for hydraulic conductivity characterization: Parallelization and non-Gaussianity [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/43769 / Compendium
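For reference, a minimal stochastic EnKF analysis step (perturbed observations) in Python, the update at the core of the method this abstract builds on. The thesis's NS-EnKF additionally applies a normal-score transform, localization, and covariance inflation, none of which are shown here; the function and variable names are illustrative.

import numpy as np

def enkf_update(ensemble, observations, obs_operator, obs_error_cov, rng=None):
    """One stochastic EnKF analysis step with perturbed observations.

    ensemble:      (n_param, n_real) matrix, e.g. log-conductivity realizations
    observations:  (n_obs,) vector of transient piezometric heads
    obs_operator:  callable mapping one realization to its predicted heads
    obs_error_cov: (n_obs, n_obs) observation error covariance R
    """
    rng = np.random.default_rng() if rng is None else rng
    n_real = ensemble.shape[1]
    n_obs = len(observations)
    predictions = np.column_stack([obs_operator(ensemble[:, j]) for j in range(n_real)])
    X = ensemble - ensemble.mean(axis=1, keepdims=True)        # parameter anomalies
    Y = predictions - predictions.mean(axis=1, keepdims=True)  # prediction anomalies
    P_xy = X @ Y.T / (n_real - 1)
    P_yy = Y @ Y.T / (n_real - 1) + obs_error_cov
    K = P_xy @ np.linalg.solve(P_yy, np.eye(n_obs))            # Kalman gain
    perturbed = observations[:, None] + rng.multivariate_normal(
        np.zeros(n_obs), obs_error_cov, size=n_real).T
    return ensemble + K @ (perturbed - predictions)

Because each realization's forward run (the call to obs_operator) is independent, this step parallelizes naturally across realizations, which is the point of the first part of the thesis.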
