11

Analysis of GPU-based convolution for acoustic wave propagation modeling with finite differences: Fortran to CUDA-C step-by-step

Sadahiro, Makoto 04 September 2014 (has links)
By projecting observed microseismic data backward in time to the moment fracturing occurred, it is possible to locate the fracture events in space, assuming a correct velocity model. To achieve this in near real time, a robust computational system to handle backward propagation, or Reverse Time Migration (RTM), is required; many different velocity models can then be tested on each run of the RTM. We investigate the use of a Graphics Processing Unit (GPU) based system using Compute Unified Device Architecture for C (CUDA-C) as the programming language. Our preliminary results show a large improvement in run time over conventional Central Processing Unit (CPU) computing with Fortran. Considerable room for improvement still remains.
12

Modelling and Control of Batch Processes

Aumi, Siam 04 1900 (has links)
<p>This thesis considers the problems of modelling and control of batch processes, a class of finite-duration chemical processes characterized by the absence of equilibrium conditions and by nonlinear, time-varying dynamics over a wide range of operating conditions. In contrast to continuous processes, the control objective in batch processes is to achieve a non-equilibrium desired end-point, or product quality, by the batch termination time. These distinguishing features complicate the control problem and call for dedicated modelling and control tools. In the initial phase of this research, a predictive controller based on the novel concept of reverse-time reachability regions (RTRRs) is developed. RTRRs are defined as the set of states from which the process can be steered inside a desired end-point neighbourhood by batch termination, subject to input constraints and model uncertainties; an algorithm is developed to characterize these sets offline at each sampling instant, and the characterizations subsequently play an integral role in the control design. A key feature of the resulting controller is that it requires online computation of only the immediate control action while guaranteeing reachability to the desired end-point neighbourhood, rendering the control problem efficiently solvable even when the nonlinear process model is used. Moreover, the use of RTRRs and a one-step-ahead control policy embeds important fault-tolerant characteristics into the controller. Next, we address the unavailability of reliable and computationally manageable first-principles process models by developing a new data-based modelling approach, in which local linear models (identified via latent-variable regression techniques) are combined with weights (arising from fuzzy c-means clustering) to describe the global nonlinear process dynamics.
Nonlinearities are captured through the appropriate combination of the different models, while the linearity of the individual models keeps the resulting predictive controller computationally inexpensive. The modelling approach is also generalized to account for time-varying dynamics by incorporating online learning, making the model adaptive; this is accomplished by developing a probabilistic recursive least squares (PRLS) algorithm for updating a subset of the model parameters. The data-based modelling approach is first used to generate data-based RTRRs, which are subsequently incorporated in a new predictive controller. Next, the approach is applied to a complex nylon-6,6 batch polymerization process to design a trajectory-tracking predictive controller for the key process outputs. Through simulations, the modelling approach is shown to capture the major process nonlinearities, and closed-loop results demonstrate the advantages of the proposed controller over existing options. Further simulation studies show that model adaptation (via the PRLS algorithm) is crucial for achieving acceptable control performance when large disturbances in the initial conditions are encountered. Finally, we consider the problem of direct quality control when only limited quality-related measurements are available from the process, a situation that typically calls for pursuing the control objective indirectly through trajectory-tracking control. To address the unavailability of online quality measurements, an inferential quality model, which relates the process conditions over the entire batch duration to the final quality, is required. The accuracy of this type of quality model, however, is sensitive to the prediction of the future batch behaviour until batch termination.
This "missing data" problem is handled by integrating the previously developed data-based modelling approach with the inferential model in a predictive control framework. The key feature of this approach is that the causality and the nonlinear relationships between the future inputs and outputs are accounted for in predicting the final quality and computing the manipulated input trajectory. The efficacy of the proposed predictive control design is illustrated via simulations of the nylon-6,6 batch polymerization process with a control objective different from the one considered previously.</p> / Doctor of Philosophy (PhD)
13

High-order time and space schemes for the first-order wave equation: application to Reverse Time Migration

Ventimiglia, Florent 05 June 2014 (has links)
Oil engineering uses a wide variety of technologies, including wave-equation imaging, which requires very large computing resources. Very powerful computers are now available that make imaging of complex areas possible, but further progress is needed both to reduce the computational cost and to improve the simulation accuracy. Current methods still do not image highly heterogeneous 3D regions properly because they are too expensive and/or not accurate enough. Finite element methods are known to be efficient at producing good simulations in heterogeneous media. In this thesis, we therefore chose to use a high-order Discontinuous Galerkin (DG) method based upon centered fluxes to solve the acoustic wave equation, and we developed a high-order scheme for time integration that can be coupled with the space discretization technique without generating higher computational cost than the second-order Leap-Frog scheme, which is the most widely used. The new scheme is compared to the high-order ADER scheme, which is more expensive because it requires a larger number of computations for a fixed level of accuracy. In addition, the ADER scheme uses more memory, which also works in favor of the new scheme, since producing subsurface images consumes large amounts of memory and justifies the development of low-memory numerical methods. The accuracy of both schemes is then analyzed when they are included in an industrial code and applied to realistic problems. The comparison highlights numerical pollution phenomena that occur when injecting a point source in the DG scheme and shows that the spurious waves can be eliminated by introducing a non-dissipative penalty term in the DG formulation. This work ends by discussing the difficulties induced by using numerical methods in an industrial framework, in particular the effect of single-precision calculations.
14

Parallel Scalability of a Prestack Reverse Time Migration (RTM) Algorithm

Rosário, Desnes Augusto Nunes do 21 December 2012 (has links)
The seismic method is of extreme importance in geophysics. Associated mainly with oil exploration, this line of research attracts most of the investment in the area. The acquisition, processing, and interpretation of seismic data are the parts that make up a seismic study. Seismic processing in particular focuses on producing an image that represents the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by technological advances in hardware that provided greater storage and digital processing capabilities, which in turn enabled the development of more sophisticated processing algorithms, such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be a very long process, owing to the heuristics of the mathematical algorithm and to the extensive amount of data input and output involved; it may take days, weeks, or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, because of the large computational effort required by this migration technique. Furthermore, performance analyses such as speedup and efficiency were carried out, and, finally, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors.
15

Towards a History and Aesthetics of Reverse Motion

Tohline, Andrew M. 17 September 2015 (has links)
No description available.
