1

TFDTLM: A New Computationally Efficient Frequency-Domain TLM Based on Transient Analysis Techniques

Salama, Iman Mohamed 01 October 1997 (has links)
The TLM was initially formulated and developed in the time domain. One key advantage of a time-domain analysis is its computational efficiency: a single impulsive excitation can yield information over a wide frequency range. It may also be more natural and realistic to model nonlinear and frequency-dispersive properties in the time domain rather than in the frequency domain. In some circumstances, however, frequency-domain analysis is more appealing. This is partly because the traditional teaching of electromagnetics emphasizes frequency-domain concepts such as frequency-dispersive constitutive parameters and complex, frequency-dependent impedances and reflection coefficients. It can be easier and more direct to model these parameters in the frequency domain than to synthesize an equivalent time-domain model. The main limitation of frequency-domain analysis is that it must be repeated at every frequency point in the range of interest. In this work, a new frequency-domain TLM (FDTLM) approach is introduced which combines the superior features of both the time-domain and the frequency-domain TLM. The approach is based on a steady-state analysis in the frequency domain using transient analysis techniques, and hence is referred to as TFDTLM. In this approach, the link-line impedances are derived in the frequency domain and are chosen to model the frequency-dispersive material parameters. The impedances and propagation constants are allowed to be complex and frequency dependent. Consequently, the TFDTLM can provide more accurate modeling of wave propagation in a frequency-dispersive medium. The approach was inspired by the concept of the bounce diagram in the time domain and its equivalent in the frequency domain.
To make the TFDTLM computationally efficient compared with other frequency-domain TLM approaches, it was critical to maintain some relationship between the mesh response at one frequency point and any other frequency point. The goal was to extract all the frequency-domain information over a wide frequency range by performing only one simulation. To achieve this, the transmission between two adjacent cells in each medium, expressed by exp(-gamma*L), has to be expressed in terms of the propagation factor of a reference medium, chosen to be the medium with the least propagation delay. This was done with the aid of a digital filter approximation that can be implemented iteratively inside the TLM mesh. The filter can be thought of as a type of compensation equivalent to the stubs in a time-domain TLM, yet more accurate and more general. An important advantage of the TFDTLM is that it can easily be interfaced with existing time-domain TLM schemes, as well as absorbing boundary conditions originally developed for time-domain TLM, with only slight modifications. The TFDTLM is implemented on a three-dimensional mesh, and the superior performance of the new approach in modeling lossy inhomogeneous media is demonstrated. The new approach, in addition to being computationally efficient compared with other frequency-domain TLM methods, has proven to have superior dispersion behavior in modeling lossy inhomogeneous media compared with time-domain TLM. / Ph. D.
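The central quantity in the abstract, a complex, frequency-dependent propagation factor exp(-gamma(f)*L) on a link line, can be illustrated with a minimal sketch. The constitutive model below (constant relative permittivity plus a conductivity term) and all names and values are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def propagation_factor(f, L, eps_r, sigma):
    """Transmission exp(-gamma*L) over one cell of length L (meters) at
    frequency f (Hz), for an assumed lossy medium with relative
    permittivity eps_r and conductivity sigma (S/m)."""
    eps0 = 8.854187817e-12          # vacuum permittivity (F/m)
    mu0 = 4e-7 * np.pi              # vacuum permeability (H/m)
    omega = 2.0 * np.pi * f
    eps_c = eps0 * eps_r - 1j * sigma / omega   # complex permittivity
    gamma = 1j * omega * np.sqrt(mu0 * eps_c)   # alpha + j*beta
    return np.exp(-gamma * L)
```

For a lossless medium the factor has unit magnitude (a pure phase delay); conduction loss attenuates it, which is the frequency-dispersive behavior the TFDTLM models directly.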
2

Fast target tracking technique for synthetic aperture radars

Kauffman, Kyle J. January 2009 (has links)
Title from first page of PDF document. Includes bibliographical references (p. 40).
3

Coping with the computational and statistical bipolar nature of machine learning

Machart, Pierre 21 December 2012 (has links)
Machine Learning is known to have its roots in a broad spectrum of fields including Artificial Intelligence, Pattern Recognition, Statistics and Optimisation. From the earliest stages of Machine Learning, both computational issues and generalisation properties have been identified as central to the field. While the former address the computability, complexity (from a fundamental perspective) or computational efficiency (from a more practical standpoint) of learning systems, the latter aim at understanding and characterising how well the solutions they provide perform on new, unseen data. In recent years, the emergence of large-scale datasets in Machine Learning has been deeply reshaping the principles of Learning Theory. Taking into account possible constraints on the training time, one has to deal with more complex trade-offs than those classically addressed by Statistics. As a direct consequence, designing new efficient algorithms (both in theory and in practice) able to handle large-scale datasets requires jointly addressing the statistical and computational aspects of Learning. The present thesis aims at unravelling, analysing and exploiting some of the connections that naturally exist between the statistical and computational aspects of Learning. More precisely, in a first part, we extend the stability analysis, which relates certain algorithmic properties to the generalisation abilities of learning algorithms, to a novel (and fine-grained) performance measure, namely the confusion matrix. In a second part, we present a novel approach to learn a kernel-based regression function that serves the learning task at hand and exploits the structure of
4

An efficient algorithm for blade loss simulations applied to a high-order rotor dynamics problem

Parthasarathy, Nikhil Kaushik 30 September 2004 (has links)
In this thesis, a novel approach is presented for blade loss simulation of an aircraft gas turbine rotor mounted on rolling-element bearings with squeeze film dampers and seal rub, enclosed in a flexible housing. The modal truncation augmentation (MTA) method provides an efficient tool for modeling this large-order system with localized nonlinearities in the ball bearings. The gas turbine engine, which is composed of the power turbine and gas generator rotors, is modeled with 38 lumped masses. A nonlinear angular-contact bearing model is employed, which has ball and race degrees of freedom and uses a modified Hertzian contact force between the races and balls and for the seal rub; this combines a dry contact force and a viscous damping force. A flexible housing with seal rub is also included, whose modal description is imported from ANSYS. The maximum contact load and the corresponding stress on an elliptical contact area between the races and balls are predicted during the blade loss simulations. A finite-element-based squeeze film damper (SFD), which determines the pressure profile of the oil film and calculates damper forces for any type of whirl orbit, is utilized in the simulation. The new approach is shown to provide efficient and accurate predictions of whirl amplitudes, maximum contact load and stress in the bearings, transmissibility, thermal growths, maximum and minimum damper pressures, and the unbalance force at which oil film cavitation becomes incipient. It requires about one quarter of the computational time of traditional approaches and has an error of less than 5%.
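The combined dry/viscous contact law described above can be sketched as follows. The 3/2 exponent is the standard Hertzian point-contact form; the function name and coefficient meanings are hypothetical, not the thesis's actual code.

```python
def contact_force(delta, delta_dot, k_hertz, c_visc):
    """Modified Hertzian ball-race contact: dry Hertzian term
    k * delta^(3/2) plus viscous damping c * delta_dot, both active
    only while the contact is closed (penetration delta > 0)."""
    if delta <= 0.0:
        return 0.0                  # contact open: no force transmitted
    return k_hertz * delta ** 1.5 + c_visc * delta_dot
```

The switch at delta = 0 is what makes the bearing model a localized nonlinearity: the force law changes abruptly as each ball loses and regains contact during the blade loss transient.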
6

Issues of Computational Efficiency and Model Approximation for Spatial Individual-Level Infectious Disease Models

Dobbs, Angie 06 January 2012 (has links)
Individual-level models (ILMs) can use the spatial-temporal nature of disease data to capture disease dynamics. Parameter estimation is usually done via Markov chain Monte Carlo (MCMC) methods, but correlation between model parameters negatively affects MCMC mixing. Introducing a normalization constant to alleviate the correlation results in MCMC convergence over fewer iterations; however, this increases computation time. It is important that model fitting is done as efficiently as possible. An upper-truncated distance kernel is introduced to speed up computation of the likelihood, but this causes a loss in goodness-of-fit. The normalization constant and upper-truncated distance kernel are evaluated as components in various ILMs via a simulation study. The normalization constant is found not to be worthwhile, as the reduced correlation does not outweigh the increased computation time. The upper-truncated distance kernel reduces computation time but worsens model fit as the truncation distance decreases. / Studies have been funded by OMAFRA & NSERC, with computing equipment provided by CSI.
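The computational saving from an upper-truncated distance kernel can be sketched on a toy infection probability. The power-law kernel d^(-beta) is an assumed form for illustration (the thesis evaluates several ILM variants); the point is that pairs beyond the truncation distance contribute nothing and can be skipped outright.

```python
import math

def infection_prob(sus, infectious, beta, d_max=None):
    """P(susceptible becomes infected) = 1 - exp(-sum_j d_ij^(-beta))
    over infectious individuals j, with an optional upper-truncated
    distance kernel: pairs farther than d_max are dropped entirely."""
    x, y = sus
    total = 0.0
    for (xj, yj) in infectious:
        d = math.hypot(x - xj, y - yj)
        if d_max is not None and d > d_max:
            continue            # truncated kernel: skip distant pairs
        total += d ** (-beta)
    return 1.0 - math.exp(-total)
```

With spatial indexing, the skip turns an O(S x I) likelihood sweep into one over near neighbours only, at the cost of the goodness-of-fit loss the abstract reports.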
7

Finite element analysis and experimental study of metal powder compaction

Kashani Zadeh, Hossein 23 September 2010 (has links)
In metal powder compaction, density non-uniformity due to friction can be a source of flaws. Currently in industry, uniform density distribution is achieved by the optimization of punch motions through trial and error. This method is both costly and time consuming. Over the last decade, the finite element (FE) method has received significant attention as an alternative to the trial-and-error method; however, there is still a lack of an accurate and robust material model for the simulation of metal powder compaction. In this study, Cam-clay and Drucker-Prager cap (DPC) material models were implemented in the commercial FE software ABAQUS/Explicit using the user subroutine VUMAT. The Cam-clay model was shown to be appropriate for simple geometries. The DPC model is a pressure-dependent, non-smooth, multi-yield-surface material model with a high curvature in the cap yield surface. This high curvature tends to cause instability; a sub-increment technique was implemented to address this problem. The DPC model also shows instability at the intersection of the yield surfaces; this was solved using the corner region from DPC material models for soils. The computational efficiency of the DPC material model was improved using a novel technique to solve the constitutive equations. In a case study it was shown that the numerical technique leads to a 30% decrease in computational cost while degrading the accuracy of the analysis by only 0.4%. The forward Euler method was shown to be accurate in the integration of the constitutive equations using an error control scheme. Experimental tests were conducted in which cylindrical parts were compacted from Distaloy AE iron-based powder to a final density of 7.0 g/cm3. To measure local density, metallography and image processing were used.
The FE results were compared to experimental results and it was shown that the FE analysis predicted local relative density within 2% of the actual experimental density. / Thesis (Ph.D, Mechanical and Materials Engineering) -- Queen's University, 2010-09-23 12:15:27.371
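The sub-increment technique mentioned above can be illustrated generically: a forward-Euler update that would be inaccurate over one large increment is applied over many smaller pieces instead. The driver below is a toy sketch with an assumed rate function; the thesis applies the idea to the highly curved DPC cap surface, not to this scalar problem.

```python
def integrate_substeps(state, d_strain, rate_fn, n_sub):
    """Forward-Euler sub-increment driver: split the increment d_strain
    into n_sub equal sub-steps and update the state through each one,
    trading extra function evaluations for accuracy and stability."""
    d = d_strain / n_sub
    for _ in range(n_sub):
        state = state + rate_fn(state) * d   # one forward-Euler sub-step
    return state
```

On the test problem y' = y over a unit increment, the sub-incremented result converges to e as n_sub grows, which is the same error-control trade-off the thesis exploits for the constitutive equations.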
8

Facilitating higher-fidelity simulations of axial compressor instability and other turbomachinery flow conditions

Herrick, Gregory Paul 03 May 2008 (has links)
The quest to accurately capture flow phenomena with both short and long length-scales, and to accurately represent complex flow phenomena within disparately sized geometry, inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel simulations, thus facilitating higher-fidelity solutions of complicated geometries (through the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid-blocks per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has previously been done, while maintaining high computational efficiency (i.e., minimal consumption of computational resources); the paradigm is therefore demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows, and is not limited to detailed analyses of axial compressor flows.
While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research (experimental, theoretical, and computational) has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have raised the question, "What about the clearance flows?". This research begins to address that question.
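The "multiple grid-blocks per processing core" idea amounts to a load-balancing problem: distribute blocks of very different cell counts so cores finish each step at roughly the same time. The greedy largest-first heuristic below is an illustrative assumption, not the scheduling scheme actually used by the subject software.

```python
import heapq

def assign_blocks(block_sizes, n_cores):
    """Greedy LPT sketch: place each grid block (largest first) on the
    currently least-loaded core, so one core can hold several small
    blocks instead of forcing one block per core."""
    heap = [(0, core, []) for core in range(n_cores)]   # (load, core id, block ids)
    heapq.heapify(heap)
    for i, size in sorted(enumerate(block_sizes), key=lambda p: -p[1]):
        load, core, blocks = heapq.heappop(heap)        # least-loaded core
        blocks.append(i)
        heapq.heappush(heap, (load + size, core, blocks))
    return sorted(heap, key=lambda t: t[1])             # one entry per core
```

Packing small blocks (for example, gridded tip clearance regions) alongside large passage blocks is what lets the process count stay low without idling cores.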
9

Fast Target Tracking Technique for Synthetic Aperture Radars

Kauffman, Kyle J. 17 August 2009 (has links)
No description available.
10

Rate-dependent cohesive-zone models for fracture and fatigue

Salih, Sarmed January 2018 (has links)
Despite the phenomena of fracture and fatigue having been the focus of academic research for more than 150 years, the field remains in effect an empirical science lacking a complete and comprehensive set of predictive solutions. In this regard, the focus of the research in this thesis is on the development of new cohesive-zone models for fracture and fatigue that can capture strain-rate effects. For the case of monotonic fracture in ductile material, different combinations of material response are examined, with rate effects appearing in the bulk material, localised to the cohesive zone, or in both. The development of a new rate-dependent CZM first required an analysis of two existing methods for incorporating rate dependency, i.e. either via a temporal critical stress or a temporal critical separation. The analysis revealed unrealistic crack behaviour at high loading rates. The new rate-dependent cohesive model introduced in the thesis couples the temporal responses of critical stress and critical separation, and is shown to provide a stable and realistic solution to dynamic fracture. For the case of fatigue, a new frequency-dependent cohesive-zone model (FDCZM) has been developed for the simulation of both high- and low-cycle fatigue-crack growth in elasto-plastic material. The developed model provides an alternative approach that delivers the accuracy of the loading-unloading hysteresis damage model along with the computational efficiency of the equally well-established envelope load-damage model by incorporating a fast-track feature. With the fast-track procedure, a particular damage state for one loading cycle is 'frozen in' over a predefined number of cycles. Stress and strain states are subsequently updated, followed by an update of the damage state in the representative loading cycle, which again is 'frozen in' and applied over the same number of cycles. The process is repeated up to failure.
The technique is shown to be highly efficient in terms of time and cost and is particularly effective when a large number of frozen cycles can be applied without significant loss of accuracy. To demonstrate the practical worth of the approach, the effect that the frequency has on fatigue crack growth in austenitic stainless-steel 304 is analysed. It is found that the crack growth rate (da/dN) decreases with increasing frequency up to a frequency of 5 Hz after which it levels off. The behaviour, which can be linked to martensitic phase transformation, is shown to be accurately captured by the new FDCZM.
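The fast-track procedure can be sketched as a cycle-jump loop: resolve one representative cycle, take its damage increment, and apply it 'frozen in' over a block of cycles before resolving again. The scalar damage law and the failure criterion D >= 1 below are assumptions for illustration, not the FDCZM's actual equations.

```python
def fatigue_life_fast_track(damage_per_cycle, n_freeze):
    """Cycle-jump sketch: damage_per_cycle(D) returns the damage
    increment of one resolved loading cycle at damage state D; that
    increment is then held frozen over n_freeze cycles per jump.
    Returns the number of cycles to reach failure (D >= 1)."""
    D, cycles = 0.0, 0
    while D < 1.0:
        dD = damage_per_cycle(D)    # resolve one representative cycle
        D += dD * n_freeze          # apply the frozen increment over the jump
        cycles += n_freeze
    return cycles
```

Only one cycle per jump is resolved in full, so the cost scales with the number of jumps rather than the number of cycles, which is where the reported time savings come from when damage grows slowly enough for large n_freeze.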
