21

On the Development of Coherent Structure in a Plane Jet (Part 3, Multi-Point Simultaneous Measurement of Main Streamwise Velocity and the Reconstruction of Velocity Field by the KL Expansion)

SAKAI, Yasuhiko, TANAKA, Nobuhiko, YAMAMOTO, Mutsumi, KUSHIDA, Takehiro 08 1900 (has links)
No description available.
22

On the Development of Coherent Structure in a Plane Jet (Part 2, Investigation of Spatio-Temporal Velocity Structure by the KL Expansion)

SAKAI, Yasuhiko, TANAKA, Nobuhiko, KUSHIDA, Takehiro 08 1900 (has links)
No description available.
23

On the Development of Coherent Structure in a Plane Jet (Part 1, Characteristics of Two-Point Velocity Correlation and Analysis of Eigenmodes by the KL Expansion)

SAKAI, Yasuhiko, TANAKA, Nobuhiko, KUSHIDA, Takehiro 02 1900 (has links)
No description available.
24

A Hybrid Design of Speech Recognition System for Chinese Names

Hsu, Po-Min 06 September 2004 (has links)
A speech recognition system for Chinese names based on the Karhunen-Loève transform (KLT), mel-frequency cepstral coefficients (MFCC), hidden Markov models (HMM), and the Viterbi algorithm is proposed in this thesis. The KLT is the optimal transform for data reduction in the minimum mean-square error and maximal energy-packing sense. The HMM is a stochastic approach that characterizes much of the variability in the speech signal by recording the state transitions. For the speaker-dependent case, a correct identification rate of 93.97% is achieved within 3 seconds in a laboratory environment.
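As an illustrative sketch of the KLT's role in data reduction (not code from the thesis; the feature dimensions and sample counts below are made up), the transform can be computed from the eigendecomposition of the sample covariance of the feature vectors:

```python
import numpy as np

def klt_reduce(X, k):
    """Karhunen-Loeve transform: project rows of X onto the k
    eigenvectors of the sample covariance with the largest eigenvalues."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    basis = eigvecs[:, ::-1][:, :k]          # top-k principal directions
    return Xc @ basis, basis, mu

# Toy usage: reduce 39-dim MFCC-like feature vectors to 12 coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 39))
coeffs, basis, mu = klt_reduce(X, 12)
X_rec = coeffs @ basis.T + mu                # minimum-MSE rank-12 reconstruction
print(np.mean((X - X_rec) ** 2))
```

Keeping the top-k eigendirections is what makes the KLT optimal in the minimum mean-square error sense the abstract refers to.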
25

Data assimilation for parameter estimation in coastal ocean hydrodynamics modeling

Mayo, Talea Lashea 25 February 2014 (has links)
Coastal ocean models are used for a vast array of applications. These applications include modeling tidal and coastal flows, waves, and extreme events such as tsunamis and hurricane storm surges. Tidal and coastal flows are the primary application of this work, as they play a critical role in many practical research areas such as contaminant transport, navigation through intracoastal waterways, development of coastal structures (e.g. bridges, docks, and breakwaters), commercial fishing, and the planning and execution of military operations in marine environments, in addition to recreational aquatic activities. Coastal ocean models are used to determine tidal amplitudes, time intervals between low and high tide, and the extent of the ebb and flow of tidal waters, often at specific locations of interest. However, modeling tidal flows can be quite complex, as factors such as the configuration of the coastline, water depth, ocean floor topography, and hydrographic and meteorological impacts can have significant effects and must all be considered.

Water levels and currents in the coastal ocean can be modeled by solving the shallow water equations. The shallow water equations contain many parameters, and the accurate estimation of both tides and storm surge depends on the accuracy of their specification. Of particular importance are the parameters used to define the bottom stress in the domain of interest [50]. These parameters are often heterogeneous across the seabed of the domain. Their values cannot be measured directly, and relevant data can be expensive and difficult to obtain. The parameter values must often be inferred, and the estimates are often inaccurate or contain a high degree of uncertainty [28]. In addition, as is the case with many numerical models, coastal ocean models have various other sources of uncertainty, including the approximate physics, the numerical discretization, and uncertain boundary and initial conditions. Quantifying and reducing these uncertainties is critical to providing more reliable and robust storm surge predictions. It is also important to reduce the resulting error in the forecast of the model state as much as possible.

The accuracy of coastal ocean models can be improved using data assimilation methods. In general, statistical data assimilation methods are used to estimate the state of a model given both the original model output and observed data. A major advantage of statistical data assimilation methods is that they can often be implemented non-intrusively, making them relatively straightforward to apply. They also provide estimates of the uncertainty in the predicted model state. Unfortunately, with the exception of the estimation of initial conditions, they do not contribute to the information contained in the model. The model error that results from uncertain parameters is reduced, but information about the parameters themselves remains unknown. Thus, the other commonly used approach to reducing model error is parameter estimation. Historically, model parameters such as the bottom stress terms have been estimated using variational methods. Variational methods formulate a cost functional that penalizes the difference between the modeled and observed state, and then minimize this functional over the unknown parameters. Though variational methods are an effective approach to solving inverse problems, they can be computationally intensive and difficult to code, as they generally require the development of an adjoint model. They are also not formulated to estimate parameters in real time, e.g. as a hurricane approaches landfall.

The goal of this research is to estimate parameters defining the bottom stress terms using statistical data assimilation methods. In this work, we use a novel approach to estimate the bottom stress terms in the shallow water equations, which we solve numerically using the Advanced Circulation (ADCIRC) model. In this model, a modified form of the 2-D shallow water equations is discretized in space by a continuous Galerkin finite element method, and in time by finite differencing. We use the Manning's n formulation to represent the bottom stress terms in the model, and estimate various fields of Manning's n coefficients by assimilating synthetic water elevation data using a square root Kalman filter. We estimate three types of fields, defined on both an idealized inlet and a more realistic spatial domain. For the first field, a Manning's n coefficient is given a constant value over the entire domain. For the second, we let the Manning's n coefficient take two distinct values, letting one define the bottom stress in the deeper water of the domain and the other define the bottom stress in the shallower region. And finally, because bottom stress terms are generally spatially varying parameters, we consider the third field as a realization of a stochastic process. We represent a realization of the process using a Karhunen-Loève expansion, and then seek to estimate the coefficients of the expansion.

We perform several observation system simulation experiments, and find that we are able to accurately estimate the bottom stress terms in most of our test cases. Additionally, we are able to improve forecasts of the model state in every instance. The results of this study show that statistical data assimilation is a promising approach to parameter estimation. / text
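As a rough illustration of the third field type (a sketch under my own assumptions, not code from the dissertation; the covariance model and all numbers are made up), a spatially varying Manning's n field can be represented by a truncated Karhunen-Loève expansion of a Gaussian random field, whose coefficients are then the quantities a filter would estimate:

```python
import numpy as np

def kl_expansion(x, mean_n, sigma, ell, n_modes, rng):
    """Truncated Karhunen-Loeve expansion of a 1-D Gaussian random field
    with squared-exponential covariance, sampled at points x. Returns one
    realization and its KL coefficients (the parameters to be estimated)."""
    C = sigma**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
    eigvals, eigvecs = np.linalg.eigh(C)        # ascending eigenvalues
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    xi = rng.standard_normal(n_modes)           # KL coefficients ~ N(0, 1)
    field = mean_n + eigvecs[:, :n_modes] @ (np.sqrt(eigvals[:n_modes]) * xi)
    return field, xi

# One realization of a Manning's n field on a 100-point transect
# (illustrative values, not from the dissertation).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 100)
n_field, xi = kl_expansion(x, mean_n=0.03, sigma=0.005, ell=2.0,
                           n_modes=5, rng=rng)
print(n_field.min(), n_field.max())
```

Truncating to a few modes turns an infinite-dimensional field into a handful of scalar coefficients, which is what makes Kalman-filter estimation of the field tractable.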
26

Dynamics Of Wall Bounded Turbulence

Tugluk, Ozan 01 January 2005 (has links) (PDF)
Karhunen-Loève decomposition is a well-established tool in areas such as signal processing, data compression, and low-dimensional modeling. In computational fluid dynamics (CFD) too, KL decomposition can be used to reduce storage requirements or to construct relatively low-dimensional models. These relatively low-dimensional models can be used to investigate the dynamics of the flow field in a qualitative manner. Employment of these reduced models is beneficial, as they can be studied even with limited computing resources. In addition, these models enable the identification and investigation of interactions between flowlets of different nature (the flow field is decomposed into these flowlets). However, one should not forget that the reduced models do not necessarily capture the entire dynamics of the original flow, especially in the case of turbulent flows. In the present study, a KL basis is used to construct reduced models of the Navier-Stokes equations for wall-bounded turbulent flow, using Galerkin projection. The resulting nonlinear dynamical systems are then used to investigate the dynamics of transition to turbulence in plane Poiseuille flow in a qualitative fashion. The KL basis used is extracted from a flow field obtained from a direct numerical simulation of plane Poiseuille flow.
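As a sketch of the general workflow (my own minimal example, not the thesis code; the snapshot data and operator below are synthetic, and for brevity the projected operator is linear, whereas the thesis projects the full nonlinear Navier-Stokes terms), a KL basis can be extracted from flow snapshots by the method of snapshots and a reduced model obtained by Galerkin projection onto that basis:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Method of snapshots: KL/POD modes from an (n_dof, n_snap)
    snapshot matrix; returns the r leading spatial modes and the mean."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    return U[:, :r], mean

# Galerkin projection of a (linearized) operator A onto the KL basis:
# the n-dof dynamics du/dt = A u shrink to r modal equations da/dt = A_r a.
rng = np.random.default_rng(2)
n, m, r = 400, 60, 8
snapshots = rng.normal(size=(n, m))            # synthetic stand-in for DNS data
A = -np.eye(n) + 0.01 * rng.normal(size=(n, n))
Phi, mean = pod_basis(snapshots, r)
A_reduced = Phi.T @ A @ Phi                    # r x r reduced operator
print(A_reduced.shape)
```

The resulting r-dimensional system is what can be studied qualitatively even with limited computing resources.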
27

Optimal Basis For Ultrasound RF Apertures: Applications to Real-Time Compression and Beamforming

Kibria, Sharmin 01 January 2014 (has links) (PDF)
Modern medical ultrasound machines produce enormous amounts of data, as much as several gigabytes per second in some systems. The challenges of generating, storing, processing, and reproducing such voluminous data have motivated researchers to search for a feasible compression scheme for the received ultrasound radio frequency (RF) signals. Most of this work has concentrated on the digitized data available after sampling and A/D conversion. We are interested in the possibility of compression implemented directly on the received analog RF signals; hence, we focus on compression of the set of signals in a single receive aperture. We first investigate the model-free approaches to compression proposed by previous researchers, which involve applications of well-known signal processing tools such as principal component analysis (PCA), wavelets, and the Fourier transform. We also consider bandpass prolate spheroidal functions (BPSFs) in this study. Then we consider the derivation of the optimal basis for the RF signals, assuming a white noise model for the spatial inhomogeneity field in tissue. We first derive an expression for the (time and space) autocorrelation function of the set of signals received in a linear aperture. This is then used to find the autocorrelation's eigenfunctions, which form an optimal basis for minimum mean-square error compression of the aperture signal set. We show that computation of the coefficients of the signal set with respect to the basis can be approximated by calculation of the real and imaginary parts of the Fourier series coefficients for the received signal at each aperture element, with frequencies slightly scaled by aperture position, followed by linear combinations of corresponding frequency components across the aperture. The combination weights at each frequency are determined by the eigenvectors of a matrix whose entries are averaged cross-spectral coefficients of the received signal set at that frequency. The principal eigenvector generates a combination that corresponds to a variation on the standard delay-and-sum beamformed aperture center line, while the combinations from other eigenvectors represent aperture information that is not contained in the beamformed line. We then consider how to use the autocorrelation's eigenfunctions and eigenvalues to generate a linear minimum mean-square error beamformer for the center line of each aperture. Finally, we compare the performance of the optimal compression basis to that of the 2-D Fourier transform.
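The thesis derives its optimal basis analytically from a signal model; as a toy stand-in (my own sketch, with made-up sizes), the same kind of minimum mean-square error basis can be estimated empirically from the eigenvectors of the signals' sample autocorrelation matrix:

```python
import numpy as np

def empirical_kl_compress(signals, r):
    """MMSE compression of a set of aperture signals using the eigenvectors
    of their empirical (time) autocorrelation matrix, averaged over channels.
    signals: (n_channels, n_samples); keeps r coefficients per channel."""
    R = signals.T @ signals / signals.shape[0]   # empirical autocorrelation
    eigvals, eigvecs = np.linalg.eigh(R)
    basis = eigvecs[:, ::-1][:, :r]              # top-r eigenfunctions
    coeffs = signals @ basis                     # compressed representation
    recon = coeffs @ basis.T                     # MMSE rank-r reconstruction
    return coeffs, recon

# Toy aperture: 64 channels, 1024 time samples of noise-like RF data.
rng = np.random.default_rng(3)
signals = rng.normal(size=(64, 1024))
coeffs, recon = empirical_kl_compress(signals, r=32)
print(coeffs.shape, np.mean((signals - recon) ** 2))
```

For truly random data as above the compression gain is negligible; for correlated RF signals the leading eigenfunctions capture most of the energy, which is the property the thesis exploits.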
28

Stochastic Modeling of the Equilibrium Speed-Density Relationship

Wang, Haizhong 01 September 2010 (has links)
The fundamental diagram, a graphical representation of the relation among traffic flow, speed, and density, has been the foundation of traffic flow theory and transportation engineering for many years. For example, the analysis of traffic dynamics relies on input from this fundamental diagram to find when and where congestion builds up and how it dissipates; traffic engineers use a fundamental diagram to determine how well a highway facility serves its users and how to plan for new facilities in case of capacity expansion. Underlying a fundamental diagram is the relation between traffic speed and density, which roughly corresponds to drivers' speed choices under varying car-following distances. First rigorously documented by Greenshields some seventy-five years ago, this relation has been explored in many follow-up studies, but these attempts are predominantly deterministic in nature, i.e. they model traffic speed as a function of traffic density. Though these functional speed-density models are able to coarsely explain how traffic slows down as more vehicles are crowded on highways, empirical observations show a wide scattering of traffic speeds around the values predicted by these models. In addition, functional speed-density models lead to deterministic predictions of traffic dynamics, which lack the power to address the uncertainty brought about by random factors in traffic flow. Therefore, it appears more appropriate to view the speed-density relation as a stochastic process, in which a certain density level gives rise not only to an average value of traffic speed but also to its variation, because of the randomness of drivers' speed choices.

The objective of this dissertation is to develop such a stochastic speed-density model to better represent empirical observations and provide a basis for a probabilistic prediction of traffic dynamics. It would be ideal if such a model were formulated with both mathematical elegance and empirical accuracy. The mathematical elegance of the model must include the features of a single equation (single-regime) with physically meaningful parameters, and the model must be easy to implement. The interpretation of empirical accuracy is twofold: on the one hand, the mean of the stochastic speed-density model should statistically match the average behavior of the empirical equilibrium speed-density observations; on the other hand, the magnitude of the traffic speed variance is controlled by a variance function which is dependent on the response. Ultimately, it is expected that the stochastic speed-density model is able to reproduce the wide-scattering speed-density relation observed at a highway segment after being calibrated by a set of local parameters and, in return, that the model can be used to perform probabilistic prediction of traffic dynamics at this location. The emphasis of this dissertation is on the former (i.e. the development, calibration, and validation of the stochastic speed-density model), with a few numerical applications of the model to demonstrate the latter (i.e. probabilistic prediction).

Following the seminal Greenshields model, a great variety of deterministic speed-density models have been proposed to mathematically represent the empirical speed-density observations which underlie the fundamental diagram. Observed in the existing speed-density models was their deterministic nature, striving to balance two competing goals: mathematical elegance and empirical accuracy. As the latest development of such a pursuit, we show that a stochastic speed-density model can be developed by discretizing a random traffic speed process using the Karhunen-Loève expansion. The stochastic speed-density model is largely motivated by the prevalent randomness exhibited in empirical observations, which mainly comes from drivers, vehicles, roads, and environmental conditions. In a general setting, the proposed stochastic speed-density model has two components: deterministic and stochastic. For the deterministic component, we propose to use a family of logistic speed-density models to track the average trend of empirical observations. In particular, the five-parameter logistic speed-density model arises as a natural candidate due to the following considerations: (1) the shape of the five-parameter logistic speed-density model can be adjusted by its physically meaningful parameters to match the average behavior of empirical observations (statistically, the average behavior is modeled by the mean of the empirical observations); (2) a three-parameter or four-parameter logistic speed-density model can be obtained by reducing the shape or scale parameter in the five-parameter model, but the counter-effect is a loss of empirical accuracy; (3) the five-parameter model yields the best accuracy compared to the three-parameter and four-parameter models.

The magnitude of the stochastic component is dominated by the variance of traffic speeds indexed by traffic density. The empirical traffic speed variance increases as density increases to around 25-30 veh/km, then starts decreasing as traffic density gets larger. It has been verified by empirical evidence that traffic speed variation shows a parabolic shape, which makes the proposed variance function a suitable formula to model its variation. The variance function is dependent on the logistic speed-density relationship with varying model parameters. A detailed analysis of empirical traffic speed variance can be found in Chapter 6. Modeling results show that, by taking care of second-order statistics (i.e., variance and correlation), the proposed stochastic speed-density model is suitable for describing the observed phenomenon as well as for matching the empirical data. Following the results, a stochastic fundamental diagram of traffic flow can be established. On the application side, the stochastic speed-density model can potentially be used for real-time online prediction and to explain related phenomena, enabling dynamic control and management systems to anticipate problems before they occur rather than simply reacting to existing conditions. Finally, we summarize our findings and discuss future research directions.
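For concreteness, one common five-parameter logistic speed-density form is sketched below; the parameter names and values are illustrative, and the dissertation's exact parameterization and calibrated values may differ:

```python
import numpy as np

def logistic_speed_density(k, vf, vb, kt, theta1, theta2):
    """Five-parameter logistic speed-density curve (one common form).
    k: density (veh/km); vf: free-flow speed; vb: speed at high density;
    kt: turning-point density; theta1: scale; theta2: shape."""
    return vb + (vf - vb) / (1.0 + np.exp((k - kt) / theta1)) ** theta2

# Mean speed over a density sweep with illustrative parameter values.
k = np.linspace(0.0, 150.0, 301)
v = logistic_speed_density(k, vf=110.0, vb=5.0, kt=30.0,
                           theta1=10.0, theta2=1.0)
print(v[0], v[-1])   # ~free-flow speed at k = 0, ~jam speed at high density
```

In the stochastic model this curve plays the role of the mean, with the density-indexed variance function supplying the scatter around it.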
29

Practical Analysis Tools for Structures Subjected to Flow-Induced and Non-Stationary Random Loads

Scott, Karen Mary Louise 14 July 2011 (has links)
There is a need to investigate and improve upon existing methods to predict the response of sensors due to flow-induced vibrations in a pipe flow. The aim was to develop a tool which would enable an engineer to quickly evaluate the suitability of a particular design for a certain pipe flow application, without sacrificing fidelity. The primary methods of simple response prediction for sensors, found in guides published by the American Society of Mechanical Engineers (ASME), were found to be lacking in several key areas, which prompted development of the tool described herein. A particular limitation of the existing guidelines deals with complex stochastic stationary and non-stationary modeling and required much further study, therefore providing direction for the second portion of this body of work.

A tool for response prediction of fluid-induced vibrations of sensors was developed which allowed for analysis of low-aspect-ratio sensors. Results from the tool were compared to experimental lift and drag data recorded for a range of flow velocities. The model was found to perform well over the majority of the velocity range, showing superiority in prediction of response as compared to the ASME guidelines. The tool was then applied to a design problem given by an industrial partner, showing several of their designs to be inadequate for the proposed flow regime. This immediate identification of unsuitable designs no doubt saved significant time in the product development process.

Work to investigate stochastic modeling in structural dynamics was undertaken to understand the reasons for the limitations found in fluid-structure interaction models. A particular weakness, non-stationary forcing, was found to be the most lacking in terms of use in the design stage of structures. A method was developed using the Karhunen-Loève expansion as its base to close the gap between overly simple (stationary-only) models and those which require too much computation time. Models were developed from single-degree-of-freedom (SDOF) through continuous systems and shown to perform well at each stage. Further work is needed in this area to bring this work full circle, such that the lessons learned can improve design-level turbulent response calculations. / Ph. D.
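As an illustrative sketch (the covariance choice and all values are my assumptions, not the thesis's), a discrete Karhunen-Loève expansion of a non-stationary covariance matrix can generate sample paths of a non-stationary forcing process for use in response simulation:

```python
import numpy as np

def kl_nonstationary_samples(t, cov_fn, n_modes, n_samples, rng):
    """Sample paths of a zero-mean non-stationary process from the discrete
    Karhunen-Loeve expansion of its covariance. cov_fn(t1, t2) -> covariance."""
    C = cov_fn(t[:, None], t[None, :])
    eigvals, eigvecs = np.linalg.eigh(C)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    xi = rng.standard_normal((n_samples, n_modes))
    return xi @ (np.sqrt(eigvals[:n_modes])[:, None] * eigvecs[:, :n_modes].T)

# Example: variance growing with time (amplitude-modulated exponential
# covariance), a simple stand-in for a non-stationary random load.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 200)
cov = lambda t1, t2: ((1.0 + 4.0 * t1) * (1.0 + 4.0 * t2)
                      * np.exp(-np.abs(t1 - t2) / 0.1))
paths = kl_nonstationary_samples(t, cov, n_modes=40, n_samples=3, rng=rng)
print(paths.shape)   # (3, 200) sample forcing histories
```

Because the expansion is truncated to a modest number of modes, each sample path is cheap to generate, which is the middle ground between stationary-only models and full simulation that the abstract describes.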
30

Minimally Corrective, Approximately Recovering Priors to Correct Expert Judgement in Bayesian Parameter Estimation

May, Thomas Joseph 23 July 2015 (has links)
Bayesian parameter estimation is a popular method for addressing inverse problems. However, since prior distributions are chosen based on expert judgement, the method can inherently introduce bias into the understanding of the parameters. This can be especially relevant in the case of distributed parameters, where it is difficult to check for error. To minimize this bias, we develop the idea of a minimally corrective, approximately recovering prior (MCAR prior) that generates a guide for the prior and corrects the expert-supplied prior according to that guide. We demonstrate this approach for a one-dimensional elliptic partial differential equation and observe how the method performs in cases with significant expert bias and without any expert bias. In the case of significant expert bias, the method substantially reduces the bias; in the case with no expert bias, the method introduces only minor errors. The cost of introducing these small errors under good judgement is worth the benefit of correcting major errors under bad judgement. This is particularly true when the prior is determined using only a heuristic or an assumed distribution. / Master of Science
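The MCAR prior construction itself is not spelled out in the abstract, so the sketch below (a generic grid-based Bayesian update, not the MCAR method; all numbers are hypothetical) illustrates only the underlying issue it addresses: a biased expert prior pulling the estimate of a scalar coefficient in a 1D elliptic problem away from the truth.

```python
import numpy as np

def solve_elliptic(a, n=50):
    """Solve -a * u'' = 1 on (0,1), u(0)=u(1)=0, by finite differences
    (constant coefficient a for simplicity)."""
    h = 1.0 / (n + 1)
    A = (a / h**2) * (2.0 * np.eye(n)
                      - np.eye(n, k=1) - np.eye(n, k=-1))
    return np.linalg.solve(A, np.ones(n))

# Grid-based Bayesian update for the scalar coefficient a, showing how
# a biased expert prior (mean 1.5) distorts the estimate of a_true = 1.0.
rng = np.random.default_rng(5)
a_true, noise = 1.0, 0.02
data = solve_elliptic(a_true) + noise * rng.standard_normal(50)
a_grid = np.linspace(0.5, 2.0, 151)
log_like = np.array([-0.5 * np.sum((solve_elliptic(a) - data)**2) / noise**2
                     for a in a_grid])
log_prior = -0.5 * (a_grid - 1.5)**2 / 0.1**2    # biased expert prior
log_post = log_like + log_prior
post = np.exp(log_post - log_post.max())
print(a_grid[np.argmax(post)])   # mode lands between truth (1.0) and prior mean (1.5)
```

Correcting this kind of prior-induced shift, while changing a well-chosen prior as little as possible, is the trade-off the MCAR prior is designed to manage.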
