21

A Field-Wise Retrieval Algorithm for SeaWinds

Richards, Stephen L. 14 May 2003 (has links)
In the spring of 1999 NASA will launch the scatterometer SeaWinds, beginning a 3-year mission to measure the ocean winds. SeaWinds is different from previous spaceborne scatterometers in that it employs a rotating pencil-beam antenna as opposed to fixed fan-beam antennas. The scanning beam provides greater coverage but causes the wind retrieval accuracy to vary across the swath. This thesis develops a field-wise wind retrieval algorithm to improve the overall wind retrieval accuracy for use with SeaWinds data. In order to test the field-wise wind retrieval algorithm, methods for simulating wind fields are developed. A realistic approach interpolates the NASA Scatterometer (NSCAT) estimates to fill a SeaWinds swath using optimal interpolation along with linear wind field models. The two stages of the field-wise wind retrieval algorithm are field-wise estimation and field-wise ambiguity selection. Field-wise estimation is implemented using a 22-parameter Karhunen-Loève (KL) wind field model in conjunction with a maximum likelihood objective function. An augmented multi-start global optimization is developed which uses information from the point-wise estimates to aid in a global search of the objective function. The local minima in the objective function are located using the augmented multi-start search techniques and are stored as field-wise ambiguities. The ambiguity selection algorithm uses a field-wise median filter to select the field-wise ambiguity closest to the true wind in each region. Point-wise nudging is used to further improve the field-wise estimate using information from the point-wise estimates. Combined, these two techniques select a good estimate of the wind 95% of the time. The overall performance of the field-wise wind retrieval algorithm is compared with the performance of the current point-wise techniques. Field-wise estimation techniques are shown to be potentially better than point-wise techniques. The field-wise estimates are also shown to be very useful tools in point-wise ambiguity selection since 95.8%-96.6% of the point-wise estimates closest to the field-wise estimates are the correct aliases.
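As an illustration of the kind of truncated Karhunen-Loève wind-field model this abstract describes, the following sketch projects a gridded field onto a small KL basis learned from sample fields. It is not the thesis code; the synthetic data, grid size, and the 22-mode truncation are assumptions made here for illustration.

```python
import numpy as np

# Minimal sketch of a truncated Karhunen-Loeve (PCA) wind-field model:
# each field is flattened into a vector, a basis is learned from sample
# fields, and a new field is approximated by a few KL coefficients.
rng = np.random.default_rng(0)
n_fields, ny, nx = 200, 12, 12                 # synthetic "swath regions" (assumed sizes)
fields = rng.normal(size=(n_fields, ny * nx))  # stand-in for sampled wind components

mean = fields.mean(axis=0)
cov = np.cov(fields - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)         # eigenpairs in ascending order
basis = eigvecs[:, ::-1][:, :22]               # keep 22 leading KL modes

new_field = rng.normal(size=ny * nx)           # field to be modeled
coeffs = basis.T @ (new_field - mean)          # 22 model parameters
approx = mean + basis @ coeffs                 # low-dimensional reconstruction
print("relative error:", np.linalg.norm(new_field - approx) / np.linalg.norm(new_field))
```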
22

Curve Estimation and Signal Discrimination in Spatial Problems

Rau, Christian, rau@maths.anu.edu.au January 2003 (has links)
In many instances arising prominently, but not exclusively, in imaging problems, it is important to condense the salient information so as to obtain a low-dimensional approximant of the data. This thesis is concerned with two basic situations which call for such a dimension reduction. The first of these is the statistical recovery of smooth edges in regression and density surfaces. The edges are understood to be contiguous curves, although they are allowed to meander almost arbitrarily through the plane, and may even split at a finite number of points to yield an edge graph. A novel locally-parametric nonparametric method is proposed which enjoys the benefit of being relatively easy to implement via a `tracking' approach. These topics are discussed in Chapters 2 and 3, with pertaining background material being given in the Appendix. In Chapter 4 we construct concomitant confidence bands for this estimator, which have asymptotically correct coverage probability. The construction can be likened to only a few existing approaches, and may thus be considered as our main contribution.

Chapter 5 discusses numerical issues pertaining to the edge and confidence band estimators of Chapters 2-4. Connections are drawn to popular topics which originated in the fields of computer vision and signal processing, and which surround edge detection. These connections are exploited so as to obtain greater robustness of the likelihood estimator, such as in the presence of sharp corners.

Chapter 6 addresses a dimension reduction problem for spatial data where the ultimate objective of the analysis is the discrimination of these data into one of a few pre-specified groups. In the dimension reduction step, an instrumental role is played by the recently developed methodology of functional data analysis. Relatively standard non-linear image processing techniques, as well as wavelet shrinkage, are used prior to this step. A case study for remotely-sensed navigation radar data exemplifies the methodology of Chapter 6.
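The core idea of recovering an edge (a jump) in a noisy regression surface can be illustrated with a toy one-dimensional analogue. The sketch below is only an analogue of the locally-parametric tracking estimator, not the method of the thesis; the data, noise level, and jump location are invented here.

```python
import numpy as np

# Toy 1-D analogue of edge (jump) estimation in a noisy regression:
# the jump location is estimated by the split that minimizes the
# residual sum of squares of a piecewise-constant fit.
rng = np.random.default_rng(1)
n, true_edge = 400, 260
y = np.where(np.arange(n) < true_edge, 0.0, 1.0) + 0.3 * rng.normal(size=n)

def rss_at_split(y, k):
    left, right = y[:k], y[k:]
    return ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()

splits = range(10, n - 10)
k_hat = min(splits, key=lambda k: rss_at_split(y, k))
print("estimated edge location:", k_hat)   # close to 260 on most draws
```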
23

Influence de la variabilité spatiale en interaction sismique sol-structure

Savin, Eric 24 November 1999 (has links) (PDF)
Studies of seismic soil-structure interaction phenomena for the design of civil engineering structures rest on two strong simplifying assumptions: lateral homogeneity of the soil beneath the foundation, and representation of the seismic motion by plane waves at vertical or inclined incidence. For large structures resting on extended flexible rafts, these assumptions are no longer valid, all the more so as in-situ observations of seismic waves show significant spatial variability even over short distances and independently of the wave-passage effect. In this work, the modeling and numerical analysis tools needed to account for this variability, as well as for that of the mechanical characteristics of the soil, are developed. A probabilistic approach is adopted, and their influence is related directly to the response of the structure to the earthquake through an integral formulation incorporating both aspects simultaneously. For an efficient numerical implementation, the random dimension of the fluctuations of the soil's mechanical characteristics, whether large or small, is reduced by introducing the Karhunen-Loève expansion of the associated dynamic stiffness operator. This technique is also applied to the incident seismic field. The results obtained for realistic complex cases highlight certain phenomena whose understanding appears indispensable in the context of an industrial study. In particular, they give some useful indications on the sensitivity of the structural response to the spatial variability of the seismic motion or of the soil parameters.
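The Karhunen-Loève reduction of a spatially variable random quantity, as invoked in this abstract, can be sketched as follows. This is a generic illustration under assumed choices (a one-dimensional grid and an exponential covariance), not the operator-level expansion used in the thesis.

```python
import numpy as np

# Karhunen-Loeve expansion of a 1-D random field with exponential covariance
# C(x, x') = sigma^2 * exp(-|x - x'| / ell): realizations are built from the
# leading eigenpairs and a small number of standard normal variables.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 200)           # grid under the foundation (arbitrary units)
sigma, ell = 1.0, 2.0                     # assumed standard deviation and correlation length
cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

m = 15                                    # number of retained KL modes (reduced random dimension)
xi = rng.normal(size=m)                   # independent basis random variables
field = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)
print("captured variance fraction:", eigvals[:m].sum() / eigvals.sum())
```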
24

Numerical Study Of Rayleigh Benard Thermal Convection Via Solenoidal Bases

Yildirim, Cihan 01 March 2011 (has links) (PDF)
A numerical study of transition in the Rayleigh-Bénard problem of thermal convection between rigid plates heated from below under the influence of gravity, with and without rotation, is presented. The first numerical approach uses a spectral element method with Fourier expansion in the horizontal directions and Legendre polynomials in the vertical direction to generate a database for subsequent analysis by Karhunen-Loève (KL) decomposition. KL decomposition is a statistical tool that decomposes the dynamics underlying a database representing a physical phenomenon into its basic components in the form of an orthogonal KL basis. The KL basis satisfies all the spatial constraints, such as the boundary conditions and the solenoidal (divergence-free) character of the underlying flow field, as far as they are carried by the flow database. The optimally representative character of the orthogonal basis is used to investigate the convective flow for different parameters, such as the Rayleigh and Prandtl numbers. The second numerical approach uses divergence-free basis functions that by construction satisfy the continuity equation and the boundary conditions in an expansion of the velocity flow field. The expansion bases for the thermal field are constructed to satisfy the boundary conditions. Both bases use Legendre polynomials in the vertical direction in order to simplify the Galerkin projection procedure, while a Fourier representation is used in the horizontal directions since the horizontal extent of the computational domain is taken as periodic. Dual bases are employed to reduce the governing Boussinesq equations to a dynamical system for the time-dependent expansion coefficients. The dual bases are selected so that the pressure term is eliminated in the projection procedure. The resulting dynamical system is used to study the transitional regimes numerically. The main difference between the two approaches is the accuracy with which the solenoidal character of the flow is satisfied. The first approach needs a numerically or experimentally generated database for the generation of the divergence-free KL basis. The accuracy with which the KL basis satisfies the solenoidal character of the flow is limited to that of the database and in turn to the numerical technique used; this is a major challenge in most numerical simulation techniques for incompressible flow in the literature. It is also dependent on the parameter values at which the underlying flow field is generated. The second approach, however, is parameter-independent and is based on an analytically solenoidal basis that produces an almost exactly divergence-free flow field. This level of accuracy is especially important for transition studies that explore regions sensitive to parameter and flow perturbations.
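In discrete form, the KL decomposition of a flow database is the snapshot proper orthogonal decomposition, which can be sketched with a singular value decomposition. The snapshot matrix below is synthetic and all sizes are assumptions; the thesis uses an actual simulation database.

```python
import numpy as np

# Snapshot Karhunen-Loeve / POD of a flow database: rows of `snapshots`
# are flattened velocity fields saved at different times; the SVD of the
# mean-removed matrix gives the orthogonal KL basis and modal energies.
rng = np.random.default_rng(3)
n_snapshots, n_dof = 300, 1024               # stand-in for simulation output sizes
snapshots = rng.normal(size=(n_snapshots, n_dof))

mean_flow = snapshots.mean(axis=0)
fluct = snapshots - mean_flow
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)

kl_modes = Vt                                # rows: spatial KL modes
energies = s**2 / (s**2).sum()               # fraction of fluctuation energy per mode
coeffs = fluct @ Vt.T                        # time-dependent expansion coefficients
print("energy in 10 leading modes:", energies[:10].sum())
```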
25

Bayesian Uncertainty Quantification for Large Scale Spatial Inverse Problems

Mondal, Anirban 2011 August 1900 (has links)
We considered a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a high-dimensional spatial field. The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources, and provides a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. The Karhunen-Loève expansion and the Discrete Cosine Transform were used for dimension reduction of the random spatial field. Furthermore, we used a hierarchical Bayes model to inject multiscale data into the modeling framework. In this Bayesian framework, we have shown that this inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm. The need for multiple evaluations of the forward model on a high-dimensional spatial field (e.g. in the context of MCMC), together with the high dimensionality of the posterior, results in many computational challenges. We developed a two-stage reversible jump MCMC method which has the ability to screen out bad proposals in the first, inexpensive stage. Channelized spatial fields were represented by facies boundaries and variogram-based spatial fields within each facies. Using a level-set based approach, the shape of the channel boundaries was updated with dynamic data using a Bayesian hierarchical model where the number of points representing the channel boundaries is assumed to be unknown. Statistical emulators on a large scale spatial field were introduced to avoid the expensive likelihood calculation, which contains the forward simulator, at each iteration of the MCMC step. To build the emulator, the original spatial field was represented by a low-dimensional parameterization using the Discrete Cosine Transform (DCT), and then the Bayesian approach to multivariate adaptive regression splines (BMARS) was used to emulate the simulator. Various numerical results were presented by analyzing simulated as well as real data.
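The DCT parameterization mentioned above can be sketched as follows: a spatial field is compressed by keeping only a block of low-frequency DCT coefficients, so any sampler can operate on the retained coefficients instead of the full grid. The field, grid size, and retained-coefficient count below are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Dimension reduction of a 2-D spatial field with the Discrete Cosine
# Transform: keep only a small block of low-frequency coefficients and
# reconstruct the field from them.
rng = np.random.default_rng(4)
field = rng.normal(size=(64, 64))          # stand-in for a spatial (e.g. permeability) field

coeffs = dctn(field, norm="ortho")
k = 8                                      # retain an 8 x 8 low-frequency block
reduced = np.zeros_like(coeffs)
reduced[:k, :k] = coeffs[:k, :k]           # 64 parameters instead of 4096
reconstruction = idctn(reduced, norm="ortho")

rel_err = np.linalg.norm(field - reconstruction) / np.linalg.norm(field)
print("relative reconstruction error:", rel_err)   # large here because the stand-in field is white noise
```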
26

Detection of Human Emotion from Noise Speech

Nallamilli, Sai Chandra Sekhar Reddy, Kandi, Nihanth January 2020 (has links)
Detection of human emotion from speech is always a challenging task. Factors like intonation, pitch, and loudness vary across different human voices, so knowing the exact pitch, intonation, and loudness of a speech signal is essential, and this is what makes detection difficult. Some recordings exhibit high background noise, which affects the amplitude or pitch of the signal, so knowing the detailed properties of a speech signal is mandatory for detecting emotion. Detection of emotion from human speech signals is a recent research field. One of the scenarios where this field has been applied is in situations where human integrity and security are at risk. In this project we propose a set of features based on the decomposition signals from the discrete wavelet transform to characterize different types of negative emotions such as anger, happiness, sadness, and desperation. The features are measured in three different conditions: (1) the original speech signals, (2) the signals that are contaminated with noise or are affected by the presence of a phone channel, and (3) the signals that are obtained after processing with a speech enhancement algorithm. According to the results, when speech enhancement is applied, the detection of emotion in speech improves compared with the results obtained when the speech signal is highly contaminated with noise. Our objective is to use an artificial neural network, because the brain, itself built from neural networks, is the most efficient machine for recognizing speech, and artificial neural networks offer clear advantages such as nonlinearity and high classification capability. We use a feedforward neural network, which is suitable for classification, with the sigmoid function as the activation function. The detection of human emotion from speech is achieved by training the neural network with features extracted from the speech. To achieve this, we need proper features from the speech, so we must remove the background noise using filters. The wavelet transform is the filtering technique used to remove the background noise and enhance the required features in the speech.
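A minimal sketch of the pipeline described above, wavelet-based features followed by a feedforward network with sigmoid activations, is given below. The feature definition, network size, and data are invented here, and PyWavelets/scikit-learn are assumed as convenient stand-ins rather than the tools used in the thesis.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

# Sketch: energy of discrete-wavelet-transform subbands as features for a
# feedforward (logistic-activation) classifier. Real use would replace the
# synthetic signals with labelled emotional-speech recordings.
rng = np.random.default_rng(5)

def dwt_energy_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c**2) for c in coeffs])    # one energy value per subband

n_samples, n_points = 200, 2048
signals = rng.normal(size=(n_samples, n_points))        # stand-in for speech frames
labels = rng.integers(0, 4, size=n_samples)             # e.g. anger/happiness/sadness/desperation

X = np.vstack([dwt_energy_features(s) for s in signals])
clf = MLPClassifier(hidden_layer_sizes=(16,), activation="logistic", max_iter=2000)
clf.fit(X, labels)
print("training accuracy on synthetic data:", clf.score(X, labels))
```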
27

Wind Scatterometry with Improved Ambiguity Selection and Rain Modeling

Draper, David W. 23 December 2003 (has links) (PDF)
Although generally accurate, the quality of SeaWinds on QuikSCAT scatterometer ocean vector winds is compromised by certain natural phenomena and retrieval algorithm limitations. This dissertation addresses three main contributors to scatterometer estimate error: poor ambiguity selection, estimate uncertainty at low wind speeds, and rain corruption. A quality assurance (QA) analysis performed on SeaWinds data suggests that about 5% of SeaWinds data contain ambiguity selection errors and that scatterometer estimation error is correlated with low wind speeds and rain events. Ambiguity selection errors are partly due to the "nudging" step (initialization from outside data). A sophisticated new non-nudging ambiguity selection approach produces generally more consistent winds than the nudging method in moderate wind conditions. The non-nudging method selects 93% of the same ambiguities as the nudged data, validating both techniques, and indicating that ambiguity selection can be accomplished without nudging. Variability at low wind speeds is analyzed using tower-mounted scatterometer data. According to theory, below a threshold wind speed, the wind fails to generate the surface roughness necessary for wind measurement. A simple analysis suggests the existence of the threshold in much of the tower-mounted scatterometer data. However, the backscatter does not "go to zero" beneath the threshold in an uncontrolled environment as theory suggests, but rather has a mean drop and higher variability below the threshold. Rain is the largest weather-related contributor to scatterometer error, affecting approximately 4% to 10% of SeaWinds data. A simple model formed via comparison of co-located TRMM PR and SeaWinds measurements characterizes the average effect of rain on SeaWinds backscatter. The model is generally accurate to within 3 dB over the tropics. The rain/wind backscatter model is used to simultaneously retrieve wind and rain from SeaWinds measurements. The simultaneous wind/rain (SWR) estimation procedure can improve wind estimates during rain, while providing a scatterometer-based rain rate estimate. SWR also affords improved rain flagging for low to moderate rain rates. QuikSCAT-retrieved rain rates correlate well with TRMM PR instantaneous measurements and TMI monthly rain averages. SeaWinds rain measurements can be used to supplement data from other rain-measuring instruments, filling spatial and temporal gaps in coverage.
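One ingredient discussed above, ambiguity selection by a median filter, can be sketched as follows: in each cell the ambiguity closest to the median of the currently selected neighbours is kept, and the pass is repeated until the selection stops changing. The grid sizes and synthetic ambiguity sets are assumptions; this is not the dissertation's non-nudging algorithm.

```python
import numpy as np

# Sketch of median-filter ambiguity selection on a grid of candidate wind
# vectors ("ambiguities"). Each cell keeps the ambiguity closest to the
# componentwise median of the selected vectors in its 3x3 neighbourhood.
rng = np.random.default_rng(6)
ny, nx, n_amb = 20, 20, 4
ambiguities = rng.normal(size=(ny, nx, n_amb, 2))       # candidate (u, v) vectors
selected = np.zeros((ny, nx), dtype=int)                 # start from the first ambiguity

for _ in range(10):                                      # a few passes usually suffice
    changed = 0
    for i in range(ny):
        for j in range(nx):
            i0, i1 = max(i - 1, 0), min(i + 2, ny)
            j0, j1 = max(j - 1, 0), min(j + 2, nx)
            neigh = ambiguities[i0:i1, j0:j1]
            sel = selected[i0:i1, j0:j1]
            local = neigh[np.arange(i1 - i0)[:, None], np.arange(j1 - j0), sel]
            median = np.median(local.reshape(-1, 2), axis=0)
            best = np.argmin(np.linalg.norm(ambiguities[i, j] - median, axis=1))
            if best != selected[i, j]:
                selected[i, j] = best
                changed += 1
    if changed == 0:
        break
print("selected ambiguity indices:\n", selected)
```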
28

The Pdf Of Irradiance For A Free-space Optical Communications Channel: A Physics Based Model

Wayne, David 01 January 2010 (has links)
An accurate PDF of irradiance for an FSO channel is important when designing a laser radar, active laser imaging, or a communications system to operate over the channel. Parameters such as detector threshold level, probability of detection, mean fade time, number of fades, BER, and SNR are derived from the PDF and determine the design constraints of the receiver, transmitter, and corresponding electronics. Current PDF models of irradiance, such as the Gamma-Gamma, do not fully capture the effect of aperture averaging: a reduction in scintillation as the diameter of the collecting optic is increased. The Gamma-Gamma PDF of irradiance is an attractive solution because the parameters of the distribution are derived strictly from atmospheric turbulence parameters: propagation path length, Cn2, l0, and L0. This dissertation describes a heuristic physics-based modeling technique to develop a new PDF of irradiance based upon the optical field. The goal of the new PDF is three-fold: capture the physics of the turbulent atmosphere, better describe aperture averaging effects, and relate parameters of the new model to measurable atmospheric parameters. The modeling decomposes the propagating electromagnetic field into a sum of independent random-amplitude spatial plane waves using an approximation to the Karhunen-Loève expansion. The scattering effects of the turbulence along the propagation path define the random amplitude of each component of the expansion. The resulting PDF of irradiance is a double finite sum containing a Bessel function. The newly developed PDF is a generalization of the Gamma-Gamma PDF, and reduces to it in the limit. An experiment was set up and performed to measure the PDF of irradiance for several receiver aperture sizes under moderate to strong turbulence conditions. The propagation path was instrumented with scintillometers and anemometers to characterize the turbulence conditions. The newly developed PDF model and the GG model were compared to histograms of the experimental data. The new PDF model was typically able to match the data as well or better than the GG model under conditions of moderate aperture averaging. The GG model fit the data better than the new PDF under conditions of significant aperture averaging. Due to a limiting scintillation index value of 3, the new PDF was not compared to the GG for point apertures under strong turbulence, a regime where the GG is known to fit data well.
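For reference, the Gamma-Gamma PDF that the new model generalizes can be evaluated as below. The sketch uses the standard form of the distribution; the α and β values are arbitrary illustrative choices, not values from the experiment.

```python
import numpy as np
from scipy.special import gamma, kv

# Gamma-Gamma PDF of normalized irradiance I (unit mean):
# p(I) = 2 (a b)^((a+b)/2) / (Gamma(a) Gamma(b)) * I^((a+b)/2 - 1)
#        * K_{a-b}(2 sqrt(a b I)),
# where a and b are the effective numbers of large- and small-scale cells.
def gamma_gamma_pdf(I, a, b):
    I = np.asarray(I, dtype=float)
    front = 2.0 * (a * b) ** ((a + b) / 2.0) / (gamma(a) * gamma(b))
    return front * I ** ((a + b) / 2.0 - 1.0) * kv(a - b, 2.0 * np.sqrt(a * b * I))

I = np.linspace(0.01, 5.0, 500)
pdf = gamma_gamma_pdf(I, a=4.0, b=2.0)                    # arbitrary illustrative parameters
print("approximate normalization:", np.sum(pdf) * (I[1] - I[0]))   # close to 1
```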
29

Uncertainty Quantification in Dynamic Problems With Large Uncertainties

Mulani, Sameer B. 13 September 2006 (has links)
This dissertation investigates uncertainty quantification in dynamic problems. The Advanced Mean Value (AMV) method is used to calculate probabilistic sound power and the sensitivity of elastically supported panels with small uncertainty (coefficient of variation). Sound power calculations are done using the Finite Element Method (FEM) and the Boundary Element Method (BEM). The sensitivities of the sound power are calculated through direct differentiation of the FEM/BEM/AMV equations. The results are compared with Monte Carlo simulation (MCS). An improved method is developed using AMV, a metamodel, and MCS. This new technique is applied to calculate the sound power of a composite panel using FEM and the Rayleigh Integral. The proposed methodology shows considerable improvement both in terms of accuracy and computational efficiency. In systems with large uncertainties, the above approach does not work. Two Spectral Stochastic Finite Element Method (SSFEM) algorithms are developed to solve stochastic eigenvalue problems using polynomial chaos. Presently, the approaches are restricted to problems with real and distinct eigenvalues. In both approaches, the system uncertainties are modeled by Wiener-Askey orthogonal polynomial functions. Galerkin projection is applied in the probability space to minimize the weighted residual of the error of the governing equation. The first algorithm is based on the inverse iteration method. A modification is suggested to calculate higher eigenvalues and eigenvectors. This algorithm is applied to both discrete and continuous systems. In continuous systems, the uncertainties are modeled as Gaussian processes using the Karhunen-Loève (KL) expansion. The second algorithm is based on the implicit polynomial iteration method. This algorithm is found to be more efficient when applied to discrete systems. However, the application of the algorithm to continuous systems results in ill-conditioned system matrices, which seriously limit its application. Lastly, an algorithm to find the basis random variables of the KL expansion for non-Gaussian processes is developed. The basis random variables are obtained via nonlinear transformation of the marginal cumulative distribution function using the standard deviation. Results are obtained for three known skewed distributions: Log-Normal, Beta, and Exponential. In all the cases, it is found that the proposed algorithm matches very well with the known solutions and can be applied to solve non-Gaussian processes using SSFEM. / Ph. D.
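The final step described above, mapping Gaussian basis variables to a non-Gaussian marginal through cumulative distribution functions, can be sketched as follows. The lognormal target and its parameter are assumptions made here for illustration, not one of the thesis cases.

```python
import numpy as np
from scipy import stats

# Sketch of a marginal-CDF transformation for non-Gaussian basis random
# variables: map standard normal samples xi to a target marginal via
# eta = F_target^{-1}(Phi(xi)).
rng = np.random.default_rng(7)
xi = rng.standard_normal(100_000)                 # Gaussian KL basis variables

target = stats.lognorm(s=0.5)                     # skewed target marginal (assumed)
eta = target.ppf(stats.norm.cdf(xi))              # transformed, non-Gaussian variables

print("sample mean/std :", eta.mean(), eta.std())
print("target mean/std :", target.mean(), target.std())
```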
30

Numerical Complexity Analysis of Weak Approximation of Stochastic Differential Equations

Tempone Olariaga, Raul January 2002 (has links)
The thesis consists of four papers on numerical complexity analysis of weak approximation of ordinary and partial stochastic differential equations, including illustrative numerical examples. Here by numerical complexity we mean the computational work needed by a numerical method to solve a problem with a given accuracy. This notion offers a way to understand the efficiency of different numerical methods. The first paper develops new expansions of the weak computational error for Itô stochastic differential equations using Malliavin calculus. These expansions have a computable leading order term in a posteriori form, and are based on stochastic flows and discrete dual backward problems. Besides this, these expansions lead to efficient and accurate computation of error estimates and give the basis for adaptive algorithms with either deterministic or stochastic time steps. The second paper proves convergence rates of adaptive algorithms for Itô stochastic differential equations. Two algorithms based either on stochastic or deterministic time steps are studied. The analysis of their numerical complexity combines the error expansions from the first paper and an extension of the convergence results for adaptive algorithms approximating deterministic ordinary differential equations. Both adaptive algorithms are proven to stop with an optimal number of time steps up to a problem-independent factor defined in the algorithm. The third paper extends the techniques to the framework of Itô stochastic differential equations in infinite dimensional spaces, arising in the Heath-Jarrow-Morton term structure model for financial applications in bond markets. Error expansions are derived to identify different error contributions arising from time and maturity discretization, as well as the classical statistical error due to finite sampling. The last paper studies the approximation of linear elliptic stochastic partial differential equations, describing and analyzing two numerical methods. The first method generates iid Monte Carlo approximations of the solution by sampling the coefficients of the equation and using a standard Galerkin finite element variational formulation. The second method is based on a finite dimensional Karhunen-Loève approximation of the stochastic coefficients, turning the original stochastic problem into a high dimensional deterministic parametric elliptic problem. Then, a deterministic Galerkin finite element method, of either h or p version, approximates the stochastic partial differential equation. The paper concludes by comparing the numerical complexity of the Monte Carlo method with the parametric finite element method, suggesting intuitive conditions for an optimal selection of these methods. 2000 Mathematics Subject Classification. Primary 65C05, 60H10, 60H35, 65C30, 65C20; Secondary 91B28, 91B70. / QC 20100825
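The notion of weak approximation studied in the first papers can be illustrated with a toy Euler-Maruyama computation of E[X_T] for geometric Brownian motion, where the exact value is known. All choices below are illustrative and not taken from the thesis.

```python
import numpy as np

# Weak approximation of dX = mu*X dt + sigma*X dW, X(0) = 1, by the
# Euler-Maruyama scheme: the weak error |E[X_T] - E[Xbar_T]| is estimated
# by Monte Carlo and compared with the exact mean x0 * exp(mu * T).
rng = np.random.default_rng(8)
mu, sigma, T, x0 = 0.05, 0.2, 1.0, 1.0
n_paths, n_steps = 200_000, 32
dt = T / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
    x = x + mu * x * dt + sigma * x * dw     # Euler-Maruyama step

exact = x0 * np.exp(mu * T)                  # exact E[X_T] for geometric Brownian motion
print("weak error estimate:", abs(x.mean() - exact))
print("statistical error  ~", x.std() / np.sqrt(n_paths))
```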
