81 |
Foveated Sampling Architectures for CMOS Image Sensors. Saffih, Fayçal, January 2005.
Electronic imaging technologies face the challenge of power consumption when transmitting large amounts of image data from the acquisition imager to display or processing devices. This is especially a concern for portable applications, and it becomes more prominent in increasingly high-resolution, high-frame-rate imagers. Therefore, new sampling techniques are needed that minimize the transmitted data while maximizing the conveyed image information.

From this point of view, two approaches have been proposed and implemented in this thesis:

1. A system-level approach, in which the classical 1D row-sampling CMOS imager is modified into a 2D ring-sampling pyramidal architecture, using the same standard three-transistor (3T) active pixel sensor (APS).
2. A device-level approach, in which the classical orthogonal architecture is preserved while the APS device structure is altered, to design an expandable multiresolution image sensor.

A new scanning scheme has been suggested for the pyramidal image sensor, resulting in an intrascene foveated dynamic range (FDR) similar in profile to that of the human eye: the inner rings of the imager have a higher dynamic range than the outer rings. The pyramidal imager transmits the sampled image through 8 parallel output channels, allowing higher frame rates. The human eye is known to be less sensitive to oblique contrast. Because the fixed pattern noise (FPN) of the pyramidal architecture is distributed obliquely, we demonstrate that it is perceived less than the orthogonal FPN distribution of classical CMOS imagers.

The multiresolution image sensor is based on averaging regions of low interest over frame-sampled image kernels: a single averaged pixel is read from each kernel, while pixels in the region of interest are kept at full resolution. This significantly reduces the transferred data and increases the frame rate.
Such architecture allows for programmability and expandability of multiresolution imaging applications.
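The readout principle described above can be sketched in a few lines. This is an illustrative software model only, not the sensor's actual circuitry: the 4x4 kernel size, the frame size, and the region-of-interest coordinates are assumptions chosen for the example.

```python
def multiresolution_readout(frame, kernel=4, roi=(0, 0, 0, 0)):
    """frame: 2D list of pixel values; roi: (row0, col0, row1, col1),
    half-open. Returns the transmitted samples: full-resolution pixels
    inside the region of interest (ROI) plus one averaged value per
    kernel outside it."""
    rows, cols = len(frame), len(frame[0])
    r0, c0, r1, c1 = roi
    samples = []
    # Full-resolution readout inside the region of interest.
    for r in range(r0, r1):
        for c in range(c0, c1):
            samples.append(frame[r][c])
    # One averaged pixel per kernel elsewhere; kernels partly covered by
    # the ROI are averaged over their non-ROI pixels only, for simplicity.
    for kr in range(0, rows, kernel):
        for kc in range(0, cols, kernel):
            block = [frame[r][c]
                     for r in range(kr, min(kr + kernel, rows))
                     for c in range(kc, min(kc + kernel, cols))
                     if not (r0 <= r < r1 and c0 <= c < c1)]
            if block:
                samples.append(sum(block) / len(block))
    return samples

# A 16x16 frame with an 8x8 ROI: 64 ROI pixels plus 12 kernel averages
# are transmitted instead of 256 raw pixels.
frame = [[r * 16 + c for c in range(16)] for r in range(16)]
out = multiresolution_readout(frame, kernel=4, roi=(4, 4, 12, 12))
print(len(out), "samples instead of", 16 * 16)
```

The same sketch makes the programmability point concrete: changing `kernel` or `roi` trades resolution against transmitted data without touching the rest of the readout.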
|
82 |
Multiresolutional partial least squares and principal component analysis of fluidized bed drying. Frey, Gerald M., 14 April 2005.
Fluidized bed dryers are used in the pharmaceutical industry for the batch drying of pharmaceutical granulate. Maintaining optimal hydrodynamic conditions throughout the drying process is essential to product quality. Due to the complex interactions inherent in the fluidized bed drying process, mechanistic models capable of identifying these optimal modes of operation are either unavailable or limited in their capabilities. Therefore, empirical models based on experimentally generated data are relied upon to study these systems.

Principal Component Analysis (PCA) and Partial Least Squares (PLS) are multivariate statistical techniques that project data onto the linear subspaces most descriptive of the variance in a dataset. By modeling data in terms of these subspaces, a more parsimonious representation of the system is possible. In this study, PCA and PLS are applied to data collected from a fluidized bed dryer containing pharmaceutical granulate.

System hydrodynamics were quantified in the models using high frequency pressure fluctuation measurements. These pressure fluctuations have previously been identified as a characteristic variable of hydrodynamics in fluidized bed systems; as such, contributions from the macroscale, mesoscale, and microscale of motion are encoded in the signals. A multiresolution decomposition using a discrete wavelet transform was used to resolve these signals into components more representative of the individual scales before modeling the data.

The combination of multiresolution analysis with PCA and PLS was shown to be an effective approach for modeling the conditions in the fluidized bed dryer. In this study, datasets from both steady state and transient operation of the dryer were analyzed. The steady state dataset contained measurements made on a bed of dry granulate, and the transient dataset consisted of measurements taken during the batch drying of granulate from approximately 33 wt.% moisture to 5 wt.%. Correlations involving several scales of motion were identified in both studies.

In the steady state study, deterministic behavior related to superficial velocity, pressure sensor position, and granulate particle size distribution was observed in the PCA model parameters. It was determined that these properties could be characterized solely with the use of the high frequency pressure fluctuation data. Macroscopic hydrodynamic characteristics such as bubbling frequency and fluidization regime were identified in the low frequency components of the pressure signals, and the particle-scale interactions of the microscale were shown to be correlated to the highest frequency signal components. PLS models were able to characterize the effects of superficial velocity, pressure sensor position, and granulate particle size distribution in terms of the pressure signal components. Additionally, it was determined that statistical process control charts capable of monitoring the fluid bed hydrodynamics could be constructed using PCA.

In the transient drying experiments, deterministic behaviors related to inlet air temperature, pressure sensor position, and initial bed mass were observed in the PCA and PLS model parameters. The lowest frequency component of the pressure signal was found to be correlated to the overall temperature effects during the drying cycle. As in the steady state study, bubbling behavior was also observed in the low frequency components of the pressure signal. PLS was used to construct an inferential model of granulate moisture content. The model was found to be capable of predicting the moisture content throughout the drying cycle. Preliminary statistical process control models were constructed to monitor the fluid bed hydrodynamics throughout the drying process. These models show promise but will require further investigation to better determine their sensitivity to process upsets.

In addition to the PCA and PLS analyses, Multiway Principal Component Analysis (MPCA) was used to model the drying process. Several key states related to the mass transfer of moisture and changes in temperature throughout the drying cycle were identified in the MPCA model parameters. It was determined that the mass transfer of moisture throughout the drying process affects all scales of motion and overshadows other hydrodynamic behaviors found in the pressure signals.
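The wavelet preprocessing step described above can be sketched as follows. This is a hedged illustration, not the thesis's code: a Haar wavelet is used only because its filters are short (the thesis does not specify the wavelet), and the "pressure signal" is a synthetic mix of a slow and a fast oscillation standing in for macroscale and microscale motion.

```python
import math

def haar_dwt_step(x):
    """One level of the orthonormal Haar DWT: approximation and detail."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def multiresolution_decompose(x, levels):
    """Return [detail_1 (finest), ..., detail_L, approx_L (coarsest)]."""
    comps = []
    for _ in range(levels):
        x, d = haar_dwt_step(x)
        comps.append(d)
    comps.append(x)
    return comps

# Synthetic stand-in for a pressure signal: a slow component (period 64
# samples) plus a weak fast component (period 4 samples).
signal = [math.sin(2 * math.pi * i / 64) + 0.1 * math.sin(2 * math.pi * i / 4)
          for i in range(256)]
comps = multiresolution_decompose(signal, levels=4)

# Energy per scale: the fast component loads the finest details, the
# slow component the coarse approximation, which is what lets PCA/PLS
# attribute behavior to individual scales of motion.
energies = [sum(c * c for c in comp) for comp in comps]
```

Because the Haar transform is orthonormal, the component energies sum exactly to the signal energy, so each scale's share of the variance is well defined before it is handed to PCA or PLS.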
|
83 |
Optimizable Multiresolution Quadratic Variation Filter for High-Frequency Financial Data. Sen, Aykut, 01 February 2009.
As tick-by-tick records of financial transactions become easier to obtain, processing this much information efficiently and correctly to estimate the integrated volatility gains importance. However, empirical findings show that such data may become unusable due to microstructure effects. The most common way to overcome this problem is to sample the data at equidistant intervals on a calendar, tick, or business time scale. Comparative research on the subject generally asserts that the most successful scheme is calendar time sampling at intervals of 5 to 20 minutes. But this generally means throwing out more than 99 percent of the data, so a more efficient sampling method is clearly needed. Although alternative techniques have been studied, none has been proven to be the best.

Our study is concerned with a sampling scheme that uses the information in different frequency scales and is less prone to microstructure effects. We introduce a new concept of business intensity, whose sampler is named the Optimizable Multiresolution Quadratic Variation Filter. Our filter uses multiresolution analysis techniques to decompose the data into different scales, and quadratic variation to build up the new business time scale. Our empirical findings show that our filter is clearly less prone to microstructure effects than any other common sampling method.

We use classified tick-by-tick data for the Turkish Interbank FX market. The market is closed for nearly 14 hours a day, so big jumps occur between closing and opening prices. We also propose a new smoothing algorithm to reduce the effects of those jumps.
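The calendar-time baseline discussed above, together with the realized (quadratic) variation it feeds, can be sketched as follows. The tick data here are synthetic, the 5-minute interval simply mirrors the commonly quoted range, and previous-tick interpolation is one standard convention among several.

```python
import math, random

def calendar_sample(ticks, interval):
    """ticks: sorted (time_in_seconds, price) pairs. Keep the last tick
    at or before each calendar grid point (previous-tick interpolation)."""
    t0, t_end = ticks[0][0], ticks[-1][0]
    sampled, j = [], 0
    t = t0
    while t <= t_end:
        while j + 1 < len(ticks) and ticks[j + 1][0] <= t:
            j += 1
        sampled.append(ticks[j][1])
        t += interval
    return sampled

def realized_variance(prices):
    """Sum of squared log returns: the quadratic variation estimate."""
    return sum((math.log(prices[i + 1]) - math.log(prices[i])) ** 2
               for i in range(len(prices) - 1))

# One synthetic six-hour session of second-by-second ticks.
random.seed(0)
price, ticks = 100.0, []
for t in range(6 * 3600):
    price *= math.exp(random.gauss(0.0, 2e-4))
    ticks.append((t, price))

sampled = calendar_sample(ticks, 300)          # 5-minute calendar grid
rv_5min = realized_variance(sampled)
kept = len(sampled)
print("kept", kept, "of", len(ticks), "ticks")
```

Here 21,600 ticks collapse to 72 samples, which is exactly the "throwing out more than 99 percent of the data" that motivates a multiresolution business-time sampler instead.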
|
84 |
Statistische Multiresolutions-Schätzer in linearen inversen Problemen - Grundlagen und algorithmische Aspekte / Statistical Multiresolution Estimators in Linear Inverse Problems - Foundations and Algorithmic Aspects. Marnitz, Philipp, 27 October 2010.
No description available.
|
85 |
Schemes and Strategies to Propagate and Analyze Uncertainties in Computational Fluid Dynamics Applications. Geraci, Gianluca, 05 December 2013.
In this manuscript, three main contributions are presented concerning the propagation and analysis of uncertainty in computational fluid dynamics (CFD) applications. First, two novel numerical schemes are proposed: one based on a collocation approach, and one based on a finite-volume-like representation in the stochastic space. In both approaches, the key element is the introduction of a non-linear multiresolution representation in the stochastic space. The aim is twofold: reducing the dimensionality of the discrete solution, and applying a time-dependent refinement/coarsening procedure in the combined physical/stochastic space. Finally, an innovative strategy based on variance-based analysis is proposed for handling problems with a moderately large number of uncertainties in the context of robust design optimization. To make this novel optimization strategy more robust, the common ANOVA-like approach is also extended to high-order central moments (up to fourth order). The new approach is more robust than the original variance-based one, since the analysis relies on new sensitivity indices associated with a more complete statistical description.
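The variance-based (ANOVA-like) analysis mentioned above can be illustrated with a first-order Sobol index estimate. This is a generic sketch using Saltelli's pick-freeze estimator, not the thesis's scheme: the test function, sample size, and seed are assumptions, and the extension to higher-order central moments is not shown.

```python
import random

def first_order_indices(f, dim, n, rng):
    """Estimate first-order Sobol indices of f on [0,1]^dim with the
    pick-freeze (Saltelli) estimator using 2 + dim sample blocks."""
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # AB_i: rows of A with column i taken from B.
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s_i = sum(fb * (fab - fa)
                  for fb, fab, fa in zip(fB, fABi, fA)) / n / var
        indices.append(s_i)
    return indices

# f(x) = x1 + 2*x2 with x_i ~ U(0,1): the exact indices are 0.2 and 0.8,
# since Var(f) = 1/12 + 4/12 and each input contributes its own share.
rng = random.Random(42)
s = first_order_indices(lambda x: x[0] + 2.0 * x[1], dim=2, n=20000, rng=rng)
```

In a robust-design loop, such indices are what lets one freeze the unimportant uncertain parameters and concentrate the stochastic resolution on the dominant ones.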
|
86 |
Multiscale methods in signal processing for adaptive optics. Maji, Suman Kumar, 14 November 2013.
In this thesis, we introduce a new approach to wavefront phase reconstruction in Adaptive Optics (AO) from the low-resolution gradient measurements provided by a wavefront sensor, using a non-linear approach derived from the Microcanonical Multiscale Formalism (MMF). MMF builds on established concepts in statistical physics and is naturally suited to the study of the multiscale properties of complex natural signals, mainly due to the precise numerical estimation of geometrically localized critical exponents, called singularity exponents. These exponents quantify the degree of predictability locally, at each point of the signal domain, and they provide information on the dynamics of the associated system. We show that multiresolution analysis carried out on the singularity exponents of a high-resolution turbulent phase (obtained from a model or from data) allows the low-resolution gradients obtained from the wavefront sensor to be propagated across scales up to a higher resolution. We compare our results with those obtained by linear approaches, and thereby offer an innovative approach to wavefront phase reconstruction in Adaptive Optics.
|
87 |
Improved subband-based and normal-mesh-based image coding. Xu, Di, 19 December 2007.
Image coding is studied, with the work consisting of two distinct parts, each focusing on a different coding paradigm.
The first part of the research examines subband coding of images. An optimization-based method for the design of high-performance separable filter banks for image coding is proposed. This method yields linear-phase perfect-reconstruction systems with high coding gain, good frequency selectivity, and certain prescribed vanishing-moment properties. Several filter banks designed with the proposed method are presented and shown to work extremely well for image coding, outperforming the well-known 9/7 filter bank (from the JPEG-2000 standard) in most cases. Several families of perfect-reconstruction filter banks exist, where the filter banks in each family share common structural properties. New filter banks in each family are designed with the proposed method. Experimental results show that these new filter banks outperform previously known filter banks from the same family.
The second part of the research explores normal meshes as a tool for image coding, with a particular interest in the normal-mesh-based image coder of Jansen, Baraniuk, and Lavu. Three modifications to this coder are proposed, namely, the use of a data-dependent base mesh, an alternative representation for normal/vertical offsets, and a different scan-conversion scheme based on bicubic interpolation. Experimental results show that our proposed changes lead to improved coding performance in terms of both objective and subjective image quality measures.
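Bicubic interpolation of the kind proposed above for scan conversion is separable: a 1D cubic interpolant is applied along rows and then along columns. Below is a sketch of the 1D building block, using the Catmull-Rom cubic as an assumed kernel (the coder's exact kernel is not specified here).

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic interpolation between p1 (at t=0) and p2 (at t=1) using the
    two outer neighbours p0 and p3; the Catmull-Rom kernel interpolates
    the samples and reproduces quadratic polynomials exactly."""
    return 0.5 * (2.0 * p1
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t)

# Sampling f(x) = x^2 at the integers and interpolating at x = 1.5
# recovers 2.25 exactly, thanks to the kernel's quadratic precision.
val = catmull_rom(0.0, 1.0, 4.0, 9.0, 0.5)
```

Applying this along one image axis and then the other gives the separable bicubic scan conversion; its smoothness relative to bilinear interpolation is one plausible source of the reported quality gain.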
|
88 |
Multiresolution strategies for the numerical solution of optimal control problems. Jain, Sachin, 26 March 2008.
Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing these irregularities in the solution is to use a high-resolution (dense) uniform grid, but this requires a large amount of computational resources, both in CPU time and in memory. Hence, in order to capture any irregularities in the solution accurately with fewer computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining it uniformly over the whole domain. To this end, a novel multiresolution scheme for data compression has been designed and shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points than a common multiresolution data compression scheme.
The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a nonlinear programming (NLP) problem that is solved using standard NLP codes. The novelty of the proposed approach hinges on the automatic calculation of a suitable, nonuniform grid over which the NLP problem is solved, which tends to increase numerical efficiency and robustness. Control and/or state constraints are handled with ease, and without any additional computational complexity. The proposed algorithm is based on a simple and intuitive method to balance several conflicting objectives, such as accuracy of the solution, convergence, and speed of the computations. The benefits of the proposed algorithm over uniform grid implementations are demonstrated with the help of several nontrivial examples. Furthermore, two sequential multiresolution trajectory optimization algorithms for solving problems with moving targets and/or dynamically changing environments have been developed.
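The local refinement idea above can be sketched as follows. This is an illustrative dyadic scheme, not the thesis's algorithm: a midpoint is kept only where linear interpolation from its neighbours fails the tolerance, so grid points automatically accumulate around a control switching while smooth regions stay coarse.

```python
def refine(f, a, b, max_level, tol):
    """Adaptive dyadic grid on [a, b]: bisect an interval only where the
    midpoint is poorly predicted by linear interpolation of its ends."""
    pts = {a, b}
    intervals = [(a, b, 0)]
    while intervals:
        lo, hi, lev = intervals.pop()
        if lev >= max_level:
            continue
        mid = 0.5 * (lo + hi)
        # Interpolation error at the midpoint decides refinement.
        if abs(f(mid) - 0.5 * (f(lo) + f(hi))) > tol:
            pts.add(mid)
            intervals.append((lo, mid, lev + 1))
            intervals.append((mid, hi, lev + 1))
    return sorted(pts)

# A bang-bang-like control profile with a switch at t = 0.3.
def control(t):
    return 1.0 if t < 0.3 else -1.0

grid = refine(control, 0.0, 1.0, max_level=10, tol=1e-3)
# The grid clusters around the switching time instead of being uniform.
near = sum(1 for t in grid if 0.25 <= t <= 0.35)
```

A dense uniform grid at the same resolution would need 2^10 + 1 points; the adaptive grid here uses about a dozen, which is the efficiency argument made above.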
|
89 |
Não-estacionariedade de séries temporais turbulentas e a grande variabilidade dos fluxos nas baixas freqüências / Time series non-stationarity and the large low frequency turbulent flux variability. Martins, Luís Gustavo Nogueira, 11 August 2011.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

The high complexity of turbulent flows makes it difficult to describe phenomena such as the transport of vector and scalar quantities in the lower atmosphere, so the analysis of experimental data, such as time series, is widely employed. The method most used by the micrometeorological community to quantify this turbulent transport is based on determining the statistical covariance between two variables. It is known that determining statistical quantities over very long temporal windows leads to large flux uncertainty. At the same time, theory indicates that the association between fluxes and statistical covariance is only valid for temporally stationary series. The aim of the present study is to test the hypothesis that the uncertainty of the estimates is directly related to the non-stationarity of the series. To better understand this issue, we use a methodology based on a set of parametric and nonparametric statistical tests: the T-test, the F-test, the median test, the U-test, and the run test. Furthermore, the test results are compared with the outputs of two signal decomposition procedures: multiresolution analysis and empirical mode decomposition. The results suggest that the flux variability over large temporal scales reflects the existence of temporal trends and low-frequency components in the time series considered, so that it is associated more with an observational limitation of the analysis than with non-stationarity, since stationarity is a property of an ensemble rather than of a single realization. This limitation suggests the definition of a practical first-order stationarity, associated with temporal trends and low-frequency components whose energy is comparable to or larger than that of the turbulent fluctuations. For that reason, we conclude that the run test is, among all those considered, the best suited for analyzing atmospheric data, because it is the most sensitive to the existence of temporal trends. Furthermore, this test allows obtaining a temporal scale beyond
which mesoscale events become important.
|
90 |
Utilização da transformada Wavelet para caracterização de distúrbios na qualidade da energia elétrica / Use of the Wavelet transform for the characterization of disturbances in power quality. Odilon Delmont Filho, 22 September 2003.
This dissertation presents a study of the Wavelet transform applied to power quality, in order to detect, locate, and classify disturbances that may occur in the power system. Initially, an introduction to power quality is presented, showing facts and developments and explaining the main phenomena that degrade the power quality of the Brazilian electrical system, due mainly to the great demand for electronic devices produced nowadays. A review of the main methods and models currently applied worldwide to this subject is also shown.

The Wavelet transform is a great support in the area of signal analysis, as it can extract time and frequency information simultaneously, unlike the Fourier transform. The various disturbances occurring in the system were simulated with the ATP software (Alternative Transients Program), whose characteristics faithfully follow a real distribution system of the CPFL electric utility. The generated and analyzed voltage disturbances were detected and located by the Multiresolution Analysis technique and later classified by the Standard Deviation Curve method.
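The detection-and-location step described above can be sketched on a synthetic waveform. This is an assumption-laden illustration, not the dissertation's ATP-based study: a 50% voltage sag is placed at sample indices chosen so that the amplitude steps do not fall on zero crossings, and a simple threshold on the finest-scale detail stands in for the Standard Deviation Curve classifier.

```python
import math

def haar_detail_undecimated(x):
    """Shift-invariant finest-scale Haar detail: d[i] is proportional to
    x[i+1] - x[i], so abrupt waveform changes produce isolated spikes."""
    s = math.sqrt(2.0)
    return [(x[i + 1] - x[i]) / s for i in range(len(x) - 1)]

fs, f0 = 1920, 60                 # 32 samples per 60 Hz cycle
n = fs                            # one second of waveform
SAG = range(776, 1032)            # 50% sag, roughly 0.40 s to 0.54 s
x = [(0.5 if i in SAG else 1.0) * math.sin(2 * math.pi * f0 * i / fs)
     for i in range(n)]

d = haar_detail_undecimated(x)
# Threshold relative to the pre-event detail level: the two spikes that
# exceed it mark where the disturbance begins and ends in time.
baseline = max(abs(c) for c in d[: n // 8])
events = [i for i, c in enumerate(d) if abs(c) > 1.5 * baseline]
```

Dividing the flagged sample indices by `fs` gives the disturbance onset and clearance times, which is the "detect and locate" part; classification would then examine statistics (such as standard deviations) of the detail coefficients across several decomposition levels.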
|