91 |
Efficient high-dimensional filtering for image and video processing. Gastal, Eduardo Simões Lopes. January 2015.
Filtering is one of the most important operations in image and video processing. In particular, high-dimensional filters are fundamental tools for several applications and have recently received significant attention from researchers in the field. Unfortunately, naive implementations of this important class of filters are too slow for many practical uses, especially in view of the continuing increase in the resolution of digitally captured images. This dissertation describes three novel approaches to efficient high-dimensional filtering: the domain transform, the adaptive manifolds, and a mathematical formulation for applying recursive filters to non-uniformly sampled signals. The domain transform represents the state of the art among filtering algorithms that use a geodesic metric; its innovation is a simple dimensionality-reduction procedure that implements high-dimensional filters efficiently, enabling the first demonstration of real-time edge-preserving filtering of full-HD color video. The adaptive manifolds represent the state of the art among filtering algorithms that use a Euclidean metric; the innovation here is to subdivide the high-dimensional space into nonlinear lower-dimensional slices, which are filtered independently and finally interpolated to obtain high-dimensional Euclidean filtering. This yields several advances over previous techniques, such as faster filtering with lower memory requirements, as well as the derivation of the first Euclidean filter whose cost is linear both in the number of pixels of the image (or video) and in the dimensionality of the space in which the filter operates. Finally, we introduce a mathematical formulation that describes the application of a recursive filter to non-uniformly sampled signals. This formulation extends the idea of geodesic filtering to arbitrary recursive filters (low-pass as well as high-pass and band-pass), providing greater control over the desired filter responses, which can then be better tailored to specific applications. As examples, we demonstrate, for the first time in the literature, geodesic filters with Gaussian, Laplacian-of-Gaussian, Butterworth, and Cauer responses, among others. By making it possible to work with arbitrary filters, our method enables a new variety of effects for image and video applications. / Filtering is arguably the single most important operation in image and video processing. In particular, high-dimensional filters are a fundamental building block for several applications, having recently received considerable attention from the research community. Unfortunately, naive implementations of such an important class of filters are too slow for many practical uses, especially in light of the ever-increasing resolution of digitally captured images. This dissertation describes three novel approaches to efficient high-dimensional filtering: the domain transform, the adaptive manifolds, and a mathematical formulation for recursive filtering of non-uniformly sampled signals. The domain transform defines an isometry between curves on the 2D image manifold in 5D and the real line. It preserves the geodesic distance between points on these curves, adaptively warping the input signal so that high-dimensional geodesic filtering can be performed efficiently in linear time. Its computational cost is not affected by the choice of the filter parameters, and the resulting filters are the first to work on color images at arbitrary scales in real time, without resorting to subsampling or quantization. The adaptive manifolds compute the filter's response at a reduced set of sampling points and use these for interpolation at all input pixels, so that high-dimensional Euclidean filtering can be performed efficiently in linear time. We show that for a proper choice of sampling points, the total cost of the filtering operation is linear both in the number of pixels and in the dimension of the space in which the filter operates; ours is the first high-dimensional filter with this complexity. We present formal derivations for the equations that define our filter, providing a sound theoretical justification. Finally, we introduce a mathematical formulation for linear-time recursive filtering of non-uniformly sampled signals. This formulation enables, for the first time, geodesic edge-aware evaluation of arbitrary recursive infinite-impulse-response filters (not only low-pass), which allows practically unlimited control over the shape of the filtering kernel. By providing the ability to experiment with the design and composition of new digital filters, our method has the potential to enable a greater variety of image and video effects. The high-dimensional filters we propose provide the fastest performance (both on CPU and GPU) for a variety of real-world applications. Thus, our filters are a valuable tool for the image and video processing, computer graphics, computer vision, and computational photography communities.
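The mechanism behind the domain transform admits a compact illustration. The sketch below is a minimal single-channel 1D version in NumPy: it stretches the spacing between samples in proportion to the local gradient and then runs a first-order recursive smoother whose feedback weight collapses across strong edges. The parameter names sigma_s and sigma_r denote the spatial and range standard deviations, as in the dissertation, but the code is an illustrative reconstruction rather than the authors' implementation.

```python
import numpy as np

def domain_transform_filter_1d(signal, sigma_s=20.0, sigma_r=0.3, iterations=3):
    """Edge-aware 1D smoothing in the spirit of the domain transform.

    The distance between neighboring samples is stretched wherever the
    signal changes quickly, so a plain first-order recursive smoother
    applied in the warped domain stops diffusing across edges.
    """
    signal = np.asarray(signal, dtype=float)
    # Warped distance between adjacent samples: 1 + (sigma_s/sigma_r)*|I'(x)|
    dt = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(signal))

    out = signal.copy()
    for i in range(iterations):
        # Shrink the smoother's sigma each iteration so the overall
        # response approaches a Gaussian of scale sigma_s.
        sigma_i = (sigma_s * np.sqrt(3.0) * 2.0 ** (iterations - i - 1)
                   / np.sqrt(4.0 ** iterations - 1.0))
        # Per-sample feedback a**dt: a large dt (an edge) kills the feedback
        a = np.exp(-np.sqrt(2.0) / sigma_i)
        w = a ** dt
        for k in range(1, len(out)):                 # left-to-right pass
            out[k] += w[k - 1] * (out[k - 1] - out[k])
        for k in range(len(out) - 2, -1, -1):        # right-to-left pass
            out[k] += w[k] * (out[k + 1] - out[k])
    return out

# A noisy step keeps its edge while both flat regions are smoothed.
x = np.r_[np.zeros(50), np.ones(50)] + 0.05 * np.random.randn(100)
y = domain_transform_filter_1d(x)
```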
|
93 |
Application of digital filtering techniques for reducing and analyzing in-situ seismic time series. Baziw, Erick John. January 1988.
The introduction of digital filtering is a new approach to analyzing in-situ seismic data. Digital filters are in the same spirit as the electric cone that replaced the mechanical cone in CPT* testing: the goal is to automate CPT testing, making it less operator-dependent while increasing reliability and accuracy.
In CPT seismic cone testing, seismic waves are generated at the surface and recorded downhole with velocity or acceleration transducers. The seismic receivers record the different seismic wavelets (e.g., SV-waves, P-waves), allowing one to determine shear- and compression-wave velocities. To distinguish the different seismic events, an instrument with a fast response time is desired (i.e., high natural frequency and low damping), which is characteristic of an accelerometer. The fast response time (small time constant) of an accelerometer, however, makes it a very sensitive instrument with correspondingly noisy time-domain characteristics. One way to separate events is to characterize the signal frequencies and remove the unwanted ones; digital filtering is ideal for this application.
Two families of digital filtering techniques are introduced in this research. The first is based on frequency-domain filtering, implementing Fast Fourier transform, Butterworth filter, and crosscorrelation algorithms; the second is based on time-domain techniques, where a Kalman filter is designed to model the instrument and the physical environment. The crosscorrelation method allows one to focus on a specific wavelet and use all of the information in the wavelets present, averaging out noise and irregularities and relying upon dominant responses. The Kalman filter (KF) was applied in a manner that modelled the sensors used and the physical environment of body-wave and noise generation, and it was investigated for its possible application to obtaining accurate estimates of P-wave and S-wave amplitudes and arrival times. The KF is a very flexible tool that allows one to model the problem accurately; in addition, it works in the time domain, which removes many of the limitations of the frequency-domain techniques. The crosscorrelation filter concepts are implemented in a program referred to as CROSSCOR, an interactive graphics program that displays frequency spectra, unfiltered and filtered time series, and crosscorrelations on a mainframe graphics terminal, and that has been adapted to run on the IBM PC. CROSSCOR was tested for performance by analyzing synthetic and real data; the results indicate that it is an accurate and user-friendly tool that greatly assists one in obtaining seismic velocities (a sketch of the crosscorrelation picking follows this abstract).
The performance of the Kalman filter was analyzed by generating a source wavelet and passing it through the second-order instrumentation; the second-order response is then fed into the KF, which determines the arrival time and maximum amplitude. The filter was found to perform well, and it shows much promise: if finely tuned, it could obtain arrival times and amplitudes on-line, yielding velocities and damping characteristics, respectively.
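The crosscorrelation idea at the heart of CROSSCOR can be sketched compactly. The following is a hedged NumPy/SciPy illustration, not the CROSSCOR program itself: two downhole records from different depths are band-pass filtered with a Butterworth filter and crosscorrelated, and the lag of the correlation peak gives the interval travel time from which a shear-wave velocity follows. The sampling rate, depths, and filter band are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate

def interval_velocity(trace_a, trace_b, fs, depth_a, depth_b, band=(10.0, 100.0)):
    """Estimate the interval velocity between two downhole receiver depths:
    band-pass both traces (Butterworth), crosscorrelate them, and read the
    interval travel time off the lag of the correlation peak."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    fa = filtfilt(b, a, trace_a)
    fb = filtfilt(b, a, trace_b)

    xc = correlate(fb, fa, mode="full")
    lags = np.arange(-len(fa) + 1, len(fb))
    dt = lags[np.argmax(xc)] / fs              # interval travel time (s)
    if dt <= 0:
        raise ValueError("non-positive peak lag; check the trace order")
    return (depth_b - depth_a) / dt            # velocity (m/s)

# Illustrative use: synthetic wavelets 1 m and 5 ms apart -> ~200 m/s.
fs = 1000.0
t = np.arange(0, 0.5, 1.0 / fs)
def wavelet(t0):
    return np.exp(-((t - t0) / 0.01) ** 2) * np.sin(2 * np.pi * 50.0 * (t - t0))

v = interval_velocity(wavelet(0.100), wavelet(0.105), fs, depth_a=3.0, depth_b=4.0)
print(f"estimated shear-wave velocity: {v:.0f} m/s")
```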
* Cone Penetration Test / Faculty of Applied Science / Department of Civil Engineering / Graduate
|
94 |
New incoherent scatter radar measurement techniques and data analysis methods. Damtie, B. (Baylie). 16 April 2004.
Abstract
This dissertation presents new incoherent scatter radar measurement techniques and data analysis methods. The measurements used in the study were collected by connecting a computer-based receiver to the EISCAT (European Incoherent SCATter) radar on Svalbard. This hardware consists of a spectrum analyzer, a PCI-bus-based programmable digital I/O card, and a desktop computer with a large-capacity hard disk. It takes in the 70-MHz signal from the ESR (EISCAT Svalbard Radar) signal path, carries out down-conversion, AD conversion, and quadrature detection, and finally stores the output samples. The effective sampling rate is 1 MHz, large enough to span all the frequency channels used in the experiment. Hence the total multichannel signal was stored instead of separate lagged products for each frequency channel, which is the procedure in the standard hardware. This solution has some benefits, including elimination of ground clutter with only a small loss in statistical accuracy. The capability of our hardware to store the incoherent scatter radar signals directly allows us to use very flexible and versatile signal processing methods, including clutter suppression, filtering, decoding, lag profile calculation, inversion, and optimal height integration. The performance of these incoherent scatter radar measurement techniques and data analysis methods is demonstrated with an incoherent scatter experiment that applies a new binary phase code, each bit of which is further coded by a 5-bit Barker code (a decoding sketch follows the paper list below). In the analysis, stochastic inversion has been used for the first time in decoding Barker-coded incoherent scatter measurements, and this method takes care of the ambiguity problems associated with the measurements. Finally, we present new binary phase codes with corresponding sidelobe-free decoding filters that maximize the signal-to-noise ratio (SNR) and at the same time eliminate unwanted sidelobes completely. / Original papers
The original papers are not included in the electronic version of the dissertation.
Lehtinen, M., Markkanen, J., Väänänen, A., Huuskonen, A., Damtie, B., Nygrén, T., & Rahkola, J. (2002). A new incoherent scatter technique in the EISCAT Svalbard Radar. Radio Science, 37(4), 3-1–3-14. https://doi.org/10.1029/2001rs002518
Damtie, B., Nygrén, T., Lehtinen, M. S., & Huuskonen, A. (2002). High resolution observations of sporadic-E layers within the polar cap ionosphere using a new incoherent scatter radar experiment. Annales Geophysicae, 20(9), 1429–1438. https://doi.org/10.5194/angeo-20-1429-2002
Damtie, B., Lehtinen, M. S., & Nygrén, T. (2004). Decoding of Barker-coded incoherent scatter measurements by means of mathematical inversion. Annales Geophysicae, 22(1), 3–13. https://doi.org/10.5194/angeo-22-3-2004
Lehtinen, M. S., Damtie, B., & Nygrén, T. (2004). Optimal binary phase codes and sidelobe-free decoding filters with application to incoherent scatter radar. Annales Geophysicae, 22(5), 1623–1632. https://doi.org/10.5194/angeo-22-1623-2004
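As a hedged illustration of the decoding problem described in the abstract above (not the dissertation's stochastic-inversion code): the matched filter for a 5-bit Barker code leaves range sidelobes, while a longer mismatched least-squares filter drives them toward zero at a small SNR cost. The filter length L below is an illustrative choice.

```python
import numpy as np

barker5 = np.array([1.0, 1.0, 1.0, -1.0, 1.0])    # 5-bit Barker code

# Matched filter: correlating the code with itself gives peak 5 with
# +/-1 range sidelobes, the ambiguity that a decoding filter must remove.
matched = np.correlate(barker5, barker5, mode="full")
print("matched:", matched.astype(int))            # [1 0 1 0 5 0 1 0 1]

# Mismatched least-squares decoding filter: pick taps h (length L) so that
# conv(code, h) approximates a unit impulse, i.e. minimal sidelobes.
L = 25
C = np.zeros((len(barker5) + L - 1, L))           # convolution matrix
for j in range(L):
    C[j:j + len(barker5), j] = barker5            # column j = code shifted by j

d = np.zeros(C.shape[0])
d[C.shape[0] // 2] = 1.0                          # desired output: an impulse
h, *_ = np.linalg.lstsq(C, d, rcond=None)

out = np.convolve(barker5, h)
peak_idx = np.argmax(np.abs(out))
sidelobe = np.abs(np.delete(out, peak_idx)).max()
print(f"decoded peak {out[peak_idx]:.3f}, worst sidelobe {sidelobe:.4f}")
```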
|
95 |
litsift: Automated Text Categorization in Bibliographic Search. Faulstich, Lukas C.; Stadler, Peter F.; Thurner, Caroline; Witwer, Christina. 07 January 2019.
In bioinformatics there exist research topics that cannot be uniquely characterized by a set of keywords, because the relevant keywords are (i) also heavily used in other contexts and (ii) often omitted in relevant documents, since the context is clear to the target audience. Information retrieval interfaces such as entrez/Pubmed produce either low precision or low recall in this case. To achieve high recall at reasonable precision, the results of a broad information retrieval search have to be filtered to remove irrelevant documents; we use automated text categorization for this purpose. In this study we use the topic of conserved secondary RNA structures in viral genomes as a running example. Pubmed result sets for two virus groups, Picornaviridae and Flaviviridae, were manually labeled by human experts. We evaluated various classifiers from the Weka toolkit, together with different feature-selection methods, to assess whether classifiers trained on documents dedicated to one virus group can be successfully applied to filter literature on other virus groups. Our results indicate that in this domain a bibliographic search tool trained on a reference corpus may significantly reduce the amount of time needed for extensive literature searches.
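A minimal sketch of this filter-by-classifier workflow, using scikit-learn in place of the Weka toolkit the study actually used; the tiny inline corpus and its labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy corpus: abstracts labeled relevant (1) or irrelevant (0)
# by an expert for ONE virus group; the trained classifier then filters
# a broad PubMed-style result set for ANOTHER group.
train_texts = [
    "conserved secondary structure in the 3' UTR of the viral RNA genome",
    "thermodynamic folding of picornavirus IRES elements",
    "clinical outcomes of antiviral therapy in hospitalized patients",
    "epidemiological survey of infection rates in livestock",
]
train_labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(train_texts, train_labels)

# Broad search results for a second virus group; keep predicted-relevant only.
candidates = [
    "structured RNA elements conserved across flavivirus genomes",
    "vaccination campaign logistics in endemic regions",
]
kept = [doc for doc in candidates if clf.predict([doc])[0] == 1]
print(kept)
```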
|
96 |
Personal news video recommendations based on implicit feedback: An evaluation of different recommender systems with sparse data / Personliga rekommendationer av nyhetsvideor baserade på implicita data. Andersson, Morgan. January 2018.
The amount of video content online will nearly triple by 2021 compared to 2016. The implementation of sophisticated filters is therefore of paramount importance for managing this information flow. The research question of this thesis asks to what extent it is possible to generate personal recommendations based on the implicit data that news videos provide. The objective is to evaluate how different recommender systems compare to a random baseline and to each other, and how they are received by users in a test environment. This study was performed during the spring of 2018 and explores four different algorithms: a content-based model, a collaborative-filtering model, a hybrid model, and a popularity model as a baseline. The dataset originates from a news-media startup called Newstag, which provides video news on a global scale; the data is sparse and includes implicit feedback only. Three offline experiments and a user test were performed. The metric that guided the algorithms' offline performance was recall at 5 and at 10, since the top of the list of recommended items is of most interest. A comparison was made across different amounts of metadata included during training, and another test explored each algorithm's performance as the density of the data increased. In the user test, a mean opinion score was calculated from the quality of the recommendations that each algorithm generated for the test subjects; randomly sampled news videos were included as a baseline. The results indicate that, for this specific setting and dataset, the content-based recommender system performed best both in recall at 5 and 10 and in the user test. All of the algorithms outperformed the random baseline. / The amount of video available on the internet is expected to triple by 2021 compared with 2016, which implies a need for sophisticated filters to manage this information flow. This thesis aims to answer to what degree personal recommendations can be generated from the data that news video implies. The purpose is to evaluate and compare different recommender systems and how they fare in a user test. The study was carried out during the spring of 2018 and evaluates four algorithms: content-based, collaborative-filtering, and hybrid techniques, with a popularity model as a baseline. The dataset used is sparse and has implicit attributes only. Three experiments and a user test were performed. The performance measure for the algorithms was recall at 5 and recall at 10, i.e., how well the algorithms generate valuable recommendations within a top-5 or top-10 list of video clips, since the most relevant videos should appear at the top of the result list. A comparison was made between different amounts of metadata included during training. Another test explored how the algorithms perform as the dataset becomes less sparse. The user test used an evaluation method called mean opinion score, computed per algorithm from test users' ratings of each recommendation according to how interesting the video was to them; randomly generated videos were included for comparison as a baseline. The results indicate, for this dataset, that the content-based algorithm performs best with respect both to recall at 5 and 10 and to the total score in the user test. All algorithms performed better than chance.
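The offline metric is simple to state in code. Below is a hedged sketch of recall@k as it is typically computed for implicit-feedback evaluation (the fraction of held-out items recovered in the top-k list); the function is generic, not Newstag's evaluation code, and the toy data is invented.

```python
from typing import Dict, List, Set

def recall_at_k(recommended: Dict[str, List[str]],
                held_out: Dict[str, Set[str]],
                k: int) -> float:
    """Mean recall@k over users: the fraction of each user's held-out
    (relevant) items that appear in that user's top-k recommendations."""
    scores = []
    for user, relevant in held_out.items():
        if not relevant:
            continue
        top_k = recommended.get(user, [])[:k]
        hits = len(relevant.intersection(top_k))
        scores.append(hits / len(relevant))
    return sum(scores) / len(scores) if scores else 0.0

# Toy example: one user watched v2 and v9 in the held-out window.
recs = {"u1": ["v7", "v2", "v5", "v9", "v1", "v3"]}
truth = {"u1": {"v2", "v9"}}
print(recall_at_k(recs, truth, k=5))   # 1.0: both held-out items in the top 5
```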
|
97 |
Iterative Memoryless Non-linear Estimators of Correlation for Complex-Valued Gaussian Processes that Exhibit Robustness to Impulsive Noise. Tamburello, Philip Michael. 04 February 2016.
The autocorrelation function is a commonly used tool in statistical time series analysis. Under the assumption of Gaussianity, the sample autocorrelation function is the standard method used to estimate this function given a finite number of observations. Non-Gaussian, impulsive observation noise following probability density functions with thick tails, which often occurs in practice, can bias this estimator, rendering classical time series analysis methods ineffective.
This work examines the robustness of two estimators of correlation based on memoryless nonlinear functions of the observations, the Phase-Phase Correlator (PPC) and the Median-of-Ratios Estimator (MRE), which are applicable to complex-valued Gaussian random processes. These estimators are very fast and easy to implement on current processors. We show that these estimators are robust from a bias perspective when complex-valued Gaussian processes are contaminated with impulsive noise, at the expense of statistical efficiency at the assumed Gaussian distribution. Additionally, iterative versions of these estimators, named the IMRE and IPPC, are developed, realizing improved bias performance over their non-iterative counterparts and over the well-known robust Schweppe-type Generalized M-estimator utilizing a Huber cost function (SHGM).
An impulsive noise suppression technique is developed using basis pursuit and a priori atom weighting derived from the newly developed iterative estimators. This new technique is proposed as an alternative to the robust filter cleaner, a Kalman-filter-like approach that relies on linear prediction residuals to identify and replace corrupted observations, and it does not share the filter cleaner's initialization issues.
Robust spectral estimation methods are developed using these new estimators and impulsive noise suppression techniques. Results are obtained for synthetic complex-valued Gaussian processes and real-world digital television signals collected using a software-defined radio. / Ph. D.
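The phase-based idea can be sketched in a few lines. The following is a hedged reconstruction from the abstract, not the dissertation's PPC: normalizing every complex sample to unit modulus before correlating discards amplitude, so isolated impulses lose their leverage. Mapping the result back to the Gaussian-model correlation coefficient requires a known bias correction that is omitted here.

```python
import numpy as np

def sample_corrcoef(x, y):
    """Classical normalized sample correlation for complex signals."""
    return np.vdot(y, x) / np.sqrt(np.vdot(x, x).real * np.vdot(y, y).real)

def phase_correlator(x, y):
    """Memoryless nonlinear correlator: project every sample onto the unit
    circle (keep phase, discard amplitude) before averaging, so an isolated
    impulse cannot dominate the sum."""
    u = x / np.maximum(np.abs(x), 1e-12)
    v = y / np.maximum(np.abs(y), 1e-12)
    return np.mean(u * np.conj(v))

rng = np.random.default_rng(0)
n, rho = 10_000, 0.7
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
y = rho * x + np.sqrt(1 - rho**2) * w           # true correlation 0.7

x_noisy = x.copy()                               # contaminate 1% with impulses
hits = rng.choice(n, n // 100, replace=False)
x_noisy[hits] += 100 * (rng.standard_normal(hits.size)
                        + 1j * rng.standard_normal(hits.size))

print(abs(sample_corrcoef(x_noisy, y)))   # collapses toward 0 under impulses
print(abs(phase_correlator(x_noisy, y)))  # stays informative about rho
```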
|
98 |
Robust Kalman Filters Using Generalized Maximum Likelihood-Type Estimators. Gandhi, Mital A. 10 January 2010.
Estimation methods such as the Kalman filter (KF) identify best state estimates based on certain optimality criteria, using a model of the system and the observations. A common assumption underlying the estimation is that the noise is Gaussian. In practical systems, though, one quite frequently encounters thick-tailed, non-Gaussian noise. Statistically, contamination by this type of noise can be seen as inducing outliers among the data, and it leads to significant degradation of the KF. While many nonlinear methods to cope with non-Gaussian noise exist, a filter that is robust in the presence of outliers and maintains high statistical efficiency is desired. To solve this problem, a new robust Kalman filter framework is proposed that bounds the influence of observation, innovation, and structural outliers in a discrete linear system. This filter is designed to process the observations and predictions together, making it very effective in suppressing multiple outliers. In addition, it incorporates a new prewhitening method based on a robust multivariate estimator of location and covariance. Furthermore, the filter provides state estimates that are robust to outliers while maintaining high statistical efficiency at the Gaussian distribution by applying a generalized maximum-likelihood-type (GM) estimator. Finally, the filter incorporates the correct error covariance matrix, which is derived using the GM-estimator's influence function.
This dissertation also addresses robust state estimation for systems that follow a broad class of nonlinear models possessing two or more equilibrium points. Tracking state transitions from one equilibrium point to another rapidly and accurately in such models can be a difficult task, and a computationally simple solution is desirable. To that end, a new robust extended Kalman filter is developed that exploits observational redundancy and the nonlinear weights of the GM-estimator to track the state transitions rapidly and accurately.
Through simulations, the performance of the new filters is analyzed in terms of robustness to multiple outliers and estimation capability in the following applications: tracking autonomous systems, enhancing actual speech from cellular phones, and tracking climate transitions. Furthermore, the filters are compared with the state of the art, namely the H∞ filter for tracking an autonomous vehicle and the extended Kalman filter for sensing climate transitions. / Ph. D.
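The central GM-estimation idea admits a compact sketch. Below is a hedged, generic Huber-weighted measurement update for a linear KF with a scalar measurement; it is not the dissertation's GM-KF (which also robustifies the prediction and derives the matching error covariance), only an illustration of how down-weighting large standardized innovations bounds an outlier's influence.

```python
import numpy as np

def huber_weight(r, c=1.345):
    """Huber psi(r)/r weight: 1 inside [-c, c], decaying as c/|r| outside."""
    r = abs(r)
    return 1.0 if r <= c else c / r

def robust_kf_step(x, P, z, F, H, Q, R, c=1.345):
    """One predict/update cycle of a linear Kalman filter with a Huber-type
    down-weighting of the standardized innovation (scalar measurement)."""
    # Prediction
    x = F @ x
    P = F @ P @ F.T + Q
    # Innovation, its variance, and the robust weight
    s = (H @ P @ H.T + R).item()
    nu = (z - H @ x).item()
    w = huber_weight(nu / np.sqrt(s), c)       # outlier => w < 1
    # Weighted update: an outlying measurement moves the estimate less
    K = (P @ H.T).flatten() / s
    x = x + K * (w * nu)
    P = P - w * np.outer(K, H @ P)
    return x, P

# Constant-velocity tracking with one impulsive measurement (z = 300).
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[1.0]])
x, P = np.zeros(2), np.eye(2)
for z in [0.9, 2.1, 300.0, 4.0, 5.1]:
    x, P = robust_kf_step(x, P, z, F, H, Q, R)
print(x)   # position/velocity estimates barely perturbed by the impulse
```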
|
99 |
Personalisierte Filterung von Nachrichten aus semistrukturierten Quellen. Eixner, Thomas. 09 July 2009.
Faced with a multitude of heterogeneous information sources, many users confront a nearly unmanageable flood of information. For this reason, this thesis analyzes the common news feed formats and surveys the current state of the art in news aggregators, always with a view to the possibilities for personalized filtering of content. It then presents an infrastructure, developed as part of this work, for the aggregation, personalized filtering, and collaborative recommendation of content from heterogeneous news sources, describing the underlying concepts in detail along with their practical implementation.
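A hedged sketch of the kind of per-user content filter such an infrastructure could apply to aggregated feed items; the scoring scheme, threshold, and field names are illustrative assumptions, not the thesis's design.

```python
from dataclasses import dataclass, field

@dataclass
class NewsItem:
    title: str
    summary: str
    source: str

@dataclass
class UserProfile:
    # keyword -> interest weight; negative weights demote a topic
    weights: dict = field(default_factory=dict)

def score(item: NewsItem, profile: UserProfile) -> float:
    """Sum the profile weights of keywords found in the title or summary;
    title hits count double, a crude stand-in for learned relevance."""
    title, body = item.title.lower(), item.summary.lower()
    total = 0.0
    for kw, w in profile.weights.items():
        if kw in title:
            total += 2.0 * w
        elif kw in body:
            total += w
    return total

def personalized_filter(items, profile, threshold=1.0):
    """Rank aggregated items by profile score and keep those above threshold."""
    ranked = sorted(items, key=lambda it: score(it, profile), reverse=True)
    return [it for it in ranked if score(it, profile) >= threshold]

profile = UserProfile({"filter": 2.0, "kalman": 1.5, "celebrity": -3.0})
items = [
    NewsItem("New Kalman filter variant", "robust state estimation", "arxiv"),
    NewsItem("Celebrity gossip roundup", "red carpet season", "tabloid"),
]
for it in personalized_filter(items, profile):
    print(it.title)   # only the estimation story survives
```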
|
100 |
Unscented Filter for OFDM Joint Frequency Offset and Channel Estimation. Iltis, Ronald A. October 2006.
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / OFDM is a preferred physical layer for an increasing number of telemetry and LAN applications. However, joint estimation of the multipath channel and the frequency offset in OFDM remains a challenging problem. The Unscented Kalman Filter (UKF) is presented to solve the offset/channel tracking problem. The advantages of the UKF are that it is less susceptible to divergence than the extended Kalman filter (EKF) and that it does not require computation of a Jacobian matrix. A hybrid analysis/simulation approach is developed to rapidly evaluate UKF performance in terms of symbol-error rate and channel/offset error for the 802.11a OFDM format.
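The UKF's defining step is the unscented transform: instead of linearizing a nonlinearity through a Jacobian, as the EKF does, it propagates a small deterministic set of sigma points and re-estimates the mean and covariance from them. The sketch below is a generic, hedged NumPy illustration, not the paper's OFDM state model; the scaling parameters are common textbook choices.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f using
    2n+1 sigma points; no Jacobian is required, unlike the EKF.
    kappa = 3 - n is a common heuristic for Gaussian priors."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)        # matrix square root

    # Sigma points: the mean itself, then mean +/- each column of S
    pts = np.vstack([mean, mean + S.T, mean - S.T])            # (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)

    Y = np.array([f(p) for p in pts])              # push each point through f
    y_mean = wm @ Y
    dY = Y - y_mean
    y_cov = (wc[:, None] * dY).T @ dY
    return y_mean, y_cov

# Example: a polar-to-Cartesian map, a standard mildly nonlinear test case.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m, C = np.array([1.0, 0.5]), np.diag([0.01, 0.04])
y_mean, y_cov = unscented_transform(m, C, f)
print(y_mean, y_cov, sep="\n")
```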
|