31 |
On Convolution Squares of Singular Measures. Chan, Vincent. January 2010.
We prove that if $1 > \alpha > 1/2$, then there exists a probability measure $\mu$ such that the Hausdorff dimension of its support is $\alpha$ and $\mu*\mu$ is a Lipschitz function of class $\alpha-1/2$.
|
32 |
A Fast Cubic-Spline Interpolation and Its Applications. Wang, Lung-Jen. 15 March 2001.
In this dissertation, a new cubic-spline interpolation (CSI) scheme for both one-dimensional and two-dimensional signals is developed for sub-sampling signal, image, and video compression data. The new interpolation scheme, based on the least-squares method with a cubic-spline function, can be implemented with the fast Fourier transform (FFT), yielding a simpler and faster interpolation scheme than conventional means provide. Computer simulation shows that the new CSI gives a very accurate smoothing algorithm; linear interpolation, linear-spline interpolation, cubic-convolution interpolation, and cubic B-spline interpolation are inferior in performance.
In addition, it is shown in this dissertation that the CSI scheme admits a fast and efficient computation. The proposed method uses a simpler technique in the decimation process and requires substantially fewer additions and multiplications than the original CSI algorithm. Moreover, a new type of overlap-save scheme is utilized to solve the boundary-condition problems that occur between neighboring subimages of the actual image. It is also shown that a very efficient 9-point Winograd discrete Fourier transform (Winograd DFT) can replace the FFT needed to implement the CSI scheme.
Furthermore, the proposed fast CSI scheme is used along with the Joint Photographic Experts Group (JPEG) standard to design a modified JPEG encoder-decoder for image data compression. As a consequence, at higher compression ratios the proposed modified JPEG encoder-decoder obtains better reconstructed-image quality and requires less computational time than both the conventional JPEG method and the America on Line (AOL) algorithm. Finally, the new fast CSI scheme is applied to the JPEG 2000, MPEG-1, and MPEG-2 algorithms. Computer simulation shows that the modified JPEG 2000 encoder-decoder speeds up encoding and decoding relative to the JPEG 2000 standard while retaining reconstructed-image quality similar to the standard at high compression ratios. Additionally, video reconstructed with the modified MPEG encoder-decoder shows better quality than the conventional MPEG-1 and MPEG-2 algorithms at high compression ratios or low bit rates.
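As a loose illustration of the idea (a conceptual stand-in only, not the dissertation's FFT-based least-squares CSI), a cubic spline fitted to decimated samples can reconstruct the skipped samples of a smooth signal; the sketch below uses SciPy's `CubicSpline`:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sketch: decimate a smooth signal by 2, then rebuild the
# skipped samples with a cubic spline.  This illustrates interpolation-based
# sub-sampling in general, NOT the FFT-based least-squares CSI scheme.
x = np.linspace(0.0, 2.0 * np.pi, 65)
signal = np.sin(x)

x_sub, y_sub = x[::2], signal[::2]   # decimated (sub-sampled) signal
spline = CubicSpline(x_sub, y_sub)   # fit a cubic spline to the subset
reconstructed = spline(x)            # evaluate back on the full grid

max_err = np.max(np.abs(reconstructed - signal))
print(max_err)
```

For a smooth signal such as a sinusoid the reconstruction error is tiny; rougher signals are where the choice of interpolation scheme (and its speed) starts to matter.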
|
34 |
Multispectral Reduction of Two-Dimensional Turbulence. Roberts, Malcolm Ian William. Date unknown.
No description available.
|
35 |
Numerical Approximation to Mellin Convolution by Mixtures of Exponentials. Torrejón Matos, Jorge Luis. 09 October 2015.
The purpose of this work is to compute the composition of models in the FBST (Full Bayesian Significance Test) described by Borges and Stern [6]. The objective was to find a numerically more efficient approximation method that can replace the condensation method described by Kaplan.
Three techniques were compared: first, approximation of the Mellin convolution using discretization and condensation, as described by Kaplan [11]; second, approximation of the Mellin convolution using mixtures of exponentials, described by Dufresne [8], computing the Fourier convolution through a mixture of exponential convolutions with the algebraic structure described by Hogg [10], followed by application of the operator described by Collins [7] to transform the Fourier convolution into a Mellin convolution; third, approximation of the Mellin convolution using mixtures of exponentials, described by Dufresne [8], approximating the Fourier convolution directly by a mixture of exponentials, again followed by the Collins operator [7] to transform the Fourier convolution into a Mellin convolution.
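For context, the Mellin convolution of two densities f and g is the density of the product of two independent positive random variables, (f *_M g)(x) = ∫₀^∞ f(x/y) g(y) dy/y. The sketch below approximates it by plain trapezoidal quadrature (our own illustration, not the condensation or exponential-mixture schemes compared in the thesis) and checks it against the known density -ln(x) of the product of two uniform variables:

```python
import numpy as np

# Hedged sketch: Mellin convolution (f *_M g)(x) = integral of
# f(x/y) * g(y) / y over y > 0, approximated by trapezoidal quadrature.
# This is NOT the condensation or exponential-mixture method of the thesis.

def mellin_convolution(f, g, x, y):
    """Approximate (f *_M g)(x) on the quadrature grid y."""
    integrand = f(x / y) * g(y) / y
    # hand-rolled trapezoid rule
    return np.sum((integrand[:-1] + integrand[1:]) * np.diff(y)) / 2.0

def uniform(t):
    """Density of U(0, 1)."""
    return np.where((t > 0) & (t < 1), 1.0, 0.0)

y = np.linspace(1e-6, 1.0, 200_000)
h_half = mellin_convolution(uniform, uniform, 0.5, y)
# Exact density of the product of two U(0,1) variables is -ln(x),
# so h(0.5) should be close to ln(2).
print(h_half, np.log(2.0))
```

The quadrature agrees with the closed form to a few decimal places; the schemes compared in the thesis aim to do much better than this naive discretization in both accuracy and cost.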
|
36 |
A Parallel FPGA Implementation of Image Convolution. Ström, Henrik. January 2016.
Image convolution is a common algorithm found in most graphics editors. It filters an image by multiplying pixel values with the coefficients of a filter kernel and summing the products. Previous research has implemented this algorithm on different platforms, such as FPGAs, CUDA, and C, and compared the performance of these implementations against each other. When the algorithm has been implemented on an FPGA, it has almost always been with a single convolution. The goal of this thesis was to investigate, and in the end present, one possible way to implement the algorithm with 16 parallel convolutions on a Xilinx Spartan-6 LX9 FPGA and then compare the performance with results from previous work. The final system performs better than multi-threaded implementations on both a GPU and a CPU.
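The multiply-and-add kernel operation described above can be sketched directly (a scalar reference implementation for clarity, nothing like the 16-way parallel FPGA datapath):

```python
import numpy as np

# Minimal sketch of kernel filtering: each output pixel is the sum of
# pixel * coefficient products over the kernel window ("valid" region only).
# With a symmetric kernel, convolution and correlation coincide.

def convolve2d_valid(image, kernel):
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
box_blur = np.full((3, 3), 1.0 / 9.0)   # simple averaging kernel
result = convolve2d_valid(image, box_blur)
print(result)  # 2x2 output; top-left value is the mean of the first window
```

The two nested loops over output pixels are exactly what a parallel implementation distributes: each of the 16 convolution units can work on its own region (or kernel) independently.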
|
37 |
OBJECT DETECTION IN DEEP LEARNING. Shi, Haoyu (8100614). 10 December 2019.
With advances in computing and the availability of GPUs (Graphics Processing Units) for mathematical computation, the deep learning field has become more popular and prevalent. Object detection with deep learning, a part of image processing, plays an important role in autonomous driving and computer vision. Object detection comprises object localization and object classification: in localization, the computer scans the image and outputs the coordinates that locate each object; in classification, it assigns the detected targets to categories. The traditional image object detection pipeline follows the idea of Fast/Faster R-CNN [32] [58]: a region proposal network generates candidate areas containing objects and passes them to a classifier, so the first step is object localization and the second is object classification. The time cost of this pipeline is not efficient. To address this problem, the You Only Look Once (YOLO) [4] network was created. YOLO is a single end-to-end neural network pipeline whose image processing speed reaches 45 frames per second for real-time network prediction. In this thesis, convolutional neural networks are introduced, including the state-of-the-art convolutional neural networks of recent years, and the YOLO implementation details are illustrated step by step. We adopt the YOLO network for our applications since it has a faster convergence rate in training, provides high accuracy, and is an end-to-end architecture, which makes the network easy to optimize and train.
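For orientation, the output bookkeeping of the original YOLO v1 design works out as follows (the values S=7, B=2, C=20 come from the original YOLO paper and are not necessarily those used in this thesis):

```python
# Hedged sketch of YOLO v1 output-tensor bookkeeping: the single network
# maps an image to an S x S grid, each cell predicting B bounding boxes
# (x, y, w, h, confidence) plus C class probabilities.
S, B, C = 7, 2, 20               # grid size, boxes per cell, classes (YOLO v1 paper)
cell_depth = B * 5 + C           # 5 numbers per box + class scores per cell
output_size = S * S * cell_depth # total predictions for one image
print(cell_depth, output_size)   # 30 and 1470 for the paper's configuration
```

This single fixed-size output is what makes YOLO end-to-end: one forward pass yields every box and every class score, with no separate proposal stage.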
|
38 |
DeepCNPP: Deep Learning Architecture to Distinguish the Promoter of Human Long Non-Coding RNA Genes and Protein-Coding Genes. Alam, Tanvir; Islam, Mohammad Tariqul; Househ, Mowafa; Belhaouari, Samir Brahim; Kawsar, Ferdaus Ahmed. 01 January 2019.
Promoter regions of protein-coding genes are gradually becoming well understood, yet no comparable studies exist for the promoters of long non-coding RNA (lncRNA) genes, which have emerged as potential global regulators in multiple cellular processes and different human diseases. To understand the difference in the transcriptional regulation pattern of these genes, we previously proposed a machine learning based model to classify the promoters of protein-coding genes and lncRNA genes. In this study, we present DeepCNPP (deep coding non-coding promoter predictor), an improved model based on a deep learning (DL) framework to classify the promoters of lncRNA genes and protein-coding genes. We used a convolutional neural network (CNN) based deep network to classify the promoters of these two broad categories of human genes. Our computational model, built upon sequence information only, was able to classify these two groups of human promoters with 83.34% accuracy and outperformed the existing model. Further analysis and interpretation of the output from the DeepCNPP architecture will enable us to understand the difference in the transcription regulatory pattern of these two groups of genes.
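A sequence-only CNN of this kind typically consumes a one-hot encoding of the DNA string. The helper below is our own hypothetical illustration of that preprocessing step, not DeepCNPP's actual code:

```python
import numpy as np

# Hypothetical illustration: map a DNA string to a (length x 4) one-hot
# matrix (channels A, C, G, T), the usual input format for a 1-D
# convolution over sequence positions.  Not DeepCNPP's own preprocessing.

def one_hot_dna(sequence):
    index = {"A": 0, "C": 1, "G": 2, "T": 3}
    encoded = np.zeros((len(sequence), 4))
    for i, base in enumerate(sequence.upper()):
        encoded[i, index[base]] = 1.0
    return encoded

promoter_fragment = "TATAAT"   # hypothetical example fragment
x = one_hot_dna(promoter_fragment)
print(x.shape)  # (6, 4): one row per position, one column per base
```

Convolution filters sliding over such a matrix act like learned motif detectors, which is what lets the downstream analysis probe the regulatory differences between the two promoter classes.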
|
39 |
Noise and Degradation Reduction for Signal and Image Processing via Non-adaptive Convolution Filtering. Bjerke, Benjamin A. 13 August 2013.
Noise and degradation reduction is of significant importance in virtually all systems where these phenomena are present, specifically in the fields of signal and image processing. The effect of image processing on target detection is of particular interest because noise and degradations can greatly reduce the effectiveness of detection algorithms: high-intensity noise is often mistaken for a target. In signal processing, noise in vibration data, or any time-series data, can reduce the accuracy of measurement and can prevent the passing of useful information.
Many existing filters, such as the Wiener and Frost filters, are designed to reduce a single class of noise; when applied to types of noise they were not designed for, their noise-reduction performance can degrade greatly. The proposed Two-Stage Non-Adaptive Convolution (TSNAC) filter significantly reduces both additive and multiplicative noise in these two unique systems.
The performance of these filters is compared through several Image Quality (IQ) metrics. It is shown that the proposed TSNAC filter reduces noise and degradations more effectively in both SAR images and synthetic vibration data than the competing filters, with higher IQ scores, greater computational efficiency in target detection, and significant improvement in signal restoration of simulated vibration data. / Master of Science
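The basic mechanism of non-adaptive convolution filtering can be sketched in one stage (a fixed averaging kernel applied to additive noise; the proposed TSNAC filter is a two-stage design and is not reproduced here):

```python
import numpy as np

# Hedged single-stage sketch: smooth additive Gaussian noise with a fixed
# (non-adaptive) moving-average kernel via np.convolve.  This is only an
# illustration of convolution filtering, not the two-stage TSNAC filter.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2.0 * np.pi * 5.0 * t)            # simulated vibration signal
noisy = clean + rng.normal(0.0, 0.3, t.size)     # additive Gaussian noise

kernel = np.ones(11) / 11.0                      # fixed averaging kernel
smoothed = np.convolve(noisy, kernel, mode="same")

mse_in = np.mean((noisy - clean) ** 2)
mse_out = np.mean((smoothed - clean) ** 2)
print(mse_in, mse_out)  # the filtered signal should have lower MSE
```

The kernel never changes in response to local statistics, which is what "non-adaptive" means here; the trade-off is some attenuation of the signal itself, which a second stage can be designed to compensate.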
|
40 |
Forecasting retweet count during elections using graph convolution neural networks. Vijayan, Raghavendran. 31 May 2018.
Indiana University-Purdue University Indianapolis (IUPUI)
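This entry carries no abstract, but the title points at graph convolutional networks. As background only, here is a minimal numpy sketch of the standard GCN propagation rule of Kipf and Welling, H' = ReLU(D̂^(-1/2) Â D̂^(-1/2) H W) with Â = A + I, which may or may not match the thesis's architecture:

```python
import numpy as np

# Hedged background sketch of one GCN layer (Kipf & Welling propagation
# rule), not the thesis's specific retweet-forecasting model.

def gcn_layer(adjacency, features, weights):
    a_hat = adjacency + np.eye(adjacency.shape[0])        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights
    return np.maximum(propagated, 0.0)                    # ReLU activation

A = np.array([[0.0, 1.0, 0.0],   # toy 3-node graph (hypothetical users
              [1.0, 0.0, 1.0],   # connected by retweet interactions)
              [0.0, 1.0, 0.0]])
H = np.eye(3)                    # one-hot node features
W = np.full((3, 2), 0.5)         # toy weight matrix
out = gcn_layer(A, H, W)
print(out.shape)                 # (3, 2): two learned features per node
```

Each layer mixes every node's features with those of its neighbors under the symmetrically normalized adjacency, which is how graph structure (e.g. who retweets whom) enters the forecast.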
|