1 |
Approximate identities for certain dual classes
Robinson, Symon Philip January 1996 (has links)
No description available.
|
2 |
The Convolution Ring
McCormick, Robert E. 08 1900 (has links)
This paper deals with the development of the convolution ring and the construction of a field from this ring.
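As a minimal illustration of the ring in question (not taken from the thesis; names here are our own), the multiplication is the convolution (Cauchy) product of finite sequences, under which sequences with ordinary addition form a commutative ring:

```python
def conv_mul(a, b):
    """Convolution (Cauchy) product of two finite sequences:
    c[k] = sum of a[i] * b[j] over all i + j == k."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# Viewing sequences as polynomial coefficients:
# (1 + x) * (1 - x) = 1 - x^2 under the convolution product.
print(conv_mul([1, 1], [1, -1]))  # [1, 0, -1]
```

The convolution product here mirrors polynomial multiplication, which is one standard route toward embedding such a ring in a field of fractions.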
|
3 |
Interfacing the One-Dimensional Scanning of an Image with the Applications of Two-Dimensional Operators
Ullman, Shimon 01 April 1980 (has links)
To interface between the one-dimensional scanning of an image and the application of a two-dimensional operator, an intermediate storage is required. For a square image of size n² and a square operator of size m², the minimum intermediate storage is shown to be n·(m-1). An interface of this size can be conveniently realized by using a serpentine delay line. New kinds of imagers would be required to reduce the size of the intermediate storage below n·(m-1).
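The delay-line idea can be sketched in software (a hypothetical illustration, not the paper's hardware design; all names are our own): a buffer holding the last n·(m-1)+m pixels of the raster stream is enough to reconstruct each m×m window:

```python
from collections import deque

def stream_windows(pixels, n, m):
    """Yield each m-by-m window of an n-by-n image delivered in
    raster-scan order, using a delay-line buffer of n*(m-1) + m
    pixels (matching the n*(m-1) intermediate-storage bound, plus
    the current row segment)."""
    buf = deque(maxlen=n * (m - 1) + m)
    for idx, p in enumerate(pixels):
        buf.append(p)
        row, col = divmod(idx, n)
        if row >= m - 1 and col >= m - 1:
            # The window pixel at offset (r, c) from the top-left
            # arrived (m-1-r)*n + (m-1-c) samples before the current one.
            win = [[buf[len(buf) - 1 - (m - 1 - r) * n - (m - 1 - c)]
                    for c in range(m)]
                   for r in range(m)]
            yield win

# 4x4 image, 2x2 windows: the first window covers pixels 0, 1, 4, 5.
first = next(stream_windows(range(16), 4, 2))
print(first)  # [[0, 1], [4, 5]]
```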
|
4 |
Efficient and Consistent Convolutional Neural Networks for Computer Vision
Caleb Tung (16649301) 27 July 2023 (has links)
<p>Convolutional Neural Networks (CNNs) are machine learning models that are commonly used for computer vision tasks like image classification and object detection. State-of-the-art CNNs achieve high accuracy by using many convolutional filters to extract features from the input images for correct predictions. This high accuracy is achieved at the cost of high computational intensity. Large, accurate CNNs typically require powerful Graphics Processing Units (GPUs) to train and deploy, while attempts at creating smaller, less computationally-intense CNNs lose accuracy. In fact, maintaining consistent accuracy is a challenge for even the state-of-the-art CNNs. This presents a problem: the vast energy expenditure demanded by CNN training raises concerns about environmental impact and sustainability, while the computational intensity of CNN inference makes it challenging for low-power devices (e.g. embedded, mobile, Internet-of-Things) to deploy CNNs on their limited hardware. Further, when reliable network access is limited or when extremely low latency is required, the cloud cannot be used to offload computing from the low-power device, forcing a need for methods that deploy CNNs on the device itself while improving energy efficiency and mitigating the consistency and accuracy losses of CNNs.</p>
<p>This dissertation investigates causes of CNN accuracy inconsistency and energy consumption. We further propose methods to improve both, enabling CNN deployment on low-power devices. Our methods do not require training to avoid the high energy costs associated with training.</p>
<p>To address accuracy inconsistency, we first design a new metric to properly capture such behavior. We conduct a study of modern object detectors and find that they all exhibit inconsistent behavior. That is, when two images are similar, an object detector can sometimes produce completely different predictions. Malicious actors exploit this to cause CNNs to mispredict, while image distortions caused by camera equipment and natural phenomena can also cause mispredictions. Regardless of the cause of the misprediction, we find that modern accuracy metrics do not capture this behavior, and we create a new consistency metric to measure the behavior. Finally, we demonstrate the use of image processing techniques to improve CNN consistency on modern object detection datasets.</p>
<p>To improve CNN energy efficiency and reduce inference latency, we design the focused convolution operation. We observe that in a given image, many pixels are often irrelevant to the computer vision task -- if those pixels are deleted, the CNN can still give the correct prediction. We design a method that uses a depth mapping neural network to identify which pixels are irrelevant in modern computer vision datasets. Next, we design the focused convolution to automatically ignore any pixels marked irrelevant outside the Area of Interest (AoI). By replacing the standard convolutional operations in CNNs with our focused convolutions, we find that ignoring those irrelevant pixels can reduce energy use and inference latency by up to 45%.</p>
<p>Finally, we improve the focused convolutions, allowing for (1) energy-efficient, automated AoI generation within the CNN itself and (2) improved memory alignment and better utilization of parallel processing hardware. The original focused convolution required AoI generation in advance, using a computationally-intense depth mapping method. Our AoI generation technique automatically filters the features from the early layers of a CNN using a threshold. The threshold is determined using an Accuracy vs Latency curve search method. The remaining layers will apply focused convolutions to the AoI to reduce energy use. This will allow focused convolutions to be deployed within any pretrained CNN for various observed use cases. No training is required.</p>
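The core idea of a focused convolution, skipping output positions outside the AoI, can be sketched as follows (a simplified single-channel illustration under our own assumptions, not the dissertation's optimized implementation; all names are hypothetical):

```python
import numpy as np

def focused_conv2d(image, kernel, aoi_mask):
    """Sketch of a 'focused' convolution: outputs are computed only
    where aoi_mask is True; pixels outside the Area of Interest are
    skipped entirely, saving their multiply-accumulate work."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    ys, xs = np.nonzero(aoi_mask)  # iterate only over AoI positions
    for y, x in zip(ys, xs):
        patch = padded[y:y + kh, x:x + kw]
        out[y, x] = np.sum(patch * kernel)
    return out
```

With a sparse AoI, the loop visits only a fraction of the image's positions, which is the source of the energy and latency savings the abstract describes.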
|
5 |
Computational geometry using Fourier analysis
Hussain, R. January 1998 (has links)
No description available.
|
6 |
Multivariable system controller tuning techniques based on sensitivity measures
Gong, Mingrui January 1996 (has links)
No description available.
|
7 |
Implementing and Comparing Image Convolution Methods on an FPGA at the Register-Transfer Level
Hernandez, Anna C 13 August 2019 (has links)
Whether it's capturing a car's license plate on the highway or detecting someone's facial features to tag friends, computer vision and image processing have found their way into many facets of our lives. Image and video processing algorithms are ultimately tailored toward one of two goals: to analyze data and produce output in as close to real-time as possible, or to take in and operate on large swaths of information offline. Image convolution is a mathematical method with which we can filter an image to highlight desired information or make it clearer. The most popular uses of image convolution accentuate edges, corners, and facial features for analysis. The goal of this project was to investigate various image convolution algorithms and compare them in terms of hardware usage, power utilization, and ability to handle substantial amounts of data in a reasonable amount of time. The algorithms were designed, simulated, and synthesized for the Zynq-7000 FPGA, selected both for its flexibility and low power consumption.
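A software reference model of the kind often checked against RTL simulation output can sketch the operation being implemented (a generic 3x3 convolution with a Sobel edge kernel; this is our own illustration, not the project's code):

```python
# Sobel horizontal-gradient kernel: responds to vertical edges.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def conv3x3(image, kernel):
    """Golden reference for a 3x3 image convolution (borders left
    at zero), suitable for comparing against hardware output."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

# A vertical step edge produces a strong Sobel-X response.
edge = [[0, 0, 10, 10]] * 4
print(conv3x3(edge, SOBEL_X)[1])  # [0, 40, 40, 0]
```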
|
8 |
The Winograd Convolution Method
Wallén Kiessling, Alexander, Svalstedt, Viktor January 2023 (has links)
The convolution operation is a powerful tool which is widely used in many disciplines. Lately it has seen much use in the area of computer vision, particularly with convolutional neural networks. For these use cases, convolutions need to be run repeatedly many times, which necessitates specialized hardware. Our work empirically investigates the efficiency of some of the most prominent convolution methods in use, such as the Fast Fourier Transform and the Winograd method, and compares these to a baseline convolution implementation. These comparisons are made in both one and two dimensions, and for several different floating-point data types.
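The smallest Winograd algorithm, F(2,3), illustrates where the savings come from: two outputs of a 3-tap filter are produced with 4 multiplications instead of the naive 6 (a minimal one-dimensional sketch in our own notation, not the thesis's benchmark code):

```python
def winograd_f23(d, g):
    """Winograd F(2,3): from four inputs d and a 3-tap filter g,
    compute two outputs of the sliding dot product
    y[i] = d[i]*g[0] + d[i+1]*g[1] + d[i+2]*g[2]
    using only 4 multiplications (the filter-side sums can be
    precomputed once when g is reused)."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return [m1 + m2 + m3, m2 - m3 - m4]

print(winograd_f23([1, 2, 3, 4], [1, 1, 1]))  # [6.0, 9.0]
```

Larger tiles such as F(4,3) or the 2D nested form F(2x2, 3x3) trade further multiplication savings for more additions and reduced numerical accuracy, which is why comparisons across floating-point types matter.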
|
9 |
Deconvolving Maps of Intra-Cardiac Electrical Potential
Palmer, Keryn 26 July 2012 (has links)
Atrial fibrillation (AF) is the most common arrhythmia encountered in clinical practice, occurring in 1% of the adult population of North America. Although AF does not typically lead to risk of immediate mortality, it is a potent risk factor for ischemic stroke. When left untreated, AF reduces quality of life, functional status, and cardiac performance, and is associated with higher medical costs and an increased risk of death. Catheter ablation is a commonly used treatment method for those who suffer from drug-refractory AF. Prior to ablation, intra-cardiac mapping can be used to determine the activation sequence of cardiac tissue, which may be useful in deciding where to place ablation lesions. However, the electrical potential that is recorded during mapping is not a direct reflection of the current density across the tissue because the potential recorded at each point above the heart tissue is influenced by every cell in the tissue. This causes the recorded potential to be a blurred version of the true tissue current density. The potential that is observed can be described as the convolution of the true current density with a point spread function. Accordingly, deconvolution can, in principle, be used in order to improve the resolution of potential maps. However, because the number of electrodes which can be deployed transvenously is limited by practical restrictions, the recorded potential field is a sparsely sampled version of the actual potential field. Further, an electrode array cannot sample over the entire atrial surface, so the potential map that is observed is a truncated version of the global electrical activity. Here, we investigate the effects of electrode sampling density and edge extension on the ability of deconvolution to improve the resolution of measured electrical potentials within the atria of the heart.
In particular, we identify the density of sensing electrodes that are required to allow deconvolution to provide improved estimation of the true current density when compared to the observed potential field.
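The deconvolution step described above can be sketched with a standard frequency-domain Wiener filter (a generic textbook method; the thesis's exact approach and regularization may differ, and all names here are our own):

```python
import numpy as np

def wiener_deconvolve(observed, psf, noise_power=1e-3):
    """Recover an estimate of the true field from an observation
    blurred by a point spread function (PSF), via the Wiener filter
    W = conj(H) / (|H|^2 + k) applied in the frequency domain."""
    H = np.fft.fft2(psf, s=observed.shape)
    G = np.fft.fft2(observed)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(W * G))
```

The `noise_power` term regularizes frequencies where the PSF response is near zero, which is exactly where sparse sampling and map truncation make naive inverse filtering unstable.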
|
10 |
Neutrix products and convolutions of distributions and applications (original title: Neutriks proizvodi i konvolucije distribucija i primene)
Jolevska-Tuneska Biljana 16 January 2003 (has links)
No description available.
|