61

All-in-Focus Image Reconstruction Through AutoEncoder Methods

Al Nasser, Ali 07 1900 (has links)
Focal stacking is a technique for creating images with a large depth of field, in which everything in the scene is sharp and clear. Creating such images is not easy, however, as it requires taking multiple pictures at different focus settings and then blending them together. In this paper, we present a novel approach to blending a focal stack using a special type of autoencoder, a neural network that learns to compress and reconstruct data. Our autoencoder consists of several parts, each of which processes one input image and passes its information to the final part, which fuses them into one output image. Unlike other methods, our approach is capable of inpainting and denoising, resulting in sharp, clean all-in-focus images. Our approach does not require prior training or a large dataset, which makes it fast and effective. We evaluate our method on various kinds of images and compare it with other widely used methods, demonstrating that it produces superior focal-stacked images with higher accuracy and quality. This paper reveals a new and promising way of using a neural network to aid microphotography, microscopy, and visual computing by enhancing the quality of focal-stacked images.
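A minimal structural sketch of such a multi-branch fusion autoencoder, assuming a PyTorch implementation, is given below; the branch layout, layer sizes, and the naive fit-to-the-stack objective are illustrative assumptions, not the author's architecture or loss.

    # Sketch only: layer sizes, fusion-by-concatenation, and the loss are
    # assumptions, not the thesis architecture.
    import torch
    import torch.nn as nn

    class FocalStackFusionAE(nn.Module):
        def __init__(self, n_slices: int, feat: int = 16):
            super().__init__()
            # One small convolutional encoder branch per focal slice.
            self.encoders = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
                for _ in range(n_slices)])
            # Final part: fuse all branch features into one RGB image.
            self.fuse = nn.Sequential(
                nn.Conv2d(feat * n_slices, feat, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat, 3, 3, padding=1), nn.Sigmoid())

        def forward(self, stack):  # stack: (B, n_slices, 3, H, W)
            feats = [enc(stack[:, i]) for i, enc in enumerate(self.encoders)]
            return self.fuse(torch.cat(feats, dim=1))

    # No prior training set: fit directly on a single (synthetic) stack.
    stack = torch.rand(1, 5, 3, 64, 64)
    model = FocalStackFusionAE(n_slices=5)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(10):  # a few demonstration steps
        fused = model(stack)
        loss = (fused.unsqueeze(1) - stack).abs().mean()  # naive placeholder loss
        opt.zero_grad(); loss.backward(); opt.step()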
62

A New Algorithm for Efficient Software Implementation of Reed-Solomon Encoders for Wireless Sensor Networks

Emelko, Glenn A. 01 April 2009 (has links)
No description available.
63

CONVOLUTIONAL CODED GENERALIZED DIRECT SEQUENCE SPREAD SPECTRUM

Venn, Madan R. 29 May 2008 (has links)
No description available.
64

Identification and Cancellation of Harmonic Disturbances in Radio Telescopes

Franke, Timothy Joseph 03 June 2015 (has links)
No description available.
65

An Analysis of Critically Enabling Technologies for Force and Power Limiting of Industrial Robotics

Smith, Gregory 28 August 2017 (has links)
No description available.
66

Improved Feature-Selection for Classification Problems using Multiple Auto-Encoders

Guo, Xinyu 29 May 2018 (has links)
No description available.
67

Joint random linear network coding and convolutional code with interleaving for multihop wireless network

Susanto, Misfa, Hu, Yim Fun, Pillai, Prashant January 2013 (has links)
Abstract: Error control techniques are designed to ensure reliable data transfer over unreliable communication channels that are frequently subjected to channel errors. In this paper, the effect of applying a convolutional code to the Scattered Random Network Coding (SRNC) scheme over a multi-hop wireless channel was studied. An interleaver was implemented for bit scattering in the SRNC with the purpose of dividing the encoded data into protected blocks and vulnerable blocks, achieving error diversity within one modulation symbol while randomising errored bits in both blocks. By combining the interleaver with the convolutional encoder, the network decoder in the receiver has a sufficient number of correctly received network-coded blocks to perform the decoding process efficiently. Extensive simulations were carried out to study the performance of three systems: 1) SRNC with convolutional encoding and interleaving; 2) SRNC alone; and 3) a system with neither convolutional encoding nor interleaving. Simulation results in terms of block error rate for a 2-hop wireless transmission scenario over an Additive White Gaussian Noise (AWGN) channel are presented. The results show that, at a block error rate of 0.01, the system with interleaving and convolutional coding achieves a coding gain of at least 1.29 dB over system 2 and of 2.08 dB on average over system 3.
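As a rough illustration of the scheme above, the following sketch pairs a textbook rate-1/2 convolutional encoder with a row/column block interleaver; the generator polynomials, block shape, and toy data are assumptions, and SRNC's specific bit scattering is not reproduced.

    # Sketch only: (7, 5) octal generators and a 4x4 block interleaver
    # stand in for the paper's encoder/interleaver pair.
    def conv_encode(bits, g1=0b111, g2=0b101):
        """Rate-1/2 convolutional encoder: two coded bits per input bit."""
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & 0b111       # 3-bit shift register
            out += [bin(state & g1).count("1") & 1,
                    bin(state & g2).count("1") & 1]
        return out

    def block_interleave(bits, rows, cols):
        """Write row-wise, read column-wise, so adjacent coded bits land
        in different symbols (error diversity across blocks)."""
        assert len(bits) == rows * cols
        return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

    data = [1, 0, 1, 1, 0, 0, 1, 0]    # toy network-coded block
    coded = conv_encode(data)          # 16 coded bits
    tx = block_interleave(coded, rows=4, cols=4)
    print(coded, tx, sep="\n")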
68

Fast Split Arithmetic Encoder Architectures and Perceptual Coding Methods for Enhanced JPEG2000 Performance

Varma, Krishnaraj M. 11 April 2006 (has links)
JPEG2000 is a wavelet transform based image compression and coding standard. It provides superior rate-distortion performance when compared to the previous JPEG standard. In addition, JPEG2000 provides four dimensions of scalability: distortion, resolution, spatial, and color. These superior features make JPEG2000 ideal for use in power- and bandwidth-limited mobile applications like urban search and rescue. Such applications require a fast, low-power JPEG2000 encoder to be embedded on the mobile agent. This embedded encoder also needs to provide superior subjective quality for low-bitrate images. This research addresses these two aspects of enhancing the performance of JPEG2000 encoders. The JPEG2000 standard includes a perceptual weighting method based on the contrast sensitivity function (CSF). Recent literature shows that perceptual methods based on subband standard deviation are also effective in image compression. This research presents two new perceptual weighting methods that combine information from both the human contrast sensitivity function and the standard deviation within a subband or code-block. These two new sets of perceptual weights are compared to the JPEG2000 CSF weights. The results indicate that our new weights performed better than the JPEG2000 CSF weights for high-frequency images. Weights based solely on subband standard deviation are shown to perform worse than JPEG2000 CSF weights for all images at all compression ratios. Embedded block coding, EBCOT tier-1, is the most computationally intensive part of the JPEG2000 image coding standard. Past research on fast EBCOT tier-1 hardware implementations has concentrated on cycle-efficient context formation. These pass-parallel architectures require that JPEG2000's three mode switches be turned on. While turning on the mode switches allows arithmetic encoding from each coding pass to run independently (and thus in parallel), it also disrupts the probability estimation engine of the arithmetic encoder, thus sacrificing coding efficiency for improved throughput. In this research a new fast EBCOT tier-1 design is presented: the Split Arithmetic Encoder (SAE) process. The proposed process exploits concurrency to obtain improved throughput while preserving coding efficiency. The SAE process is evaluated using three methods: clock cycle estimation, a multithreaded software implementation, and a field programmable gate array (FPGA) hardware implementation. All three methods achieve throughput improvement; the hardware implementation exhibits the largest speedup, as expected. A high-speed, task-parallel, multithreaded software architecture for EBCOT tier-1 based on the SAE process is proposed. SAE was implemented in software on two shared-memory architectures: a PC using hyperthreading and a multi-processor non-uniform memory access (NUMA) machine. The implementation adopts appropriate synchronization mechanisms that preserve the algorithm's causality constraints. Tests show that the new architecture is capable of improving throughput by as much as 50% on the NUMA machine and as much as 19% on a PC with two virtual processing units. A high-speed, multirate, FPGA implementation of the SAE process is also proposed. The mismatch between the rate of production of data by the context formation (CF) module and the rate of consumption of data by the arithmetic encoder (AE) module is studied in detail. Appropriate choices for FIFO sizes and FIFO write and read capabilities are made based on the statistics obtained from test runs of the algorithm. Using a fast CF module, this implementation was able to achieve as much as 120% improvement in throughput. / Ph. D.
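One plausible way to blend a CSF weight with subband standard deviation, in the spirit of the combined weights above, is sketched here; the squashing function, the mixing exponent, and the toy CSF value are assumptions, not the dissertation's actual weights.

    # Sketch only: the blend rule and constants are illustrative.
    import numpy as np

    def combined_weight(csf_weight, subband, alpha=0.5, eps=1e-6):
        """Geometric blend of a (0, 1] CSF weight with an activity term
        derived from the subband (or code-block) standard deviation."""
        std = float(np.std(subband))
        activity = std / (std + 1.0)          # squash std to (0, 1)
        return (csf_weight ** alpha) * ((activity + eps) ** (1.0 - alpha))

    hh1 = np.random.randn(32, 32) * 4.0       # toy high-frequency subband
    print(combined_weight(csf_weight=0.3, subband=hh1))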
69

Spike Processing Circuit Design for Neuromorphic Computing

Zhao, Chenyuan 13 September 2019 (has links)
The von Neumann bottleneck, the limited throughput between CPU and memory, has become a major factor hindering the technical advance of computing systems. In recent years, neuromorphic systems have started to gain increasing attention as compact and energy-efficient computing platforms. Spike-based neuromorphic computing systems require high-performance, low-power neural encoders and decoders to emulate the spiking behavior of neurons. These two spike/analog conversion interfaces determine the performance of the whole spiking neuromorphic computing system, and in particular its peak performance. Many state-of-the-art neuromorphic systems typically operate in the frequency range between 1 kHz and 100 kHz due to the limitation of encoding/decoding speed. In this dissertation, the popular encoding and decoding schemes, i.e., rate encoding, latency encoding, and inter-spike-interval (ISI) encoding, together with related hardware implementations, are discussed and analyzed. The contributions of this dissertation fall into three main parts: neuron improvement, three kinds of ISI encoder design, and two types of ISI decoder design. A two-path leakage LIF neuron has been fabricated, and a modular design methodology is introduced. Three ISI encoding schemes are discussed: parallel signal encoding, full signal iteration encoding, and partial signal encoding. The first two ISI encoders have been fabricated successfully, and the last will be taped out by the end of 2019. The two ISI decoders adopt different techniques: a sample-and-hold based mixed-signal design and a spike-timing-dependent-plasticity (STDP) based analog design, respectively. Both decoders have been evaluated successfully through post-layout simulations; the STDP-based decoder will be taped out by the end of 2019. A test bench based on correlation inspection has been built to evaluate the information-recovery capability of the proposed spike processing link. / Doctor of Philosophy / Neuromorphic computing refers to electronic systems that mimic the behavior of biological nervous systems. In most cases, a neuromorphic computing system is built with analog circuits, which offer benefits in power efficiency and low thermal radiation. One of the most important components of a neuromorphic computing system is the signal-processing interface, i.e., the encoder/decoder. To increase the whole system's performance, novel encoders and decoders are proposed in this dissertation: three temporal encoders, one rate encoder, one latency encoder, one temporal decoder, and one general spike decoder. These designs can be combined to build a highly efficient spike-based data link that safeguards the processing performance of the whole neuromorphic computing system.
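A back-of-the-envelope software model of ISI encoding and decoding is sketched below; the linear value-to-interval mapping and the interval bounds are assumptions, far removed from the circuit-level designs in the dissertation.

    # Sketch only: larger sample values map to shorter inter-spike intervals.
    def isi_encode(samples, t_min=1e-3, t_max=10e-3):
        """Return spike times; each sample in [0, 1] sets the next interval."""
        t, spikes = 0.0, []
        for x in samples:
            t += t_max - x * (t_max - t_min)   # large x -> short ISI
            spikes.append(t)
        return spikes

    def isi_decode(spikes, t_min=1e-3, t_max=10e-3):
        """Invert the mapping from consecutive inter-spike intervals."""
        prev, out = 0.0, []
        for t in spikes:
            out.append((t_max - (t - prev)) / (t_max - t_min))
            prev = t
        return out

    print(isi_decode(isi_encode([0.2, 0.8, 0.5])))   # ~[0.2, 0.8, 0.5]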
70

Application of Multifunctional Doppler LIDAR for Non-contact Track Speed, Distance, and Curvature Assessment

Munoz, Joshua 08 December 2015 (has links)
The primary focus of this research is evaluation of the feasibility, applicability, and accuracy of Doppler Light Detection And Ranging (LIDAR) sensors as non-contact means for measuring track speed, distance traveled, and curvature. Speed histories, currently measured with a rotary, wheel-mounted encoder, serve a number of useful purposes, one significant use being derailment investigations. Distance calculation provides a spatial reference system for operators to locate track sections of interest. Railroad curves, currently measured with an inertial measurement unit (IMU), are monitored to maintain track infrastructure within regulations. Speed measured with high accuracy leads to high-fidelity distance and curvature data through utilization of the processor clock rate and of left- and right-rail speed differentials during curve navigation, respectively. Wheel-mounted encoders, or tachometers, provide a relatively low-resolution speed profile, exhibit increased noise with increasing speed, and are subject to the inertial behavior of the rail car, which affects output data. The IMU used to measure curvature depends on acceleration and yaw-rate sensitivity and experiences difficulty in low-speed conditions. Preliminary system tests onboard a 'Hy-Rail' utility vehicle capable of traveling on rail show that speed capture is possible using the rails as the reference moving target; furthermore, obtaining speed profiles from both rails allows for the calculation of speed differentials in curves to estimate degrees of curvature. Ground-truth distance calibration and curve measurement were also carried out. Distance calibration involved placement of spatial landmarks detected by a sensor to synchronize distance measurements as a pre-processing procedure. Curvature ground-truth measurements provided a reference system to confirm measurement results and observe alignment variation throughout a curve. Primary testing occurred onboard a track geometry rail car, measuring rail speed over substantial mileage in various weather conditions and providing high-accuracy data from which distance and curvature along the test routes were calculated. Test results indicate that the LIDAR system measures speed at higher accuracy than the encoder, free of the noise that grows with speed. Distance calculation is also highly accurate, with results showing high correlation with encoder and ground-truth data. Finally, curvature calculated from speed data shows good correlation with IMU measurements and a resolution capable of revealing localized track alignments. Further investigations involve a curve measurement algorithm and a speed calibration method independent of external reference systems, namely encoder and ground-truth data. The speed calibration results show a high correlation with speed data from the track geometry vehicle. It is recommended that the study be extended to assess the LIDAR's sensitivity to car-body motion in order to better isolate the embedded behavior in the speed and curvature profiles. Furthermore, in the interest of progressing the system toward a commercially viable unit, methods for self-calibration and pre-processing that allow fully independent operation are highly encouraged. / Ph. D.
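A short sketch of the left/right speed-differential geometry mentioned above: both rails sweep a curve at the same angular rate, so the curve radius follows from the two rail speeds. The standard-gauge value and the arc definition of degree of curvature are assumptions of this sketch, not figures from the dissertation.

    # Sketch only: standard gauge (56.5 in) and D = 5729.58 / R (R in feet).
    GAUGE_FT = 56.5 / 12.0   # track gauge in feet

    def degree_of_curvature(v_left, v_right, gauge=GAUGE_FT):
        """Estimate degrees of curvature from left/right rail speeds.

        v_outer / v_inner = (R + g/2) / (R - g/2), which solves to
        R = (g/2) * (v_outer + v_inner) / (v_outer - v_inner).
        """
        v_out, v_in = max(v_left, v_right), min(v_left, v_right)
        if v_out == v_in:
            return 0.0                      # tangent (straight) track
        radius = (gauge / 2.0) * (v_out + v_in) / (v_out - v_in)
        return 5729.58 / radius             # arc definition of curvature

    # A ~0.3% speed split at 60 ft/s corresponds to roughly a 4-degree curve.
    print(degree_of_curvature(60.0, 59.8))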
