  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

A Scalable, Secure, and Energy-Efficient Image Representation for Wireless Systems

Woo, Tim January 2004 (has links)
The recent growth in wireless communications presents a new challenge to multimedia communications. Digital image transmission is a very common form of multimedia communication. Due to limited bandwidth and the broadcast nature of the wireless medium, it is necessary to compress and encrypt images before they are sent. At the same time, it is important to use the limited energy in wireless devices efficiently. In a wireless device, the two major sources of energy consumption are computation and transmission. Computation energy can be reduced by minimizing the time spent on compression and encryption. Transmission energy can be reduced by sending a smaller image file obtained by compressing the original highest-quality image. Because image quality is often sacrificed in the compression process, users should have the flexibility to control image quality and decide whether such a tradeoff is acceptable. It is also desirable for users to control image quality in different areas of the image, so that less important areas can be compressed more while details are retained in important areas. To reduce the computation needed for encryption, a partial encryption scheme can encrypt only the critical parts of an image file without sacrificing security. This thesis proposes a scalable and secure image representation scheme that allows users to select different image quality and security levels. The binary space partitioning (BSP) tree representation is selected because it allows convenient compression and scalable encryption. The Advanced Encryption Standard (AES) is chosen as the encryption algorithm because it is fast and secure. Our experimental results show that our new tree construction method and pruning formula reduce execution time, and hence computation energy, by about 90%. Our image quality prediction model predicts image quality to within 2-3 dB of the actual image PSNR.
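The quality prediction in this abstract is stated in terms of PSNR. As a hedged illustration (a generic pure-Python sketch of the standard metric, not code from the thesis), PSNR over 8-bit pixel values can be computed as:

```python
import math

def psnr(original, compressed, max_val=255):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    if len(original) != len(compressed):
        raise ValueError("images must have the same number of pixels")
    # Mean squared error between corresponding pixel values.
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```

A prediction accurate to within 2-3 dB of this value is close on the logarithmic scale PSNR uses.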
242

Image and Texture Analysis using Biorthogonal Angular Filter Banks

Gonzalez Rosiles, Jose Gerardo 09 July 2004 (has links)
In this thesis we develop algorithms for processing textures and images using a ladder-based biorthogonal directional filter bank (DFB). This work is based on the DFB originally proposed by Bamberger and Smith. First, we present a novel implementation of this filter bank using ladder structures. This new DFB provides non-trivial FIR perfect-reconstruction systems that are computationally very efficient. Furthermore, we address the lack of shift invariance in the DFB by presenting a novel undecimated DFB that preserves the computational simplicity of its maximally decimated counterpart. Finally, we study the use of the DFB in combination with pyramidal structures to form polar-separable image decompositions. Using the proposed filter banks, we develop and evaluate algorithms for texture classification, segmentation, and synthesis. A comparative study with other image representations shows that the DFB provides some of the best results reported on the data sets used. Using the proposed directional pyramids, we adapt wavelet thresholding algorithms; our decompositions provide better edge and contour preservation than the best results reported using the undecimated discrete wavelet transform. Finally, we apply the developed algorithms to the analysis and processing of synthetic aperture radar (SAR) imagery, whose analysis is impaired by the presence of speckle noise. Our first objective is to remove speckle to enhance the visual quality of the image. We also implement land-cover segmentation and classification algorithms that take advantage of the textural characteristics of SAR images. Finally, we propose a model-based SAR image compression algorithm in which the speckle component is separated from the structural features of a scene: the speckle component is captured with a texture model, and the scene component is coded with a wavelet coder at very low bit rates. The resulting decompressed images have better perceptual quality than SAR images compressed without removing speckle.
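Wavelet thresholding, which this abstract adapts to directional pyramids, shrinks subband coefficients toward zero so that small (noise-dominated) coefficients are discarded. A minimal soft-thresholding sketch (illustrative only; the thesis's exact shrinkage rule is not reproduced here):

```python
import math

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink each coefficient's magnitude by t, zeroing small ones."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]
```

Applied in a transform domain, this suppresses noise while large structural coefficients (edges, contours) survive with only a small bias.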
243

Automatic Construction Algorithms for Supervised Neural Networks and Applications

Tsai, Hsien-Leing 28 July 2004 (has links)
Research on neural networks has been conducted for six decades. In this period, many neural models and learning rules have been proposed and successfully applied to many problems that traditional algorithms could not solve efficiently. However, when applying multilayer neural networks, users are confronted with the problem of determining the number of hidden layers and the number of hidden neurons in each layer. It is difficult for users to determine proper neural network architectures, yet the choice is critical because the architecture strongly influences performance: problems can be solved efficiently only with a proper architecture. To overcome this difficulty, several approaches have recently been proposed to generate neural network architectures automatically, but they still have drawbacks. The goal of our research is to find better approaches for automatically determining proper neural network architectures, and we propose a series of them in this thesis. First, we propose an approach based on decision trees. It successfully determines neural network architectures and greatly decreases learning time; however, it handles only two-class problems and generates larger architectures. Next, we propose an information-entropy-based approach that overcomes these drawbacks and easily generates multi-class neural networks for standard domain problems. Finally, we extend this method to sequential-domain and structured-domain problems, so our approaches can be applied to many applications. Currently, we are working on quantum neural networks and are also interested in ART neural networks, which are incremental neural models that we apply to digital signal processing. We present a character recognition application, a spoken-word recognition application, and an image compression application, all of which perform well.
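The information-entropy criterion mentioned in this abstract can be sketched as a generic Shannon entropy over class labels (the thesis's exact splitting rule may differ; this is the standard formula it builds on):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    counts = Counter(labels)
    # -sum p * log2(p) over the empirical class distribution.
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A pure class split has entropy 0; a 50/50 two-class split has entropy 1 bit, so minimizing entropy drives the construction toward purer partitions.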
244

An Improved C-Fuzzy Decision Tree and its Application to Vector Quantization

Chiu, Hsin-Wei 27 July 2006 (has links)
Over the last hundred years, mankind has invented many convenient tools in pursuit of a comfortable living environment. The computer is one of the most important of these inventions, and its computational ability far exceeds that of humans. Because computers can process large amounts of data quickly and accurately, this advantage is used to imitate human thinking, and artificial intelligence has developed extensively. Methods such as neural networks, data mining, and fuzzy logic are applied in many fields (e.g., fingerprint recognition, image compression, and antenna design). In this thesis we investigate prediction techniques based on decision trees and fuzzy clustering. The fuzzy decision tree performs classification using a fuzzy clustering method and then constructs a decision tree to make predictions on data. However, in its distance function the influence of the target space is inversely proportional, which can cause problems on some datasets. In addition, representing the output model of each leaf node by a constant restricts its ability to represent the data distribution in the node. We propose a more reasonable definition of the distance function that considers both input and target differences with a weighting factor. We also extend the output model of each leaf node to a local linear model and estimate the model parameters with a recursive SVD-based least-squares estimator. Experimental results show that our improved version produces higher recognition rates for classification problems and smaller mean square errors for regression problems.
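The improved distance function described above blends input-space and target-space differences with a weighting factor. A hypothetical sketch (the parameter `alpha` and the exact functional form are assumptions for illustration, not taken from the thesis):

```python
def weighted_distance(x_a, y_a, x_b, y_b, alpha=0.5):
    """Blend squared input-space distance with squared target difference.

    alpha near 1 emphasizes input similarity; alpha near 0 emphasizes
    target similarity, avoiding the purely inverse target weighting
    that causes problems on some datasets.
    """
    d_input = sum((p - q) ** 2 for p, q in zip(x_a, x_b))
    d_target = (y_a - y_b) ** 2
    return alpha * d_input + (1.0 - alpha) * d_target
```

With both terms contributing additively, two samples are close only when both their inputs and their targets agree, which is the behavior the abstract argues for.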
245

Numerical Methods for Wilcoxon Fractal Image Compression

Jau, Pei-Hung 28 June 2007 (has links)
In this thesis, the Wilcoxon approach to linear regression is combined with fractal image compression to form a novel Wilcoxon fractal image compression. When the original image is corrupted by noise, we argue that the fractal image compression scheme should be insensitive to the outliers present in the corrupted image, which leads to the new concept of robust fractal image compression. The proposed Wilcoxon fractal image compression is the first attempt toward the design of robust fractal image compression. Four numerical methods, i.e., steepest descent, line minimization based on quadratic interpolation, line minimization based on cubic interpolation, and least absolute deviation, are proposed to solve the associated linear Wilcoxon regression problem. Simulation results show that, compared with traditional fractal image compression, Wilcoxon fractal image compression is very robust against outliers caused by salt-and-pepper noise. However, it does not show great improvement in robustness against outliers caused by Gaussian noise.
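Wilcoxon regression replaces the least-squares criterion with a rank-based dispersion of the residuals, whose bounded scores limit the influence of outliers. A sketch of the standard Jaeckel/Wilcoxon dispersion (illustrative background; the thesis's four numerical minimizers are not reproduced here):

```python
import math

def wilcoxon_dispersion(residuals):
    """Jaeckel's rank-based dispersion: residuals weighted by Wilcoxon scores.

    Scores a(r) = sqrt(12) * (r / (n + 1) - 0.5) are bounded, so a single
    huge residual contributes linearly rather than quadratically as in
    least squares -- the source of the robustness to salt-and-pepper noise.
    """
    n = len(residuals)
    order = sorted(range(n), key=lambda i: residuals[i])
    ranks = [0] * n
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return sum(math.sqrt(12) * (ranks[i] / (n + 1) - 0.5) * residuals[i]
               for i in range(n))
```

The dispersion depends only on differences between residuals (the scores sum to zero), so it is invariant to a common shift and vanishes when all residuals are equal.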
246

Development Of A Methodology For Geospatial Image Streaming

Kivci, Erdem Turker 01 September 2010 (has links) (PDF)
Serving geospatial data collected with remote sensing methods (satellite images, aerial photos, etc.) has become crucial in many geographic information system (GIS) applications, such as disaster management, municipal applications, climatology, environmental observation, and military applications. Even in today's highly developed information systems, geospatial image data requires a huge amount of physical storage space, which limits its usage in the applications mentioned above. For this reason, web-based GIS applications can benefit from geospatial image streaming through web-based architectures. Progressive transmission of geospatial image and map data on web-based architectures is implemented with the developed image streaming methodology. The software developed allows user interaction in such a way that users visualize images according to their level of detail; in this way geospatial data is served efficiently. The main methods used to transmit geospatial images are serving tiled image pyramids and serving wavelet-based compressed bitstreams. Generally, GIS applications use tiled image pyramids that contain copies of raster datasets at different resolutions, rather than the differences between resolutions; thus, redundant data is transmitted from the GIS server when a region is requested at different resolutions. Wavelet-based methods decrease this redundancy, but methods that use wavelet-compressed bitstreams require transforming the whole dataset before transmission. A hybrid streaming methodology is developed that decreases the redundancy of tiled image pyramids by integrating them with wavelets, without requiring the whole dataset to be transformed and encoded. Tile parts' coefficients produced with the methodology are encoded with JPEG 2000, an efficient technology for compressing images in the wavelet domain.
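A tiled image pyramid stores the raster at successively halved resolutions, which is the redundancy the hybrid method targets. A small sketch of how levels and tile counts accumulate (the 256-pixel tile size and power-of-two halving are typical assumptions, not the thesis's exact parameters):

```python
import math

def pyramid_levels(width, height, tile=256):
    """List (width, height, tile_count) per pyramid level, halving until one tile remains."""
    levels = []
    w, h = width, height
    while True:
        cols = math.ceil(w / tile)
        rows = math.ceil(h / tile)
        levels.append((w, h, cols * rows))
        if cols == 1 and rows == 1:
            break  # coarsest level fits in a single tile
        w = max(1, w // 2)
        h = max(1, h // 2)
    return levels
```

For a 1024x1024 image with 256-pixel tiles this yields 16 + 4 + 1 = 21 tiles, roughly a third more data than the base level alone; a scheme that transmits wavelet differences between levels avoids that overhead.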
247

A Portable DARC Fax Service / En Bärbar Faxtjänst För DARC

Husberg, Björn January 2002 (has links)
DARC is a technique for data broadcasting over the FM radio network. Sectra Wireless Technologies AB has developed a handheld DARC receiver known as the Sectra CitySurfer. The CitySurfer is equipped with a high-resolution display along with buttons and a joystick that allow the user to view and navigate through various types of information received over DARC.

Sectra Wireless Technologies AB has, among other services, also developed a paging system that enables personal message transmission over DARC. The background of this thesis is a wish to be able to send fax documents using the paging system and to view received fax documents in the CitySurfer.

The presented solution is a central PC-based fax server. The fax server is responsible for receiving standard fax transmissions and converting the fax documents before redirecting them to the right receiver in the DARC network. The topics discussed in this thesis are fax document routing, fax document conversion, and fax server system design.
248

Object-based unequal error protection

Marka, Madhavi. January 2002 (has links)
Thesis (M.S.) -- Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
249

Hiperbolinio vaizdų filtravimo skirtingo matavimo erdvėse analizė / Analysis of hyperbolic image filtering in spaces of different dimensionality

Puida, Mantas 27 May 2004 (has links)
This Master's thesis analyses hyperbolic image filtering in spaces of different dimensionality and investigates the problem of selecting the optimal filtering space. Several popular image compression methods (both lossless and lossy) are reviewed. The thesis analyses the problems of estimating the image smoothness parameter, changing image dimensionality, hyperbolic image filtering, and evaluating filtering efficiency, and provides solution methods for them. Schemes for the experimental examination of the theoretical propositions and hypotheses are prepared. The thesis comprehensively describes experiments with one-, two-, and three-dimensional images and their results. Conclusions about the efficiency of hyperbolic image filtering in spaces other than the image's "native" space are based on these results, and a criterion for selecting the optimal image filtering space is evaluated. Guidelines for further research are also discussed. The presentation "Specific Features of Hyperbolic Image Filtering", based on this thesis, was given at the conference "Mathematics and Mathematical Modeling" (KTU, 2004); its text is available in the appendixes.
250

Evaluation and Hardware Implementation of Real-Time Color Compression Algorithms

Ojani, Amin, Caglar, Ahmet January 2008 (has links)
A major bottleneck for performance, as well as power consumption, in graphics hardware for mobile devices is the amount of data that needs to be transferred to and from memory. In hardware-accelerated 3D graphics, for example, a large part of the memory accesses are due to large and frequent color buffer data transfers. In a graphics hardware block, color data is typically processed in RGB format. For both 3D graphics rasterization and image composition, several pixels need to be read from and written to memory to generate a single pixel in the frame buffer. This generates a lot of traffic on the memory interfaces, which impacts both performance and power consumption, so it is important to minimize the amount of color buffer data. One way of reducing the required memory bandwidth is to compress the color data before writing it to memory and decompress it before using it in the graphics hardware block. This compression/decompression must be done "on-the-fly", i.e., it must be fast enough that the hardware accelerator does not have to wait for data. In this thesis, we investigated several exact (lossless) color compression algorithms from a hardware implementation point of view, for use in high-throughput hardware. Our study shows that the compression/decompression datapath is implementable even under stringent area and throughput constraints; however, memory interfacing of these blocks is more critical and could dominate the cost.
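A common first step in exact (lossless) color compression is decorrelating neighboring pixels before entropy coding, since deltas between adjacent pixels cluster near zero. As a hedged sketch (a generic per-channel delta-coding step, not one of the specific algorithms evaluated in the thesis):

```python
def delta_encode(pixels):
    """Per-channel modular delta encoding of 8-bit RGB tuples."""
    out = []
    prev = (0, 0, 0)
    for p in pixels:
        # Store the difference from the previous pixel, wrapped mod 256.
        out.append(tuple((c - q) % 256 for c, q in zip(p, prev)))
        prev = p
    return out

def delta_decode(deltas):
    """Exact inverse of delta_encode: accumulate deltas mod 256."""
    out = []
    prev = (0, 0, 0)
    for d in deltas:
        prev = tuple((c + q) % 256 for c, q in zip(d, prev))
        out.append(prev)
    return out
```

Because the transform is exactly invertible, it preserves losslessness while making the data far more compressible; in hardware, the per-pixel subtraction maps to a short, fixed-latency datapath of the kind the thesis finds easy to implement.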
