About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Stochastic Optimization Models for Rapid Detection of Viruses in Cellphone Networks

Lee, Jinho, Doctor of Operations Research and Industrial Engineering, 20 November 2012
We develop a class of models to represent the dynamics of a virus spreading in a cellphone network, employing a taxonomy that includes five key characteristics. Based on the resulting dynamics governing the spread, we present optimization models to rapidly detect the virus, subject to resource limitations. We consider two goals, maximizing the probability of detecting a virus by a time threshold and minimizing the expected time to detection, which can be applied to all spread models we consider. We establish a submodularity result for these two objective functions that ensures a greedy heuristic yields the well-known constant-factor (1 − 1/e ≈ 63%) approximation. We relate the latter optimization problem, under a specific virus-spread mechanism from our class of models, to a classic facility-location model. Using data from a large carrier, we build several base cellphone contact networks of different scales. We then rescale these base networks using the so-called k-core decomposition, which recursively removes vertices of low degree. We further show that this down-sampling strategy generally preserves the topological properties of the base networks, based on testing several measures. For the objective that maximizes the probability of detecting a virus by a time threshold, we provide a sample-average optimization model that yields an asymptotically optimal design for locating the detection devices as the number of samples grows large. To choose a relevant time threshold, we perform a simulation for some spread models. We then test the performance of our proposed solution methods by solving the presented optimization models for several spread dynamics, using some of the contact networks built after the k-core decomposition. The computational results show that the greedy algorithm is an efficient way to solve the corresponding sample-average approximation model, and that the greedy solutions outperform other simple solution approaches.
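
A minimal Python sketch of the sample-average greedy placement the abstract describes. The independent-cascade spread model, its parameters, and the toy ring network below are illustrative assumptions, not the thesis's calibrated spread models or carrier data; only the greedy loop and its (1 − 1/e) guarantee follow from the stated submodularity result.

```python
import random

def simulate_spread(adj, steps, p, rng):
    """One sampled virus trajectory (assumed independent-cascade model):
    a random seed infects neighbors with probability p per step.
    Returns the step at which each reached node becomes infected."""
    seed = rng.choice(list(adj))
    infected, frontier = {seed: 0}, [seed]
    for t in range(1, steps + 1):
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in infected and rng.random() < p:
                    infected[v] = t
                    nxt.append(v)
        frontier = nxt
    return infected

def detection_prob(detectors, samples, threshold):
    """Sample-average estimate of P(some detector node is infected by
    the time threshold) -- the first objective in the abstract."""
    hits = sum(any(s.get(d, threshold + 1) <= threshold for d in detectors)
               for s in samples)
    return hits / len(samples)

def greedy_placement(adj, samples, budget, threshold):
    """Greedy maximization; monotone submodularity of the objective
    gives the classic (1 - 1/e) ~ 63% approximation guarantee."""
    chosen = set()
    for _ in range(budget):
        chosen.add(max((v for v in adj if v not in chosen),
                       key=lambda v: detection_prob(chosen | {v},
                                                    samples, threshold)))
    return chosen

# Toy usage: a 20-node ring with chords stands in for a contact network.
rng = random.Random(0)
adj = {i: {(i + 1) % 20, (i - 1) % 20, (i + 5) % 20, (i - 5) % 20}
       for i in range(20)}
samples = [simulate_spread(adj, steps=10, p=0.3, rng=rng) for _ in range(500)]
print(greedy_placement(adj, samples, budget=3, threshold=4))
```
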
2

Optimization of Sampling Structure Conversion Methods for Color Mosaic Displays

Zheng, Xiang, January 2014
Although many devices can capture images at high resolution, there is still a need to show these images on displays of lower resolution. Existing methods of subpixel-based down-sampling are reviewed in this thesis and their limitations are described. A new approach to optimizing sampling-structure conversion for color mosaic displays is developed. Full-color images are filtered by a set of optimal filters before down-sampling, resulting in better image quality according to the S-CIELAB measure, a spatial extension of the CIELAB metric for perceptual color difference. The typical RGB stripe display pattern is tested, with the optimal filters obtained by least-squares filter design. The new approach is also implemented on a widely used two-dimensional display pattern, the PenTile RGBG. Clear images are produced and color-fringing artifacts are reduced. The quality of the down-sampled images is compared using S-CIELAB and by visual inspection.
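
As a rough illustration of the filter-then-down-sample pipeline, the Python sketch below pre-filters each color plane and then samples it at its stripe's horizontal offset. The Gaussian kernel and the third-of-a-pixel stripe offsets are stand-in assumptions; the thesis designs the actual per-channel filters by least squares against the S-CIELAB criterion, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=5, sigma=1.0):
    """Separable Gaussian, used here as a stand-in pre-filter."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def stripe_downsample(img, factor, kernel=None):
    """Filter each color plane, then sample it at the horizontal offset
    of its subpixel within each output pixel (R, G, B vertical stripes).
    Height and width are assumed divisible by `factor`."""
    if kernel is None:
        kernel = gaussian_kernel()
    h, w, _ = img.shape
    rows = np.arange(h // factor) * factor
    out = np.empty((h // factor, w // factor, 3))
    for c in range(3):
        offset = c * factor // 3  # assumed stripe position inside a pixel
        cols = np.arange(w // factor) * factor + offset
        filtered = convolve(img[..., c], kernel, mode="reflect")
        out[..., c] = filtered[np.ix_(rows, cols)]
    return out

# Toy usage: a 24x24 full-color image reduced 3x for an RGB stripe panel.
img = np.random.rand(24, 24, 3)
print(stripe_downsample(img, factor=3).shape)  # -> (8, 8, 3)
```
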
3

Reduced Area Discrete-Time Down-Sampling Filter Embedded With Windowed Integration Samplers

Raviprakash, Karthik, August 2010
Developing a flexible receiver that can be reconfigured to multiple standards is the key to solving the problem of embedding numerous and ever-changing functionalities in mobile handsets. The difficulty of efficiently reconfiguring the analog blocks of a receiver chain for multiple standards calls for moving the ADC as close to the antenna as possible, so that most of the processing is done in the DSP. Different standards are sampled at different frequencies, so programmable anti-aliasing filtering is needed. Windowed integration samplers have an inherent sinc filtering that creates nulls at multiples of fs. The attenuation provided by sinc filtering for a bandwidth B is directly proportional to the sampling frequency fs, and a high sampling rate is needed to meet the anti-aliasing specifications. ADCs operating at such a high oversampling rate dissipate power to no good use. Hence, there is a need for a programmable discrete-time down-sampling circuit with high inherent anti-aliasing capability. Currently existing topologies use large numbers of switches and capacitors, which occupy a lot of area. A novel technique for reducing die area in a discrete-time sinc² ↓2 filter for charge sampling is proposed. An SNR comparison of the conventional and proposed topologies reveals that the new technique saves 25 percent of the die area occupied by the sampling capacitors of the filter. The proposed idea is also extended to implement higher down-sampling factors, and a greater percentage of area is saved as the down-sampling factor increases. The proposed filter also has the topological advantage over previously reported works of allowing designers to use active integration to charge the capacitance, which is critical for obtaining high linearity. A novel technique to implement a discrete-time sinc³ ↓2 filter for windowed integration samplers is also proposed. The topology reduces the idle time of the integration capacitors at the expense of a small complexity overhead in the clock generation, thereby saving 33 percent of the die area on the capacitors compared with the currently existing topology. Circuit-level simulations in 45 nm CMOS technology show good agreement with the predicted behaviour obtained from the analysis.
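
A behavioral Python model of the sinc² ↓2 idea may help fix intuition: cascading two length-n boxcar integrations squares the inherent sinc response of a windowed integration sampler before decimation. This models the transfer function only; the switched-capacitor, charge-domain implementation and its area savings are the thesis's contribution and are not captured here.

```python
import numpy as np

def windowed_integration(x, n):
    """Length-n boxcar integration: the inherent sinc filter of a
    windowed integration sampler, with nulls at multiples of fs/n."""
    return np.convolve(x, np.ones(n) / n)

def sinc2_downsample_by_2(x, n):
    """sinc^2 followed by ↓2: two cascaded integrations deepen the
    alias notches, then every second output sample is kept."""
    return windowed_integration(windowed_integration(x, n), n)[::2]

# Frequency-response check: |H| follows sinc^2 with nulls at k * fs / n.
n = 4
h = np.convolve(np.ones(n) / n, np.ones(n) / n)
H = np.abs(np.fft.rfft(h, 1024))
print(H[0], H[256])  # DC gain ~1; deep null at fs/4 (bin 256 of 1024)
```
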
4

The Study of Aerial Imageries Stitching Based on SIFT Algorithm

Huang, Han-che, 01 August 2009
The ultimate goal of aerial photogrammetry is to acquire ground measurements rapidly and accurately. However, traditional photogrammetric technology, particularly for stitching continuous digital images, is still very limited. In the past, ground control points were used as references for image registration; however, this is very time- and resource-consuming and is constrained by human visual capability. Accuracy and efficiency are the two key factors that must be enhanced to meet the practical requirements of aerial image stitching. The SIFT (Scale Invariant Feature Transform) algorithm is used in computer vision to perform reliable feature extraction. The extracted SIFT features are invariant to image scale, rotation, noise, and changes in illumination, making SIFT a robust and abundant feature-extraction algorithm. SIFT extracts feature points from a multi-scale space, so for a large-scale aerial image containing a huge amount of content, feature extraction takes a long time. Therefore, this study proposes a new method, called Inter-Grid Down-Sampling (IGDS), to reduce the image size, and with it the amount of image information, to improve computing efficiency. The extracted features are matched across adjacent images, with an additional RANSAC outlier-removal procedure to select correct and characteristic feature points. Finally, the Hugin panorama photo-stitching software is used to stitch all the continuous photogrammetric images into a panorama covering all flight lines. The experimental results indicate that sub-pixel accuracy for extracted feature points can be obtained when a down-sampling factor of 3 is selected for the IGDS method, at only half the computing time. Compared with nearest-neighbor and cubic interpolation for reducing the image size, the IGDS method improves feature-extraction efficiency without sacrificing location accuracy. When the SIFT matching threshold is set between 0.4 and 0.6, the largest correct matching rate is achieved. In addition, the RANSAC outlier-removal procedure effectively selects the best matching feature points in both number and location. For image stitching, the Hugin software effectively matches feature points and performs geometric correction and color adjustment to obtain a consistent panorama. Finally, the proposed method can derive, from continuous aerial images, a stitched image with low variation in resolution and measurement significance.
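
A hedged OpenCV sketch of the matching stage: down-sample, extract SIFT features, ratio-test the matches, and let RANSAC reject outliers. cv2.resize with INTER_AREA stands in for the study's IGDS method (which is not reproduced here), and the ratio value mirrors the 0.4–0.6 threshold range the study recommends.

```python
import cv2
import numpy as np

def match_downsampled(img1, img2, factor=3, ratio=0.5):
    """SIFT matching on down-sampled images with RANSAC outlier removal.
    Inputs are assumed to be 8-bit grayscale arrays; cv2.resize with
    INTER_AREA is a stand-in for the study's IGDS down-sampling."""
    small1, small2 = (cv2.resize(im, None, fx=1 / factor, fy=1 / factor,
                                 interpolation=cv2.INTER_AREA)
                      for im in (img1, img2))
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(small1, None)
    kp2, des2 = sift.detectAndCompute(small2, None)
    # Ratio test on the two nearest neighbours of each descriptor;
    # `ratio` mirrors the 0.4-0.6 threshold range the study recommends.
    knn = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC keeps only geometrically consistent matches for stitching.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inliers
```
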
5

Point Cloud-Based Analysis and Modelling of Urban Environments and Transportation Corridors

Yun-Jou Lin, 03 January 2019
3D point cloud processing has been a critical task due to increasing demand from a variety of applications such as urban planning and management, as-built mapping of industrial sites, infrastructure monitoring, and road safety inspection. Point clouds are mainly acquired from two sources: laser scanning and optical imaging systems. However, the original point clouds usually do not provide explicit semantic information, and the collected data must undergo a sequence of processing steps to derive and extract the required information. Moreover, depending on application requirements, the outcomes of the point cloud processing can differ. This dissertation presents two tiers of data processing. The first tier proposes an adaptive data-processing framework to deal with multi-source and multi-platform point clouds. The second tier introduces two point cloud processing strategies targeting applications mainly in urban environments and transportation corridors.

For the first tier of data processing, the internal characteristics (e.g., noise level and local point density) of the data should be considered first, since point clouds might come from a variety of sources/platforms. The acquired point clouds may have a large number of points, and data processing (e.g., segmentation) of such large datasets is time-consuming. Hence, to attain high computational efficiency, this dissertation presents a down-sampling approach that considers the internal characteristics of the data while maintaining the nature of the local surface. Moreover, point cloud segmentation is one of the essential steps in the initial processing chain for deriving semantic information and modelling point clouds. Therefore, a multi-class simultaneous segmentation procedure is proposed to partition point clouds into planar, linear/cylindrical, and rough features. Since segmentation outcomes can suffer from artifacts, a series of quality-control procedures is introduced to evaluate and improve the quality of the results.

For the second tier of data processing, this dissertation focuses on two applications in areas of high human activity: urban environments and transportation corridors. For urban environments, a new framework is introduced to generate digital building models with accurate right-angle, multi-orientation, and curved boundaries from building hypotheses derived from the proposed segmentation approach. For transportation corridors, an approach is presented to derive accurate lane-width estimates using point clouds acquired from a calibrated mobile mapping system. In summary, this dissertation provides two tiers of data processing: the first tier, adaptive down-sampling and segmentation, can be utilized for all kinds of point clouds; the second tier targets digital building model generation and lane-width estimation.
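
A plain voxel-grid down-sampler in Python gives the flavor of the first-tier step; the dissertation's actual approach additionally adapts to noise level and local point density so the local surface is preserved, which this fixed-cell sketch does not attempt.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Fixed-cell voxel-grid down-sampling: one centroid per occupied
    voxel.  The dissertation's adaptive variant additionally tunes the
    cell size to local density and noise so the local surface survives."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    return np.stack([np.bincount(inverse, weights=points[:, d]) / counts
                     for d in range(3)], axis=1)

# Toy usage: 100,000 random points, one representative per 0.1-unit cell.
pts = np.random.rand(100_000, 3)
print(voxel_downsample(pts, 0.1).shape)
```
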
6

Audio editing in the time-frequency domain using the Gabor Wavelet Transform

Hammarqvist, Ulf, January 2011
Visualization, processing, and editing of audio directly on a time-frequency surface is the scope of this thesis. More precisely, the scalogram produced by a Gabor wavelet transform is used, a powerful alternative to traditional techniques where the waveform is the main visual aid and editing is performed by parametric filters. Reconstruction properties, scalogram design and enhancements, as well as audio-manipulation algorithms, are investigated for this audio representation. The scalogram is designed to allow a flexible choice of time-frequency ratio while maintaining high-quality reconstruction. To this end, the Loglet is used, which is observed to be the most suitable filter choice. Re-assignment is tested, and a novel weighting function using partial derivatives of phase is proposed. An audio interpolation procedure is developed and shown to perform well in listening tests. The feasibility of using the transform coefficients directly for various purposes is investigated. It is concluded that pitch shifts are hard to describe in this framework, while noise thresholding works well. A down-sampling scheme is suggested that saves operations and memory consumption and significantly speeds up real-world implementations. Finally, a scalogram 'compression' procedure is developed, allowing the caching of an approximate scalogram.
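
A small Python sketch of a Gabor-style scalogram via an analytic filter bank; the Gaussian band-pass filters are a stand-in for the Loglet filters the thesis settles on, and `q` is an assumed knob for the flexible time-frequency ratio.

```python
import numpy as np

def gabor_scalogram(x, fs, freqs, q=12.0):
    """Scalogram magnitudes from one-sided Gaussian band-pass filters
    applied in the FFT domain (an analytic, Gabor-style filter bank).
    Low-frequency rows are heavily oversampled in time and could be
    decimated, which is the flavor of the thesis's down-sampling scheme."""
    n = len(x)
    X = np.fft.fft(x)
    f = np.fft.fftfreq(n, d=1.0 / fs)
    rows = []
    for fc in freqs:
        # One-sided filter -> analytic band signal; |.| is its envelope.
        H = np.where(f > 0, np.exp(-0.5 * ((f - fc) * q / fc) ** 2), 0.0)
        rows.append(np.abs(np.fft.ifft(X * 2.0 * H)))
    return np.array(rows)  # shape: (len(freqs), len(x))

# Toy usage: a 440 Hz tone lights up the row nearest 440 Hz.
fs = 8000
t = np.arange(fs) / fs
freqs = np.geomspace(55, 3520, 48)
S = gabor_scalogram(np.sin(2 * np.pi * 440 * t), fs, freqs)
print(S.shape, freqs[S.mean(axis=1).argmax()])  # peak row near 440 Hz
```
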
