  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Barcode Mapping in Warehouses

Matziaris, Spyridon January 2016 (has links)
Automation in warehouses has advanced considerably, combining sensors for perception of the environment with mapping of the warehouse. The most common feature that makes products and pallet-rack cells distinguishable is the barcode placed on them. The warehouse management system should therefore capture all the necessary information about detected barcodes, including their position in the warehouse, and build a barcode map of the environment. This process requires a barcode reader with extended capabilities, such as estimation of the three-dimensional coordinates of the barcodes. The long-term idea behind this research is a system, mounted on the roof of a forklift, that detects barcodes, localizes itself using an existing map of the warehouse, and updates that map with new information. The purpose of this project, however, was the investigation of a suitable system for barcode mapping. The main challenge was the development of a barcode reader that fulfils all of these capabilities, and its comparison with a commercial reader in order to evaluate the system's performance. In this project a barcode reader was developed using software libraries and an industrial camera, and its performance was compared with that of a commercial barcode reader. Moreover, an algorithm was implemented to estimate the position of each detected barcode with reference to the position of the camera's lens. According to the results of all the investigations, the performance of the developed system was quite satisfactory and promising. The comparison of the two systems showed that the commercial barcode reader performed better than the implemented system; however, it lacked the ability to provide the information required for mapping, as well as the flexibility for integration with other systems.
Overall, the developed system proved suitable for integration with a warehouse management system for barcode mapping of the environment.
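The position-estimation step described above, computing each barcode's 3-D coordinates with reference to the camera lens, can be sketched with a pinhole back-projection. This is a minimal illustration assuming the camera intrinsics (fx, fy, cx, cy) and a depth estimate are available; the function name and parameters are hypothetical, not the thesis implementation.

```python
import numpy as np

def barcode_position_3d(center_px, depth_m, fx, fy, cx, cy):
    """Back-project a detected barcode centre (pixel coordinates) to 3-D
    camera coordinates using the pinhole model, given a depth estimate.
    Illustrative sketch; intrinsics would come from camera calibration."""
    u, v = center_px
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example: a barcode detected at the image centre of a 640x480 camera
# lies on the optical axis, 2 m in front of the lens.
pos = barcode_position_3d((320, 240), depth_m=2.0, fx=800, fy=800, cx=320, cy=240)
```

A real pipeline would obtain `center_px` from a barcode-decoding library and `depth_m` from stereo or a known barcode size; both are assumed given here.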
72

Subjective analysis of image coding errors

26 February 2009 (has links)
D.Ing. / The rapid spread of digital images and the necessity to compress them have created the need for image quality metrics. Subjective evaluation is the most accurate of the image quality evaluation methods, but it is time-consuming, tedious and expensive. Meanwhile, widely used objective measures such as the mean squared error have proven not to assess image quality the way a human observer does. Since the human observer is the final receiver of most visual information, taking into account the way humans perceive visual information is greatly beneficial for the development of an objective image quality metric that reflects the subjective evaluation of distorted images. Many past attempts have been made to develop distortion metrics that model the processes of the human visual system, with promising results. However, most of these metrics were developed with simple visual stimuli, and most of these models were based on visibility-threshold measures, which are not representative of the distortion introduced in complex natural compressed images. In this thesis, a new image quality metric based on the properties of the human visual system (HVS) as related to image perception is proposed. This metric provides an objective quality measure for the subjective quality of coded natural images with suprathreshold degradation. The proposed model specifically takes the structure of natural images into account by analyzing them into their different components, namely the edge, texture and background (smooth) components, as these components influence the formation of perception in the HVS differently. Hence, the HVS sensitivity to errors in an image depends on whether the errors lie in more active areas, such as strong edges or texture, or in less active areas, such as smooth regions.
These components are then summed to obtain the combined image, which represents the way the HVS is postulated to perceive the image. Extensive subjective evaluation was carried out for the different image components and the combined image, obtained for images coded at different qualities. The objective measure (RMSE) for these images was also calculated. A transformation between the subjective and objective quality measures was performed, from which an objective metric that can predict human perception of image quality was developed. The metric was shown to provide an accurate prediction of image quality that agrees well with the prediction provided by the expensive and lengthy process of subjective evaluation. Furthermore, it has the desirable property of the RMSE of being easier and cheaper to implement. This metric will therefore be useful for evaluating the error mechanisms present in proposed coding schemes.
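The component-weighted idea above can be sketched as follows: compute the RMSE separately over edge, texture and smooth regions and combine the parts with perceptual weights. The gradient thresholds and weights here are purely illustrative placeholders, not the thesis's decomposition or its fitted transformation.

```python
import numpy as np

def rmse(a, b):
    """Plain root-mean-squared error between two images."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def weighted_quality(ref, dist, w_edge=0.5, w_texture=0.3, w_smooth=0.2):
    """Toy component-weighted error: split the reference image into
    edge/texture/smooth regions by gradient magnitude and weight the
    per-region RMSE.  Thresholds and weights are illustrative only."""
    gy, gx = np.gradient(ref.astype(float))
    grad = np.hypot(gx, gy)
    edge = grad > 30                      # assumed threshold for strong edges
    texture = (grad > 10) & ~edge         # assumed threshold for texture
    smooth = ~edge & ~texture
    err = (ref.astype(float) - dist.astype(float)) ** 2
    total = 0.0
    for mask, w in ((edge, w_edge), (texture, w_texture), (smooth, w_smooth)):
        if mask.any():
            total += w * float(np.sqrt(err[mask].mean()))
    return total
```

Errors placed in the smooth component contribute with a smaller weight, mirroring the abstract's point that HVS sensitivity depends on where in the image the errors lie.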
73

THREE DIMENSIONAL SEGMENTATION AND DETECTION OF FLUORESCENCE MICROSCOPY IMAGES

David J. Ho (5929748) 10 June 2019 (has links)
Fluorescence microscopy is an essential tool for imaging subcellular structures in tissue, and two-photon microscopy enables imaging deeper into tissue using near-infrared light. Detecting and extracting information from these images with image analysis and computer vision tools remains challenging, because the microscopy volumes are degraded by blurring and noise during acquisition and because of the complexity of the subcellular structures they contain. In this thesis we describe methods for segmentation and detection in 3D fluorescence microscopy images. We segment tubule boundaries, distinguishing them from other structures using three-dimensional steerable filters, which capture the strong directional tendencies of voxels on a tubule boundary. We also describe several three-dimensional convolutional neural networks (CNNs) for segmenting nuclei. Training CNNs usually requires a large set of labeled images, which is extremely difficult to obtain for biomedical data. We therefore describe methods to generate synthetic microscopy volumes and to train our 3D CNNs on these synthetic volumes without using any real ground-truth volumes. The locations and sizes of the nuclei are detected using one of our CNNs, the Sphere Estimation Network. Our methods are evaluated using real ground-truth volumes and are shown to outperform other techniques.
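The synthetic-training idea can be sketched in a few lines: place bright spheres (nuclei) in an empty volume, then degrade it with blur and noise so it resembles real acquisitions. This is a rough illustration of the concept under assumed parameters, not the generation pipeline of the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_nuclei_volume(shape=(32, 64, 64), n_nuclei=5, radius=5, seed=0):
    """Generate a toy synthetic fluorescence volume: bright spheres on a
    dark background, blurred (optical PSF) and corrupted with shot noise.
    Sphere centres are returned as ground truth for training/evaluation."""
    rng = np.random.default_rng(seed)
    vol = np.zeros(shape)
    zz, yy, xx = np.indices(shape)
    centers = []
    for _ in range(n_nuclei):
        c = [int(rng.integers(radius, s - radius)) for s in shape]
        mask = ((zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2) <= radius**2
        vol[mask] = 200.0
        centers.append(c)
    vol = gaussian_filter(vol, sigma=1.5)   # crude stand-in for optical blur
    vol += rng.poisson(5, shape)            # crude stand-in for shot noise
    return vol, centers
```

Because the centres and radii are known exactly, such volumes come with free, perfect labels, which is the point of training on synthetic data when real ground truth is scarce.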
74

Automatic segmentation and registration techniques for 3D face recognition. / CUHK electronic theses & dissertations collection

January 2008 (has links)
A 3D range image acquired by 3D sensing can explicitly represent a three-dimensional object's shape regardless of viewpoint and lighting variations. This technology has great potential to eventually resolve the face recognition problem. An automatic 3D face recognition system consists of three stages: facial region segmentation, registration and recognition, and the success of each stage influences the system's ultimate decision. Lately, research efforts have mainly been devoted to the final recognition stage. In this thesis, our study focuses on segmentation and registration techniques, with the aim of providing a more solid foundation for future 3D face recognition research. / We first propose an automatic 3D face segmentation method. This method is based on a deep understanding of the 3D face image: proportions of the facial and nose regions are taken from anthropometry to locate these regions. We evaluate this segmentation method on the FRGC dataset and obtain a success rate as high as 98.87% on nose tip detection; compared with results reported by other researchers in the literature, our method yields the highest score. / We then propose a fully automatic registration method that can handle facial expressions with high accuracy and robustness for 3D face image alignment. In our method the nose region, which is anatomically more rigid than other facial regions, is automatically located and analyzed to compute the precise location of a symmetry plane. Extensive experiments have been conducted using the FRGC (V1.0 and V2.0) benchmark 3D face datasets to evaluate the accuracy and robustness of our registration method. First, we compare its results with two other registration methods: one employs manually marked points on visualized face data, the other is based on a symmetry-plane analysis of the whole face region. Second, we combine the registration method with other face recognition modules and apply them in both face identification and verification scenarios. Experimental results show that our approach performs better than the other two methods. For example, a 97.55% Rank-1 identification rate and a 2.25% EER score are obtained using our method for registration and the PCA method for matching on the FRGC V1.0 dataset. These are the highest scores ever reported using the PCA method on similar datasets. / Tang, Xinmin. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3616. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 109-117). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
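The Rank-1 identification rate quoted above has a simple generic definition: the fraction of probes whose nearest gallery entry carries the correct identity. A minimal sketch of that evaluation metric, not the thesis code, follows.

```python
import numpy as np

def rank1_rate(dist, gallery_ids, probe_ids):
    """Rank-1 identification rate from a (n_probes x n_gallery) distance
    matrix: for each probe, take the nearest gallery entry and check
    whether its identity matches the probe's true identity."""
    nearest = np.argmin(dist, axis=1)
    return float(np.mean(np.asarray(gallery_ids)[nearest] == np.asarray(probe_ids)))
```

In a PCA-based matcher, `dist` would hold distances between probe and gallery projections in the eigenface subspace; here it is simply assumed given.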
75

Deformable surface recovery and its applications. / 可變形曲面恢復及應用 / CUHK electronic theses & dissertations collection / Ke bian xing qu mian hui fu ji ying yong

January 2009 (has links)
Recovering deformable surfaces is an interesting and beneficial research problem for computer vision and image analysis. An effective deformable surface recovery technique can be applied in a variety of applications, including surface reconstruction, digital entertainment, medical imaging and augmented reality. While considerable research effort has been devoted to deformable surface modeling and fitting, few schemes are available to tackle the deformable surface recovery problem efficiently. This thesis proposes a set of methods to solve 2D nonrigid shape recovery and 3D deformable surface tracking effectively, based on a robust progressive optimization scheme, and applies the presented techniques to a variety of real-world applications. / To tackle the 2D nonrigid shape recovery problem, the thesis first presents a novel progressive finite Newton optimization scheme based on local feature correspondences. The key of this approach is to formulate nonrigid shape recovery as an unconstrained quadratic optimization problem that has a closed-form solution for a given set of observations. / Without resorting to an explicit deformable mesh model, nonrigid surface detection can also be treated as a generic regression problem. A novel velocity coherence constraint is imposed on the deformable shape model to regularize the ill-posed optimization problem, and a progressive optimization scheme is employed to handle large outliers. / As for 3D deformable surface recovery, the key challenge arises from the difficulty of estimating a large number of 3D shape parameters from noisy observations. In this thesis, 3D deformable surface tracking is formulated as an unconstrained quadratic problem that can be solved very efficiently by solving a set of sparse linear equations; the robust progressive finite Newton method developed for nonrigid surface detection is again employed to handle large outliers. / For the appearance-based method, a deformable Lucas-Kanade algorithm is proposed that triangulates the template image into small patches and constrains the deformation through the second-order derivatives of the mesh vertices. It is formulated as a sparse regularized least-squares problem, which reduces the computational cost and memory requirements, and the inverse compositional algorithm is applied to solve the optimization problem efficiently. Furthermore, we present a fusion approach that takes advantage of both appearance information and local features. / In addition to the methodologies studied and evaluated in computer vision, this thesis also investigates nonrigid surface recovery in real-world multimedia applications, such as near-duplicate image retrieval and detection. In contrast to conventional approaches, the presented technique can recover an explicit mapping between two near-duplicate images with a few deformation parameters and find the correct correspondences from noisy data effectively. To make the proposed technique applicable to large-scale applications, an effective multilevel ranking scheme is presented that filters out irrelevant results in a coarse-to-fine manner, and a semi-supervised learning method is employed to overcome the extremely small training set by exploiting unlabeled data. Extensive evaluations show that the presented method is clearly more effective than conventional approaches. / Zhu, Jianke. / Adviser: Michael R. Lyu. / Source: Dissertation Abstracts International, Volume: 70-09, Section: B. / Thesis submitted in: December 2008. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 161-175). / Abstracts in English and Chinese. / School code: 1307.
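The recurring computational pattern in this abstract, an unconstrained quadratic problem solved in closed form through sparse linear equations, can be sketched generically: minimizing 0.5·xᵀAx − bᵀx with A sparse and positive definite reduces to one sparse solve of Ax = b. The toy tridiagonal system below stands in for the (much larger) shape-parameter system; it is not the thesis's actual matrix.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Minimise 0.5 * x^T A x - b^T x with A sparse SPD.  The minimiser is
# the solution of the sparse linear system A x = b, obtained directly,
# i.e. the "closed-form solution" the abstract refers to.
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # SPD tridiagonal
b = np.ones(n)

x = spsolve(A, b)                       # one sparse solve, no iterations
residual = float(np.linalg.norm(A @ x - b))
```

This is why the formulation is efficient: the cost is a single sparse factorization rather than a general nonlinear optimization, with the progressive Newton scheme handling outliers around it.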
76

Audio-guided video based face recognition. / CUHK electronic theses & dissertations collection

January 2006 (has links)
Face recognition is one of the most challenging computer vision research topics, since faces of even the same person appear different due to expression, pose, lighting, occlusion and many other confounding factors in real life. Over the past thirty years, a number of face recognition techniques have been proposed. However, these methods focus exclusively on image-based face recognition, which uses a still image as input. One problem with image-based face recognition is that a pre-recorded face photo can fool a camera into taking it for a live subject. A second problem is that image-based recognition accuracy is still too low for some practical applications compared with other high-accuracy biometric technologies. To alleviate these problems, video-based face recognition has recently been proposed. One of its major advantages is preventing fraudulent system penetration by pre-recorded facial images: the great difficulty of forging a video sequence (possible, but very difficult) in front of a live video camera helps ensure that the biometric data come from the user at the time of authentication. Another key advantage is that more information is available in a video sequence than in a single image; if this additional information can be properly extracted, recognition accuracy can be further increased. / In this thesis, we develop a new video-to-video face recognition algorithm [86]. In order to take advantage of the large amount of information in the video sequence while overcoming the processing speed and data size problems, we develop several new techniques, including temporal and spatial frame synchronization, multi-level subspace analysis, and multi-classifier integration for video sequence processing.
An aligned video sequence for each person is first obtained by applying temporal and spatial synchronization, which effectively establishes the face correspondence using both audio and video information; multi-level subspace analysis or multi-classifier integration is then employed for further analysis based on the synchronized sequence. The method preserves all the temporal-spatial information contained in a video sequence. Near-perfect classification results are obtained on the largest available face video database, XM2VTS. In addition, using a similar framework, two much-improved still-image-based face recognition algorithms [93][94] are developed by incorporating the Gabor representation, a nonparametric feature extraction method, and multiple-classifier integration techniques. Extensive experiments on two well-known face databases (XM2VTS and Purdue) clearly show the superiority of our new algorithms. / by Li Zhifeng. / "March 2006." / Adviser: Xiaoou Tang. / Source: Dissertation Abstracts International, Volume: 67-11, Section: B, page: 6621. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 105-114). / Abstracts in English and Chinese. / School code: 1307.
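Subspace analysis of the kind mentioned above builds on a basic eigen-subspace (PCA) step: fit a low-dimensional linear subspace to training faces and project new samples into it before matching. The sketch below shows only that generic building block, under the assumption of row-vector samples; it is not the thesis's multi-level algorithm.

```python
import numpy as np

def pca_subspace(X, k):
    """Fit a k-dimensional PCA subspace to row-vector samples X.
    Returns the sample mean and the top-k principal directions,
    computed via SVD of the centred data matrix."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def project(x, mu, comps):
    """Project a sample into the fitted subspace (its k coefficients)."""
    return comps @ (x - mu)
```

A multi-level variant would repeat such projections at several levels of the representation; classification then happens on the low-dimensional coefficients rather than raw pixels.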
77

Optimising alignment of a multi-element telescope

Kamga, Morgan M. 23 April 2013 (has links)
A thesis submitted to the Faculty of Science in fulfilment of the requirements of the degree of Doctor of Philosophy, School of Computational and Applied Mathematics, University of the Witwatersrand, September 20, 2012 / In this thesis, we analyse reasons for poor image quality on the Southern African Large Telescope (SALT) and analyse control methods for the segmented primary mirror. Errors in the control algorithm of SALT (circa 2007) are discovered. More powerful numerical procedures are developed; in particular, we show that the singular value decomposition method is preferable to the normal equations method used on SALT, and that it does not require physical constraints on some mirror parameters. Sufficiently accurate numerical procedures impose constraints on the precision of segment actuator displacements and edge sensors. We analyse the data filtering method on SALT, find that it is inadequate for control, and give a filtering method that achieves improved control. Finally, we give a new method (gradient flow) that gives acceptable control from an arbitrary, imprecise initial alignment.
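The SVD-versus-normal-equations point can be illustrated on any least-squares fit Ax ≈ b: forming AᵀA squares the condition number of the problem, while an SVD-based solver works on A directly. The matrix below is a random stand-in, not SALT's actual control matrix.

```python
import numpy as np

# Solve the least-squares problem A x ~ b two ways.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true

# Normal equations: solve (A^T A) x = A^T b.  cond(A^T A) = cond(A)^2,
# so precision degrades much faster as A becomes ill-conditioned.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# SVD-based solve (np.linalg.lstsq) avoids forming A^T A.
x_svd, *_ = np.linalg.lstsq(A, b, rcond=None)
```

For a well-conditioned A both recover x_true, but the squared condition number of AᵀA is why SVD is the safer choice when actuator and sensor precision is at a premium.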
78

GPGPU : Bildbehandling på grafikkort (Image Processing on Graphics Cards)

Hedborg, Johan January 2006 (has links)
GPGPU is a collective term for research involving general-purpose computation on graphics cards. A modern graphics card typically provides more than ten times the computational power of an ordinary PC processor, a result of the high demands for speed and image quality in computer games.

This thesis investigates the possibility of exploiting this computational power for image processing. Three well-known methods were implemented on a graphics card: FFT (Fast Fourier Transform), KLT (Kanade-Lucas-Tomasi point tracking) and the generation of scale pyramids. All algorithms were successfully implemented and are three to ten times faster than the corresponding optimized CPU implementations.
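Of the three ported methods, scale-pyramid generation is the simplest to sketch. The CPU reference below builds each level by 2x2 block averaging; it only illustrates what the GPU version computes, not how the thesis implements it on graphics hardware.

```python
import numpy as np

def scale_pyramid(img, levels=4):
    """Build a scale pyramid by repeated 2x2 block averaging.
    Level 0 is the input image; each subsequent level halves both
    dimensions (odd trailing rows/columns are cropped)."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyramid[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        a = a[:h, :w]
        pyramid.append((a[0::2, 0::2] + a[1::2, 0::2]
                        + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return pyramid
```

On a GPU the same averaging maps naturally onto texture filtering, which is one reason this operation gains so much from the hardware.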
79

Multidimensional MRI of Cardiac Motion : Acquisition, Reconstruction and Visualization

Sigfridsson, Andreas January 2006 (has links)
Methods for measuring deformation and motion of the human heart in vivo are crucial in the assessment of cardiac function. Applications ranging from basic physiological research, through early detection of disease, to follow-up studies all benefit from improved methods of measuring the dynamics of the heart. This thesis presents new methods for acquisition, reconstruction and visualization of cardiac motion and deformation based on magnetic resonance imaging.

Local heart wall deformation can be quantified in a strain-rate tensor field. This tensor field describes the local deformation excluding rigid-body translation and rotation. The drawback of studying this tensor-valued quantity, as opposed to a velocity vector field, is the high dimensionality of the tensor. The problem of visualizing the tensor field is approached by combining a local visualization that displays all degrees of freedom for a single tensor with an overview visualization using a scalar-field representation of the complete tensor field. The scalar field is obtained by iterated adaptive filtering of a noise field.

Several methods for synchronizing the magnetic resonance imaging acquisition to the heart beat have previously been used to resolve individual heart phases from multiple cardiac cycles. In the present work, one of these techniques is extended to resolve two temporal dimensions simultaneously, the cardiac cycle and the respiratory cycle. This is combined with volumetric imaging to produce a five-dimensional data set. Furthermore, the acquisition order is optimized to reduce eddy-current artifacts.

The five-dimensional acquisition either requires very long scan times or can only provide low spatiotemporal resolution. A method that exploits the variation in temporal bandwidth over the imaging volume, k-t BLAST, is described and extended to two simultaneous temporal dimensions.
The new method, k-t2 BLAST, allows simultaneous reduction of scan time and improvement of spatial resolution. / Report code: LIU-TEK-LIC-2006:43
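The strain-rate tensor mentioned above has a standard formulation: it is the symmetric part of the velocity-gradient tensor, which discards rigid rotation by construction. The sketch below computes it from a 3-D velocity field on a regular grid; it is the generic continuum-mechanics definition, not the thesis's MRI pipeline.

```python
import numpy as np

def strain_rate_tensor(vx, vy, vz, spacing=1.0):
    """Strain-rate tensor field from velocity components of shape (Z, Y, X):
    T_ij = 0.5 * (dv_i/dx_j + dv_j/dx_i).  The antisymmetric part of the
    velocity gradient (rigid rotation) cancels in this symmetrisation."""
    # np.gradient differentiates along axes (0, 1, 2) = (z, y, x),
    # so spatial component j in (x, y, z) maps to array axis 2 - j.
    grads = [np.gradient(v, spacing) for v in (vx, vy, vz)]
    T = np.empty(vx.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            T[..., i, j] = 0.5 * (grads[i][2 - j] + grads[j][2 - i])
    return T
```

For a pure shear flow vx = y, for example, the only nonzero entries are T_xy = T_yx = 0.5, while a uniform translation gives a zero tensor everywhere, exactly the "deformation without rigid motion" property the abstract describes.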
80

Dynamic Infrared Simulation : A Feasibility Study of a Physically Based Infrared Simulation Model

Dehlin, Jonas, Löf, Joakim January 2006 (has links)
The increased use of infrared sensors by pilots has created a growing demand for simulated environments based on infrared radiation. This has led to an increased need for Saab to refine their existing model for simulating real-time infrared imagery, which motivated this thesis. Saab develops the Gripen aircraft and provides training simulators where pilots can train in a realistic environment. The new model is required to be based on the real-world behavior of infrared radiation and, unlike Saab's existing model, to have dynamically changeable attributes.

This thesis seeks to develop a simulation model compliant with the requirements presented by Saab, and to implement a test environment demonstrating the features and capabilities of the proposed model. Throughout the development of the model, the pilot training value has been kept in mind.

The first part of the thesis consists of a literature study to build a theoretical base for the rest of the work. This is followed by the development of the simulation model itself and a subsequent implementation thereof. The simulation model and the test implementation are evaluated as the final step conducted within the framework of this thesis.

The main conclusions of this thesis are, first of all, that the proposed simulation model does in fact have its foundation in physics. It is further concluded that certain attributes of the model, such as time of day, are dynamically changeable as requested, and that the test implementation has been feasibly integrated with the current simulation environment.

A plan for how to proceed has also been developed. It suggests future work with the proposed simulation model, since the evaluation shows that it performs well in comparison to the existing model as well as other products on the market.
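Any physically based infrared model ultimately rests on Planck's law, which gives the spectral radiance a black body emits as a function of wavelength and temperature. The function below is standard physics, not a reproduction of Saab's model; a real simulator would further apply surface emissivity, atmospheric transmission and sensor response.

```python
import math

def planck_spectral_radiance(wavelength_m, T_kelvin):
    """Black-body spectral radiance (W * sr^-1 * m^-3) from Planck's law:
    B(lambda, T) = (2 h c^2 / lambda^5) / (exp(h c / (lambda k T)) - 1)."""
    h = 6.62607015e-34   # Planck constant, J*s
    c = 2.99792458e8     # speed of light, m/s
    k = 1.380649e-23     # Boltzmann constant, J/K
    lam = wavelength_m
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T_kelvin))

# Example: radiance at 10 um (thermal IR band) for a surface at 300 K.
r = planck_spectral_radiance(10e-6, 300.0)
```

Dynamically changeable attributes such as time of day enter through the temperature argument: as surface temperatures evolve, the emitted radiance follows directly from this law.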
