  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Automatic caption generation for news images

Feng, Yansong January 2011 (has links)
This thesis is concerned with the task of automatically generating captions for images, which is important for many image-related applications. Automatic description generation for video frames would help security authorities manage and utilize large volumes of monitoring data more efficiently. Image search engines could potentially benefit from image descriptions in supporting more accurate and targeted queries for end users. Importantly, generating image descriptions would aid blind or partially sighted people who cannot access visual information in the same way as sighted people can. However, previous work has relied on fine-grained resources, manually created for specific domains and applications. In this thesis, we explore the feasibility of automatic caption generation for news images in a knowledge-lean way. We depart from previous work in that we learn a model of caption generation from publicly available data that has not been explicitly labelled for our task. The model consists of two components, namely extracting image content and rendering it in natural language. Specifically, we exploit data resources where images and their textual descriptions co-occur naturally. We present a new dataset consisting of news articles, images, and their captions that we retrieved from the BBC News website. Rather than laboriously annotating images with keywords, we simply treat the captions as the labels. We show that it is possible to learn the visual and textual correspondence under such noisy conditions by extending an existing generative annotation model (Lavrenko et al., 2003). We also find that the accompanying news documents substantially complement the extraction of the image content.
In order to provide better modelling and representation of image content, we propose a probabilistic image annotation model that exploits the synergy between visual and textual modalities under the assumption that images and their textual descriptions are generated by a shared set of latent variables (topics). Using Latent Dirichlet Allocation (Blei and Jordan, 2003), we represent the visual and textual modalities jointly as a probability distribution over a set of topics. Our model takes these topic distributions into account while finding the most likely keywords for an image and its associated document. The availability of news documents in our dataset allows us to perform the caption generation task in a fashion akin to text summarization, save for one important difference: our model is not solely based on text but uses the image in order to select content from the document that should be present in the caption. We propose both extractive and abstractive caption generation models to render the extracted image content in natural language without relying on rich knowledge resources, sentence templates or grammars. The backbone of both approaches is our topic-based image annotation model. Our extractive models examine how best to select sentences that overlap in content with our image annotation model. We adapt an existing abstractive headline generation model to our scenario by incorporating visual information. Our own model operates over image description keywords and document phrases by taking dependency and word order constraints into account. Experimental results show that both approaches can generate human-readable captions for news images. Our phrase-based abstractive model manages to yield captions as informative as those written by BBC journalists.
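The keyword-selection step described above can be sketched numerically: assuming a fitted LDA-style model has already produced a topic mixture for an image-document pair and per-topic word distributions, the most likely caption keywords follow by marginalising over the shared topics. All names and numbers below are illustrative toy values, not the thesis's actual model or vocabulary.

```python
import numpy as np

# Hypothetical toy setup: 3 latent topics shared by an image and its
# news document, and a 5-word caption vocabulary.
vocab = ["election", "minister", "crowd", "stadium", "goal"]

# p(topic | image + document): a mixture assumed to come from an
# LDA-style inference step over both modalities.
topic_dist = np.array([0.7, 0.2, 0.1])

# p(word | topic): one row per topic, columns follow `vocab`.
word_given_topic = np.array([
    [0.40, 0.35, 0.15, 0.05, 0.05],   # "politics" topic
    [0.05, 0.05, 0.30, 0.30, 0.30],   # "sports" topic
    [0.20, 0.20, 0.20, 0.20, 0.20],   # diffuse background topic
])

# Score each keyword by marginalising over the shared topics:
# p(w) = sum_t p(t | image, doc) * p(w | t)
scores = topic_dist @ word_given_topic
ranked = [vocab[i] for i in np.argsort(scores)[::-1]]
```

With a politics-heavy topic mixture, politics vocabulary ranks first, which is the intended behaviour of scoring keywords through the shared topic distribution.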
132

Body Image as Mediated by Age, Sex, and Relationship Status

Cooper, Caren C. (Caren Connie) 12 1900 (has links)
Traditionally, body image research has focused on young women. However, there are indications of cultural shifts which extend physical appearance pressures to both men and women, as well as to middle-aged and older adults. Two hundred and ten subjects were administered objective body image measures including the Figure Rating Scale, the Body Shape Questionnaire, and the Multidimensional Body-Self Relations Questionnaire, as well as projective measures including the Holtzman Inkblot Technique and the Draw-A-Person. The NEO-Five Factor Inventory and the Social Anxiety Subscale were also used to explore variables which might covary with body image. A 3 X 2 X 2 Multivariate Analysis of Covariance (MANCOVA) was utilized with social desirability as the covariate.
133

Subjective analysis of image coding errors

26 February 2009 (has links)
D.Ing. / The rapid adoption of digital images and the necessity to compress them have created the need for the development of image quality metrics. Subjective evaluation is the most accurate of the image quality evaluation methods, but it is time-consuming, tedious and expensive. Meanwhile, widely used objective evaluations such as the mean squared error have proven not to assess image quality the way a human observer does. Since the human observer is the final receiver of most visual information, taking into account the way humans perceive visual information will be greatly beneficial for the development of an objective image quality metric that reflects the subjective evaluation of distorted images. Many attempts have been made in the past to develop distortion metrics that model the processes of the human visual system, and many promising results have been achieved. However, most of these metrics were developed with the use of simple visual stimuli, and most of these models were based on visibility threshold measures, which are not representative of the distortion introduced in complex natural compressed images. In this thesis, a new image quality metric based on human visual system (HVS) properties as related to image perception is proposed. This metric provides an objective quality measure for the subjective quality of coded natural images with suprathreshold degradation. The proposed model specifically takes into account the structure of natural images by analyzing them into their different components, namely the edge, texture and background (smooth) components, as these components influence the formation of perception in the HVS differently. Hence the HVS sensitivity to errors in images depends on whether these errors lie in more active areas of the image, such as strong edges or texture, or in less active areas such as smooth regions.
These components are then summed to obtain the combined image, which represents the way the HVS is postulated to perceive the image. Extensive subjective evaluation was carried out for the different image components and the combined image, obtained for images coded at different qualities. The objective measure (RMSE) for these images was also calculated. A transformation between the subjective and objective quality measures was performed, from which an objective metric that can predict the human perception of image quality was developed. The metric was shown to provide an accurate prediction of image quality, which agrees well with the prediction provided by the expensive and lengthy process of subjective evaluation. Furthermore, it has the desired properties of the RMSE of being easier and cheaper to implement. Therefore, this metric will be useful for evaluating error mechanisms present in proposed coding schemes.
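A toy version of such a component-weighted error measure might look as follows. The gradient-based split into "active" and "smooth" pixels, the weight values, and the threshold are all illustrative assumptions, not the thesis's calibrated subjective-to-objective transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_rmse(original, coded, active_weight=2.0, smooth_weight=0.5):
    """Toy perceptually weighted RMSE: errors at high-gradient ('active')
    pixels such as edges and texture are weighted differently from
    errors in smooth regions. Weights and threshold are illustrative."""
    gy, gx = np.gradient(original.astype(float))
    activity = np.hypot(gx, gy)
    weights = np.where(activity > activity.mean(), active_weight, smooth_weight)
    err = (original.astype(float) - coded.astype(float)) ** 2
    return float(np.sqrt((weights * err).sum() / weights.sum()))

original = rng.integers(0, 256, size=(32, 32))
coded = original + rng.normal(0.0, 2.0, size=(32, 32))  # mimic coding noise
score = weighted_rmse(original, coded)
```

An identical image pair scores exactly zero, and any degradation yields a positive score, as with plain RMSE; the difference is only in how errors are pooled across image regions.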
134

Image Completion: Comparison of Different Methods and Combination of Techniques

LeBlanc, Lawrence 20 May 2011 (has links)
Image completion is the process of filling missing regions of an image based on the known sections of the image. This technique is useful for repairing damaged images or removing unwanted objects from images. Research on this technique is plentiful. This thesis compares three different approaches to image completion. In addition, a new method is proposed which combines features from two of these algorithms to improve efficiency.
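As a minimal illustration of what "filling missing regions based on the known sections" means in practice, here is a naive diffusion-based fill; it is a baseline sketch under simple assumptions, not one of the three methods compared in the thesis.

```python
import numpy as np

def diffusion_fill(image, mask, iterations=200):
    """Naive diffusion-based completion: repeatedly replace each missing
    pixel (mask == True) with the average of its 4 neighbours, leaving
    known pixels untouched. A toy baseline for image completion."""
    filled = image.astype(float).copy()
    filled[mask] = filled[~mask].mean()        # crude initialisation
    for _ in range(iterations):
        up    = np.roll(filled, -1, axis=0)
        down  = np.roll(filled,  1, axis=0)
        left  = np.roll(filled, -1, axis=1)
        right = np.roll(filled,  1, axis=1)
        avg = (up + down + left + right) / 4.0
        filled[mask] = avg[mask]               # only update the hole
    return filled

# Two-tone test image with a square hole straddling the tone boundary.
image = np.zeros((16, 16)); image[:, 8:] = 100.0
mask = np.zeros_like(image, dtype=bool); mask[6:10, 6:10] = True
result = diffusion_fill(image, mask)
```

The fill smoothly interpolates the hole from its boundary; structure- or exemplar-based methods of the kind the thesis compares are needed when texture or edges must be continued into the missing region.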
135

Conception d'un cadre d'optimisation de fonctions d'énergies : application au traitement d'images / New framework design for optimizing energy functions : application to image processing

Kouzana, Amira 14 December 2018 (has links)
We propose a new formulation of the energy minimisation paradigm for image segmentation. The segmentation problem is modelled as a non-cooperative strategic game, and the optimization process is interpreted as the search for a Nash equilibrium. In this form the problem remains combinatorial, so we propose an efficient Branch and Bound algorithm to solve it exactly. To illustrate the performance of the proposed framework, we apply it to both convex and non-convex regularized segmentation models.
136

THREE DIMENSIONAL SEGMENTATION AND DETECTION OF FLUORESCENCE MICROSCOPY IMAGES

David J. Ho (5929748) 10 June 2019 (has links)
Fluorescence microscopy is an essential tool for imaging subcellular structures in tissue. Two-photon microscopy enables imaging deeper into tissue using near-infrared light. The use of image analysis and computer vision tools to detect and extract information from the images remains challenging, owing to microscopy volumes degraded by blurring and noise during image acquisition and to the complexity of the subcellular structures present in the volumes. In this thesis we describe methods for segmentation and detection in fluorescence microscopy images in 3D. We segment tubule boundaries, distinguishing them from other structures using three-dimensional steerable filters; these filters can capture the strong directional tendencies of voxels on a tubule boundary. We also describe multiple three-dimensional convolutional neural networks (CNNs) for segmenting nuclei. Training CNNs usually requires a large set of labeled images, which is extremely difficult to obtain for biomedical images. We describe methods to generate synthetic microscopy volumes and to train our 3D CNNs on these synthetic volumes without using any real ground truth volumes. The locations and sizes of the nuclei are detected using one of our CNNs, known as the Sphere Estimation Network. Our methods are evaluated using real ground truth volumes and are shown to outperform other techniques.
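Generating synthetic training volumes of the kind described above can be sketched as placing bright spheres ("nuclei") in a noisy 3D array. The volume shape, sphere centres, radius and noise level below are illustrative assumptions, not the thesis's generation pipeline.

```python
import numpy as np

def synthetic_nuclei_volume(shape=(32, 32, 32), centers=None, radius=4,
                            noise_sigma=0.05, seed=0):
    """Toy synthetic fluorescence volume: bright spheres on a dark
    background plus Gaussian noise, clipped to [0, 1]. A simplified
    stand-in for synthetic volumes used to train 3D CNNs."""
    rng = np.random.default_rng(seed)
    if centers is None:
        centers = [(8, 8, 8), (20, 22, 16)]
    zz, yy, xx = np.indices(shape)
    volume = np.zeros(shape, dtype=float)
    for cz, cy, cx in centers:
        dist2 = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2
        volume[dist2 <= radius ** 2] = 1.0     # solid bright sphere
    volume += rng.normal(0.0, noise_sigma, size=shape)
    return np.clip(volume, 0.0, 1.0)

vol = synthetic_nuclei_volume()
```

Because the sphere centres and radii are known by construction, such volumes come with free, exact ground truth, which is what makes training without manually labeled real volumes possible.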
137

Two approaches to sparsity for image restoration.

January 2013 (has links)
Sparsity has played an important role in recent developments of various image restoration techniques. In this MPhil study, we focus on two different types of image restoration problems, which are related by their sparsity assumptions. Specifically, in the first image restoration problem, the signal (i.e. the restored image) is itself sparse in some transform domain, e.g. wavelets. In the second part of this study, the signal is not sparse in the traditional sense, but it can be parametrized with a few parameters and hence has a sparse representation. Our goal is to tell a "tale of two cities" and to show the connections between the two sparse image restoration problems in this thesis. 
In Chapter 2, we propose a novel algorithmic framework to solve image restoration problems under sparsity assumptions. As usual, the reconstructed image is the minimizer of an objective functional that consists of a data fidelity term and an ℓ₁ regularization term. However, instead of estimating the reconstructed image that minimizes the objective functional directly, we focus on the restoration process that maps the degraded measurements to the reconstruction. Our idea amounts to parameterizing the process as a linear combination of a few elementary thresholding functions (LET) and solving for the linear weighting coefficients by minimizing the objective functional. It is then possible to update the thresholding functions and to iterate this process (i-LET). The key advantage of such a linear parametrization is that the problem size reduces dramatically: each time we only need to solve an optimization problem over the dimension of the linear coefficients (typically less than 10) instead of the whole image dimension. With elementary thresholding functions satisfying certain constraints, global convergence of the iterated LET algorithm is guaranteed. Experiments on several test images over a wide range of noise levels and different types of convolution kernels clearly indicate that the proposed framework usually outperforms state-of-the-art algorithms in terms of both CPU time and number of iterations. 
In Chapter 3, we extend the sampling framework for signals with finite rate of innovation to a specific class of two-dimensional curves, which are defined implicitly as the roots of a mask function. Here the mask function has a parametric representation as a weighted sum of a finite number of sinusoids and therefore has finite rate of innovation [1]. The associated indicator image of the defined curve is not bandlimited and cannot be perfectly reconstructed based on the classical Shannon sampling theorem. Yet we show that it is possible to devise a sampling scheme and obtain a perfect reconstruction from a finite number of (noiseless) samples of the indicator image with the annihilating filter method (also known as Prony's method). Robust reconstruction algorithms for noisy samples are also developed. Furthermore, the new spatial-domain interpretation of the annihilating filter enables us to generalize the exact FRI curve model to characterize edges of a natural image. We can impose the annihilation constraint to preserve edges in various image processing problems; we exemplify the effectiveness of the annihilation constraint with a potential application in image up-sampling. 
Pan, Hanjie. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 69-74). / Abstracts also in Chinese. / Acknowledgments --- p.iii / Abstract --- p.vii / Contents --- p.xii / List of Figures --- p.xv / List of Tables --- p.xvii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Sampling Sparse Signals --- p.1 / Chapter 1.2 --- Thesis Organizations and Contributions --- p.3 / Chapter 2 --- An Iterated Linear Expansion of Thresholds for ℓ₁-based Image Restoration --- p.5 / Chapter 2.1 --- Introduction --- p.5 / Chapter 2.1.1 --- Problem Description --- p.5 / Chapter 2.1.2 --- Approaches to Solve the Problem --- p.6 / Chapter 2.1.3 --- Proposed Approach --- p.8 / Chapter 2.1.4 --- Organization of the Chapter --- p.9 / Chapter 2.2 --- Basic Ingredients --- p.9 / Chapter 2.2.1 --- Iterative Reweighted Least Square Methods --- p.9 / Chapter 2.2.2 --- Linear Expansion of Thresholds (LET) --- p.11 / Chapter 2.3 --- Iterative LET Restoration --- p.15 / Chapter 2.3.1 --- Selection of i-LET Bases --- p.15 / Chapter 2.3.2 --- Convergence of the i-LET Scheme --- p.16 / Chapter 2.3.3 --- Examples of i-LET Bases --- p.18 / Chapter 2.4 --- Experimental Results --- p.23 / Chapter 2.4.1 --- Deconvolution with Decimated Wavelet Transform --- p.24 / Chapter 2.4.2 --- Deconvolution with Redundant Wavelet Transform --- p.28 / Chapter 2.4.3 --- Algorithm Complexity Analysis --- p.29 / Chapter 2.4.4 --- Choice of Regularization Weight λ --- p.30 / Chapter 2.4.5 --- Deconvolution with Cycle Spinnings --- p.30 / Chapter 2.5 --- Summary --- p.31 / Chapter 3 --- Sampling Curves with Finite Rate of Innovation --- p.33 / Chapter 3.1 --- Introduction --- p.33 / Chapter 3.2 --- Two-dimensional Curves with Finite Rate of Innovation --- p.34 / Chapter 3.2.1 --- FRI Curves --- p.34 / Chapter 3.2.2 --- Interior Indicator Image --- p.35 / Chapter 3.2.3 --- Acquisition of Indicator Image Samples --- p.36 / Chapter 3.3 --- Reconstruction of the Annihilable Curves --- p.37 / Chapter 3.3.1 --- Annihilating Filter Method --- p.37 / Chapter 3.3.2 --- Relate Fourier Transform with Spatial Domain Samples --- p.39 / Chapter 3.3.3 --- Reconstruction of Annihilation Coefficients --- p.39 / Chapter 3.3.4 --- Reconstruction with Model Mismatch --- p.42 / Chapter 3.3.5 --- Retrieval of the Annihilable Curve Amplitudes --- p.46 / Chapter 3.4 --- Dealing with Non-ideal Low-pass Filtered Samples --- p.48 / Chapter 3.5 --- Generalization of the FRI Framework for Natural Images --- p.49 / Chapter 3.5.1 --- Spatial Domain Interpretation of the Annihilation Equation --- p.50 / Chapter 3.5.2 --- Annihilable Curve Approximation of Image Edges --- p.51 / Chapter 3.5.3 --- Up-sampling with Annihilation Constraint --- p.53 / Chapter 3.6 --- Conclusion --- p.57 / Chapter 4 --- Conclusions --- p.59 / Chapter 4.1 --- Thesis Summary --- p.59 / Chapter 4.2 --- Perspectives --- p.60 / Chapter A --- Proofs and Derivations --- p.61 / Chapter A.1 --- Proof of Lemma 3 --- p.61 / Chapter A.2 --- Proof of Theorem 2 --- p.62 / Chapter A.3 --- Efficient Implementation of IRLS Inner Loop with Matlab --- p.63 / Chapter A.4 --- Derivations of the Sampling Formula (3.7) --- p.64 / Chapter A.5 --- Correspondence between the Spatial and Fourier Domain Samples --- p.65 / Chapter A.6 --- Optimal Post-filter Applied to Non-ideal Samples --- p.66 / Bibliography --- p.69
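The annihilating filter (Prony) method underlying the curve-sampling chapter can be demonstrated in the classic one-dimensional FRI setting: recovering the frequencies of a sum of K complex exponentials from 2K + 1 uniform samples. This is the textbook case, not the thesis's two-dimensional curve model; the frequencies below are arbitrary test values.

```python
import numpy as np

# Signal with finite rate of innovation: K = 2 complex exponentials,
# fully determined by 2 frequencies, sampled at 2K + 1 points.
K = 2
true_freqs = [0.15, 0.31]                      # cycles/sample (illustrative)
n = np.arange(2 * K + 1)
x = sum(np.exp(2j * np.pi * f * n) for f in true_freqs)

# Annihilating filter h = [1, h1, h2]: (h * x)[m] = 0 for m = K..2K.
# Each equation reads x[m] + h1*x[m-1] + h2*x[m-2] = 0.
A = np.array([[x[m - 1], x[m - 2]] for m in range(K, 2 * K + 1)])
b = -x[K:2 * K + 1]
h_tail, *_ = np.linalg.lstsq(A, b, rcond=None)

# The filter's roots lie on the unit circle at the unknown frequencies.
roots = np.roots(np.concatenate(([1.0], h_tail)))
est_freqs = sorted(np.angle(roots) / (2 * np.pi))
```

With noiseless samples the linear system is consistent and the frequencies are recovered exactly (up to floating-point error), which is the "perfect reconstruction from finitely many samples" property the chapter extends to curves.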
138

Reconstruction of high-resolution image from movie frames.

January 2003 (has links)
by Ling Kai Tung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 44-45). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.7 / Chapter 2 --- Fundamentals --- p.9 / Chapter 2.1 --- Digital image representation --- p.9 / Chapter 2.2 --- Motion Blur --- p.13 / Chapter 3 --- Methods for Solving Nonlinear Least-Squares Problem --- p.15 / Chapter 3.1 --- Introduction --- p.15 / Chapter 3.2 --- Nonlinear Least-Squares Problem --- p.15 / Chapter 3.3 --- Gauss-Newton-Type Methods --- p.16 / Chapter 3.3.1 --- Gauss-Newton Method --- p.16 / Chapter 3.3.2 --- Damped Gauss-Newton Method --- p.17 / Chapter 3.4 --- Full Newton-Type Methods --- p.17 / Chapter 3.4.1 --- Quasi-Newton methods --- p.18 / Chapter 3.5 --- Constrained problems --- p.19 / Chapter 4 --- Reconstruction of High-Resolution Images from Movie Frames --- p.20 / Chapter 4.1 --- Introduction --- p.20 / Chapter 4.2 --- The Mathematical Model --- p.22 / Chapter 4.2.1 --- The Discrete Model --- p.23 / Chapter 4.2.2 --- Regularization --- p.24 / Chapter 4.3 --- Acquisition of Low-Resolution Movie Frames --- p.25 / Chapter 4.4 --- Experimental Results --- p.25 / Chapter 4.5 --- Concluding Remarks --- p.26 / Chapter 5 --- Constrained Total Least-Squares Computations for High-Resolution Image Reconstruction --- p.31 / Chapter 5.1 --- Introduction --- p.31 / Chapter 5.2 --- The Mathematical Model --- p.32 / Chapter 5.3 --- Numerical Algorithm --- p.37 / Chapter 5.4 --- Numerical Results --- p.39 / Chapter 5.5 --- Concluding Remarks --- p.39 / Bibliography --- p.44
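The damped Gauss-Newton method listed in Chapter 3 of this thesis can be sketched on a small nonlinear least-squares problem: solve the linearised system J dp = -r at each step, then halve the step until the cost decreases. The exponential model, data and starting point below are illustrative, not the thesis's super-resolution formulation.

```python
import numpy as np

def damped_gauss_newton(residual, jacobian, p0, iters=100):
    """Damped Gauss-Newton for min ||r(p)||^2: at each iteration solve
    the linearised least-squares subproblem, then backtrack (halve the
    step) until the cost decreases. A toy sketch, not the thesis code."""
    p = np.asarray(p0, dtype=float)
    cost = lambda q: float(np.sum(residual(q) ** 2))
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        dp, *_ = np.linalg.lstsq(J, -r, rcond=None)
        step = 1.0
        while cost(p + step * dp) > cost(p) and step > 1e-8:
            step *= 0.5                       # damping: backtrack
        p = p + step * dp
        if np.linalg.norm(step * dp) < 1e-12:
            break                             # converged
    return p

# Fit y = a * exp(b * t) to noise-free data with true (a, b) = (2, -0.5).
t = np.linspace(0.0, 4.0, 20)
y = 2.0 * np.exp(-0.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack((np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)))
p_hat = damped_gauss_newton(residual, jacobian, [1.5, -0.3])
```

On this zero-residual problem the damped iteration converges to the true parameters; the same structure, with a regularization term and much larger unknowns, underlies the high-resolution reconstruction model.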
139

Efficient photometric stereo on glossy surfaces with wide specular lobes.

January 2008 (has links)
Chung, Hin Shun. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 40-43). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Lambertian photometric stereo --- p.1 / Chapter 1.2 --- Non-Lambertian photometric stereo --- p.3 / Chapter 1.3 --- Large specular lobe problems --- p.4 / Chapter 2 --- Related Work --- p.9 / Chapter 2.1 --- Lambertian photometric stereo --- p.9 / Chapter 2.2 --- Non-Lambertian photometric stereo --- p.9 / Chapter 2.2.1 --- Analytic models to reconstruct non-Lambertian surface --- p.9 / Chapter 2.2.2 --- Reference object based --- p.10 / Chapter 2.2.3 --- Highlight removal before shape reconstruction --- p.11 / Chapter 2.2.4 --- Polarization based method --- p.12 / Chapter 2.2.5 --- Specularity fitting method --- p.12 / Chapter 2.2.6 --- Photometric stereo with shadow --- p.12 / Chapter 3 --- Our System --- p.13 / Chapter 3.1 --- Estimation of global parameters --- p.14 / Chapter 3.1.1 --- Shadow separation --- p.16 / Chapter 3.1.2 --- Separation edges of shadow and edges of foreground object --- p.16 / Chapter 3.1.3 --- Normal estimation using shadow boundary --- p.20 / Chapter 3.1.4 --- Global parameter estimation and refinement --- p.22 / Chapter 3.2 --- Surface shape and texture reconstruction --- p.24 / Chapter 3.3 --- Single material results --- p.25 / Chapter 4 --- Comparison between Our Method and Direct Specularity Fitting Method --- p.29 / Chapter 4.1 --- Summary of direct specularity fitting method [9] --- p.29 / Chapter 4.2 --- Comparison results --- p.31 / Chapter 5 --- Reconstructing Multiple-Material Surfaces --- p.33 / Chapter 5.1 --- Multiple material results --- p.34 / Chapter 6 --- Conclusion --- p.38 / Bibliography --- p.39 / Chapter A --- Proof of Surface Normal Projecting to Gradient of Cast Shadow Boundary --- p.43
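The Lambertian photometric stereo baseline listed in Chapter 1.1 reduces, per pixel, to a linear least-squares solve once three or more light directions are known: intensity i = L (rho n), so the albedo-scaled normal is recovered directly. The light directions, normal and albedo below are made-up test values, and shadows and specularities (the thesis's actual subject) are ignored.

```python
import numpy as np

# Three known, non-coplanar light directions (rows), unit-normalised.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

# Ground-truth surface normal and Lambertian albedo for one pixel.
true_n = np.array([0.3, -0.2, 0.933])
true_n = true_n / np.linalg.norm(true_n)
rho = 0.8

# Render the three observed intensities (no shadows, no specular lobe).
i = L @ (rho * true_n)

# Photometric stereo: solve L g = i for g = rho * n, then factor.
g, *_ = np.linalg.lstsq(L, i, rcond=None)
albedo = np.linalg.norm(g)
normal = g / albedo
```

Glossy surfaces with wide specular lobes violate the i = L (rho n) model at many pixels, which is why the thesis estimates global reflectance parameters and treats highlights and shadows explicitly instead of relying on this plain least-squares solve.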
140

Scanline calculation of radial influence for image processing

Ilbery, Peter William Mitchell, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2008 (has links)
Efficient methods for the calculation of radial influence are described and applied to two image processing problems, digital halftoning and mixed content image compression. The methods operate recursively on scanlines of image values, spreading intensity from scanline to scanline in proportions approximating a Cauchy distribution. For error diffusion halftoning, experiments show that this recursive scanline spreading provides an ideal pattern of distribution of error. Error diffusion using masks generated to provide this distribution of error alleviates error diffusion "worm" artifacts. The recursive scanline-by-scanline application of a spreading filter and a complementary filter can be used to reconstruct an image from its horizontal and vertical pixel difference values. When combined with the use of a downsampled image, the reconstruction is robust to incomplete and quantized pixel difference data. Such gradient field integration methods are described in detail, proceeding from the representation of images by gradient values along contours through to a variety of efficient algorithms. Comparisons show that this form of gradient field integration by convolution provides reduced distortion compared to other high-speed gradient integration methods; the reduction can be attributed to success in approximating a radial pattern of influence. An approach to edge-based image compression is proposed using integration of gradient data along edge contours together with regularly sampled low-resolution image data. This edge-based image compression model is similar to previous sketch-based image coding methods but allows a simple and efficient calculation of an edge-based approximation image. A low-complexity implementation of this approach to compression is described. The implementation extracts and represents gradient data along edge contours as pixel differences and calculates an approximate image by integrating the pixel difference data by scanline convolution.
The implementation was developed as a prototype for compression of mixed content image data in printing systems. Compression results are reported and strengths and weaknesses of the implementation are identified.
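The reconstruction-from-pixel-differences step can be illustrated in its simplest exact form: a running sum along each scanline recovers an image from its horizontal differences plus the first column. The thesis's scanline-convolution method additionally spreads influence between scanlines (approximating radial influence) for robustness to noisy or incomplete difference data; this sketch covers only the trivial noise-free case.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(8, 8)).astype(float)

# Horizontal pixel differences plus the first column fully determine
# the image in the noise-free case.
dx = np.diff(image, axis=1)
first_col = image[:, :1]

# Integrate each scanline by a running (cumulative) sum of differences.
reconstructed = np.concatenate(
    [first_col, first_col + np.cumsum(dx, axis=1)], axis=1)
```

Once the difference data is quantized or incomplete, this naive integration accumulates error along each scanline, which is exactly the failure mode the convolution-based integration with a downsampled image is designed to avoid.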
