11

Raman optical frequency comb generation in hydrogen-filled hollow-core fiber

Wu, Chunbai, 1980- 12 1900 (has links)
xiv, 138 p. : ill. (some col.) / In this dissertation, we demonstrate the generation of optical Raman frequency combs by a single laser pump pulse traveling in hydrogen-filled hollow-core optical fibers. This comb generation process is a cascaded stimulated Raman scattering effect, in which higher-order sidebands are produced by lower orders scattering from hydrogen molecules. We observe more than 4 vibrational and 20 rotational Raman sidebands in the comb. They span more than three octaves in optical wavelength, largely thanks to the broadband transmission of the fiber. We found that there are phase correlations between the generated Raman comb sidebands (spectral lines), although their phases fluctuate from one pump pulse to the next due to the inherent spontaneous initiation of Raman scattering. In the experiment, we generated two Raman combs independently in two fibers and simultaneously observed the single-shot interferences between the Stokes and anti-Stokes components from the two fibers. The experimental results clearly showed a strong phase anti-correlation between the first-order sidebands. We also developed a quantum theory to describe this Raman comb generation process, which predicts and explains the phase correlations we observe. The phase correlation that we found in optical Raman combs may allow us to synthesize single-cycle optical pulse trains, creating attosecond pulses. However, the vacuum fluctuation in stimulated Raman scattering results in fluctuation of the carrier-envelope phase of the pulse trains. We propose that the comb can be stabilized by simultaneously injecting an auxiliary optical beam, mutually coherent with the main Raman pump laser pulse, which is resonant with the third anti-Stokes field. / Committee in Charge: Dr. Steven van Enk, Chair; Dr. Michael G. Raymer; Dr. Daniel A. Steck; Dr. David M. Strom; Dr. Andrew H. Marcus
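The comb structure described above follows from simple frequency bookkeeping: each cascaded Raman order adds or subtracts an integer multiple of the molecular transition frequency to the pump. A minimal sketch in Python; the 1064 nm pump and the nominal ~4155 cm⁻¹ vibrational shift of H2 are illustrative values, not figures taken from the dissertation:

```python
def raman_sideband_wavelengths_nm(pump_nm, shift_cm1, orders):
    """Wavelength of each cascaded Raman sideband: order n shifts the
    pump frequency (expressed in wavenumbers) by n times the Raman
    shift, with n > 0 anti-Stokes (blue-shifted) and n < 0 Stokes
    (red-shifted)."""
    pump_cm1 = 1e7 / pump_nm  # convert nm to cm^-1
    return [1e7 / (pump_cm1 + n * shift_cm1) for n in orders]

# e.g. a 1064 nm pump with the ~4155 cm^-1 vibrational shift of H2
comb = raman_sideband_wavelengths_nm(1064.0, 4155.0, [-1, 0, 1, 2])
```

Anti-Stokes orders fall at ever shorter wavelengths, so a few vibrational orders combined with many rotational orders quickly cover several octaves, consistent with the multi-octave span reported in the abstract.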
12

Movement sensor using image correlation on a multicore platform

Lind, Christoffer, Green, Jonas, Ingvarsson, Thomas January 2012 (has links)
The purpose of this study was to investigate the possibility of measuring the speed of a vehicle using image correlation. It was identified that a new solution for measuring the speed of a vehicle, as today's solutions do not give the true speed over ground, would open up possibilities for high-precision driving applications. It was also the intention to evaluate the performance of the proposed algorithm on a multicore platform. The study was commissioned by Halmstad University.
The investigation of image correlation as a method to measure the speed of a vehicle was conducted by applying the proposed algorithm to a sequence of images. The result was compared to reference points in the image sequence to confirm the accuracy. The performance of the multicore platform was measured by counting the clock cycles it took to perform one measurement cycle of the algorithm.
It was found that using image correlation to measure speed has a positional accuracy of close to half a percent. The results also revealed that one measurement cycle of the algorithm could be performed in close to half a millisecond, and the achieved parallel utilization of the multicore platform was close to eighty-seven percent.
It was concluded that the algorithm performed well within the limit of acceptance. A conclusion about the performance was that the low execution time of a measurement cycle makes it possible to execute the algorithm at a frequency of eighteen hundred hertz. With a frequency that high, in combination with the camera settings proposed in the thesis, the algorithm would be able to measure speeds close to one thousand one hundred kilometers per hour.
The authors recommend that future work focus on investigating the camera parameters in order to optimize both the memory and computational requirements of the application. It is also recommended to look closer at the algorithm and the possibilities of detecting transversal and angular changes, as this would open up other application areas requiring more than just the speed.
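The top-speed claim above can be sanity-checked with simple arithmetic: the highest measurable speed is the largest per-frame displacement the correlation search can resolve, times the ground resolution, times the frame rate. A quick sketch; the 170-pixel window and 1 mm/pixel ground resolution are hypothetical values, not the thesis's actual camera settings:

```python
def max_measurable_speed_kmh(max_shift_px, metres_per_pixel, frame_rate_hz):
    """Upper bound on the measurable speed: the largest displacement the
    correlation window can resolve per frame, converted to km/h."""
    return max_shift_px * metres_per_pixel * frame_rate_hz * 3.6

# at 1800 Hz, a 170 px search window at 1 mm/pixel tops out near 1100 km/h
limit = max_measurable_speed_kmh(170, 0.001, 1800)  # -> 1101.6
```

Under these assumed parameters the bound lands close to the "one thousand one hundred kilometers per hour" quoted in the abstract, illustrating how the 1800 Hz measurement rate drives the speed ceiling.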
13

Kalibrace a interpretace obrazových dat měřených zařízením LEEM / Calibration and interpretation of images measured by LEEM

Endstrasser, Zdeněk January 2021 (has links)
This thesis deals with the development of software for the calibration and interpretation of image data measured by a LEEM device. As the imaging technique is uniquely suited for in-situ studies of surface dynamical processes, attention is mainly paid to methods enabling the evaluation of measurement time series. The phase correlation method, based on the Fourier transform of the images, is proposed for correcting the temperature-induced shift between consecutive frames. The thesis describes methods for additive and impulse noise filtering, image visualization, the filtering of secondary electrons, and the determination of I-V curves from the measured image data. The implemented methods are described not only in terms of their mathematical origin, but also with emphasis on revealing the critical aspects associated with their use. The thesis also focuses on the application of the created algorithm to image data capturing the spatial and temporal evolution of 4,4’-biphenyl-dicarboxylic acid surface phases induced by sample annealing. Based on these evaluations, a suitable procedure is then determined for accurate detection and compensation of the temperature shift.
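The frame-to-frame shift correction described above rests on the standard phase correlation identity: two images related by a translation have a cross-power spectrum that is a pure phase ramp, whose inverse FFT is a delta at the translation. A minimal NumPy sketch limited to integer pixel shifts (the function name and tolerance constant are illustrative, not the thesis's implementation):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Return the integer (dy, dx) such that b ~= np.roll(a, (dy, dx)).
    The normalized cross-power spectrum of two translated images is a
    pure phase ramp; its inverse FFT peaks at the translation."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12        # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    # shifts beyond half the image wrap around to negative values
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)
```

In practice, drift correction needs sub-pixel accuracy, which is usually obtained by fitting a neighbourhood of the correlation peak or by upsampled DFT evaluation around it.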
14

An automatic corneal subbasal nerve registration system using FFT and phase correlation techniques for an accurate DPN diagnosis

Al-Fahdawi, Shumoos, Qahwaji, Rami S.R., Al-Waisy, Alaa S., Ipson, Stanley S. January 2015 (has links)
Yes / Confocal microscopy is employed as a fast and non-invasive way to capture a sequence of images from different layers and membranes of the cornea. The captured images are used to extract useful clinical information for the early diagnosis of corneal diseases such as Diabetic Peripheral Neuropathy (DPN). In this paper, an automatic corneal subbasal nerve registration system is proposed. The main aim of the proposed system is to produce a new informative corneal image that contains both structural and functional information. In addition, a colour-coded corneal image map is produced by overlaying a sequence of Cornea Confocal Microscopy (CCM) images that differ from each other in displacement, illumination, scaling, and rotation. An automatic image registration method is proposed that combines the advantages of the Fast Fourier Transform (FFT) and phase correlation techniques. The proposed registration algorithm searches for the best common features between a number of sequential CCM images in the frequency domain to produce the formative image map. In this generated image map, each colour represents the severity level of a specific clinical feature, which can be used to give ophthalmologists a clear and precise representation of the clinical features extracted from each nerve in the image map. Moreover, successful implementation of the proposed system and the availability of the required datasets open the door for other interesting ideas; for instance, the system can be used to give ophthalmologists a summarized and objective description of a diabetic patient’s health status using a sequence of CCM images captured from different imaging devices and/or at different times.
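The registration stage described above — bringing a sequence of displaced CCM frames into a common coordinate frame before the map is built — can be sketched as follows. This is a simplified stand-in: it assumes pure integer translations and averages the aligned frames, whereas the paper also handles illumination, scaling, and rotation and builds a colour-coded severity map:

```python
import numpy as np

def register_and_average(frames):
    """Align every frame of a sequence to the first frame via
    frequency-domain phase correlation (integer shifts only), then
    average the aligned frames into one composite image."""
    ref = frames[0].astype(float)
    h, w = ref.shape
    ref_fft = np.fft.fft2(ref)
    acc = ref.copy()
    for f in frames[1:]:
        cross = np.conj(ref_fft) * np.fft.fft2(f)
        cross /= np.abs(cross) + 1e-12      # phase-only spectrum
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        dy = dy - h if dy > h // 2 else dy
        dx = dx - w if dx > w // 2 else dx
        acc += np.roll(f, (-dy, -dx), axis=(0, 1))  # undo the shift
    return acc / len(frames)
```

Averaging is the simplest way to fuse the registered stack; replacing the average with a per-pixel feature/severity encoding would move the sketch toward the colour-coded map the paper describes.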
15

Objektų vaizde sekimo technologijų tyrimas vaizdo transliavimo sistemoms / Investigation of image object tracking technologies for video streaming systems

Gudiškis, Andrius 16 June 2014 (has links)
In this thesis we investigated object tracking methods for video sequences that could be applied in real-time video streaming systems. We found that a standard block matching method, despite being the simplest of all the methods investigated, is also the most time-consuming and cannot be applied in real-time systems. The phase correlation method is much faster than block matching, but the motion vectors it calculates are too erratic for it to be practical for real-time object tracking. While investigating feature detection and matching methods, we concluded that the HARRIS algorithm finds the most feature points, the FAST algorithm is the fastest but not very accurate, and the SURF and MSER algorithms strike a balance between calculation speed and the accuracy of the feature points found. Hence all of these feature-based algorithms could be applied in real-time video streaming systems to track objects, depending on the content of the video sequence and the complexity of the task.
16

Registro de imagens por correlação de fase para geração de imagens coloridas em retinógrafos digitais utilizando câmera CCD monocromática / Image registration using phase correlation to generate color images in digital fundus cameras using monochromatic CCD camera

Stuchi, José Augusto 10 June 2013 (has links)
The analysis of the retina allows the diagnosis of several pathologies related to the human eye. Image quality is an important factor, since the physician often examines the small vessels of the retina and their color. The device usually used to observe the retina is the fundus camera, which uses a color sensor with a Bayer filter and white light. However, this filter causes a loss of spatial resolution, since a mathematical interpolation process is necessary to create the final image. Aiming to improve retina image quality, a fundus camera with a high-resolution monochromatic CCD camera was developed. In this device, color images are generated by combining the monochromatic channels R (red), G (green) and B (blue), which are acquired by switching the eye illumination between red, green and blue LEDs, respectively. However, the short period between the flashes may cause misalignment among the channels because of small movements of the eye. Thus, this work presents an image registration technique, based on phase correlation in the frequency domain, for accurately aligning the RGB channels in the process of generating retina color images. Validation of the method was performed using a mechanical eye (phantom) to generate 50 misaligned images, which were aligned by the proposed method and compared to aligned images obtained as references (ground truth). The results showed that the fundus camera with a monochromatic camera and the registration method proposed in this work can produce color retina images with high spatial resolution, without the loss of quality intrinsic to color CCD cameras that use the Bayer filter.
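The channel-stacking step described above can be illustrated with a small sketch. For brevity this uses an exhaustive integer-shift search rather than the frequency-domain phase correlation the thesis actually proposes; the search window size and channel names are illustrative:

```python
import numpy as np

def best_integer_shift(ref, ch, search=5):
    """Exhaustively search integer shifts in [-search, search] for the
    (dy, dx) that maximizes correlation of ch with the reference."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score = float(np.sum(ref * np.roll(ch, (dy, dx), axis=(0, 1))))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

def align_channels(r, g, b):
    """Align the G and B monochromatic captures to the R reference,
    then stack the three channels into one RGB image."""
    aligned = [r] + [np.roll(ch, best_integer_shift(r, ch), axis=(0, 1))
                     for ch in (g, b)]
    return np.stack(aligned, axis=-1)
```

The exhaustive search is O(window²) per channel, which is why a frequency-domain method such as phase correlation, computing the shift in one FFT pass, is the better fit for larger displacements and image sizes.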
17

Video object segmentation using phase-based detection of moving object boundaries

To, Thang Long, Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW January 2005 (has links)
A video sequence often contains a number of objects. For each object, the motion of its projection on the video frames is affected by its movement in 3-D space, as well as by the movement of the camera. Video object segmentation refers to the task of delineating and distinguishing the different objects that exist in a series of video frames. Segmentation of moving objects from a two-dimensional video is difficult due to the lack of depth information at the boundaries between different objects. As the motion incoherency of a region is intrinsically linked to the presence of such boundaries and vice versa, a failure to recognise a discontinuity in the motion field, or the use of an incorrect motion, often leads directly to errors in the segmentation result. In addition, many defects in a segmentation mask are also located in the vicinity of moving object boundaries, due to the unreliability of motion estimation in these regions. The approach to segmentation in this work comprises three stages. In the first part, a phase-based method is devised for the detection of moving object boundaries. This detection scheme is based on the characteristics of a phase-matched difference image, and is shown to be sensitive to even small disruptions of a coherent motion field. In the second part, a spatio-temporal approach to object segmentation is introduced, which involves a spatial segmentation in the detected boundary region, followed by a motion-based region-merging operation using three temporally adjacent video frames. In the third stage, a multiple-frame approach to the stabilisation of object masks is introduced, to alleviate defects which may have existed earlier in a local segmentation and to improve the temporal consistency of object boundaries in the segmentation masks along a sequence. The feasibility of the proposed work is demonstrated at each stage through examples carried out on a number of real video sequences.
In the presence of other object motion, the phase-based boundary detection method is shown to be much more sensitive than direct measures such as the sum of squared errors on a motion-compensated difference image. The three-frame segmentation scheme also compares favourably with a recently proposed method initiated from a non-selective spatial segmentation. In addition, improvements in the quality of the object masks after the stabilisation stage are observed both quantitatively and visually. The final segmentation result is then used in an experimental object-based video compression framework, which also shows improvements in efficiency over a contemporary video coding method.
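The phase-matched difference image at the heart of the first stage can be sketched in a few lines: estimate the dominant global motion from the phase of the cross-power spectrum, compensate it, and examine the residual; pixels that do not follow the dominant motion (moving object boundaries) leave large residuals. A minimal NumPy sketch assuming a single integer-pixel dominant translation, not the thesis's actual implementation:

```python
import numpy as np

def phase_matched_difference(f0, f1):
    """Compensate the dominant translation between two frames using
    phase-only (phase correlation) matching, then return the absolute
    residual image; it is high where motion is incoherent."""
    h, w = f0.shape
    cross = np.conj(np.fft.fft2(f0)) * np.fft.fft2(f1)
    cross /= np.abs(cross) + 1e-12           # phase-only matching
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    aligned = np.roll(f1, (-dy, -dx), axis=(0, 1))
    return np.abs(f0 - aligned)
```

For two frames related by a single coherent motion the residual is near zero everywhere; a region moving differently from the dominant motion survives the compensation and shows up as a bright band around its boundary, which is the cue the detection stage thresholds.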
18

Constructing Panoramic Scenes From Aerial Videos

Erdem, Elif 01 December 2007 (has links) (PDF)
In this thesis, we address the problem of panoramic scene construction, in which a single image covering the entire visible area of the scene is constructed from an aerial video. In the literature, several algorithms have been developed for constructing a panoramic scene from a video sequence. These algorithms can be categorized as feature-based and featureless algorithms. In this thesis, we concentrate on the feature-based algorithms and compare them on aerial videos. The comparison is performed on video sequences captured by non-stationary cameras whose optical axes do not have to coincide. In addition, the matching and tracking performances of the algorithms are analyzed separately, their advantages and disadvantages are presented, and several modifications are proposed.
19

Sensor orientation in image sequence analysis

Fulton, John R. Unknown Date (has links) (PDF)
This work investigates the process of automating reconstruction of buildings from video imagery. New metrics were developed to detect the least blurred images in a sequence for further processing. Phase correlation for point matching was investigated and new metrics were developed to identify successful matches. Direct relative orientation algorithms were investigated in-depth. A significant finding was a new 6-point algorithm which outperformed previously published algorithms for a number of calibrated camera and target geometries. The development of the new metrics and the outcomes from the comprehensive investigations conducted have contributed to a better understanding of the challenging problem of automatically reconstructing 3D objects from image sequences.
