21

A novel 3D recovery method by dynamic (de)focused projection

Lertrusdachakul, Intuoun 30 November 2011 (has links)
This thesis presents a novel 3D recovery method based on structured light. This method unifies depth from focus (DFF) and depth from defocus (DFD) techniques with the use of a dynamic (de)focused projection. With this approach, the image acquisition system is specifically constructed to keep the whole object sharp in all of the captured images. Therefore, only the projected patterns experience different defocused deformations according to the object's depths. When the projected patterns are out of focus, their Point Spread Function (PSF) is assumed to follow a Gaussian distribution. The final depth is computed by analyzing the relationship between the sets of PSFs obtained from different blurs and the variation of the object's depths. Our new depth estimation can be employed as a stand-alone strategy. It has no problem with occlusion and correspondence issues. Moreover, it handles textureless and partially reflective surfaces. The experimental results on real objects demonstrate the effective performance of our approach, providing reliable depth estimation and competitive computation time. It uses fewer input images than DFF, and unlike DFD, it ensures that the PSF is locally unique.
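The kind of inversion described above — from observed pattern blur to depth — can be illustrated with a toy one-dimensional sketch. The stripe pattern, the brute-force PSF fit, and the calibration table below are all invented for illustration and are not the thesis's actual pipeline:

```python
import numpy as np

def gaussian_kernel(sigma):
    # Discrete 1D Gaussian kernel, truncated at roughly 3 sigma.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(signal, sigma):
    # Model the defocused projected pattern as the sharp pattern
    # convolved with a Gaussian PSF.
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

def estimate_blur_sigma(observed, sharp_pattern, candidates):
    # Brute-force fit: pick the PSF width whose blurred reference
    # best matches the observed stripe profile (least squared error).
    errors = [np.mean((blur(sharp_pattern, s) - observed) ** 2) for s in candidates]
    return candidates[int(np.argmin(errors))]

def sigma_to_depth(sigma, calib_sigmas, calib_depths):
    # Hypothetical calibration: a monotonic blur-width -> depth mapping
    # measured beforehand, inverted here by linear interpolation.
    return np.interp(sigma, calib_sigmas, calib_depths)
```

In a real system the calibration table would be measured by projecting the pattern onto targets at known depths; here it is a placeholder.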
22

Effect of Beam Characteristics and Process Parameters on the Penetration and Microstructure of Laser and Electron Beam Welds in Stainless Steel and Titanium

Hochanadel, Joris Erich January 2020 (has links)
No description available.
23

Bistatic SAR Polar Format Image Formation: Distortion Correction and Scene Size Limits

Mao, Davin 12 June 2017 (has links)
No description available.
24

Variable-aperture Photography

Hasinoff, Samuel William 19 January 2009 (has links)
While modern digital cameras incorporate sophisticated engineering, in terms of their core functionality, cameras have changed remarkably little in more than a hundred years. In particular, from a given viewpoint, conventional photography essentially remains limited to manipulating a basic set of controls: exposure time, focus setting, and aperture setting. In this dissertation we present three new methods in this domain, each based on capturing multiple photos with different camera settings. In each case, we show how defocus can be exploited to achieve different goals, extending what is possible with conventional photography. These methods are closely connected, in that all rely on analyzing changes in aperture. First, we present a 3D reconstruction method especially suited for scenes with high geometric complexity, for which obtaining a detailed model is difficult using previous approaches. We show that by controlling both the focus and aperture setting, it is possible to compute depth for each pixel independently. To achieve this, we introduce the "confocal constancy" property, which states that as aperture setting varies, the pixel intensity of an in-focus scene point will vary in a scene-independent way that can be predicted by prior calibration. Second, we describe a method for synthesizing photos with adjusted camera settings in post-capture, to achieve changes in exposure, focus setting, etc. from very few input photos. To do this, we capture photos with varying aperture and other settings fixed, then recover the underlying scene representation best reproducing the input. The key to the approach is our layered formulation, which handles occlusion effects but is tractable to invert. This method works with the built-in "aperture bracketing" mode found on most digital cameras. Finally, we develop a "light-efficient" method for capturing an in-focus photograph in the shortest time, or with the highest quality for a given time budget. While the standard approach involves reducing the aperture until the desired region is in-focus, we show that by "spanning" the region with multiple large-aperture photos, we can reduce the total capture time and generate the in-focus photo synthetically. Beyond more efficient capture, our method provides 3D shape at no additional cost.
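The "confocal constancy" test described above can be sketched as a toy per-pixel procedure: normalize each pixel's aperture sequence by calibrated ratios and pick the focus setting where the sequence is most constant. The variable names, synthetic ratios, and deviation score are invented for this sketch, not taken from the dissertation:

```python
import numpy as np

def confocal_deviation(aperture_stack, ratios):
    # aperture_stack: (A, H, W) intensities at one focus setting across A apertures.
    # ratios: (A,) calibrated, scene-independent brightness ratios that an
    # in-focus scene point must follow ("confocal constancy").
    normalized = aperture_stack / ratios[:, None, None]
    # Near zero where the pixel obeys the predicted ratios, i.e. is in focus.
    return np.var(normalized, axis=0)

def depth_from_confocal(stacks, ratios, focus_depths):
    # stacks: (F, A, H, W) aperture stacks at F focus settings.
    # Assign each pixel the focus setting whose aperture sequence
    # deviates least from the calibrated ratios.
    deviation = np.stack([confocal_deviation(s, ratios) for s in stacks])
    return focus_depths[np.argmin(deviation, axis=0)]
```

Because the test is evaluated per pixel, no window-based matching is needed, which is what allows depth to be computed for each pixel independently.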
26

Autonomic Imbalance - a Precursor to Myopia Development?

Chen, Jennifer C. January 2003 (has links)
While prolonged nearwork is considered to be an environmental risk factor associated with myopia development, an underlying genetic susceptibility to nearwork-induced accommodative adaptation may be one possible mechanism for human myopia development. As the control of accommodation by the autonomic system may be one such genetically predetermined system, this research sought to investigate whether an anomaly of the autonomic control of accommodation may be responsible for myopia development and progression. The emphasis of this work was determining the effect of altering the sympathetic input to the ciliary muscle on accommodation responses such as tonic accommodation and nearwork-induced accommodative adaptation in myopes and non-myopes. The first study of the thesis was based on observations of Gilmartin and Winfield (1995) which suggested that a deficit in the sympathetic inputs to the ciliary muscle may be associated with a propensity for myopia development. The effect of β-antagonism with timolol application on accommodation characteristics was studied in different refractive error groups. Our results support the previous findings that a deficit of sympathetic facility during nearwork was not a feature of late-onset myopia. However, it was found that classifying myopes according to the stability of their myopia and their ethnic background was important, and this allowed differentiation between accommodation responses and characteristics of the ciliary muscle autonomic inputs, with the greatest difference observed between Caucasian stable myopes and Asian progressing myopes. Progressing myopes, particularly those with an Asian background, demonstrated enhanced susceptibility to nearwork-induced accommodative adaptation, and this was suggested to result from a possible parasympathetic dominance and a relative sympathetic deficit to the ciliary muscle. In contrast, stable myopes, particularly those with an Asian background, demonstrated minimal accommodation changes following nearwork (counter-adaptation in some cases), and increased accommodative adaptation with β-antagonism, suggesting sympathetic dominance as the possible autonomic accommodation control profile. As ethnic background was found to be an important factor, a similar study was also conducted in a group of Hong Kong Chinese children to investigate if enhanced susceptibility to nearwork-induced changes in accommodation may explain in part the high prevalence of myopia in Hong Kong. Despite some minor differences in methodology between the two studies, the Hong Kong stable myopic children demonstrated counter-adaptive changes and greater accommodative adaptation with timolol, findings that were consistent with those of the adult Asian stable myopes. Both Asian progressing myopic children and adults also showed greater accommodative adaptation than the stable myopes and similar response profiles following β-adrenergic antagonism. Thus a combination of genetically predetermined accommodation profiles that confer high susceptibility and extreme environmental pressures is a likely explanation for the increase in myopia over the past decades in Asian countries. The hypothesis that a sympathetic deficit is linked to myopia was also investigated by comparing the effect of β-stimulation with salbutamol, a β-agonist, on accommodation with that of β-antagonism using timolol. It was hypothesized that salbutamol would have the opposite effect of timolol, and that it would have a greater effect on subjects who demonstrated greater accommodative adaptation effects, i.e. the progressing myopes, compared to those who showed minimal changes in accommodation following nearwork. Consistent with the hypothesis, the effect of sympathetic stimulation with salbutamol application was only evident in the progressing myopes, who we hypothesized may have a parasympathetic dominance and a relative sympathetic deficit type of autonomic imbalance, while it did not further enhance the rapid accommodative regression profile demonstrated by the stable myopes. Characteristics of the convergence system and the interaction between accommodation and convergence were also investigated in the Hong Kong children. No significant differences in response AC/A ratios between the emmetropic, stable and progressing myopic children were found, and it was concluded that elevated AC/A ratios were not associated with a higher myopic progression rate in this sample of Hong Kong children. However, β-adrenergic antagonism with timolol application produced a greater effect on accommodative convergence (AC) in stable myopic children, who presumably have a more adequate, robust sympathetic input to the ciliary muscle, but had little effect on the AC of progressing myopic children. This finding again points to the possibility that the autonomic control of the accommodation and convergence systems may be different between stable and progressing myopia. The primary contribution of this study to the understanding of myopia development is that differences in the autonomic control of the ciliary muscle may be responsible for producing anomalous accommodation responses. This could have a significant impact on retinal image quality and thus result in myopia development. This knowledge may be incorporated into computer models of accommodation and myopia development and provides scope for further investigation of the therapeutic benefits of autonomic agents for myopia control.
27

Robust micro/nano-positioning by visual servoing

Cui, Le 26 January 2016 (has links)
With the development of nanotechnology, it became possible to design and assemble nano-objects. For robust and reliable automation processes, handling and manipulation tasks at the nanoscale have been increasingly required over the last decade. Vision is one of the most indispensable ways to observe the world at the microscale and nanoscale, and vision-based control is an efficient solution for control problems in robotics. In this thesis, we address the issue of micro- and nano-positioning by visual servoing in a Scanning Electron Microscope (SEM). As fundamental background, SEM image formation and SEM vision geometry models are studied first. A nonlinear optimization process for SEM calibration is presented, considering both the perspective and parallel projection models. In this study, it is found that it is difficult to observe depth information from the variation of the pixel position of the sample in the SEM image at high magnification. In order to address the fact that motion along the depth direction is not observable in a SEM, the image defocus information is considered as a visual feature to control the motion along the depth direction. A hybrid visual servoing scheme is proposed for the 6-DoF micro-positioning task using both image defocus information and image photometric information. It has been validated using a parallel robot in a SEM. Based on a similar idea, a closed-loop control scheme for the SEM autofocusing task has been introduced and validated by experiments. In order to achieve visual guidance in a SEM, a template-based visual tracking and 3D pose estimation framework has been proposed. This method is robust to the defocus blur caused by motion along the depth direction, since the defocus level is modeled in the visual tracking framework.
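The closed-loop autofocusing idea, using an image sharpness score as the feedback signal for motion along the depth axis, can be caricatured as a hill-climbing loop. This is a generic sketch, not the thesis's actual control law; the `sharpness` callable stands in for acquiring an image at focus position `z` and scoring it:

```python
def autofocus(sharpness, z, step=1.0, shrink=0.5, iters=50):
    # Hill-climb a scalar sharpness score along the focus axis:
    # probe one step up and one step down, move toward the better
    # score, and shrink the step when neither direction improves.
    for _ in range(iters):
        if sharpness(z + step) > sharpness(z):
            z += step
        elif sharpness(z - step) > sharpness(z):
            z -= step
        else:
            step *= shrink
    return z
```

With a unimodal sharpness curve (the usual assumption for autofocus), this loop converges to the focus position at the peak.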
28

Controllable 3D Effects Synthesis in Image Editing

Yichen Sheng (18184378) 15 April 2024 (has links)
<p dir="ltr">3D effect synthesis is crucial in image editing to enhance realism or visual appeal. Unlike classical graphics rendering, which relies on complete 3D geometries, 3D effect synthesis in image editing operates solely with 2D images as inputs. This shift presents significant challenges, primarily addressed by data-driven methods that learn to synthesize 3D effects in an end-to-end manner. However, these methods face limitations in the diversity of 3D effects they can produce and lack user control. For instance, existing shadow generation networks are restricted to producing hard shadows without offering any user input for customization.</p><p dir="ltr">In this dissertation, we tackle the research question: <i>how can we synthesize controllable and realistic 3D effects in image editing when only 2D information is available? </i>Our investigation leads to four contributions. First, we introduce a neural network designed to create realistic soft shadows from an image cutout and a user-specified environmental light map. This approach is the first attempt at utilizing a neural network for realistic soft shadow rendering in real time. Second, we develop a novel 2.5D representation, Pixel Height, tailored for the nuances of image editing. This representation not only forms the foundation of a new soft shadow rendering pipeline that provides intuitive user control, but also generalizes soft shadow receivers to general shadow receivers. Third, we present the mathematical relationship between the Pixel Height representation and 3D space. This connection facilitates the reconstruction of normals or depth from 2D scenes, broadening the scope for synthesizing comprehensive 3D lighting effects such as reflections and refractions. 3D-aware buffer channels are also proposed to improve the synthesized soft shadow quality. Lastly, we introduce Dr.Bokeh, a differentiable bokeh renderer that extends traditional bokeh effect algorithms with better occlusion modeling to correct flaws present in existing methods. With this more precise lens modeling, we show that Dr.Bokeh not only achieves state-of-the-art bokeh rendering quality, but also pushes the boundary of the depth-from-defocus problem.</p><p dir="ltr">Our work in controllable 3D effect synthesis represents a pioneering effort in image editing, laying the groundwork for future lighting effect synthesis in various image editing applications. Moreover, the improvements to filtering-based bokeh rendering could significantly enhance commercial products, such as the portrait mode feature on smartphones.</p>
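A minimal caricature of the pixel-height idea — each cutout pixel casting a hard shadow displaced in proportion to its height along the light direction — might look as follows. The function names and the simple point-light displacement model are assumptions for illustration only; the dissertation's soft shadow pipeline is far more elaborate:

```python
import numpy as np

def hard_shadow_mask(mask, height, light_dx, light_dy, shape):
    # mask:   (H, W) bool cutout of the object.
    # height: (H, W) per-pixel "pixel height" above the ground plane.
    # Each object pixel is projected onto the ground, displaced by
    # height * light direction, marking that ground pixel as shadowed.
    shadow = np.zeros(shape, dtype=bool)
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        sy = int(round(y + height[y, x] * light_dy))
        sx = int(round(x + height[y, x] * light_dx))
        if 0 <= sy < shape[0] and 0 <= sx < shape[1]:
            shadow[sy, sx] = True
    return shadow
```

Softening such a mask as a function of height and light size is, roughly, where the controllable soft shadow rendering described above begins.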
29

Focus techniques of optical measurement of 3D features

Macháček, Jan January 2021 (has links)
This thesis deals with optical distance measurement and 3D scene measurement using focusing techniques, with a focus on confocal microscopy, depth from focus, and depth from defocus. The theoretical part of the thesis covers different approaches to depth map generation, as well as the micro-image defocusing technique for measuring the refractive index of transparent materials. Camera calibration for focus-based techniques is then described. The next part of the thesis describes the experimental verification of the depth from focus and depth from defocus techniques: for the first technique, results of depth map generation are shown, and for the second, measured distance values are compared with the real distances. Finally, the discussed techniques are compared and evaluated.
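The depth from focus approach verified in the thesis can be sketched in a few lines: score each frame of a focal stack with a local sharpness measure and assign each pixel the depth of the frame where it appears sharpest. The squared-Laplacian measure and 3x3 aggregation below are common generic choices, not necessarily those used in the thesis:

```python
import numpy as np

def focus_measure(img):
    # Squared discrete Laplacian: large where the image is locally sharp.
    lap = (-4 * img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def box3(x):
    # 3x3 box aggregation to stabilise the per-pixel measure.
    return sum(np.roll(np.roll(x, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def depth_from_focus(stack, depths):
    # stack: (N, H, W) focal stack captured at known focus distances.
    # Each pixel gets the depth of the frame where it is sharpest.
    fm = np.stack([box3(focus_measure(img)) for img in stack])
    return depths[np.argmax(fm, axis=0)]
```

Depth from defocus, by contrast, would fit a blur level to as few as two frames instead of searching for the sharpest one.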
30

General Defocus Particle Tracking

Anderberg, Joakim January 2023 (has links)
Three-dimensional particle tracking is a valuable tool in microfluidics for obtaining information about a system. General Defocus Particle Tracking (GDPT) is a straightforward method of 3D particle tracking that only requires a single camera plane, making it applicable to existing equipment in a laboratory. This project's aim was to evaluate the open-source module DefocusTracker, which uses GDPT. DefocusTracker was used to track particles that were levitated in a microchip using ultrasonic standing waves. The effects of different calibration methods and the acoustic energy density over an active part of a piezo on a microchip device were investigated. Different procedures to generate a depth model from the calibration images showed that the choice of step length affects the accuracy of the depth model. A depth model created from the middle part of the field of view provides more accurate results compared to one made from the edge. Levitation experiments demonstrate that higher applied voltages result in a stronger acoustic energy density field over the field of view. The acoustic energy density field and pressure amplitude field show variations across the active piezo on the device, potentially due to a non-uniform thickness of the fluid layer and variations in energy delivery from the piezo. Overall, GDPT proves to be a useful method for evaluating unknown aspects of a microfluidic system under the influence of ultrasonic standing waves.
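The core GDPT lookup — matching a particle's defocused image against a calibration stack recorded at known depths — can be sketched as follows. This is a simplified illustration of the matching step only; it is assumed, not taken from DefocusTracker, which additionally handles particle detection, interpolation between calibration planes, and outlier rejection:

```python
import numpy as np

def ncc(a, b):
    # Zero-mean normalized cross-correlation between two equally sized images.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def z_from_defocus(particle_img, calib_stack, calib_z):
    # GDPT-style lookup: compare the defocused particle image against a
    # calibration stack of particle images recorded at known depths;
    # the best-matching calibration image gives the depth estimate.
    scores = [ncc(particle_img, c) for c in calib_stack]
    return calib_z[int(np.argmax(scores))]
```

The step length between calibration planes directly limits the depth resolution of this lookup, which is consistent with the project's finding that the choice of step length affects the accuracy of the depth model.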
