201

Využití dutých světlovodů pro osvětlování / Use of Hollow Light Guides for Illumination

Zajíček, Josef January 2012 (has links)
This master's thesis describes the use of hollow light guides for interior lighting. It reviews the legislative requirements for interior daylighting and discusses the applications of hollow light guides. A further goal of the project is to describe computer programs that model the behaviour of hollow light guides. The practical part of the thesis describes the procedure and results of measurements carried out on a model of a family house fitted with hollow light guides.
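For illustration, simulation programs of the kind the thesis surveys build on simple transport models of the guide. A common first-order estimate for a straight cylindrical mirror light pipe is sketched below; this is a generic textbook model, not the thesis's own code, and the function name and example values are illustrative.

```python
import math

def light_pipe_transmittance(length_m, diameter_m, wall_reflectance, incidence_deg):
    """First-order transmittance of a straight cylindrical hollow light guide.

    A ray entering at angle theta from the pipe axis undergoes roughly
    N = L * tan(theta) / D wall reflections, each attenuating it by the
    wall reflectance rho, so T ~ rho**N.
    """
    theta = math.radians(incidence_deg)
    n_reflections = length_m * math.tan(theta) / diameter_m
    return wall_reflectance ** n_reflections

# Example: a 3 m guide, 0.4 m across, 98%-reflective film, sun 30 deg off-axis.
print(light_pipe_transmittance(3.0, 0.4, 0.98, 30.0))  # ~0.92
```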
202

Photometric registration of indoor real scenes using an RGB-D camera with application to mixed reality / Recalage photométrique de scènes réelles d’intérieurs à l’aide d’une caméra RGB-D avec application à la réalité mixte

Jiddi, Salma 11 January 2019 (has links)
The overarching goal of Mixed Reality (MR) is to provide users with the illusion that virtual and real objects coexist indistinguishably in the same space. An effective illusion requires accurate registration between both worlds, and this registration must be geometrically and photometrically coherent. In this thesis, we propose novel photometric registration methods to estimate the illumination and reflectance of real scenes.
Specifically, we propose new approaches which address three main challenges: (1) the use of a single RGB-D camera; (2) the estimation of both diffuse and specular reflectance properties; and (3) the estimation of the 3D position and color of multiple dynamic light sources. In our first contribution, we consider real indoor scenes where both geometry and illumination are static. As the sensor browses the scene, specular reflections can be observed throughout a sequence of RGB-D images. These visual cues are very informative about the illumination and reflectance of scene surfaces. Hence, we model these cues to recover both diffuse and specular reflectance properties as well as the 3D position of multiple light sources. Our algorithm allows convincing MR results such as realistic virtual shadows and correct removal of real specularities. Shadows are omnipresent and result from the occlusion of light by existing geometry; they therefore represent interesting cues for reconstructing the photometric properties of the scene. The presence of texture in this context is a critical scenario: separating texture from illumination effects is often handled via approaches which require user interaction or do not satisfy mixed-reality processing-time requirements. We address these limitations and propose a method which estimates the 3D position and intensity of light sources. The proposed approach handles dynamic light sources and runs at an interactive frame rate. The existence of a light source is more likely if it is supported by more than one cue. We therefore address the problem of estimating illumination and reflectance properties by jointly analysing specular reflections and cast shadows. The proposed approach takes advantage of the information brought by both cues to handle a large variety of scenes. Our approach is capable of handling any textured surface and considers both static and dynamic light sources. Its effectiveness is demonstrated through a range of applications including real-time mixed reality and retexturing. Since the detection of cast shadows and specular reflections is at the heart of this thesis, we further propose a deep-learning framework to jointly detect both cues in real indoor scenes.
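As a minimal illustration of how a specular cue constrains illumination: under a mirror-like reflection model, a detected highlight ties the light direction to the view direction and the surface normal (the latter available from the RGB-D depth). The sketch below shows only this single constraint, not the author's full estimation pipeline; names are illustrative.

```python
import numpy as np

def light_direction_from_highlight(view_dir, normal):
    """Direction toward a point light implied by a specular highlight.

    Under perfect specular reflection, the light direction at a highlight
    pixel is the view direction reflected about the surface normal:
    l = 2 (n . v) n - v.  Each detected highlight yields one such ray;
    intersecting rays from several camera poses localises the 3D light.
    """
    v = view_dir / np.linalg.norm(view_dir)
    n = normal / np.linalg.norm(normal)
    return 2.0 * np.dot(n, v) * n - v

# Toy example: camera looking straight down at an upward-facing surface;
# the highlight implies the light is directly above.
print(light_direction_from_highlight(np.array([0.0, 0.0, 1.0]),
                                     np.array([0.0, 0.0, 1.0])))  # [0, 0, 1]
```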
203

Détection de changements entre vidéos aériennes avec trajectoires arbitraires / Change detection in aerial videos with arbitrary trajectories

Bourdis, Nicolas 24 May 2013 (has links)
Business activities based on the use of video data have developed at a dazzling speed over the last few years: not only have some of these activities become widespread (video-surveillance), but the range of operational applications has also greatly diversified (natural resources monitoring, intelligence, etc.). However, the volume of video data generated today is overwhelming, and the efficiency of these activities is limited by the cost and time required for human interpretation of the data. Automatic analysis of video streams has therefore become a critical problem for numerous applications. The semi-automatic approach developed in this thesis focuses on the analysis of aerial videos and assists the image analyst by suggesting areas of potential interest identified through change detection. For that purpose, our approach builds a three-dimensional model of the appearances observed in the reference videos. This model then enables the online detection of significant changes in a new video, by identifying appearance deviations with respect to the reference models. Specific techniques have also been developed to estimate the acquisition parameters and to attenuate illumination effects. Moreover, we developed several consolidation techniques that exploit a priori knowledge about the targeted changes in order to improve detection accuracy. The interest and good performance of our change detection approach have been carefully demonstrated using both real and synthetic data.
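A minimal sketch of the deviation test at the core of such change detection, assuming perfectly registered frames: the thesis builds a three-dimensional appearance model to handle arbitrary trajectories, whereas this two-dimensional per-pixel version (hypothetical names and threshold) only illustrates flagging departures from a reference appearance model.

```python
import numpy as np

def detect_changes(reference_stack, new_frame, k=3.0):
    """Flag pixels of `new_frame` that deviate from a per-pixel Gaussian
    appearance model built from registered reference frames: a pixel is
    a change when it lies more than k standard deviations from the mean.
    """
    mean = reference_stack.mean(axis=0)
    std = reference_stack.std(axis=0) + 1e-6   # avoid division by zero
    z = np.abs(new_frame - mean) / std
    return z > k                                # boolean change mask

# Toy example: 10 noisy reference frames, one new frame with a bright
# 10x10 square added; exactly that square is flagged.
refs = np.random.normal(0.5, 0.02, (10, 64, 64))
frame = refs.mean(axis=0).copy()
frame[20:30, 20:30] += 0.4
print(detect_changes(refs, frame).sum())        # 100 changed pixels
```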
204

Neural probabilistic path prediction: skipping paths for acceleration

Peng, Bowen 10 1900 (has links)
Path tracing is one of the most popular Monte Carlo methods used in computer graphics to solve the problem of global illumination. A path-traced image is much more photorealistic than images produced by standard rendering methods such as rasterization and even ray tracing. Unfortunately, path tracing is expensive to compute and slow to converge, resulting in noisy images when unconverged. Many methods aimed at accelerating path tracing have been developed, but each has its own downsides and limitations. Recent advances in deep learning, especially conditional generative models, have shown that such models are very capable of learning, modeling, and sampling from complex distributions. As path tracing also depends on sampling from complex distributions, we investigate the similarities between the two problems and model the path tracing process itself as a conditional generative process. This process can then be used to build an efficient neural estimator that accelerates rendering with as few assumptions about the scene as possible. We show that our neural estimator (NPPP), used along with path tracing, can improve rendering time by a considerable amount without compromising much in rendering quality. The estimator is also shown to be very flexible: it allows a user to control and prioritize quality or rendering time without any further training or modifications to the neural network.
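The payoff of a learned path predictor can be seen in the basic importance-sampling identity that path tracing rests on: an estimator averaging f(x)/p(x) stays unbiased for any valid proposal p, and a proposal concentrated where f is large cuts variance and hence render time. The sketch below demonstrates the identity on a toy integrand; it is not the NPPP architecture itself, and all names are illustrative.

```python
import numpy as np

def mc_estimate(integrand, sampler, pdf, n=10_000):
    """Generic importance-sampled Monte Carlo estimator: E[f] is
    estimated by the average of f(x)/p(x) over samples x ~ p.  A
    (possibly learned) proposal that matches the integrand's shape
    reduces variance without introducing bias."""
    xs = sampler(n)
    return np.mean(integrand(xs) / pdf(xs))

# Toy example: integrate f(x) = 3x^2 on [0, 1] (exact value 1) with a
# proposal p(x) = 2x that roughly matches the integrand's shape.
rng = np.random.default_rng(0)
est = mc_estimate(lambda x: 3 * x**2,
                  lambda n: np.sqrt(rng.random(n)),  # inverse CDF of p(x) = 2x
                  lambda x: 2 * x)
print(est)  # ~1.0
```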
205

Micro-Anatomical Quantitative Imaging Towards Enabling Automated Diagnosis of Thick Tissues at the Point of Care

Mueller, Jenna Lynne Hook January 2015 (has links)
Histopathology is the clinical standard for tissue diagnosis. However, histopathology has several limitations: it requires tissue processing, which can take 30 minutes or more, and a highly trained pathologist to diagnose the tissue. Additionally, the diagnosis is qualitative, and the lack of quantitation leads to possible observer-specific diagnosis. Taken together, these factors make it difficult to diagnose tissue at the point of care using histopathology.

Several clinical situations could benefit from more rapid and automated histological processing, which could reduce the time and the number of steps required between obtaining a fresh tissue specimen and rendering a diagnosis. For example, there is a need for rapid detection of residual cancer on the surface of tumor resection specimens during excisional surgeries, known as intraoperative tumor margin assessment. Additionally, rapid assessment of biopsy specimens at the point of care could enable clinicians to confirm that a suspicious lesion has been successfully sampled, thus preventing an unnecessary repeat biopsy procedure. Rapid and low-cost histological processing could also be useful in settings lacking the human resources and equipment necessary to perform standard histologic assessment. Lastly, automated interpretation of tissue samples could potentially reduce inter-observer error, particularly in the diagnosis of borderline lesions.

To address these needs, high-quality microscopic images of the tissue must be obtained in rapid timeframes for a pathologic assessment to be useful in guiding the intervention. Optical microscopy is a powerful technique for obtaining high-resolution images of tissue morphology in real time at the point of care, without the need for tissue processing. In particular, a number of groups have combined fluorescence microscopy with vital fluorescent stains to visualize micro-anatomical features of thick (i.e. unsectioned or unprocessed) tissue. However, robust methods for segmentation and quantitative analysis of heterogeneous images are essential to enable automated diagnosis. Thus, the goal of this work was to obtain high-resolution images of tissue morphology using fluorescence microscopy and vital fluorescent stains, and to develop a quantitative strategy to segment and quantify tissue features in heterogeneous images, such as nuclei and the surrounding stroma, thereby enabling automated diagnosis of thick tissues.

To achieve these goals, three specific aims were proposed. The first aim was to develop an image processing method that can differentiate nuclei from background tissue heterogeneity and enable automated diagnosis of thick tissue at the point of care. A computational technique called sparse component analysis (SCA) was adapted to isolate features of interest, such as nuclei, from the background. SCA has been used previously in the image processing community for image compression, enhancement, and restoration, but had never been applied to separate distinct tissue types in a heterogeneous image. In combination with a high-resolution fluorescence microendoscope (HRME) and the contrast agent acriflavine, the utility of this technique was demonstrated by imaging preclinical sarcoma tumor margins. Acriflavine localizes to the nuclei of cells, where it reversibly associates with RNA and DNA; it also shows some affinity for collagen and muscle. SCA was adapted to isolate acriflavine-positive features (APFs), which correspond to RNA and DNA, from background tissue heterogeneity. The circle transform (CT) was applied to the SCA output to quantify the size and density of overlapping APFs. The sensitivity of the SCA+CT approach to variations in APF size, density, and background heterogeneity was demonstrated through simulations. Specifically, SCA+CT achieved the lowest errors for higher contrast ratios and larger APF sizes. When applied to tissue images of excised sarcoma margins, SCA+CT correctly isolated APFs and showed consistently increased density in tumor and tumor + muscle images compared to images containing muscle. Next, variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%. The utility of this approach was further tested by imaging the in vivo tumor cavities of 34 mice after sarcoma resection, with local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%. The results indicate that SCA+CT can accurately delineate APFs in heterogeneous tissue, which is essential for automated and rapid surveillance of tissue pathology.

Two primary challenges were identified in the work of aim 1. First, while SCA can be used to isolate features, such as APFs, from heterogeneous images, its performance is limited by the contrast between APFs and the background. Second, while it is feasible to create mosaics by scanning a sarcoma tumor bed in a mouse, which is on the order of 3-7 mm in any one dimension, it is not feasible to evaluate an entire human surgical margin. Thus, improvements to the microscopic imaging system were made to (1) improve image contrast by rejecting out-of-focus background fluorescence and (2) increase the field of view (FOV) while maintaining the sub-cellular resolution needed for delineation of nuclei. To address these challenges, a technique called structured illumination microscopy (SIM) was employed, in which the entire FOV is illuminated with a defined spatial pattern rather than scanning a focal spot, as in confocal microscopy.

Thus, the second aim was to improve image contrast and increase the FOV using wide-field, non-contact structured illumination microscopy, and to optimize the segmentation algorithm for the new imaging modality. Both image contrast and FOV were increased through the development of a wide-field fluorescence SIM system. Clear improvement in image contrast was seen in structured illumination images compared to uniform illumination images. Additionally, the FOV is over 13× larger than that of the fluorescence microendoscope used in aim 1. Initial segmentation results from SIM images revealed that SCA is unable to segment large numbers of APFs in the tumor images. Because the FOV of the SIM system is over 13× larger than the FOV of the fluorescence microendoscope, dense collections of APFs commonly seen in tumor images could no longer be sparsely represented, and the fundamental sparsity assumption underlying SCA was no longer met. Thus, an algorithm called maximally stable extremal regions (MSER) was investigated as an alternative approach to APF segmentation in SIM images. MSER was able to accurately segment large numbers of APFs in SIM images of tumor tissue. In addition to optimizing MSER for SIM image segmentation, an optimal frequency of the illumination pattern used in SIM was carefully selected, because the image signal-to-noise ratio (SNR) depends on the grid frequency. A grid frequency of 31.7 mm⁻¹ led to the highest SNR and the lowest percent error in MSER segmentation.

Once MSER was optimized for SIM image segmentation and the optimal grid frequency was selected, a quantitative model was developed to diagnose mouse sarcoma tumor margins imaged ex vivo with SIM. Tumor margins were stained with acridine orange (AO) in aim 2 because AO was found to stain the sarcoma tissue more brightly than acriflavine. Both acriflavine and AO are intravital dyes, which have been shown to stain nuclei, skeletal muscle, and collagenous stroma. A tissue-type classification model was developed to differentiate localized regions (75x75 µm) of tumor from skeletal muscle and adipose tissue based on the MSER segmentation output. Specifically, a logistic regression model was used to classify each localized region, yielding an output in terms of the probability (0-100%) that tumor was located within each 75x75 µm region. The model performance was tested using receiver operating characteristic (ROC) curve analysis, which revealed 77% sensitivity and 81% specificity. For margin classification, the whole margin image was divided into localized regions and the tissue-type classification model was applied. In a subset of 6 margins (3 negative, 3 positive), it was shown that with a tumor probability threshold of 50%, 8% of all regions from negative margins exceeded this threshold, while over 17% of all regions exceeded the threshold in the positive margins. Thus, 8% of regions in negative margins were considered false positives. These false-positive regions are likely due to the high density of APFs present in normal tissues, which clearly demonstrates a challenge in implementing this automatic algorithm based on AO staining alone.

Thus, the third aim was to improve the specificity of the diagnostic model by leveraging other sources of contrast. Modifications were made to the SIM system to enable fluorescence imaging at a variety of wavelengths. Specifically, the SIM system was modified to enable imaging of red fluorescent protein (RFP)-expressing sarcomas, which were used to delineate the location of tumor cells within each image. Initial analysis of AO-stained panels confirmed that there was room for improvement in tumor detection, particularly with regard to false-positive regions that were negative for RFP. One approach to improving the specificity of the diagnostic model was to investigate a fluorophore more specific to tumor. Specifically, tetracycline was selected because it appeared to specifically stain freshly excised tumor tissue in a matter of minutes, and was non-toxic and stable in solution. Results indicated that tetracycline staining holds promise for increasing the specificity of tumor detection in SIM images of a preclinical sarcoma model, and further investigation is warranted.

In conclusion, this work presents the development of a combination of tools capable of automated segmentation and quantification of micro-anatomical images of thick tissue. Compared to the fluorescence microendoscope, wide-field multispectral fluorescence SIM imaging provided improved image contrast, a larger FOV with comparable resolution, and the ability to image a variety of fluorophores. MSER was an appropriate and rapid approach for segmenting dense collections of APFs in wide-field SIM images. Variables that reflect the morphology of the tissue, such as the density, size, and shape of nuclei and nucleoli, can be used to automatically diagnose SIM images. The clinical utility of SIM imaging and MSER segmentation for detecting microscopic residual disease has been demonstrated by imaging excised preclinical sarcoma margins. Ultimately, this work demonstrates that fluorescence imaging of tissue micro-anatomy combined with a specialized algorithm for delineation and quantification of features provides a means for rapid, non-destructive, and automated detection of microscopic disease, which could improve cancer management in a variety of clinical scenarios. / Dissertation
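Both building blocks named in the abstract, MSER segmentation and logistic-regression tissue classification, are available off the shelf; a minimal sketch is given below. The morphology variables and the commented training data (X_train, y_train) are assumptions for illustration, not the exact feature set used in the thesis.

```python
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

def apf_features(gray_tile):
    """Segment bright APFs in one 8-bit grayscale tile (e.g. a 75x75 um
    region) with MSER and reduce them to simple morphology variables.
    Count, mean size, and size spread are stand-ins for the thesis's
    feature set, which the abstract does not enumerate."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray_tile)          # point sets per region
    areas = np.array([len(r) for r in regions], dtype=float)
    if areas.size == 0:
        return np.zeros(3)
    return np.array([areas.size, areas.mean(), areas.std()])

# Tissue-type classifier: probability that a tile contains tumor, fitted
# on labeled tiles (X_train, y_train assumed already built by applying
# apf_features to annotated tiles):
#   clf = LogisticRegression().fit(X_train, y_train)
#   p_tumor = clf.predict_proba(apf_features(tile).reshape(1, -1))[:, 1]
# A margin is then scored by the fraction of tiles with p_tumor > 0.5.
```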
206

Highly automated method for facial expression synthesis

Ersotelos, Nikolaos January 2010 (has links)
The synthesis of realistic facial expressions has long been a challenging area for computer graphics scientists. Over the last three decades, several different construction methods have been formulated to obtain natural graphic results. Despite these advancements, current techniques still require costly resources, heavy user intervention, and specific training, and their outcomes are still not completely realistic. This thesis therefore aims to achieve an automated synthesis that produces realistic facial expressions at low cost. It proposes a highly automated approach to realistic facial expression synthesis, which allows for enhanced performance in speed (a maximum of 3 minutes of processing time) and quality with a minimum of user intervention. It also demonstrates an automated method of facial feature detection, allowing users to obtain their desired facial expression synthesis with minimal physical input. Moreover, it describes a novel approach to normalizing the illumination settings between source and target images, thereby allowing the algorithm to work accurately even under different lighting conditions. Finally, the results obtained from the proposed techniques are presented, together with conclusions, at the end of the thesis.
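The thesis's own normalization scheme is not detailed in the abstract; a standard baseline for matching illumination statistics between a source and a target image is per-channel mean/variance transfer, sketched below under that assumption.

```python
import numpy as np

def match_illumination(source, target):
    """Normalize the source image's illumination statistics to the
    target's by matching per-channel mean and standard deviation
    (a common baseline, not necessarily the thesis's method).

    source, target: float arrays of shape (H, W, 3) with values in [0, 1].
    """
    s_mean, s_std = source.mean((0, 1)), source.std((0, 1)) + 1e-6
    t_mean, t_std = target.mean((0, 1)), target.std((0, 1))
    return np.clip((source - s_mean) / s_std * t_std + t_mean, 0.0, 1.0)
```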
207

Simulateur pour l'étude de la visibilité dans les environnements enfumés / A simulator for studying visibility in smoke-filled environments

Ribardière, Mickaël 16 December 2010 (has links) (PDF)
Lighting simulation can be used to study and analyse visual comfort or the performance of lighting installations. To meet such objectives, the methods used must solve the global illumination problem accurately and realistically. Moreover, lighting simulation software must often handle geometrically complex scenes while exploiting realistic photometric properties of extended artificial sources, natural sources, and materials. The goal of this thesis is to extend these tools to smoke-filled environments in which the density and distribution of the smoke evolve over time, while also accounting for a virtual observer moving through the scene. Such capabilities would open lighting simulation to studies of vision through smoke, for fire-safety applications for example. Following a global analysis of the problem (light/material interaction, light/smoke interaction, evolution of the smoke over time), the research is divided into three parts. We first present a new global illumination method for surface objects based on irradiance caching, with records whose zones of influence adapt to the geometry and to lighting variations; we call these adaptive records. This technique provides finer control over the density of the cache. The work then examines in detail static participating media and their interaction with light. A solution method, building on the first part, is proposed: adaptive records are created in space according to the characteristics of the smoke (scattering and absorption coefficients) and its influence on global illumination. Finally, the dynamic aspect is studied and a temporal extension of the lighting simulation method in the presence of participating media is proposed. We introduce the concept of spatio-temporal adaptive records (for surfaces and volumes) to interpolate irradiance variations in both space and time.
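A minimal sketch of the irradiance-caching machinery the thesis extends, using Ward's classic record weight; the adaptive records proposed here additionally shrink a record's radius of influence where geometry or lighting varies quickly. Names and the validity threshold are illustrative.

```python
import numpy as np

def record_weight(x, n, rec_pos, rec_normal, rec_radius):
    """Ward-style weight of a cached irradiance record at a shading point:
    large when the point is close to the record and the normals agree."""
    dist_term = np.linalg.norm(x - rec_pos) / rec_radius
    normal_term = np.sqrt(max(0.0, 1.0 - float(np.dot(n, rec_normal))))
    return 1.0 / (dist_term + normal_term + 1e-9)

def interpolate_irradiance(x, n, records, threshold=2.0):
    """Weighted average of usable records (pos, normal, radius, irradiance);
    returns None when no record is valid, signalling that a new record must
    be computed and inserted into the cache."""
    weighted = [(record_weight(x, n, p, nn, r), e) for p, nn, r, e in records]
    usable = [(w, e) for w, e in weighted if w > threshold]
    if not usable:
        return None
    total = sum(w for w, _ in usable)
    return sum(w * e for w, e in usable) / total
```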
208

Optical Design of Volume Holographic Imaging Systems for Microscopy

de Leon, Erich Ernesto January 2012 (has links)
Confocal microscopy rejects out-of-focus light from the object by scanning a focused spot across the object, imaging through a pinhole, and constructing the image point by point. Volume holographic imaging (VHI) systems with bright-field illumination have been proposed as an alternative to conventional confocal-type microscopes. VHI systems are an imaging modality that does not require scanning of a pinhole or a slit and thus provides video-rate imaging of 3-dimensional objects. However, due to the wavelength-position degeneracy of the hologram, these systems produce less than optimal optical sectioning because the high selectivity of the volume hologram is not utilized. In this dissertation a generalized method for the design of VHI systems applied to microscopy is developed. Discussion includes the inter-relationships between the dispersive, degenerate, and depth axes of the system. Novel designs to remove the wavelength-position degeneracy and improve optical sectioning in these systems are also considered. Optimization of a fluorescence imaging system and of dual-grating confocal-rainbow designs are investigated. A ray-trace simulation that integrates the hologram diffraction efficiency and imaging results is constructed and an experimental system evaluated to demonstrate the optimization method. This results in an empirical relation between depth resolution and design tolerances. The dispersion and construction tolerances of a confocal-rainbow volume holographic imaging system are defined by the Bragg selectivity of the holograms. It is found that a broad diffraction efficiency profile of the illumination hologram with a narrow imaging hologram profile is an optimal balance between field of view, construction alignment, and depth resolution. The approach in this research is directly applicable towards imaging ovarian cells for the detection of cancer. Modeling methods, illumination design, eliminating the wavelength degeneracy of the hologram, and incorporating fluorescence imaging capability are emphasized in this dissertation. Results from this research may be used not only for biomedical imaging, but also for the design of volume holographic systems for both imaging and sensor applications in other fields including manufacturing (e.g. pharmaceutical), aerospace (e.g. LIDAR), and the physical sciences (e.g. climate change).
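The selectivity argument rests on Bragg matching in thick gratings. To first order in Kogelnik's coupled-wave picture, for an unslanted volume grating of period Λ and thickness d in a medium of refractive index n, the standard relations are as sketched below (an order-of-magnitude summary, not the dissertation's derivation):

```latex
% Bragg matching and first-order selectivity of a thick volume grating.
\[
  2\,n\,\Lambda \sin\theta_B = \lambda ,
  \qquad
  \Delta\theta \sim \frac{\Lambda}{d} ,
  \qquad
  \frac{\Delta\lambda}{\lambda} \sim \frac{\Lambda}{d}\,\cot\theta_B ,
\]
% where $\theta_B$ is the Bragg angle measured inside the medium from the
% fringe planes. A narrow imaging-hologram profile thus corresponds to a
% large thickness-to-period ratio $d/\Lambda$.
```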
209

Divine illumination in Augustinian and Franciscan thought

Schumacher, Lydia Ann January 2009 (has links)
In this thesis, my purpose is to determine why Augustine’s theory of knowledge by illumination was rejected by Franciscan theologians at the end of the thirteenth century. My main methodological assumption is that Medieval accounts of divine illumination must be interpreted in a theological context, or with attention to a scholar’s underlying doctrines of God and of the human mind as the image of God, inasmuch as the latter doctrine determines one’s understanding of the nature of the mind’s cognitive work, and illumination illustrates cognition. In the first chapter, I show how Augustine’s understanding of illumination derives from his Trinitarian theology. In the second chapter, I use the same theological methods of inquiry to identify continuity of thought on illumination in Augustine and Anselm. The third chapter covers the events of the twelfth and early thirteenth centuries that had an impact on the interpretation of illumination, including the Greek and Arabic translation movements and the founding of universities and mendicant orders. In this chapter, I explain how the first Franciscan scholars transformed St. Francis of Assisi’s spiritual ideals into a theological and philosophical system, appropriating the Trinitarian theology of Richard of St. Victor and the philosophy of the Arab scholar Avicenna in the process. Bonaventure is typically hailed the great synthesizer of early Franciscan thought and the last and best proponent of traditional Medieval Augustinian thought. In the fourth chapter, I demonstrate that Bonaventure’s Victorine doctrine of the Trinity both enabled and motivated him to assign originally Avicennian meanings to philosophical arguments of Augustine and Anselm that were incompatible with the original ones. In the name of Augustine, in other words, Bonaventure introduced a theory of knowledge that is not Augustinian. In the fifth chapter, my aim is to throw the non-Augustinian character of Bonaventure’s illumination theory into sharper relief through a discussion of knowledge and illumination in the thought of his Dominican contemporary Thomas Aquinas. Although Aquinas is usually supposed to reject illumination theory, I show that he only objects to the Franciscan interpretation of the account, even while he bolsters a genuinely Augustinian account of knowledge and illumination by updating it in the Aristotelian forms of philosophical argumentation that were current at the time. In the final chapter, I explain why late thirteenth-century Franciscans challenged illumination theory, even after Bonaventure had enthusiastically championed it. In this context, I explain that they did not reject their predecessor’s standard of knowledge outright, but only sought to eradicate the intellectually offensive interference of illumination, as he had defined it, which they perceived as inconsistent with the standard they wished to promulgate. In concluding, I reiterate the importance of interpreting illumination as a function of Trinitarian theology. This approach throws the function of illumination in Augustine’s thought into relief and facilitates the effort to identify continuity and discontinuity amongst Augustine and his Medieval readers, which in turn makes it possible to identify the reasons for the late Medieval decline of divine illumination theory and the rise of an altogether unprecedented epistemological standard.
210

Roadmap on structured light (Parts 4 and 5)

Rubinsztein-Dunlop, Halina, Forbes, Andrew, Berry, M V, Dennis, M R, Andrews, David L, Mansuripur, Masud, Denz, Cornelia, Alpmann, Christina, Banzer, Peter, Bauer, Thomas, Karimi, Ebrahim, Marrucci, Lorenzo, Padgett, Miles, Ritsch-Marte, Monika, Litchinitser, Natalia M, Bigelow, Nicholas P, Rosales-Guzmán, C, Belmonte, A, Torres, J P, Neely, Tyler W, Baker, Mark, Gordon, Reuven, Stilgoe, Alexander B, Romero, Jacquiline, White, Andrew G, Fickler, Robert, Willner, Alan E, Xie, Guodong, McMorran, Benjamin, Weiner, Andrew M 01 January 2017 (has links)
Final accepted manuscripts of parts 4 and 5 from Roadmap on Structured Light, authored by Masud Mansuripur, College of Optical Sciences, The University of Arizona.
