371

Manufacturing of super-polished large aspheric/freeform optics

Kim, Dae Wook, Oh, Chang-jin, Lowman, Andrew, Smith, Greg A., Aftab, Maham, Burge, James H. 22 July 2016 (has links)
Several next-generation astronomical telescopes and large optical systems utilize aspheric/freeform optics to create segmented optical systems. Multiple mirrors can be combined to form a larger optical surface, or used as a single surface to avoid obscurations. In this paper, we present the specific case of the Daniel K. Inouye Solar Telescope (DKIST). This optic, a 4.2 m diameter off-axis primary mirror made from a thin ZERODUR substrate, was successfully completed at the Optical Engineering and Fabrication Facility (OEFF) of the University of Arizona in 2016. Because the telescope observes the brightest object in the sky, our own Sun, the primary mirror surface quality must meet extreme specifications covering a wide range of spatial-frequency errors. In manufacturing the DKIST mirror, metrology systems were studied, developed, and applied to measure low-, mid-, and high-spatial-frequency surface shape information on the 4.2 m super-polished optical surface. In this paper, measurements from these systems are converted to power spectral density (PSD) plots and combined in the spatial-frequency domain. The results cover five orders of magnitude in spatial frequency and meet or exceed the specifications for this large aspheric mirror. Precision manufacturing of the super-polished DKIST mirror enables a new level of solar science.
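For readers unfamiliar with the PSD analysis mentioned in the abstract above, the following is a minimal sketch, not the OEFF metrology pipeline, of converting a measured surface-height map into an azimuthally averaged power spectral density with NumPy; the function name, Hanning window, normalisation, and binning are assumptions made for illustration only.

    import numpy as np

    def radial_psd(surface, pixel_size_m, n_bins=100):
        """Azimuthally averaged PSD of a square 2-D surface-height map (sketch only).

        Real metrology pipelines add detrending, calibrated windowing, and
        instrument transfer-function corrections before combining data sets;
        the normalisation here is only indicative.
        """
        n = surface.shape[0]                              # assumes a square map for brevity
        window = np.hanning(n)[:, None] * np.hanning(n)[None, :]
        spectrum = np.fft.fftshift(np.fft.fft2(surface * window))
        psd2d = (np.abs(spectrum) ** 2) * (pixel_size_m ** 2) / (n * n)

        freqs = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size_m))
        fx, fy = np.meshgrid(freqs, freqs)
        fr = np.hypot(fx, fy)                             # radial spatial frequency (1/m)
        bins = np.linspace(0.0, fr.max(), n_bins + 1)
        idx = np.digitize(fr.ravel(), bins)
        counts = np.bincount(idx, minlength=n_bins + 2)[1:n_bins + 1]
        sums = np.bincount(idx, weights=psd2d.ravel(), minlength=n_bins + 2)[1:n_bins + 1]
        psd1d = sums / np.maximum(counts, 1)              # average PSD in each frequency annulus
        return 0.5 * (bins[:-1] + bins[1:]), psd1d        # bin-centre frequencies, 1-D PSD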
372

CRISPR-Cas9-mediated protein tagging in human cells for RESOLFT nanoscopy and the analysis of mitochondrial prohibitins

Ratz, Michael 17 December 2015 (has links)
No description available.
373

Machine learning in multi-frame image super-resolution

Pickup, Lyndsey C. January 2007 (has links)
Multi-frame image super-resolution is a procedure which takes several noisy low-resolution images of the same scene, acquired under different conditions, and processes them together to synthesize one or more high-quality super-resolution images, with higher spatial-frequency content and less noise and image blur than any of the original images. The inputs can take the form of medical images, surveillance footage, digital video, satellite terrain imagery, or images from many other sources. This thesis focuses on Bayesian methods for multi-frame super-resolution, which use a prior distribution over the super-resolution image. The goal is to produce outputs which are as accurate as possible, and this is achieved through three novel super-resolution schemes presented in this thesis. Previous approaches obtained the super-resolution estimate by first computing and fixing the imaging parameters (such as image registration), and then computing the super-resolution image with this registration. In the first of the approaches taken here, superior results are obtained by optimizing over both the registrations and the image pixels, yielding a fully simultaneous algorithm. Additionally, parameters for the prior distribution are learnt automatically from data, rather than being set by trial and error. In the second approach, uncertainty in the values of the imaging parameters is dealt with by marginalization. In a previous Bayesian image super-resolution approach, the marginalization was over the super-resolution image, necessitating the use of an unfavorable image prior. By integrating over the imaging parameters rather than the image, the novel method presented here allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. Finally, a domain-specific image prior, based upon patches sampled from other images, is presented. For certain types of super-resolution problems where it is applicable, this sample-based prior gives a significant improvement in super-resolution image quality.
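To make the idea of jointly optimizing over registrations and image pixels concrete, here is a minimal sketch of a MAP objective of that general form, assuming purely translational registration, a block-average decimation model, and a simple quadratic smoothness prior; the thesis's actual forward model and learned prior are richer than this, and all names below are illustrative.

    import numpy as np
    from scipy.ndimage import shift as subpixel_shift

    def decimate(img, factor):
        # Block-average decimation standing in for camera blur + sampling.
        h, w = img.shape
        h, w = h - h % factor, w - w % factor
        return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def neg_log_posterior(params, lr_frames, hr_shape, factor, prior_weight):
        """MAP objective over the HR image AND the per-frame registrations jointly.

        `params` packs two shift parameters per frame followed by the flattened HR
        image, so a generic optimiser (e.g. scipy.optimize.minimize) updates both
        at once; each low-resolution frame must match the decimated HR shape.
        """
        n = len(lr_frames)
        shifts = params[:2 * n].reshape(n, 2)        # per-frame sub-pixel translations
        hr = params[2 * n:].reshape(hr_shape)        # current high-resolution estimate
        data_term = 0.0
        for k, lr in enumerate(lr_frames):
            predicted = decimate(subpixel_shift(hr, shifts[k]), factor)
            data_term += np.sum((predicted - lr) ** 2)
        gx, gy = np.gradient(hr)                     # quadratic smoothness prior on the HR image
        return data_term + prior_weight * np.sum(gx ** 2 + gy ** 2)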
374

On the relevance of global congruence in phylogenetic analysis

Levasseur, Claudine January 2005 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
375

Beyond the piano: the super instrument: widening the instrumental capacities in the context of the piano music of the 21st century

Kallionpaa, Maria E. January 2014 (has links)
Thanks to the development of new technology, musical instruments are no longer tied to their existing acoustic or technical limitations, as almost all parameters can be augmented or modified in real time. An increasing number of composers, performers, and computer programmers have thus become interested in different ways of "supersizing" acoustic instruments in order to open up previously unheard instrumental sounds. This leads to the question of what constitutes a super instrument and what challenges it poses, aesthetically and technically. This work explores the effects that super instruments have on the identity of a given solo instrument, on the identity of a composition, and on the experience of performing this kind of repertoire. The super instrument comes to be defined as a bundle of more than one instrumental line that achieves a coherent overall identity when generated in real time. On the basis of my own experience of performing the works discussed in this dissertation, super instruments vary a great deal, but each has a transformative effect on the identity and performance practice of the pianist. The discussion approaches the topic from the viewpoint of contemporary keyboard music, showcasing examples of super instrument compositions of the 21st century. The main purpose of this practice-based research project is thus to explore the essence and role of the piano or toy piano in a super instrument constellation, as well as the performer's role as a "super instrumentalist". I consider these issues in relation to case studies drawn from my own compositional work and a selection of works composed by Karlheinz Essl and Jeff Brown.
376

Two-dimensional turbulence and thermal convection: a model system for studying rare events in atmospheric turbulence

Seychelles, Fanny 17 December 2008 (has links)
Two-dimensional hydrodynamics is of major interest for understanding various atmospheric phenomena, such as the formation of structures like cyclones and hurricanes, with thermal convection probably being an essential driving mechanism. For several years now, soap films have been an ideal tool for studying two-dimensional turbulence. The subject of this thesis is the study of thermal convection in a half soap bubble. The thermal gradient between the equator and the pole generates turbulence around the equator and gives rise to single, large vortical structures close to the pole. These vortices move randomly over the surface of the half-bubble, so I focused on characterizing the motion of these structures. The study of the mean square displacement of the vortex centre reveals a scaling law indicative of super-diffusive behaviour. Beyond a qualitative analogy between our vortices and large-scale structures such as cyclones, we showed that hurricanes and cyclones exhibit the same dynamics as our vortices, namely super-diffusive behaviour. A study of the turbulence generated by the thermal convection was also carried out. In particular, we studied the fluctuations of the film thickness as well as of the velocity and temperature fields. For the thickness, the results obtained agree with the theory of stratified turbulence. In parallel, the temperature exhibits a surprising transition from intermittent behaviour at weak temperature gradients to non-intermittent behaviour as the gradient is increased. This dynamics is correlated with that of the velocity field, which also shows this transition.
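As a minimal illustration of the mean-square-displacement analysis mentioned above (not the thesis's actual data processing), the scaling exponent of a tracked vortex centre can be estimated as follows; the array layout, sampling interval, and lag range are assumptions made for the sketch.

    import numpy as np

    def msd_exponent(track, dt):
        """Fit alpha in <r^2(tau)> ~ tau^alpha for one vortex-centre trajectory.

        `track` is a hypothetical (T, 2) array of positions sampled every `dt`
        seconds; alpha = 1 corresponds to ordinary diffusion, alpha > 1 to
        super-diffusion.
        """
        lags = np.arange(1, len(track) // 4)                      # keep lags well below T
        msd = np.array([np.mean(np.sum((track[l:] - track[:-l]) ** 2, axis=1)) for l in lags])
        alpha, _ = np.polyfit(np.log(lags * dt), np.log(msd), 1)  # slope of the log-log plot
        return alpha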
377

History and legacy of logical atomism: the ontology of simples, the typology of complexes, theories of constitution, and mereology

Bucchioni, Guillaume 03 December 2012 (has links)
This work deals with the metaphysical problem of composition. The problem can be understood as a set of questions concerning the existence and nature of simple and complex entities: Are there complex entities? If so, of what kind are they? How are they composed? What are they composed of (simples or not)? We first analyze the Russellian paradigm for the problem of composition, namely logical atomism. This analysis leads us to understand how a logical framework (here, Russell's logic) can determine an ontology (the ontology of facts) in which these questions receive definite answers. We then turn to the contemporary treatment of the issue, which is based on the Special Composition Question (SCQ) developed by Peter van Inwagen in Material Beings, and on mereological analysis. We discuss the various theories of composition and try to justify one of them: compositional universalism. This justification leads us to develop and defend a conception of simples (the gunk theory), a conception of time (four-dimensionalism), and an ontology of material stuff. We conclude by trying to show that compositional universalism takes on a particular meaning within the theories of priority monism and supersubstantivalism.
378

STED Microscopy with Scanning Fields Below the Diffraction Limit

Göttfert, Fabian 01 December 2015 (has links)
No description available.
379

Algorithms for super-resolution of images and videos based on learning methods

Bevilacqua, Marco 04 June 2014 (has links)
With super-resolution (SR) we refer to a class of techniques that enhance the spatial resolution of images and videos. SR algorithms can be of two kinds: multi-frame methods, where multiple low-resolution images are aggregated to form a unique high-resolution image, and single-image methods, which aim at upscaling a single image. This thesis focuses on developing theory and algorithms for the single-image SR problem. In particular, we adopt the so-called example-based approach, where the output image is estimated with machine learning techniques, using the information contained in a dictionary of image "examples". The examples consist of image patches, which are either extracted from external images or derived from the input image itself. For both kinds of dictionary, we design novel SR algorithms, with new upscaling and dictionary-construction procedures, and compare them to state-of-the-art methods. The results achieved prove very competitive in terms of both the visual quality of the super-resolved images and computational complexity. We then apply our algorithms to the video upscaling case, where the goal is to enlarge the resolution of an entire video sequence. The algorithms, suitably adapted to deal with this case, are also analyzed in the coding context. The analysis conducted shows that, in specific cases, SR can also be an effective tool for video compression, thus opening new and interesting perspectives.
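As a rough illustration of the example-based approach described above, here is a minimal nearest-neighbour patch-lookup upscaler assuming an external dictionary of paired low/high-resolution patches; the thesis's upscaling and dictionary-construction procedures are considerably more sophisticated, and all names and shapes below are hypothetical.

    import numpy as np

    def example_based_sr(lr_img, lr_dict, hr_dict, factor=2, patch=3):
        """Single-image SR by nearest-neighbour lookup in a patch dictionary (sketch).

        `lr_dict` has shape (N, patch*patch) and `hr_dict` has shape
        (N, (patch*factor)**2); row k of each holds a matched low/high-resolution
        example pair, flattened.
        """
        hp = patch * factor
        H, W = lr_img.shape
        out = np.zeros((H * factor, W * factor))
        weight = np.zeros_like(out)
        for i in range(H - patch + 1):
            for j in range(W - patch + 1):
                query = lr_img[i:i + patch, j:j + patch].ravel()
                k = np.argmin(((lr_dict - query) ** 2).sum(axis=1))   # closest LR example
                out[i*factor:i*factor + hp, j*factor:j*factor + hp] += hr_dict[k].reshape(hp, hp)
                weight[i*factor:i*factor + hp, j*factor:j*factor + hp] += 1.0
        return out / np.maximum(weight, 1.0)                          # average overlapping patches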
380

Rational Design of (Reduced) Graphene Oxide Materials and Their Applications

Alazmi, Amira 11 1900 (has links)
The term "graphene" has become synonymous with layered carbon sheets having thicknesses ranging from a monolayer to stacks of about ten layers. For bulk-volume production, chemical exfoliation of graphite is the preferred route. For this reason, much interest has gathered around different processes for oxidizing and peeling off graphite to obtain graphene oxide (GO) and its counterpart, reduced GO (rGO). The community at large has quickly adopted these processes and has been using the resulting (r)GO intensively as an active material for a myriad of applications. Yet, partly owing to the absence of comparative studies of synthesis methodologies, there is still a lack of understanding of how best to tailor these carbon materials for a given application. In this dissertation, the effect of using different chemical oxidation-reduction strategies for graphite, namely their impact on the structure and chemistry of the resulting GOs and rGOs, is systematically discussed. In addition, it is demonstrated that the drying step of the powdered materials cannot be neglected; its influence is shown in studies such as the optimization of the capacitance of rGOs touted as electrochemical energy-storage materials (Chapter 4). It is concluded that, in order to maximize the performance of GO and rGO materials for any particular application, the synthesis steps must be chosen judiciously. Obvious as this may be to anyone working in chemistry, the point has been surprisingly overlooked for too long by the vast majority of those working with these carbon materials.
