  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Interpolation temporelle des images avec estimation de mouvement raffinée basée pixel et réduction de l'effet de halo / Temporal frame interpolation with refined pixel-based motion estimation and reduction of the halo effect

Tran, Thi Thuy Ha January 2010
In this work, after a review of the state of the art, a new temporal frame interpolation with halo reduction is proposed. First, for standard-definition television, a motion estimation with pixel resolution is suggested. The estimation is carried out by block matching and is followed by a pixel-based refinement that considers the surrounding motion vectors. The halo reduction, performed with a sliding window of adaptive shape, does not resort to an explicit detection of occlusion regions. Then, for high-definition television, in order to reduce complexity, the pixel-resolution motion estimation and the halo reduction are generalized in the context of a hierarchical decomposition. The proposed final interpolation is generic and depends both on the position of the interpolated image and on the reliability of the estimation. Several post-processing steps to improve image quality are also suggested. The proposed algorithm, integrated in an ASIC in contemporary integrated-circuit technology, operates in real time.
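The block-matching stage described above can be sketched as follows; this is a generic illustration, not the author's implementation, and the function name `block_match` as well as the block and search-window sizes are assumptions made for the example.

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Estimate one motion vector per block by exhaustive block matching.

    For each block of `curr`, search a +/- `search` pixel window in `prev`
    and keep the displacement minimising the sum of absolute differences.
    """
    h, w = curr.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            target = curr[y:y + block, x:x + block].astype(int)
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = prev[yy:yy + block, xx:xx + block].astype(int)
                    sad = np.abs(target - cand).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            vectors[by, bx] = best
    return vectors
```

The pixel-based refinement mentioned in the abstract would then, for each pixel, re-test the vectors of neighbouring blocks and keep the best match, giving a dense field at pixel resolution.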
192

Spatial scale analysis of landscape processes for digital soil mapping in Ireland

Cavazzi, Stefano January 2013
Soil is one of the most precious resources on Earth because of its role in storing and recycling the water and nutrients essential for life, providing a variety of ecosystem services. This vulnerable resource is at risk of degradation by erosion, salinity, contamination and other effects of mismanagement. Information about soil is therefore crucial for its sustainable management. While the demand for soil information is growing, the quantity of data collected in the field is shrinking due to financial constraints. Digital Soil Mapping (DSM) supports the creation of geographically referenced soil databases generated by coupling field observations or legacy data, through quantitative relationships, with environmental covariates. This enables the creation of soil maps at unexplored locations at reduced cost. The selection of an optimal scale for environmental covariates is still an unsolved issue affecting the accuracy of DSM. The overall aim of this research was to explore the effect of spatial scale alterations of environmental covariates in DSM. Three main targets were identified: assessing the impact of spatial scale alterations on classifying soil taxonomic units; investigating existing approaches from related scientific fields for the detection of scale patterns; and finally enabling practitioners to find a suitable scale for environmental covariates by developing a new methodology for spatial scale analysis in DSM. Three study areas, covered by detailed reconnaissance soil survey, were identified in the Republic of Ireland. Their different pedological and geomorphological characteristics made it possible to test scale behaviours across the spectrum of conditions present in the Irish landscape. The investigation started by examining the effects of scale alteration of the finest-resolution environmental covariate, the Digital Elevation Model (DEM), on the classification of soil taxonomic units.
Empirical approaches from related scientific fields were subsequently selected from the literature, applied to the study areas and compared with the experimental methodology. Wavelet analysis was also employed to decompose the DEMs into a series of independent components at varying scales, which were then used in DSM analysis of soil taxonomic units. Finally, a new multiscale methodology was developed and evaluated against the previously presented experimental results. The results obtained with the experimental methodology proved the significant role of scale alterations in the classification accuracy of soil taxonomic units, challenging the common practice of using the finest available DEM resolution in DSM analysis. The set of eight empirical approaches selected from the literature was shown to have a detrimental effect on the selection of an optimal DEM scale for DSM applications. Wavelet analysis was shown to be effective in removing DEM sources of variation, increasing DSM model performance by spatially decomposing the DEM. Finally, my main contribution to knowledge has been the development of a new multiscale methodology for DSM applications, combining a DEM segmentation technique (k-means clustering of local variogram parameters calculated in a moving window) with an experimental methodology altering DEM scales. The newly developed multiscale methodology offers a way to significantly improve the classification accuracy of soil taxonomic units in DSM. In conclusion, this research has shown that spatial scale analysis of environmental covariates significantly enhances the practice of DSM, improving the overall classification accuracy of soil taxonomic units. The newly developed multiscale methodology can be successfully integrated into current DSM analysis of soil taxonomic units performed with data mining techniques, so advancing the practice of soil mapping.
As DSM successfully progresses from its early pioneering years into an established discipline, its future will have to include scale, and in particular multiscale investigations, in its methodology. DSM will have to move from a methodology of spatial data with scale to a spatial scale methodology. It is now time to consider scale as a key soil and modelling attribute in DSM.
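The segmentation step of the multiscale methodology, k-means clustering of local variogram parameters computed in a moving window, might look roughly like the sketch below; the window size, the lags used and both helper functions are assumptions for illustration, not the thesis's actual settings.

```python
import numpy as np

def local_variogram_features(dem, win=5, lags=(1, 2)):
    """Per-pixel semivariance at each lag, estimated in a (win x win) window."""
    h, w = dem.shape
    r = win // 2
    feats = np.zeros((h, w, len(lags)))
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = dem[y - r:y + r + 1, x - r:x + r + 1]
            for k, lag in enumerate(lags):
                dx = patch[:, lag:] - patch[:, :-lag]  # horizontal pairs
                dy = patch[lag:, :] - patch[:-lag, :]  # vertical pairs
                diffs = np.concatenate([dx.ravel(), dy.ravel()])
                feats[y, x, k] = 0.5 * np.mean(diffs ** 2)
    return feats

def kmeans(X, k=2, iters=20):
    """Minimal k-means on the rows of X; returns integer cluster labels."""
    # Deterministic spread initialisation: pick points of extreme norm.
    order = np.argsort((X ** 2).sum(axis=1))
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[order[idx]].astype(float).copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Pixels in smooth terrain yield small semivariances and cluster apart from pixels in rough terrain, which is the behaviour a DEM segmentation for scale analysis relies on.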
193

Recherche de nouveaux quarks lourds avec l'expérience ATLAS au LHC. Mise en oeuvre d'algorithmes d'identification de jets issus de quarks b / Search for new heavy top-like quarks with the ATLAS experiment at the LHC. Commissioning of high-performance b-tagging algorithms

Bousson, Nicolas 18 December 2012
The hypothesis of a fourth generation of fermions (the matter particles described in the Standard Model (SM) of particle physics) is one of the simplest models of new physics still not excluded and accessible at the start of the Large Hadron Collider (LHC), the world's most powerful hadron collider since 2009. We search for the pair production of up-type t' quarks, each decaying to a W boson and a b-quark. The search is optimized for the high-mass regime, in which the production can be distinguished from the top-quark background by exploiting kinematic features of the decay products of the proton-proton collisions occurring at the centre of the ATLAS detector. We present a novel search strategy that explicitly reconstructs very high-pT W bosons from their collimated decay products. The analysis benefits from the commissioning of algorithms that identify jets stemming from the fragmentation of b-quarks. These algorithms are based on the precise reconstruction of charged-particle trajectories, primary interaction vertices and secondary vertices in jets. This b-tagging capability allows ATLAS to improve the (re)discovery of the SM and the sensitivity to new physics. It will hence play an important role in the future years of LHC operation, which is why we also present a study of the expected b-tagging performance with an extension of the ATLAS pixel detector, called IBL, currently under construction. Our search for the t' quark, using 4.7 fb^-1 of the 7 TeV data collected in 2011, has resulted in the world's most stringent direct limit, excluding t' masses below 656 GeV, together with an interpretation in the framework of vector-like quarks.
194

Mise en oeuvre du détecteur à pixels et mesure de la section efficace différentielle de production des jets issus de quarks beaux auprès de l'expérience ATLAS au LHC / Commissioning of the pixel detector and measurement of the differential production cross section of beauty-quark jets with the ATLAS experiment at the LHC

Aoun, Sahar 07 November 2011
The Standard Model of elementary particle physics is a quantum field theory describing the fundamental particles which constitute matter and the interactions between them. The Large Hadron Collider (LHC) at CERN in Geneva was built with the intention of testing various Standard Model predictions, by providing proton-proton collisions at centre-of-mass energies never reached before. The ATLAS detector is one of the four particle experiments constructed at the LHC. It is designed to reconstruct particles and their decay products originating from these collisions. The technical part of this thesis is related to the ATLAS pixel detector, which provides a precise measurement of track parameters close to the interaction point. This is important for the identification of particle jets originating from bottom quarks. We present an analysis of pixel cluster properties using cosmic-ray data collected in 2008. The main part of this work is dedicated to studying the effect of the online pixel threshold on the charge, size and position of the main clusters associated with tracks at high incidence angle, in both data and Monte Carlo simulation. We also performed one of the first measurements of the b-jet production cross section in proton-proton collisions at sqrt(s) = 7 TeV at the LHC; this is the second analysis presented in this document. A differential b-jet production cross section was measured with the 2010 dataset recorded by the ATLAS detector, using muons in semi-leptonic jets to extract the fraction of b-jets. The measurement is in good agreement with Standard Model QCD predictions.
195

Comparaison de la micro-tomodensitométrie par comptage de photons et par intégration de charges avec le dispositif d'irradiation PIXSCAN / Comparison of photon counting versus charge integration micro-CT within the irradiation setup PIXSCAN

Ouamara, Hamid 15 February 2013
The pathway followed by the imXgam team at CPPM was to adapt the XPAD hybrid pixel technology to biomedical imaging. It is in this context that the PIXSCAN II micro-CT, based on the new generation of hybrid pixel detectors called XPAD3, was developed.
This thesis describes the process undertaken to assess the contribution of hybrid pixel technology to X-ray computed tomography in terms of contrast and dose, and to explore new opportunities for biomedical imaging at low doses. Performance evaluation and validation of the results obtained with data acquired with the XPAD3 detector were carried out by comparison with results obtained with the DALSA XR-4 CCD camera, which is similar to the detectors used in most conventional micro-CT systems. The XPAD3 detector yields reconstructed images of satisfactory quality, close to that of the DALSA XR-4 images but with better spatial resolution. At low doses, the XPAD3 images are of better quality than those from the CCD camera. From an instrumentation point of view, this project demonstrated the proper operation of the PIXSCAN II setup for mouse imaging. We were able to reproduce an image quality similar to that obtained with a charge-integration detector such as a CCD camera. To improve the performance of the XPAD3 detector, the stability of the thresholds will have to be optimized and the pixel response curves as a function of energy made sufficiently homogeneous, for instance by using a denser sensor such as CdTe.
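The distinction this comparison rests on, photon counting versus charge integration, can be illustrated with a toy pixel model: a counting channel registers a hit only when the per-frame signal crosses a threshold, so noise-only frames are rejected, while an integrating channel accumulates everything, noise included. All numbers and names below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def detect(n_frames=1000, mean_photons=3.0, photon_signal=100.0,
           noise_sigma=15.0, threshold=50.0):
    """Toy pixel model comparing counting and integrating readout.

    Each frame deposits a Poisson number of photons of fixed signal plus
    Gaussian electronic noise. Returns (counted hits, integrated charge).
    """
    photons = rng.poisson(mean_photons, n_frames)
    signal = photons * photon_signal + rng.normal(0.0, noise_sigma, n_frames)
    counts = int((signal > threshold).sum())  # photon counting: threshold
    integral = float(signal.sum())            # charge integration: sum all
    return counts, integral
```

With the source off, the counting channel stays near zero while the integrating channel still accumulates a random noise pedestal, which is the qualitative low-dose advantage discussed in the abstract.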
196

Système d'imagerie pour la caractérisation en couches de la peau par réflectance diffuse / Imaging system for the characterization of skin layers using diffuse reflectance

Petitdidier, Nils 27 November 2018
This work presents the development of a low-cost, wearable instrument for quantitative monitoring of skin physiological parameters toward non-invasive in vivo diagnostics.
The instrument is based on the spatially resolved Diffuse Reflectance Spectroscopy (srDRS) technique, which provides absolute quantification of the endogenous absorption and scattering properties of the probed tissue volume, with the potential to discriminate between the properties of individual skin layers. In the developed instrument, this potential is maximized by the use of a multi-pixel image sensor placed in contact with the tissue to perform high-resolution imaging of the diffuse reflectance. This study comprises the specification and validation of a novel srDRS system architecture based on the proposed approach, the implementation of this architecture in a low-cost, wearable device, and the evaluation of the device's performance both on tissue-simulating phantoms and in vivo. The results validate the potential of the instrument for non-invasive, quantitative monitoring of tissue properties. The described approach is promising for the analysis of layered media such as skin and paves the way for the development of a new generation of low-cost, wearable devices for continuous monitoring of tissue optical properties.
197

Investigations of time-interpolated single-slope analog-to-digital converters for CMOS image sensors

Levski, Deyan January 2018
This thesis presents a study of solutions for high-speed analog-to-digital conversion in CMOS image sensors using time-interpolation methods. Data conversion is one of the few remaining speed bottlenecks in conventional 2D imagers. At the same time, as pixel dark current continues to improve, the resolution requirements on imaging data converters impose very demanding system-level design challenges. The focus of the investigations presented here is to shed light on methods for Time-to-Digital Converter (TDC) interpolation of single-slope ADCs. By using high-factor time interpolation, the resolution of single-slope converters can be increased without sacrificing conversion time or power. This work emphasizes solutions for the improvement of multiphase clock interpolation schemes, following an all-digital design paradigm. A digital calibration scheme is presented which allows the complete elimination of analog clock-generation blocks, such as a PLL or DLL, in flash-TDC-interpolated single-slope converters. The multiphase clocks of the time-interpolated single-slope ADC are instead generated by a conventional open-loop delay line, and to correct the process, voltage and temperature drift of the delay line, a digital backend calibration has been developed; it is executed online, in-column, at the end of each sample conversion. The introduced concept has been tested in silicon and showed promising results for its introduction in practical mass-production scenarios. Methods for reference-voltage generation in single-slope ADCs have also been examined. The origins of the error and noise phenomena that occur during both the discrete- and continuous-time conversion phases of a single-slope ADC have been mathematically formalized, and a method for the practical measurement of noise on the ramp reference voltage has been presented.
Multiphase clock interpolation schemes are difficult to implement when high interpolation factors are used, because of the quadratic growth of the number of clock phases with resolution. To allow high interpolation factors, a time-domain binary-search concept with error calibration has been introduced. Although the study is conceptual, it shows promising results for highly efficient implementations, provided that a solution for stable column-level unit delays can be found; the latter is left for future investigation.
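The arithmetic by which TDC interpolation extends a single-slope conversion can be modelled numerically as follows. This is an idealised sketch (no delay-line mismatch, comparator delay or noise), and the counter depth and phase count are chosen arbitrarily for the example.

```python
def ss_adc_code(v_in, v_ref=1.0, n_coarse=256, n_phases=16):
    """Toy time-interpolated single-slope conversion.

    A linear voltage ramp crosses v_in at a normalised time t in [0, 1).
    A counter on the coarse clock supplies the high bits of the code; the
    index of the multiphase clock edge nearest the comparator flip
    supplies the low bits, multiplying the resolution by n_phases without
    running the counter any faster.
    """
    t = min(max(v_in / v_ref, 0.0), 1.0 - 1e-12)  # clipped crossing time
    ticks = int(t * n_coarse * n_phases)           # elapsed fine ticks
    coarse = ticks // n_phases                     # coarse counter value
    fine = ticks % n_phases                        # interpolated TDC phase
    return coarse * n_phases + fine
```

With an 8-bit counter and 16 phases the converter resolves 4096 codes, i.e. a 16x resolution gain at an unchanged coarse clock rate, which is the motivation for high-factor interpolation stated above.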
198

Restoring the balance between stuff and things in scene understanding

Caesar, Holger January 2018
Scene understanding is a central field in computer vision that attempts to detect objects in a scene and reason about their spatial, functional and semantic relations. While many works focus on things (objects with a well-defined shape), less attention has been given to stuff classes (amorphous background regions). However, stuff classes are important, as they help explain many aspects of an image, including the scene type, the thing classes likely to be present, and the physical attributes of all objects in the scene. The goal of this thesis is to restore the balance between stuff and things in scene understanding. In particular, we investigate how the recognition of stuff differs from that of things and develop methods suitable for dealing with both. We use stuff to find things and annotate a large-scale dataset to study stuff and things in context. First, we present two methods for semantic segmentation of stuff and things. Most methods require manual class weighting to counter imbalanced class-frequency distributions, particularly on datasets with both stuff and thing classes. We develop a novel joint calibration technique that takes into account class imbalance, class competition and overlapping regions by calibrating for the pixel-level evaluation criterion. The second method shows how to unify the advantages of region-based approaches (accurately delineated object boundaries) and fully convolutional approaches (end-to-end training). Both are combined in a universal framework that is equally suitable for stuff and things. Second, we propose to help weakly supervised object localization for classes where location annotations are not available, by transferring stuff and things knowledge from a source set with available annotations. This is particularly important if we want to scale scene understanding to real-world applications with thousands of classes, without having to exhaustively annotate millions of images.
Finally, we present COCO-Stuff, the largest existing dataset with dense stuff and thing annotations. Existing datasets are much smaller and were made with expensive polygon-based annotation. We use a very efficient stuff-annotation protocol to densely annotate 164K images. We provide a detailed analysis of this new dataset, visualize how stuff and things co-occur spatially in an image, and revisit the questions of whether stuff or things are easier to detect and which is more important, based on visual and linguistic analysis.
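The manual class weighting that the joint calibration technique is contrasted with is typically inverse-frequency weighting. A minimal sketch of that baseline (not the thesis's calibration method; the function name and normalisation are assumptions) could be:

```python
import numpy as np

def inverse_frequency_weights(label_map, n_classes):
    """Per-class loss weights inversely proportional to pixel frequency.

    Rare (often "thing") classes get large weights so they are not
    drowned out by frequent "stuff" classes during training. Weights are
    rescaled so that present classes have mean weight 1.
    """
    counts = np.bincount(label_map.ravel(), minlength=n_classes).astype(float)
    freqs = counts / counts.sum()
    weights = np.where(freqs > 0, 1.0 / np.maximum(freqs, 1e-12), 0.0)
    return weights / weights.sum() * np.count_nonzero(counts)
```

The weight ratio between two classes equals the inverse of their pixel-frequency ratio, which is exactly the imbalance correction the hand-tuned baselines aim for.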
199

Feature Extraction and Image Analysis with the Applications to Print Quality Assessment, Streak Detection, and Pedestrian Detection

Xing Liu (5929994) 02 January 2019
Feature extraction is the main driving force behind the advancement of image processing techniques in fields such as image quality assessment, object detection, and object recognition. In this work, we perform a comprehensive and in-depth study of feature extraction for the following applications: image macro-uniformity assessment, 2.5D printing quality assessment, streak defect detection, and pedestrian detection. First, a set of multi-scale wavelet-based features is proposed, and a quality predictor is trained to predict the perceived macro-uniformity. Second, the 2.5D printing quality is characterized by a set of merits that focus on the surface structure. Third, a set of features is proposed to describe streaks, based on which two detectors are developed: the first uses a Support Vector Machine (SVM) to train a binary classifier that detects streaks; the second adopts a Hidden Markov Model (HMM) to incorporate the row-dependency information within a single streak. Finally, a novel set of pixel-difference features is proposed to develop a computationally efficient feature-extraction method for pedestrian detection.
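Pixel-difference features of the kind used for the pedestrian detector can be sketched as follows; the random-pair sampling scheme shown here is an assumption for illustration, not necessarily the feature design of the thesis.

```python
import numpy as np

def pixel_difference_features(image, n_pairs=256, seed=0):
    """Features formed as intensity differences of fixed random pixel pairs.

    Each feature is I(p1) - I(p2) for a pair of locations drawn once from
    a seeded generator, so the same pairs are reused for every image.
    Each feature costs only two loads and a subtraction, which is what
    makes such detectors computationally efficient.
    """
    rng = np.random.default_rng(seed)  # fixed pairs across all images
    h, w = image.shape
    p1 = rng.integers(0, h * w, n_pairs)
    p2 = rng.integers(0, h * w, n_pairs)
    flat = image.ravel().astype(float)
    return flat[p1] - flat[p2]
```

The resulting vector would then feed a conventional classifier, in the same spirit as the SVM-based streak detector described above.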
200

Depletion of CMOS pixel sensors : studies, characterization, and applications / Désertion de capteurs à pixels CMOS : étude, caractérisations et applications

Heymes, Julian 17 July 2018
An architecture of CMOS pixel sensor allowing depletion of the sensitive volume through front-side biasing is studied through the laboratory characterization of a prototype. The charge-collection performance confirms the depletion of a large part of the sensitive thickness. In addition, with a modest readout noise, the sensor features an excellent energy resolution for photons below 20 keV at positive temperatures. These results demonstrate that such sensors are suited to soft X-ray spectroscopy and to charged-particle tracking in highly radiative environments.
A simplified analytical model and finite-element calculations are used to predict the depletion depth reached. An indirect measurement method to evaluate this depth is proposed. Measurements confirm the predictions for a thin, highly resistive epitaxial layer, which is fully depleted, and for a 40-micrometre-thick, less resistive substrate, which is depleted over 18 micrometres but still detects correctly over its full thickness. Two sensor designs dedicated to X-ray imaging and to in-brain neuroimaging of awake, freely moving rats are presented.
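The simplified analytical model for the depletion depth is presumably of the standard one-sided abrupt-junction form; a sketch under that assumption (not necessarily the thesis's exact model, and with an illustrative effective doping value):

```python
import math

EPS0 = 8.854e-12   # F/m, vacuum permittivity
EPS_SI = 11.7      # relative permittivity of silicon
Q = 1.602e-19      # C, elementary charge

def depletion_depth_um(v_bias, n_eff_cm3):
    """One-sided abrupt-junction depletion depth in micrometres.

    d = sqrt(2 * eps * V / (q * Neff)): a more resistive substrate
    (lower effective doping Neff) and a higher bias voltage both deepen
    the depleted region, consistent with the behaviour described above.
    """
    n_eff = n_eff_cm3 * 1e6  # cm^-3 -> m^-3
    d = math.sqrt(2.0 * EPS_SI * EPS0 * v_bias / (Q * n_eff))
    return d * 1e6           # m -> um
```

The square-root dependence means quadrupling the bias only doubles the depth, which is why substrate resistivity matters as much as the biasing scheme.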
