11

Etudes de bruit du fond dans le canal H→ZZ*→4l pour le Run 1 du LHC. Perspectives du mode bbH(→γγ) et études d'un système de détecteur pixel amélioré pour la mise à niveau de l'expérience ATLAS pour la phase HL-LHC / Background studies on the H→ZZ→4l channel for LHC Run 1. Prospects of the bbH(→γγ) mode and studies for an improved pixel detector system for the ATLAS upgrade towards HL-LHC

Gkougkousis, Evangelos 04 February 2016
The discovery of a scalar boson, known as the Higgs boson, marked the first LHC data-taking period (2010-2012). Using mainly the di-photon and di-Z decays, the latter leading to a four-lepton final state, the mass of the boson was measured with a precision of 0.2%. The relevant couplings were estimated by combining several final states, and the corresponding uncertainties will benefit greatly from the increased statistics expected during the next LHC data-taking periods (Run 2, Phase 2). The H→ZZ*→4l channel, in spite of its suppressed branching ratio, benefits from a low background, which makes it a prime choice for investigating the properties of the new boson. In this thesis, the analysis aimed at the observation of this mode with the ATLAS detector is presented, with a particular focus on the measurement and control of the reducible background involving electrons. In the context of the preparation for the future high-luminosity data-taking periods foreseen from 2025 onwards, two distinct studies are conducted. The first concerns the observability of the Higgs boson production mode in association with two b-quarks: a multivariate analysis based on simulated data confirms a very weak expected signal in the H→γγ channel. The second concerns the design and development of an inner silicon detector capable of operating in the hostile environment of high radiation and high occupancy expected during LHC Phase 2. The main studies concentrated on optimising the geometry and improving detection efficiency and radiation hardness. Through fabrication-process simulations and SIMS measurements, the doping profiles and electrical characteristics expected for innovative technologies are explored. Prototypes were tested in beam tests and irradiation experiments in order to assess the performance of the detector and of its associated read-out electronics.
12

DATA COMPRESSION SYSTEM FOR VIDEO IMAGES

RAJYALAKSHMI, P.S., RAJANGAM, R.K. October 1986
International Telemetering Conference Proceedings / October 13-16, 1986 / Riviera Hotel, Las Vegas, Nevada / In most transmission channels bandwidth is at a premium, and an important attribute of any good digital signalling scheme is to use that bandwidth optimally when transmitting information. A data compression system therefore plays a significant role in the transmission of picture data from a remote sensing satellite, exploiting the statistical properties of the imagery: the data rate required for transmission to the ground can be reduced by using a suitable compression technique. A data compression algorithm has been developed for processing the images of the Indian Remote Sensing Satellite. Sample LANDSAT imagery and a reference photograph are used to evaluate the performance of the system. Reconstructed images are obtained after compression to 1.5 bits per pixel and 2 bits per pixel, against the original 7 bits per pixel. The technique used is a one-dimensional Hadamard transform, and histograms are computed for the various pictures used as samples. This paper describes the development of the hardware and software system and indicates how the hardware can be adapted to a two-dimensional Hadamard transform.
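For illustration, the sketch below shows the core of a block-wise one-dimensional Walsh–Hadamard transform coder in Python. The 8-pixel block size, the number of retained coefficients, and the NumPy/SciPy implementation are assumptions made for clarity; they are not details of the paper's hardware and software system.

```python
# Hedged sketch (not the authors' implementation): a 1-D Walsh-Hadamard
# transform applied to 8-pixel row blocks, keeping only the largest
# coefficients to mimic bit-rate reduction from 7 bits/pixel.
import numpy as np
from scipy.linalg import hadamard

BLOCK = 8
H = hadamard(BLOCK).astype(np.float64)   # symmetric, H @ H = BLOCK * I

def compress_rows(image: np.ndarray, keep: int = 2) -> np.ndarray:
    """Transform each 8-pixel block of every row, zero all but `keep`
    coefficients of largest magnitude, and reconstruct."""
    h, w = image.shape
    w_pad = (w + BLOCK - 1) // BLOCK * BLOCK
    padded = np.zeros((h, w_pad))
    padded[:, :w] = image
    blocks = padded.reshape(h, -1, BLOCK)            # (rows, n_blocks, 8)
    coeffs = blocks @ H.T / BLOCK                    # forward 1-D WHT
    # zero all but the `keep` largest-magnitude coefficients in each block
    idx = np.argsort(np.abs(coeffs), axis=-1)[..., :-keep]
    np.put_along_axis(coeffs, idx, 0.0, axis=-1)
    recon = coeffs @ H                               # inverse 1-D WHT
    return recon.reshape(h, w_pad)[:, :w]

# Toy usage with a synthetic 7-bit image (values 0..127)
img = np.random.randint(0, 128, size=(64, 64)).astype(np.float64)
out = compress_rows(img, keep=2)
print("mean absolute reconstruction error:", np.abs(img - out).mean())
```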
13

Hur adapteras komedi? : En kvalitativ adaptionsanalys av humorns förändring i moderna studiokomedier / How is comedy adapted? : A qualitative adaptation analysis of how humour changes in modern studio comedies

Byström, Marcus, Sagvold, Pontus January 2020
The purpose of this essay is to examine how American studio comedies adapt comedy from screenplay to film. Because humour is subjective and people find different things funny, it is difficult to adapt a comedy screenplay into a comedy film. Our essay examines how the three films Baksmällan (The Hangover, 2009), Horrible Bosses (2011) and Pixels (2015) were adapted. We do this from a humour perspective with the help of Geoff King's comedy theory of incongruity and other related theory. The comedy is then examined within three categories: jokes through dialogue, visual comedy and situational comedy. The results showed that the adaptation of Baksmällan was the most faithful to the comic intentions of its screenplay, whereas Horrible Bosses and Pixels deviated more often. In the discussion we conclude that this may be because the director of Baksmällan took part in the screenwriting process, while the directors of Horrible Bosses and Pixels did not. This points to the difficulty of adapting humour, since people have their own subjective opinions about what is funny.
14

Efficient and Consistent Convolutional Neural Networks for Computer Vision

Caleb Tung (16649301) 27 July 2023
Convolutional Neural Networks (CNNs) are machine learning models that are commonly used for computer vision tasks like image classification and object detection. State-of-the-art CNNs achieve high accuracy by using many convolutional filters to extract features from the input images for correct predictions. This high accuracy comes at the cost of high computational intensity: large, accurate CNNs typically require powerful Graphics Processing Units (GPUs) to train and deploy, while attempts to create smaller, less computationally intense CNNs lose accuracy. In fact, maintaining consistent accuracy is a challenge even for state-of-the-art CNNs. This presents a problem: the vast energy expenditure demanded by CNN training raises concerns about environmental impact and sustainability, while the computational intensity of CNN inference makes it challenging for low-power devices (e.g. embedded, mobile, Internet-of-Things) to deploy CNNs on their limited hardware. Further, when reliable network access is limited or when extremely low latency is required, the cloud cannot be used to offload computing from the low-power device, forcing a need to research methods to deploy CNNs on the device itself: to improve energy efficiency and to mitigate the consistency and accuracy losses of CNNs.

This dissertation investigates the causes of CNN accuracy inconsistency and energy consumption. We further propose methods to improve both, enabling CNN deployment on low-power devices. Our methods do not require training, avoiding the high energy costs associated with training.

To address accuracy inconsistency, we first design a new metric to properly capture such behavior. We conduct a study of modern object detectors and find that they all exhibit inconsistent behavior: when two images are similar, an object detector can sometimes produce completely different predictions. Malicious actors exploit this to cause CNNs to mispredict, and image distortions caused by camera equipment and natural phenomena can also cause mispredictions. Regardless of the cause of the misprediction, we find that modern accuracy metrics do not capture this behavior, and we create a new consistency metric to measure it. Finally, we demonstrate the use of image processing techniques to improve CNN consistency on modern object detection datasets.

To improve CNN energy efficiency and reduce inference latency, we design the focused convolution operation. We observe that in a given image, many pixels are often irrelevant to the computer vision task -- if those pixels are deleted, the CNN can still give the correct prediction. We design a method that uses a depth-mapping neural network to identify which pixels are irrelevant in modern computer vision datasets. Next, we design the focused convolution to automatically ignore any pixels marked irrelevant outside the Area of Interest (AoI). By replacing the standard convolution operations in CNNs with our focused convolutions, we find that ignoring those irrelevant pixels can save up to 45% of energy and inference latency.

Finally, we improve the focused convolutions, allowing for (1) energy-efficient, automated AoI generation within the CNN itself and (2) improved memory alignment and better utilization of parallel processing hardware. The original focused convolution required AoI generation in advance, using a computationally intense depth-mapping method. Our AoI generation technique automatically filters the features from the early layers of a CNN using a threshold, determined by an accuracy-versus-latency curve search. The remaining layers apply focused convolutions to the AoI to reduce energy use. This allows focused convolutions to be deployed within any pretrained CNN for the various observed use cases, with no training required.
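As a rough illustration of the focused-convolution idea, the hypothetical single-channel NumPy sketch below evaluates the convolution only at output positions inside a binary Area-of-Interest mask; it shows the masking logic only, not the dissertation's memory-aligned implementation inside a CNN.

```python
# Hedged illustration (not the dissertation's code): a "focused" 2-D
# convolution that only computes outputs at positions inside a binary
# Area-of-Interest (AoI) mask; all other outputs stay at zero, which is
# where the compute/energy saving would come from.
import numpy as np

def focused_conv2d(image, kernel, aoi):
    """image: (H, W), kernel: (k, k), aoi: (H, W) boolean mask."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(image, pad)
    out = np.zeros_like(image, dtype=np.float64)
    ys, xs = np.nonzero(aoi)                 # only positions inside the AoI
    for y, x in zip(ys, xs):
        patch = padded[y:y + k, x:x + k]
        out[y, x] = np.sum(patch * kernel)
    return out

# Toy usage: a 3x3 mean filter applied only to the central quarter of the image
img = np.random.rand(32, 32)
aoi = np.zeros((32, 32), dtype=bool)
aoi[8:24, 8:24] = True
kernel = np.full((3, 3), 1.0 / 9.0)
result = focused_conv2d(img, kernel, aoi)
```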
15

Algorithm Oriented to the Detection of the Level of Blood Filling in Venipuncture Tubes Based on Digital Image Processing

Castillo, Jorge, Apfata, Nelson, Kemper, Guillermo 01 January 2021
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / This article proposes an algorithm for detecting the blood fill level in venipuncture tubes, with millimetre-level detection capability. The objective of the software is to detect the amount of blood stored in the venipuncture tube, avoiding coagulation problems caused by excess fluid as well as blood levels below those required for the type of analysis to be performed. The algorithm acquires images from a camera positioned in a rectangular structure inside an enclosure with its own internal lighting, which ensures adequate segmentation of the pixels in the region of interest. The algorithm consists of an image enhancement stage based on gamma correction, followed by a segmentation stage for the pixel area of interest based on thresholding in the HSI colour model, filtering to accentuate the contrast between the fill level and the staining, and, as the penultimate stage, localisation of the fill level from changes in the vertical tonality of the image. Finally, the level of blood contained in the tube is obtained from the number of pixels spanning the vertical extent of the tube filling, which is then converted to physical dimensions expressed in millimetres. Validation results show an average percentage error of 0.96% for the proposed algorithm. / Peer reviewed
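A minimal sketch of this kind of pipeline is shown below, assuming an OpenCV/NumPy environment. The gamma value, the HSV thresholds standing in for the HSI model, and the millimetres-per-pixel calibration are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the kind of pipeline the abstract describes; thresholds,
# gamma and the mm-per-pixel factor are assumptions for illustration only.
import cv2
import numpy as np

MM_PER_PIXEL = 0.25        # assumed calibration of the fixed camera setup
GAMMA = 1.5                # assumed gamma-correction factor

def blood_level_mm(bgr_image: np.ndarray) -> float:
    # 1. Gamma correction to enhance contrast
    lut = np.array([(i / 255.0) ** (1.0 / GAMMA) * 255 for i in range(256)],
                   dtype=np.uint8)
    corrected = cv2.LUT(bgr_image, lut)

    # 2. Segment red-ish pixels in HSV space (stand-in for the HSI model)
    hsv = cv2.cvtColor(corrected, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 80, 40), (12, 255, 255)) | \
           cv2.inRange(hsv, (168, 80, 40), (180, 255, 255))

    # 3. Fill level = vertical extent of rows containing blood pixels
    rows = np.where(mask.any(axis=1))[0]
    if rows.size == 0:
        return 0.0
    filled_rows = rows[-1] - rows[0] + 1

    # 4. Convert pixel height to millimetres
    return filled_rows * MM_PER_PIXEL

# Toy usage: bottom 40 rows filled with a blood-like red (BGR)
frame = np.full((120, 60, 3), 255, dtype=np.uint8)
frame[80:, :] = (40, 30, 180)
print(blood_level_mm(frame))   # expected around 40 px * 0.25 mm/px = 10.0 mm
```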
16

Análise geoestatística multi-pontos / Analysis of multiple-point geostatistics

Cruz Rodriguez, Joan Neylo da 12 June 2013
Estimation and simulation based on two-point statistics have been used in geostatistical analysis since the 1960s. These methods depend on the spatial correlation model derived from the well-known semivariogram function. However, the semivariogram function cannot describe the geological heterogeneity found in mineral deposits and oil reservoirs. Thus, instead of using two-point statistics, multiple-point geostatistics, based on multiple-point probability distributions, has been considered a reliable alternative for describing geological heterogeneity. In this thesis, the multiple-point algorithm is revisited and a new solution is proposed. This solution is much better than the original one because it avoids falling back on marginal probabilities when a never-occurring event is found in a template. Moreover, for each realization the uncertainty zone is highlighted. A synthetic database was generated and used as the training image. From this exhaustive data set, a sample of 25 points was drawn. Results show that the proposed approach provides more reliable realizations with smaller uncertainty zones.
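To make the multiple-point idea concrete, here is a deliberately small, hypothetical Python sketch: a 4-neighbour template on a binary training image, far simpler than the templates used in the thesis. The fallback to the marginal proportion reproduces the "never occurring event" behaviour of the original algorithm that the proposed solution is designed to avoid.

```python
# Minimal, hedged sketch of the multiple-point idea: scan a binary training
# image with a small template, store the frequency of the central value for
# each neighbourhood pattern, and draw the simulated value from that
# conditional distribution. Unseen patterns fall back to the marginal.
import numpy as np
from collections import defaultdict

OFFSETS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # a simple 4-neighbour template

def build_pattern_stats(training_image):
    stats = defaultdict(lambda: np.zeros(2))
    h, w = training_image.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            pattern = tuple(training_image[y + dy, x + dx] for dy, dx in OFFSETS)
            stats[pattern][training_image[y, x]] += 1
    return stats

def simulate_cell(pattern, stats, marginal, rng):
    counts = stats.get(pattern)
    if counts is None or counts.sum() == 0:
        p1 = marginal          # unseen data event: fall back to the marginal
    else:
        p1 = counts[1] / counts.sum()
    return int(rng.random() < p1)

# Toy usage with a synthetic binary training image
rng = np.random.default_rng(0)
ti = (rng.random((50, 50)) < 0.3).astype(int)
stats = build_pattern_stats(ti)
marginal = ti.mean()
print(simulate_cell((0, 1, 0, 1), stats, marginal, rng))
```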
18

Advanced spectral unmixing and classification methods for hyperspectral remote sensing data / Source separation in hyperspectral imagery

Villa, Alberto 29 July 2011
The thesis presents new techniques for the classification and unmixing of hyperspectral remote sensing data. The main issues connected with this kind of data (in particular the very high dimensionality and the presence of mixed pixels) have been considered, and new advanced techniques have been proposed to address them. In a first part, new classification methods based on traditional dimensionality reduction methods (such as Independent Component Analysis, ICA) and on the integration of spatial and spectral information are proposed. In a second part, methods based on spectral unmixing are used jointly with them to improve the results obtained with classical methods; these methods make it possible to improve the spatial resolution of the classification maps thanks to the sub-pixel information they exploit. The main steps of the work are the following:
- Introduction and survey of the data; base assessment: in order to improve the classification of hyperspectral images, data-related problems must be considered (very high dimensionality, presence of mixed pixels).
- Development of advanced classification methods making use of classic dimensionality reduction techniques (Independent Component Discriminant Analysis).
- Proposition of classification methods exploiting the different kinds of contextual information typical of hyperspectral imagery.
- Study of spectral unmixing techniques, in order to propose new feature extraction methods exploiting sub-pixel information.
- Joint use of traditional classification methods and unmixing techniques in order to obtain land cover classification maps at a finer resolution.
The different methods proposed have been tested on several real hyperspectral data sets, showing results that are comparable to or better than most methods recently proposed in the literature.
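As a purely illustrative sketch of one of these building blocks (ICA-based dimensionality reduction followed by a per-pixel supervised classifier), the Python fragment below uses scikit-learn; the synthetic cube, the number of components and the SVM classifier are placeholders, not the configurations studied in the thesis.

```python
# Hedged sketch: reduce each pixel's spectrum with FastICA, then classify
# pixels with a supervised classifier trained on the labelled subset.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def classify_hyperspectral(cube: np.ndarray, labels: np.ndarray,
                           n_components: int = 10):
    """cube: (H, W, B) hyperspectral image; labels: (H, W), 0 = unlabelled."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)
    reduced = FastICA(n_components=n_components,
                      random_state=0).fit_transform(pixels)
    train = labels.reshape(-1) > 0
    clf = SVC(kernel="rbf").fit(reduced[train], labels.reshape(-1)[train])
    return clf.predict(reduced).reshape(h, w)

# Toy usage with a synthetic 20-band cube and two labelled regions
rng = np.random.default_rng(0)
cube = rng.random((30, 30, 20))
labels = np.zeros((30, 30), dtype=int)
labels[:5, :5] = 1
labels[-5:, -5:] = 2
classmap = classify_hyperspectral(cube, labels)
```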
19

Tomographie hybride simultanée TEP/TDM combinant détecteurs à pixels hybrides et modules phoswich à scintillateurs / Development of a simultaneous PET/CT scanner for mice combining hybrid pixels detectors and phoswich scintillators modules

Hamonet, Margaux 19 April 2016
The combination of Positron Emission Tomography (PET) and X-ray Computed Tomography (CT) in bimodal PET/CT scanners has been an essential line of research in imaging over the past twenty years and has led to a rapid expansion of this technique in clinical and preclinical settings. However, even if the two modalities can be juxtaposed on the same gantry, the original concept of David W. Townsend foresaw acquiring the PET and CT data simultaneously from the same field of view. This would make it possible to know the exact position of the subject during the scan, and hence potentially to correct for its movements. With this in mind we developed the ClearPET/XPAD prototype: the first simultaneous PET/CT scanner for mice, combining on the same gantry the phoswich scintillator modules of the ClearPET prototype and the hybrid pixel detector XPAD3, developed at CPPM, facing an X-ray source. I present the first simultaneous PET/CT images of point sources, of the Derenzo phantom and of a living mouse acquired with the ClearPET/XPAD prototype, and discuss the limitations and prospects of simultaneous hybrid PET/CT imaging.
20

Développement de la tomographie intra-vitale au K-edge avec la camera à pixels hybrides XPAD3 / Development of K-edge in vivo tomography with the hybrid pixel camera XPAD3

Kronland-Martinet, Carine 19 March 2015
The hybrid pixel camera XPAD3, integrated in the PIXSCAN II micro-CT, is a new device in which photon counting replaces the charge integration used in standard X-ray imaging systems. The XPAD3 brings several advantages, in particular the absence of dark-current noise and the ability to set a discrimination threshold on each pixel. These features were exploited during this thesis for standard preclinical small-animal imaging and made it possible to demonstrate the feasibility of ex vivo, and then intra-vital, labelling of macrophages. The capabilities of this camera are also of great interest for the development of a new spectral imaging method known as K-edge imaging, which makes it possible to distinguish compartments containing a contrast agent from bone in standard radiographs. K-edge imaging is obtained by calibrating three different thresholds around the K-shell binding energy of the contrast agent of interest and performing a subtraction analysis of the images acquired above and below the K-edge energy. The development of a new calibration scheme using composite pixels provided the proof of concept of a patent and yielded the first results on living mice, dividing the acquisition time by three at the cost of a compromise on spatial resolution. This new approach can be implemented in "two colours" in order to clearly separate two different contrast agents. This offers a new way to visualise relevant biological information, in an application context aiming at the dynamic (longitudinal) study of the interdependence of vascularisation and the immune response during tumour development.
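A toy, hypothetical illustration of the K-edge subtraction step (not the XPAD3 calibration procedure itself) is sketched below: with photon-count images recorded in energy bins just below and just above the contrast agent's K-edge, the log-normalised difference enhances the agent while bone and soft tissue largely cancel.

```python
# Hedged, toy illustration of K-edge subtraction with synthetic count images.
import numpy as np

def k_edge_subtraction(counts_below: np.ndarray, counts_above: np.ndarray,
                       flat_below: float, flat_above: float) -> np.ndarray:
    """counts_*: photon-count images in the two energy bins;
    flat_*: open-beam (no object) counts used for normalisation."""
    att_below = -np.log(np.clip(counts_below, 1, None) / flat_below)
    att_above = -np.log(np.clip(counts_above, 1, None) / flat_above)
    # Above the K-edge the agent's attenuation jumps while bone varies
    # smoothly, so the difference highlights contrast-agent compartments.
    return att_above - att_below

# Toy usage with synthetic Poisson count images
rng = np.random.default_rng(1)
below = rng.poisson(900, size=(64, 64)).astype(float)
above = rng.poisson(700, size=(64, 64)).astype(float)
agent_map = k_edge_subtraction(below, above, flat_below=1000, flat_above=1000)
```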
