241 |
Caractérisation d’un champ de radiation avec Timepix3. Boussa, Miloud Mohamed Mahdi, 05 1900 (has links)
The Timepix3, successor to the Timepix, is a silicon detector composed of two sensitive
layers mounted in parallel. Each layer has a matrix of 65,536 pixels (256×256) and a
thickness of 500 μm. One of the improvements of the Timepix3 compared to previous
generations is that it is possible to simultaneously collect the quantity of charge deposited
as well as the time of arrival of this charge. For the LHC Run 3 data taking, which
started in 2022, 16 Timepix3 detectors were installed in the ATLAS detector cavern. The
Timepix3 will be used to measure the luminosity of the LHC beam as well as to characterize
and measure the radiation in the ATLAS cavern, where many electronic components are
installed. The purpose of this master's thesis is to develop an algorithm for identifying
particles that strike the Timepix3 detector.
First, information on the amount of energy deposited and the time of arrival will be
used to characterize a field of particles incident on the Timepix3 detector (electrons,
photons, heavy charged particles). The new method consists of using the physical
parameters of the particles' interactions with the medium: the trajectory, the angle of
incidence, the energy deposition, the spatial density of the cluster, and the energy
density along the trajectory of the incident particle.
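As an editorial illustration of such cluster-shape observables, the sketch below computes a few features from a toy three-pixel cluster. The feature names, the 55 μm pixel pitch, and the cluster itself are assumptions for illustration; this does not reproduce the thesis's actual algorithm.

```python
import numpy as np

def cluster_features(pixels, thickness_um=500.0):
    """Compute simple shape/energy features of a Timepix3 pixel cluster.

    pixels: array of (col, row, energy_keV) triples for one cluster.
    Returns a dict of illustrative features (not the thesis's exact set).
    """
    xy = pixels[:, :2]
    e = pixels[:, 2]
    size = len(pixels)                          # number of fired pixels
    total_e = e.sum()                           # total deposited energy
    # Projected track length: longest pixel-to-pixel distance (55 um pitch assumed)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    length_px = d.max()
    length_um = length_px * 55.0
    # Incidence angle from projected track length and sensor thickness
    angle_deg = np.degrees(np.arctan2(length_um, thickness_um))
    return {
        "size": size,
        "total_energy_keV": float(total_e),
        "energy_density": float(total_e / size),   # keV per fired pixel
        "track_length_px": float(length_px),
        "incidence_angle_deg": float(angle_deg),
    }

# Toy cluster: a short, dense track such as a stopping proton might leave
cluster = np.array([[10, 10, 900.0], [11, 10, 750.0], [12, 10, 400.0]])
print(cluster_features(cluster))
```

Features like these could then feed any classifier that separates electrons, photons, and heavy charged particles by cluster shape and energy density.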
Second, since delta rays are a recurring and undesirable effect that disturbs data
analysis in particle physics, this thesis addresses how to suppress them so that only
the energy deposited directly by the incident particle is collected. The statistics of
delta-ray production as a flux of particles passes through the detector are also used
to determine the particles' kinetic energy.
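As a toy illustration of this second idea, one can invert a monotonic model of delta-ray yield per unit path length versus kinetic energy. The model below is purely illustrative (an arbitrary 1/β² shape with a made-up constant, not the thesis's cross-sections); only the monotonic relationship matters for the inversion.

```python
def delta_yield(e_kin_mev, k=0.5):
    """Illustrative model: delta-ray yield per mm falls roughly like 1/beta^2.

    The constant k is arbitrary; only the decreasing shape matters here.
    """
    m_p = 938.272                      # proton rest mass, MeV/c^2
    gamma = 1.0 + e_kin_mev / m_p
    beta2 = 1.0 - 1.0 / gamma**2
    return k / beta2

def estimate_energy(measured_yield, lo=10.0, hi=5000.0):
    """Recover the kinetic energy by bisecting the (decreasing) yield model."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if delta_yield(mid) > measured_yield:
            lo = mid                   # yield too high -> energy too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

true_e = 250.0                         # MeV, hypothetical beam energy
print(estimate_energy(delta_yield(true_e)))  # ≈ 250.0
```

With real data, the measured yield would come from counting delta-ray branches along reconstructed tracks, and the model from the proper production cross-sections.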
The algorithm developed to characterize a particle field with the Timepix3 was confronted
with data obtained with a proton cyclotron at Aahrus in Denmark. We have
obtained satisfactory results, given that the majority of the particles are identified as
protons and that we have succeeded in determining the kinetic energy of these protons
which is close to the kinetic energy of the proton beam used.
|
242 |
Photogrammetry and image processing techniques for beach monitoring. Sánchez García, Elena, 07 December 2019 (has links)
Thesis by compendium / [EN] Beaches are extremely valuable ecological spaces where terrestrial and marine environments converge along a fragile transition strip. An improvement in our understanding of the physical processes that occur in the coastal zone has become increasingly important during the last century. To approach a coherent planning of coastal management it is necessary to consider the dynamism of the various morphological changes that characterize these environments at different spatial and temporal scales.
The land-water boundary varies according to the sea level and the shape of a beach profile that is continuously shaped by incident waves. Attempting to model the response of a landscape as geomorphologically volatile as a beach requires multiple precise measurements to recognize its response to the actions of various geomorphic agents. It is therefore essential to have monitoring systems capable of systematically recording the shoreline accurately and effectively. New methods and software tools are required to efficiently capture, characterize, and analyze this information, and so obtain geomorphologically significant indicators of high quality. This is the aim of the present doctoral thesis, which focuses on the development of efficient tools and procedures for coastal monitoring using satellite images and terrestrial photographs.
The work brings satellite image processing and photogrammetric solutions to scientists, engineers, and coastal managers by providing results that demonstrate the usefulness of these viable and low-cost techniques. Existing and freely accessible public information (satellite images, video-derived data, or crowd-sourced photographs) can be converted into high-quality data for monitoring morphological changes on beaches and thus help achieve a sustainable management of coastal resources. / The author thanks the Spanish Ministry of Education, Culture and Sport for the FPU predoctoral grant and the mobility grants that made this doctoral thesis possible, as well as projects AICO/2015/098 and CGL2015-69906-R, funded respectively by the Generalitat Valenciana and the Ministry of Economy and Competitiveness. / Sánchez García, E. (2019). Photogrammetry and image processing techniques for beach monitoring [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123956 / Compendio
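One common low-cost building block for satellite-based shoreline monitoring is a water index such as NDWI followed by thresholding. The sketch below is a generic illustration only: the band choice, threshold, and toy scene are assumptions, not the thesis's specific sub-pixel shoreline method.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: positive over water, negative over land."""
    g = green.astype(float)
    n = nir.astype(float)
    return (g - n) / (g + n + 1e-12)   # small epsilon avoids division by zero

def shoreline_mask(green, nir, threshold=0.0):
    """Binary water mask; the shoreline is the water/land boundary of this mask."""
    return ndwi(green, nir) > threshold

# Tiny synthetic scene: left half "land" (bright NIR), right half "water"
green = np.array([[30, 30, 60, 60]] * 4)
nir = np.array([[80, 80, 10, 10]] * 4)
print(shoreline_mask(green, nir))   # False over land (left), True over water (right)
```

Sub-pixel approaches refine this boundary below the pixel grid, which matters most at the coarse resolutions of free satellite imagery.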
|
243 |
[pt] ESTRATÉGIAS PARA OTIMIZAR PROCESSOS DE ANOTAÇÃO E GERAÇÃO DE DATASETS DE SEGMENTAÇÃO SEMÂNTICA EM IMAGENS DE MAMOGRAFIA / [en] STRATEGIES TO OPTIMIZE ANNOTATION PROCESSES AND GENERATION OF SEMANTIC SEGMENTATION DATASETS IN MAMMOGRAPHY IMAGES. BRUNO YUSUKE KITABAYASHI, 17 November 2022 (has links)
[en] With the recent advancement of the use of supervised deep learning in
applications in the field of computer vision, industry and the academic
community have shown that one of the main difficulties for the success
of these applications is the lack of datasets with a sufficient amount of
annotated data. In this sense, there is a need to leverage large amounts of
labeled data so that these intelligent models can solve problems relevant to
their context to achieve the desired results. The use of techniques to generate
annotated data more efficiently is being increasingly explored, together with
techniques to support the generation of datasets that serve as inputs for the
training of artificial intelligence models. This work aims to propose strategies
to optimize annotation processes and generation of semantic segmentation
datasets. Among the approaches used in this work, we highlight Interactive
Segmentation and Active Learning. The first aims to make the data annotation
process more efficient and effective for the annotator or specialist responsible
for labeling, by using a semantic segmentation model that learns to imitate the
annotations made by the annotator. The second is an approach for consolidating
a deep learning model using an intelligent criterion: an acquisition function
based on the network's uncertainty estimates selects the most informative
unannotated data for training. To apply and validate the results of both
techniques, the work incorporates them in a use case on mammography images for
segmentation of anatomical structures.
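The uncertainty-based acquisition function described above can be sketched as follows. This is a minimal illustration using predictive entropy as the uncertainty measure; the thesis's actual acquisition function and uncertainty estimator may differ.

```python
import numpy as np

def entropy_acquisition(probs, k):
    """Select the k most informative unlabeled samples by predictive entropy.

    probs: (n_samples, n_classes) softmax outputs of the current model.
    Returns indices of the k highest-entropy (most uncertain) samples.
    """
    p = np.clip(probs, 1e-12, 1.0)             # guard against log(0)
    entropy = -(p * np.log(p)).sum(axis=1)     # predictive entropy per sample
    return np.argsort(entropy)[::-1][:k]       # most uncertain first

# Three unlabeled samples: confident, maximally uncertain, mildly uncertain
probs = np.array([[0.98, 0.01, 0.01],
                  [1/3, 1/3, 1/3],
                  [0.60, 0.30, 0.10]])
print(entropy_acquisition(probs, k=2))         # prints [1 2]
```

In an active-learning loop, the selected samples would be sent to the annotator, labeled, added to the training set, and the model retrained before the next acquisition round.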
|
244 |
Mätning av LCD-bildskärmars responstid och latens : Measurement of LCD displays' response time and input lag. Mikkelsen, Markus, Svanfors, Gustav, January 2013 (has links)
The thesis was performed in collaboration with the company LVI (Low Vision International), which manufactures electronic devices for the visually impaired. LVI evaluates new LCD displays for its products at regular intervals and needs methods and equipment for measuring displays' response time and input lag. Both response time and input lag cause delays, which result in such things as image blur, ghosting after moving objects, or a delay between sound and image. This work carries out a thorough preliminary study of LCD display response time and input lag and provides a basis for constructing or buying measurement equipment for response time measurements. The preliminary study identifies the standardized "grey-to-grey" method that LVI can use to measure response time. A measurement circuit was constructed to measure response time, and a dedicated unit for input lag measurement was ordered. To evaluate the measurement methods, a number of tests were conducted with the response time circuit and the dedicated input lag unit. The measurements showed that the "grey-to-grey" method is the one LVI shall use, but it needs further development. The dedicated input lag unit turned out to measure a portion of the response time and should therefore only be used as a complement to the response time measurement when comparing displays. The thesis delivers to the company LVI a preliminary study of LCD display response times and input lag, a further developed version of the "grey-to-grey" method, measurement equipment for response time, and the dedicated input lag measurement unit.
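Grey-to-grey response time is conventionally reported as the 10%-90% transition time of the measured luminance between two grey levels. The sketch below illustrates that computation, assuming a uniformly sampled photodiode trace and a rising transition; the sample values are invented.

```python
def response_time_ms(trace, sample_rate_hz):
    """10%-90% grey-to-grey transition time from a luminance trace.

    trace: sampled photodiode readings during one grey-to-grey switch.
    Returns the time (ms) spent between 10% and 90% of the total swing.
    """
    lo, hi = trace[0], trace[-1]
    level10 = lo + 0.10 * (hi - lo)
    level90 = lo + 0.90 * (hi - lo)
    # First sample at/past 10%, first sample at/past 90% (rising transition)
    i10 = next(i for i, v in enumerate(trace) if v >= level10)
    i90 = next(i for i, v in enumerate(trace) if v >= level90)
    return (i90 - i10) / sample_rate_hz * 1000.0

# Synthetic 0 -> 100 transition sampled at 10 kHz (0.1 ms per sample)
trace = [0, 0, 5, 20, 40, 60, 80, 95, 100, 100]
print(response_time_ms(trace, 10_000))   # time between the 10% and 90% crossings, in ms
```

A real measurement circuit would feed such a trace from a photodiode pointed at the panel while the test pattern switches between two grey levels.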
|
245 |
Development of a CMOS pixel sensor for the outer layers of the ILC vertex detector. Zhang, Liang, 30 September 2013 (has links) (PDF)
This work deals with the design of a CMOS pixel sensor prototype (called MIMOSA 31) for the outer layers of the International Linear Collider (ILC) vertex detector. CMOS pixel sensors (CPS), also called monolithic active pixel sensors (MAPS), have demonstrated attractive performance with respect to the requirements of the vertex detector of the future linear collider. MIMOSA 31, developed at IPHC-Strasbourg, is the first pixel sensor integrated with a 4-bit column-level ADC for the outer layers. It is composed of a matrix of 64 rows and 48 columns. The pixel concept combines in-pixel amplification with a correlated double sampling (CDS) operation in order to reduce the temporal and fixed pattern noise (FPN). At the bottom of the pixel array, each column is terminated with an analog-to-digital converter (ADC). The self-triggered ADC, which accommodates the pixel readout in rolling shutter mode, completes the conversion by performing a multi-bit/step approximation. The ADC design was optimized for power saving at the sampling frequency. Given that the hit density in the outer layers of the ILC vertex detector is on the order of a few per thousand, the ADC works in two modes: active mode and inactive mode. This thesis presents the details of the prototype chip and its laboratory test results.
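The correlated double sampling (CDS) operation mentioned above subtracts a reset-level sample from a signal-level sample of each pixel, cancelling the static per-pixel offsets that cause fixed-pattern noise. A minimal numeric sketch (toy values, not the chip's actual readout chain):

```python
import numpy as np

rng = np.random.default_rng(42)

# Static per-pixel offsets: the source of fixed-pattern noise
offsets = rng.normal(0.0, 5.0, size=(4, 4))
signal = np.full((4, 4), 100.0)          # true charge signal, identical pixels

sample_reset = offsets                    # sample 1: just after pixel reset
sample_signal = offsets + signal          # sample 2: after charge integration

cds = sample_signal - sample_reset        # CDS output: static offsets cancel exactly
print(cds)
```

Because the offset appears in both samples, it cancels in the difference; only noise that changes between the two samples (temporal noise) survives, which is why CDS is effective against fixed-pattern noise in particular.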
|
246 |
Study and improvement of radiation hard monolithic active pixel sensors for charged particle tracking / Etude et amélioration de capteurs monolithiques actifs à pixels résistants aux rayonnements pour reconstruire la trajectoire des particules chargées. Wei, Xiaomin, 18 December 2012 (has links)
Monolithic Active Pixel Sensors (MAPS) are good candidates to be used in High Energy Physics (HEP) experiments for charged particle detection. In HEP applications, MAPS chips are placed very close to the interaction point and are directly exposed to harsh environmental radiation.
This thesis focuses on the study and improvement of MAPS radiation hardness. The main radiation effects and the research progress of MAPS are studied first. During this study, the SRAM IP cores built into MAPS were found to limit the radiation hardness of the whole chip. Consequently, in order to improve the radiation hardness of MAPS, three radiation hard memories are designed and evaluated for HEP experiments. To replace the SRAM IP cores, a radiation hard SRAM is developed on a very limited area. For smaller feature size processes, in which single event upset (SEU) effects become significant, a radiation hard SRAM with enhanced SEU tolerance is implemented using an error detection and correction (EDAC) algorithm and bit-interleaved storage. To obtain higher radiation tolerance and higher circuitry density, a dual-port memory with an original 2-transistor cell is developed and evaluated for future MAPS chips. Finally, the radiation hardness of MAPS chips using newly available processes is studied, and future work is outlined.
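The EDAC idea can be illustrated with a classic single-error-correcting Hamming(7,4) code; this is a generic textbook code used here for illustration, not necessarily the one implemented in the chip.

```python
def hamming74_encode(d):
    """Encode 4 data bits as 7 bits with 3 parity bits (positions 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword bit positions 1..7

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = c[:]                               # don't mutate the caller's word
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]         # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]         # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]         # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3        # 1-based index of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1               # correct the single-event upset
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                               # simulate an SEU flipping one bit
print(hamming74_decode(code))              # prints [1, 0, 1, 1]
```

Bit-interleaved storage complements such a code: physically adjacent cells belong to different logical words, so a single particle strike that upsets several neighbouring cells appears as a correctable single-bit error in each of several words rather than an uncorrectable multi-bit error in one.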
|
247 |
Do frame à pintura. Haddad, João Müller, 05 October 2011 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The study starts from the encounter, in contemporary art, between analog and digital media made possible by technological means. It deals, more specifically, with the materiality and immateriality of the languages of painting and video, drawing on the production of artists from the contemporary scene, Romanita Disconzi, and on my own poetic work. The poetic investigation concerns the transposition to painting of frames, instants, the minimum divisible parts of scenes of human bodies in motion, from the movie Cidade de Deus. The images, transposed into contemporary painting through the technique of acrylic painting on canvas and alternative surfaces, materialize frames of movie scenes and reveal a gaze changed by electronic media. The study also discusses the material record of the smallest fraction of video, digital imaging, and painting (the frame, the pixel, and the brushstroke) as the smallest fragment that still allows the differentiated processes of observation to be identified. In this sense, the final result of the pictorial body, pervaded by the notion of hybridization as used by Couchot, leads us to the blind spot of the video, where the video sees nothing, and to the subjective field of the temporality of movie scenes
|
248 |
Textile Engineering ›SurFace‹: Oberflächenentwurf von der taktilen zur grafischen zur taktilen Erfahrbarkeit im Design Engineering der Zukunft. Wachs, Marina-Elena, Scholl, Theresa, Balbig, Gesa, Grobheiser, Katharina, 06 September 2021 (has links)
Within the digitalization phase of the fourth industrial revolution, textile engineering faces the challenge of translating the tactile experience of physical surfaces into digital tools. Seemingly analog design methods of sketching, with their characteristic stroke, stand in conflict with digital design surfaces and spaces. How can we use digital material libraries so that they correspond to the "true" surface aesthetics of our physically experienced worlds? We are developing the interactive design spaces of the future, "sur face", via the "face" of the material. By designing in a networked way, by means of a matrix and a digital stroke, and in a vis-à-vis of analog and digital, we come closer to the human-centred design requirements of the textile worlds of the future.
|
249 |
Quantifying urban land cover by means of machine learning and imaging spectrometer data at multiple spatial scales. Okujeni, Akpona, 15 December 2014 (has links)
The global dimension of urbanization constitutes a great environmental challenge for the 21st century. Remote sensing is a valuable Earth observation tool, which helps to better understand this process and its ecological implications. The focus of this work was to quantify urban land cover by means of machine learning and imaging spectrometer data at multiple spatial scales. Experiments considered innovative methodological developments and novel opportunities in urban research that will be created by the upcoming hyperspectral satellite mission EnMAP. Airborne HyMap data at 3.6 m and 9 m resolution and simulated EnMAP data at 30 m resolution were used to map land cover along an urban-rural gradient of Berlin. In the first part of this work, the combination of support vector regression with synthetically mixed training data was introduced as a sub-pixel mapping technique. Results demonstrate that the approach performs well in quantifying thematically meaningful yet spectrally challenging surface types. The method proves to be both superior to other sub-pixel mapping approaches and universally applicable with respect to changes in spatial scale. In the second part of this work, the value of future EnMAP data for urban remote sensing was evaluated. Detailed explorations on simulated data demonstrate their suitability for improving and extending the approved vegetation-impervious-soil mapping scheme. Comprehensive analyses of the benefits and limitations of EnMAP data reveal both challenges caused by the high numbers of mixed pixels, when compared to hyperspectral airborne imagery, and improvements due to the greater material discrimination capability, when compared to multispectral spaceborne imagery. In summary, the findings demonstrate how combining spaceborne imaging spectrometry and machine learning techniques could introduce a new quality to the field of urban remote sensing.
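The synthetically mixed training data idea can be sketched as follows: random mixtures of pure endmember spectra are generated with known fractions and used to train a regressor that predicts sub-pixel cover fractions. For a dependency-free illustration, a plain linear least-squares regressor stands in for the support vector regression used in the thesis, and the two endmember spectra are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two illustrative 10-band endmember spectra (e.g. impervious vs. vegetation)
impervious = np.linspace(0.3, 0.5, 10)
vegetation = np.concatenate([np.linspace(0.05, 0.1, 5), np.linspace(0.4, 0.5, 5)])

# Synthetically mixed training data: random fractions plus a little noise
fracs = rng.random(200)
X = fracs[:, None] * impervious + (1 - fracs)[:, None] * vegetation
X += rng.normal(0, 0.005, X.shape)

# Linear least squares stands in for the SVR of the thesis
A = np.hstack([X, np.ones((len(X), 1))])        # add a bias column
w, *_ = np.linalg.lstsq(A, fracs, rcond=None)

# Predict the impervious fraction of an unseen 50/50 mixed pixel
pixel = 0.5 * impervious + 0.5 * vegetation
pred = np.append(pixel, 1.0) @ w
print(round(float(pred), 2))                    # close to 0.5
```

The appeal of the synthetic-mixing step is that it turns a handful of pure spectra into an arbitrarily large regression training set with exact fraction labels.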
|
250 |
Digital Signature Technologies for Image Information Assurance / Vaizdo skaitmeninis parašas vaizdinės informacijos apsaugai. Kriukovas, Artūras, 25 January 2011 (has links)
The dissertation investigates the issues of image authentication and tamper localization after general image processing operations: blurring, sharpening, rotation, and JPEG compression.
The dissertation consists of an introduction, four chapters, conclusions, and references.
The introduction presents the investigated problem, the importance of the thesis, and the object of research, and describes the purpose and tasks of the work, the research methodology, the scientific novelty, the practical significance of the results, and the defended statements. The introduction ends by presenting the author's publications on the subject of the dissertation, the presentations made at conferences, and the structure of the dissertation.
Chapter 1 reviews the literature and analyzes competing methods. The devastating effect of blurring and sharpening on digital image matrices is shown. General pixel-wise tamper localization methods are completely ineffective after
such casual processing. Block-wise methods demonstrate some resistance to blurring and sharpening, but cannot localize tampering with a resolution of up to one pixel. There is clearly a need for a method able to locate damaged pixels despite general image processing operations such as blurring or sharpening.
Chapter 2 defines the theoretical foundation for the proposed method. It is shown that the phase of the Fourier transform is invariant under blurring or sharpening... [to full text]
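The phase-invariance claim of Chapter 2 can be checked numerically: blurring with a symmetric kernel multiplies the Fourier spectrum by a real transfer function, which rescales magnitudes but leaves phases unchanged wherever that function is positive. A sketch with a synthetic image and a Gaussian low-pass (the image and filter width are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                  # stand-in for an image block

F = np.fft.fft2(img)

# Symmetric blur = multiplication by a real, positive transfer function H:
# it rescales the magnitude spectrum but leaves the phase untouched.
u = np.fft.fftfreq(64)
U, V = np.meshgrid(u, u)
H = np.exp(-(U**2 + V**2) / (2 * 0.2**2))   # Gaussian low-pass (blur)

blurred = np.real(np.fft.ifft2(F * H))

phase_diff = np.angle(np.fft.fft2(blurred) * np.conj(F))
print(np.max(np.abs(phase_diff)))           # essentially zero: phase is preserved
```

This is why a signature embedded in (or computed from) the Fourier phase can survive blurring and sharpening that destroy pixel-domain or magnitude-based checks.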
|