  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

A CONTROL MECHANISM TO THE ANYWHERE PIXEL ROUTER

Krishnan, Subhasri 01 January 2007 (has links)
Traditionally, large-format displays have been achieved using software. This thesis explores a new technique based on hardware "anywhere" pixel routing. Information stored in a Look-Up Table (LUT) in the hardware can be used to tile two image streams into a seamless display. The thesis develops a one-input-image, one-output-image system that implements arbitrary image warping based on a LUT stored in memory. The system's control mechanism is first validated through simulation, then through implementation on a Field Programmable Gate Array (FPGA) based hardware prototype and experimental testing: the contents of the LUT were changed and the resulting changes in the pixel mapping were observed to be correct in every case.
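The LUT-driven routing idea described above can be sketched in a few lines of NumPy. This is an illustrative software model only: the function name and the (H, W, 2) LUT layout are assumptions for the sketch, not the thesis's actual hardware interface, which operates on streaming pixels rather than whole frames.

```python
import numpy as np

def route_pixels(src, lut):
    """Route each output pixel from the source location given by the LUT.

    src : (H, W) input image.
    lut : (H, W, 2) table; lut[y, x] = (sy, sx), the source coordinates
          whose value should appear at output position (y, x).
    """
    h, w = lut.shape[:2]
    sy = lut[..., 0].ravel()
    sx = lut[..., 1].ravel()
    return src[sy, sx].reshape(h, w)

# An identity LUT leaves the image unchanged; editing entries re-routes pixels.
h, w = 4, 4
ys, xs = np.mgrid[0:h, 0:w]
lut = np.dstack([ys, xs])               # identity mapping
img = np.arange(h * w).reshape(h, w)
assert np.array_equal(route_pixels(img, lut), img)

# Swap two LUT entries: the output mirrors the change immediately, which is
# the kind of check the thesis uses to validate the control mechanism.
lut[0, 0], lut[0, 1] = lut[0, 1].copy(), lut[0, 0].copy()
out = route_pixels(img, lut)
assert out[0, 0] == img[0, 1] and out[0, 1] == img[0, 0]
```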
112

AN EFFECTIVE CACHE FOR THE ANYWHERE PIXEL ROUTER

Raghunathan, Vijai 01 January 2007 (has links)
Designing hardware to output pixels for light field displays or multi-projector systems is challenging owing to the memory bandwidth and speed the application requires. A hardware technique implementing 'anywhere pixel routing' was designed earlier at the University of Kentucky; it routes pixels from input to output based on a Look-Up Table (LUT). The initial design suffered from high memory latency due to random accesses to the DDR SDRAM input buffer. This thesis presents a cache design that alleviates the latency by reducing the number of random SDRAM accesses. The cache is implemented in the block RAM of a field programmable gate array (FPGA), and a number of simulations are conducted to find an efficient configuration. The resulting cache occupies only a few kilobits, about 7% of the block RAM, and on average speeds up memory accesses by 20-30%.
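The benefit of such a cache can be illustrated with a toy software model: a direct-mapped cache whose lines each hold one SDRAM burst, fed an access stream with spatial locality. This is a minimal sketch under assumed parameters, not the thesis's actual FPGA cache organization.

```python
def simulate_cache(accesses, num_lines, line_words):
    """Direct-mapped cache model: count hits for a stream of word addresses.

    Each miss models one SDRAM burst that fills a whole cache line, so
    nearby subsequent accesses hit in (fast) block RAM instead.
    """
    tags = [None] * num_lines
    hits = 0
    for addr in accesses:
        block = addr // line_words          # which memory block
        idx = block % num_lines             # which cache line it maps to
        if tags[idx] == block:
            hits += 1
        else:
            tags[idx] = block               # fill on miss
    return hits

# A raster-order stream of 1024 word addresses: a crude stand-in for the
# mostly-local accesses a warping LUT produces.
stream = [base + off for base in range(0, 1024, 16) for off in range(16)]
hits = simulate_cache(stream, num_lines=32, line_words=8)
print(hits, len(stream))   # with 8-word lines, only 1 in 8 accesses misses
```

With perfectly sequential accesses the hit rate is (line_words - 1) / line_words; real LUT-driven streams are less regular, which is why the thesis searches for an efficient configuration by simulation.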
113

A Universal Background Subtraction System

Sajid, Hasan 01 January 2014 (has links)
Background subtraction is one of the fundamental pre-processing steps in video processing. It distinguishes foreground from background in a given image and thus has numerous applications, including security, privacy, surveillance and traffic monitoring. Unfortunately, no single algorithm handles all the challenges associated with background subtraction, such as illumination changes, dynamic backgrounds and camera jitter. In this work, we propose a Multiple Background Model based Background Subtraction (MB2S) system, which is universal in nature and robust against the real-life challenges of background subtraction. It creates multiple background models of the scene, followed by both pixel-based and frame-based binary classification in both the RGB and YCbCr color spaces. The masks generated from these inputs are then combined in a framework that classifies background and foreground pixels. Comprehensive evaluation of the proposed approach on publicly available test sequences shows the superiority of our system over other state-of-the-art algorithms.
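The model-then-combine idea can be illustrated with a minimal stand-in: a median background model producing per-model masks, merged by majority vote. MB2S itself is considerably more elaborate (multiple models, frame-level classification, two color spaces); every name and threshold below is an assumption for the sketch.

```python
import numpy as np

def fg_mask(frames, frame, thresh=25):
    """Median background model + per-pixel threshold -> boolean foreground mask."""
    bg = np.median(np.stack(frames), axis=0)
    return np.abs(frame.astype(float) - bg) > thresh

def combine(masks):
    """Majority vote across masks from different models (e.g. RGB and YCbCr)."""
    return np.sum(np.stack(masks), axis=0) * 2 > len(masks)

# Static background frames, then a frame containing a bright moving object.
history = [np.full((8, 8), 100, np.uint8) for _ in range(5)]
frame = history[0].copy()
frame[2:4, 2:4] = 200                       # the "object"
m1 = fg_mask(history, frame)                # one model
m2 = fg_mask(history, frame, thresh=50)     # a second, stricter model
mask = combine([m1, m2])
assert mask[2, 2] and not mask[0, 0]
```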
114

A comparison between techniques for color grading in games

Oldenborg, Mattias January 2006 (has links)
Color has been significant in visual arts for as long as the art forms have existed. Still images and movies have long used colors and color-grading effects to affect the viewer and characterize the work. In recent years, attempts have been made to bring these stylization techniques to interactive games as well. This dissertation compares two approaches to performing real-time color grading for games, examining them from a number of perspectives and drawing conclusions about their advantages and disadvantages. The results show no unanimously superior approach; instead, they are broken down by category to explain the benefits and drawbacks of each, aiding the decision for anyone inclined to implement color-grading effects in games.
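The abstract does not name its two approaches; a common pair in practice is evaluating the grading math per pixel in a shader versus baking the same grade into a lookup table sampled at runtime. The sketch below shows the equivalence for a per-channel grade using 1D LUTs (real game implementations typically bake into a 3D LUT texture); all names and the example grade are illustrative assumptions.

```python
import numpy as np

def grade_direct(img, gain=1.2, lift=10):
    """Approach 1: apply the grading math per pixel (the 'shader formula')."""
    return np.clip(img * gain + lift, 0, 255).astype(np.uint8)

def build_lut(fn):
    """Approach 2: bake the same grade into a 256-entry per-channel LUT."""
    return fn(np.arange(256, dtype=np.float64))

def grade_lut(img, lut):
    # One table lookup per channel value replaces the arithmetic.
    return lut[img]

lut = build_lut(lambda x: np.clip(x * 1.2 + 10, 0, 255).astype(np.uint8))
img = np.random.default_rng(0).integers(0, 256, (4, 4, 3), dtype=np.uint8)
assert np.array_equal(grade_direct(img), grade_lut(img, lut))
```

The trade-off mirrors the dissertation's comparison: direct math stays exact for any parameter change, while a baked LUT has fixed cost per pixel regardless of how complex the grade is, at the price of quantization and a re-bake whenever the grade changes.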
115

Entwicklung von optischen 3D-CMOS-Bildsensoren auf der Basis der Pulslaufzeitmessung / Development of optical 3D CMOS image sensors based on pulse time-of-flight measurement

Elkhalili, Omar. Unknown Date (has links) (PDF)
Essen, Universität, Diss., 2005--Duisburg.
116

Quando o computador se torna atelier de criação artística. Liana Timm : vida de artista e artista da vida / When the computer becomes a workshop of artistic creation. Liana Timm : life as an artist and artist of life

Thomazoni, Andresa Ribeiro January 2014 (has links)
This thesis was developed in the Informatics in Education Postgraduate Program, within the research line "Digital interfaces in education, art, language and cognition". It is an interdisciplinary study that seeks to weave a dialogue between art and technology. More specifically, it investigates the digital image, understanding it as produced by a machinic assemblage between the computer, considered as a workshop (atelier), and the body-artist. From this landscape, the investigation explores the possibilities of indeterminacy in the image, that is, its possibilities of creation and invention, in a kind of excavation down to the pixel as the smallest element of the image.
Methodologically, a cartography was carried out, taking as its material the digital image works of the Brazilian artist Liana Timm. Liana is treated as an aesthetic character, and her works as triggers for the problematization of digital images produced in the creative encounter with the computer-as-workshop. The aim is thus to open other spaces for problematizing the digital image and its possible connections, and to affirm the computer as a potency for fostering machinisms that produce difference and invention.
117

Classify-normalize-classify : a novel data-driven framework for classifying forest pixels in remote sensing images / Classifica-normaliza-classifica : uma nova abordagem para classificar pixels de floresta em imagens de sensoriamento remoto

Souza, César Salgado Vieira de January 2017 (has links)
Monitoring natural environments and their changes over time requires the analysis of a large amount of image data, often collected by orbital remote sensing platforms. However, variations in the observed signals due to changing atmospheric conditions often shift the data distribution across dates and locations, making it difficult to discriminate between classes in a dataset built from several images. This work introduces a novel supervised classification framework, called Classify-Normalize-Classify (CNC), to alleviate this data-shift issue. The proposed scheme uses two classifiers. The first is trained on non-normalized top-of-atmosphere reflectance samples to discriminate between pixels belonging to a class of interest (COI) and pixels from other categories (e.g. forest vs. non-forest). At test time, the COI's multivariate median signal, estimated from the first classifier's segmentation, is subtracted from the image, anchoring the data distribution of different images to the same reference. A second classifier, trained to minimize the classification error on median-centered samples, is then applied to the normalized test image to produce the final binary segmentation. The methodology was tested on deforestation detection using bitemporal Landsat 8 OLI images over the Amazon rainforest. Experiments with top-of-atmosphere multispectral reflectance images showed that CNC mapped deforestation more accurately than a single classifier run on surface reflectance images provided by the United States Geological Survey (USGS). Accuracies from the proposed framework also compared favorably with the benchmark masks of the PRODES program.
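The CNC pipeline described above can be sketched end to end with toy threshold "classifiers" standing in for the trained models; the function names and the one-band example are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def cnc_predict(img, clf1, clf2):
    """Classify-Normalize-Classify: anchor the image at the COI median,
    then run the final classifier on the normalized pixels.

    img  : (H, W, B) multispectral image.
    clf1 : coarse classifier, image -> boolean COI mask.
    clf2 : final classifier, median-centered image -> boolean mask.
    """
    mask = clf1(img)                        # step 1: rough COI segmentation
    median = np.median(img[mask], axis=0)   # per-band median of COI pixels
    return clf2(img - median)               # steps 2-3: normalize, reclassify

# Toy 1-band scene: forest pixels are darker; a global additive offset
# simulates the atmospheric shift between acquisitions.
clf1 = lambda im: im[..., 0] < np.mean(im[..., 0])   # crude first pass
clf2 = lambda im: im[..., 0] < 15                    # "trained" on centered data
img = np.where(np.arange(100).reshape(10, 10, 1) < 50, 40.0, 90.0)
for offset in (0.0, 20.0):                  # same scene, shifted acquisition
    pred = cnc_predict(img + offset, clf1, clf2)
    assert pred[0, 0] and not pred[9, 9]    # stable despite the shift
```

The key point the toy reproduces: because the second classifier sees median-centered values, a constant radiometric offset between images cancels out, whereas a single fixed-threshold classifier on the raw values would not survive the 20-unit shift.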
119

Commissioning of the Atlas pixel detector at Run 2 of the LHC, and search for supersymmetric particles with two same-sign leptons or three leptons in the final state / Mise en oeuvre du détecteur à pixels d'Atlas lors du Run 2 du LHC et recherche de particules supersymétriques dans les états finals à deux leptons de même signe et à trois leptons

Alstaty, Mahmoud Ibrahim 07 November 2017 (has links)
In the first part of this thesis, the LHC, ATLAS, the Pixel Detector and the IBL are reviewed. The commissioning of the Pixel Detector with its new innermost layer (the IBL) is then presented, based on cosmic-ray data collected just before the start of Run 2 of the LHC. The analysis studies the properties of the pixel clusters fired by cosmic rays and compares the two sensor technologies present in the IBL: planar sensors and 3D sensors, the latter used for the first time in a collider experiment. These studies validated the reconstruction software, improved the simulation of the new layer, and help ensure the detector's ultimate capabilities and measurement resolution are realized. The ionization charge created in the sensors by traversing charged particles is deflected from its drift path along the junction electric field by the uniform magnetic field of the ATLAS tracker; the deflection angle is called the Lorentz angle. Measuring this angle is essential because it affects the measured hit position. The measurement was performed for all layers, including the variation of the Lorentz angle with temperature.
The Standard Model (SM) of particle physics describes fundamental phenomena with great success, but it suffers from several shortcomings: for instance, it provides no dark matter candidate and no solution to the gauge hierarchy problem, motivating searches for physics beyond the SM. One such theory is Supersymmetry (SUSY), which occupies a prime place in the LHC physics program. At the end of Run 1, no significant excess over the SM prediction was observed, and lower limits on supersymmetric particle masses were set. The analysis presented here, an extension of the Run 1 analysis, extends those exclusion limits: under simplified assumptions, gluino masses are excluded up to 1.87 TeV, while the sbottom mass should be above 700 GeV. These results provide new constraints on natural SUSY models.
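The Lorentz deflection mentioned in this entry follows a standard relation for silicon sensors in a magnetic field (quoted here from general semiconductor-detector practice, not from the thesis itself):

```latex
\tan\theta_L = r_H \, \mu(T) \, B
```

where $B$ is the magnetic field transverse to the drift direction, $\mu(T)$ is the carrier mobility, and $r_H$ is the Hall scattering factor (close to unity). Since the mobility depends on temperature, $\theta_L$ does too, which is why the thesis measures its temperature variation: the angle feeds directly into the cluster-position correction applied during reconstruction.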
