  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
291

Image Approximation using Triangulation

Trisiripisal, Phichet 11 July 2003
An image is a set of quantized intensity values sampled at a finite set of points on a two-dimensional plane. Images are crucial to many application areas, such as computer graphics and pattern recognition, because they discretely represent the information that the human eye interprets. This thesis considers the use of triangular meshes for approximating intensity images. With the help of wavelet-based analysis, triangular meshes can be efficiently constructed to approximate the image data. The study focuses on local image enhancement and mesh simplification operations, which aim to minimize both the total error of the reconstructed image and the number of triangles used to represent it. It also presents an optimal procedure for selecting the triangle types used to represent the intensity image. Besides its applications to image and video compression, this triangular representation is potentially very useful for data storage and retrieval, and for processing tasks such as image segmentation and object recognition. / Master of Science
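The mesh-approximation idea above can be sketched with a uniform Delaunay mesh and piecewise-linear interpolation over its triangles — a simplified stand-in, assuming NumPy/SciPy, for the wavelet-guided mesh construction and simplification the thesis develops:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Synthetic 64x64 intensity image: a ramp plus a smooth bump.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
img = 0.5 * xx / w + 0.5 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 200.0)

# Vertices on a coarse 9x9 grid (including the image corners), then triangulate.
ys = np.linspace(0, h - 1, 9).astype(int)
xs = np.linspace(0, w - 1, 9).astype(int)
gy, gx = np.meshgrid(ys, xs, indexing="ij")
tri = Delaunay(np.column_stack([gx.ravel(), gy.ravel()]).astype(float))

# Piecewise-linear reconstruction: each pixel is interpolated inside its triangle.
interp = LinearNDInterpolator(tri, img[gy.ravel(), gx.ravel()])
recon = interp(np.column_stack([xx.ravel(), yy.ravel()]).astype(float)).reshape(h, w)

rmse = np.sqrt(np.nanmean((recon - img) ** 2))
print(f"{tri.simplices.shape[0]} triangles, RMSE {rmse:.4f}")
```

A mesh-simplification pass, as in the thesis, would then merge triangles wherever the local reconstruction error stays below a budget.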
292

Optimizing YOLOv5 Deployment : A Comparative Study of In-Node and Remote Processing on Edge Devices

Wijitchakhorn, Alice January 2024
Artificial intelligence is advancing quickly, and object detection has become a key part of the field. Object detection lets automated systems recognise and locate objects in pictures with high accuracy. One of the best-known methods for this is YOLOv5 (You Only Look Once, version 5), valued for working fast and well in real-time applications. However, deploying such sophisticated technology on smaller devices like the Raspberry Pi 4 is challenging, mainly because of the limited processing power and energy available on such devices. This thesis explores the best way to use the YOLOv5 model while considering energy efficiency and communication latency. These aspects are crucial for devices that need to be efficient, such as smartphones, drones, and other portable devices. The study compares two main ways to set up the system: processing directly on the device (in-node) and processing remotely on a server or in the cloud. Choosing where to process the data affects the effectiveness of object detection in real-world applications. Processing on the device can be better for privacy and response time, since it avoids sending data over a network, but it may use more energy and put more strain on the device. Processing remotely, on the other hand, can draw on powerful computers to improve performance and reduce the load on the device, but it may increase latency and raise privacy issues. By using remote server resources, the workload on single-board devices like the Raspberry Pi is drastically reduced, which yields better energy efficiency and latency across all tested resolutions.
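The in-node versus remote trade-off the study measures can be sketched with a toy per-frame latency model. All constants below (frame size, link speed, inference times, network overhead) are assumptions for illustration, not the thesis's measurements:

```python
# Toy latency model for one frame.
def in_node_latency(t_infer_device: float) -> float:
    # Everything happens on the device: latency is just local inference time.
    return t_infer_device

def remote_latency(frame_bytes: int, uplink_bps: float,
                   t_infer_server: float, overhead_s: float) -> float:
    # Ship the frame to a server: network overhead + transfer + fast inference.
    return overhead_s + frame_bytes * 8 / uplink_bps + t_infer_server

# Assumed figures: ~60 kB JPEG frame, 20 Mbit/s uplink, 30 ms network
# overhead, 0.9 s YOLOv5 inference on a Raspberry Pi 4, 30 ms on a server.
t_local = in_node_latency(0.9)
t_remote = remote_latency(60_000, 20e6, 0.03, 0.03)
print(f"in-node: {t_local:.3f} s, remote: {t_remote:.3f} s")  # remote wins here
```

Under these assumed numbers the network cost is dwarfed by the device's slow inference, matching the thesis's observation that offloading can improve both latency and energy on constrained hardware; on a fast device or a slow link the comparison flips.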
293

Nouvelles approches de modélisation multidimensionnelle fondées sur la décomposition de Wold

Merchan Spiegel, Fernando 14 December 2009
In this thesis we propose new parametric models in signal and image processing based on the Wold decomposition of stationary stochastic processes. These models rely upon several theoretical results from functional and harmonic analysis, wavelet analysis, and the theory of stochastic fields. The first chapter presents the theoretical background of linear prediction for stationary processes and of the Wold decomposition theorems in 1-D and n-D. It is shown how the different parts of the decomposition are obtained and represented by means of the unitary-orbit representation of stationary processes, the Kolmogorov canonical model, and the Szegő theorem with its multidimensional extensions. The second chapter deals with a spectral factorisation approach to the power spectral density, used for the parameter estimation of Moving Average (MA), AutoRegressive (AR) and ARMA models. The method uses the Poisson integral representation in Hardy spaces to estimate an outer transfer function from its power spectral density. The estimation method is presented in two applications: simulators for Rayleigh fading channels (1-D), and a scheme for the Wold decomposition of textured images (2-D). In the third chapter we deal with hybrid models for image representation and compression. We propose a compression scheme which jointly uses Wold models for the textured regions of the image and a wavelet-based approach for coding the 'cartoon' (non-textured) part of the image. In this context, we propose a new algorithm, based on local regularity, for decomposing an image into a textured part and a non-textured part. Each part is then coded with the appropriate representation.
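The second chapter's outer-function estimation can be sketched numerically with the classical cepstral spectral-factorisation method — a stand-in for the Poisson-kernel construction used in the thesis — which recovers the minimum-phase (outer) MA filter from samples of its power spectral density:

```python
import numpy as np

def minimum_phase_factor(psd, n_coef):
    """Minimum-phase (outer) spectral factor via the real cepstrum.

    Given samples of a power spectral density on the unit circle, returns
    the first n_coef taps of the causal filter h with |H(e^jw)|^2 = psd.
    """
    n = len(psd)
    cep = np.fft.ifft(np.log(psd)).real      # real cepstrum of log-PSD
    fold = np.zeros(n)                       # keep only the causal half
    fold[0] = cep[0] / 2.0
    fold[1:n // 2] = cep[1:n // 2]
    fold[n // 2] = cep[n // 2] / 2.0
    H = np.exp(np.fft.fft(fold))             # minimum-phase spectrum
    return np.fft.ifft(H).real[:n_coef]

# Target PSD of the MA(1) model x_t = e_t + 0.5 e_{t-1}:
n = 1024
w = 2 * np.pi * np.arange(n) / n
psd = np.abs(1 + 0.5 * np.exp(-1j * w)) ** 2
h_est = minimum_phase_factor(psd, 2)
print(h_est)  # close to [1.0, 0.5]
```

Because the zero of 1 + 0.5 z⁻¹ lies inside the unit circle, the filter is already outer, and the cepstral factorisation recovers its taps almost exactly.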
294

Compression d'images dans les réseaux de capteurs sans fil / Image compression in Wireless Sensor Networks

Makkaoui, Leila 26 November 2012
The increasing development of wireless camera sensor networks enables a wide variety of applications with different objectives and constraints. The common problem across all these applications, however, remains the vulnerability of sensor nodes due to their limited hardware resources, the most restrictive being energy. Indeed, the wireless technologies available in this type of network are usually low-power and short-range, and the hardware resources (CPU, battery) are likewise of low power. We must therefore meet a twofold objective: an energy-efficient solution that still delivers good image quality at the receiver. This thesis concentrates mainly on the study and evaluation of compression methods dedicated to transmission over wireless camera sensor networks. We propose a new image compression method which decreases the energy consumption of the sensors and thus extends the network lifetime. We evaluate its implementation through experiments on a real camera sensor platform, measuring aspects such as the amount of memory required by the software implementation of our algorithms, their energy consumption, and their execution time. We then study the hardware characteristics of the proposed compression chain when synthesised on FPGA and ASIC chips.
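The twofold objective — spend a little CPU energy on compression to save a lot of radio energy — can be illustrated with a back-of-envelope node energy model. The per-byte and per-pixel energy constants and the 10:1 compression ratio below are assumptions for illustration, not values measured in the thesis:

```python
# Toy node energy model: radio transmission dominates, so compressing
# before sending usually pays off on a sensor node.
def tx_energy(nbytes: float, e_per_byte: float = 2e-6) -> float:
    return nbytes * e_per_byte          # radio cost in joules (assumed 2 uJ/byte)

def cpu_energy(npixels: float, e_per_pixel: float = 5e-8) -> float:
    return npixels * e_per_pixel        # compression cost (assumed 50 nJ/pixel)

npix = 320 * 240                        # one 8-bit QVGA frame
e_raw = tx_energy(npix)                 # send uncompressed
e_compressed = cpu_energy(npix) + tx_energy(npix / 10)  # assumed 10:1 ratio
print(f"raw: {e_raw:.4f} J, compressed: {e_compressed:.4f} J")
```

With these assumed constants, compressing cuts the per-frame energy by roughly an order of magnitude; the thesis's contribution is precisely in measuring such costs (memory, energy, time) on a real platform rather than assuming them.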
295

Un système intégré d'acquisition 3D multispectral : acquisition, codage et compression des données / A 3D multispectral integrated acquisition system : acquisition, data coding and compression

Delcourt, Jonathan 29 October 2010
We have developed an integrated system permitting the simultaneous acquisition of the 3D shape and the spectral reflectance of scanned object surfaces. We call this system a 3D multispectral scanner because it combines, within a stereoscopic pair, a multispectral camera and a structured-light projector. We see several possible applications for such an acquisition system, but we highlight applications in the digital archiving and dissemination of heritage objects. In the manuscript we first introduce the acquisition system together with the calibrations and processing steps needed for its use. Once the system is functional, the data it generates are rich in information, heterogeneous (mesh + reflectances, etc.), and above all require a great deal of storage. This makes storage and transmission problematic, especially for online applications such as virtual museums. For this reason we study the different possibilities for representing and coding the data acquired by this system in order to adopt the most appropriate one. We then examine the strategies best suited to compressing such data, without losing generality with respect to other data (e.g. satellite imagery). We benchmark compression strategies, providing an evaluation framework and improvements to existing conventional strategies. This first study allows us to propose an adaptive approach that proves more effective for compression, particularly within the strategy we call Full-3D.
296

奇異值分解在影像處理上之運用 / Singular Value Decomposition: Application to Image Processing

顏佑君, Yen, Yu Chun Unknown Date
Singular value decomposition (SVD) is a robust and reliable matrix decomposition method with many attractive properties, such as low-rank approximation. In the era of big data, vast amounts of data are generated rapidly and in many forms. Offering attractive visual effects and important information, images have become a common and useful type of data. Recently, SVD has been utilized in several image processing and analysis problems. This research focuses on the problems of image compression and image denoising for restoration. We propose applying the SVD to capture the main signal image subspace for efficient image compression, and to screen out the noise image subspace for image restoration, using the cumulative proportion of singular values as the criterion for selecting how many to keep. Simulations are conducted to investigate the proposed method. We find that the SVD method gives satisfactory results for image compression; in image denoising, however, its performance varies depending on the original image, the noise added, and the threshold used.
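The low-rank compression and the cumulative-singular-value selection rule can be sketched in a few lines of NumPy; the 95% threshold, matrix sizes, and noise level are illustrative choices, not the thesis's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-3 "image" plus noise: the rank-3 structure is the signal subspace.
signal = rng.standard_normal((64, 3)) @ rng.standard_normal((3, 64))
noisy = signal + 0.1 * rng.standard_normal((64, 64))

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)

def low_rank(k):
    # Best rank-k approximation (Eckart-Young): keep the top k singular triples.
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Selection rule: keep enough singular values to cover 95% of the
# cumulative singular-value mass (an illustrative threshold).
cum = np.cumsum(s) / s.sum()
k95 = int(np.searchsorted(cum, 0.95)) + 1

denoised = low_rank(3)
rel_err = np.linalg.norm(noisy - denoised) / np.linalg.norm(noisy)
print(k95, rel_err)
```

Truncating to the true rank discards mostly noise, so the rank-3 reconstruction is closer to the clean signal than the noisy input is — the denoising effect the thesis studies.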
297

Information retrieval via universal source coding

Bae, Soo Hyun 17 November 2008
This dissertation explores the intersection of information retrieval and universal source coding and studies optimal multidimensional source representations from an information-theoretic point of view. Previous research on information retrieval has focused primarily on learning probabilistic or deterministic source models over two types of source representation: fixed-shape partitions or uniform regions. We study the limitations of these conventional representations in capturing the semantics of multidimensional source sequences and propose a new type of primitive source representation generated by a universal source coding technique. We propose a multidimensional incremental parsing algorithm, extended from Lempel-Ziv incremental parsing, together with three component schemes for multidimensional source coding. The properties of the proposed coding algorithm are examined under two-dimensional lossless and lossy source coding. The algorithm parses a given multidimensional source sequence into a number of variable-size patches; we call this methodology a parsed representation. Based on this representation, we propose an information retrieval framework that analyzes a set of source sequences with a linguistic processing technique, and we implement content-based image retrieval systems on top of it. We examine the relevance of the proposed source representation by comparing it with conventional representations of visual information. To further extend the framework, we apply a probabilistic linguistic processing technique to model the latent aspects of a set of documents. In addition, going beyond the symbol-wise pattern matching employed in the source coding and image retrieval systems, we devise a robust pattern matching scheme that compares the first- and second-order statistics of source patches. Qualitative and quantitative analysis justifies the superiority of the proposed retrieval framework based on the parsed representation. The proposed representation technique and retrieval frameworks encourage future work on a systematic way of understanding multidimensional sources that parallels linguistic structure.
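The 1-D Lempel-Ziv incremental parsing that the dissertation extends to multiple dimensions can be sketched directly: each new phrase is the longest previously seen phrase plus one fresh symbol, yielding exactly the variable-size "patches" idea in one dimension:

```python
def lz_incremental_parse(seq: str) -> list[str]:
    """LZ78-style incremental parse: each emitted phrase is the shortest
    prefix of the remaining input that has not been a phrase before."""
    dictionary = {"": 0}      # phrases seen so far
    phrases = []
    current = ""
    for sym in seq:
        if current + sym in dictionary:
            current += sym    # extend the match with one more symbol
        else:
            dictionary[current + sym] = len(dictionary)
            phrases.append(current + sym)
            current = ""
    if current:               # leftover suffix at end of input
        phrases.append(current)
    return phrases

print(lz_incremental_parse("aababcabcd"))  # → ['a', 'ab', 'abc', 'abcd']
```

The 2-D version in the dissertation grows rectangular patches instead of strings, but the dictionary-driven "longest seen so far, plus one" principle is the same.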
298

Mathematical imaging tools in cancer research : from mitosis analysis to sparse regularisation

Grah, Joana Sarah January 2018
This dissertation deals with customised image analysis tools in cancer research. In the biomedical sciences, mathematical imaging has become crucial for keeping pace with advances in technical equipment and data storage, by providing sound mathematical methods that can process and analyse imaging data automatically. This thesis contributes to the development of such mathematically sound imaging models in four ways: (i) automated cell segmentation and tracking. In cancer drug development, time-lapse light microscopy experiments are conducted for performance validation. The aim is to monitor the behaviour of cells in cultures previously treated with chemotherapy drugs, since atypical duration and outcome of mitosis, the process of cell division, can indicate successfully working drugs. As an imaging modality we focus on phase contrast microscopy, thereby avoiding phototoxicity and influence on cell behaviour; as a drawback, the common halo and shade-off effects impede image analysis. We present a novel workflow uniting automated mitotic cell detection using the circular Hough transform with subsequent cell tracking by a tailor-made level-set method, in order to obtain statistics on the length of mitosis and on cell fates. The proposed image analysis pipeline is deployed in a MATLAB software package called MitosisAnalyser. The circular Hough transform is investigated further in the framework of image regularisation, in the general context of imaging inverse problems in which circular objects should be enhanced: (ii) we exploit sparsity of first-order derivatives in combination with the linear circular Hough transform operation. Furthermore, (iii) we present a new unified higher-order derivative-type regularisation functional that enforces sparsity of a vector field related to the image to be reconstructed, using curl, divergence and shear operators. The model is able to interpolate between well-known regularisers such as total generalised variation and infimal convolution total variation. Finally, (iv) we demonstrate how sparsity-promoting parametrised regularisers can be learned via quotient minimisation, motivated by generalised eigenproblems. Learning approaches have recently become very popular in the field of inverse problems; however, the majority aim at fitting models to favourable training data, whereas we incorporate knowledge about both fit and misfit data. We present results resembling the behaviour of well-established derivative-based sparse regularisers, introduce novel families of non-derivative-based regularisers, and extend this framework to classification problems.
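The circular Hough detection step can be sketched with a minimal single-radius accumulator in NumPy — a simplified illustration of the voting idea, not the MitosisAnalyser implementation:

```python
import numpy as np

def circular_hough(edges: np.ndarray, radius: float) -> np.ndarray:
    """Single-radius circular Hough accumulator: every edge pixel votes
    for all candidate centres lying at distance `radius` from it."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate votes
    return acc

# Synthetic edge map: a circle of radius 10 centred at (row 32, col 40).
edges = np.zeros((64, 64), dtype=bool)
t = np.linspace(0, 2 * np.pi, 360)
edges[np.round(32 + 10 * np.sin(t)).astype(int),
      np.round(40 + 10 * np.cos(t)).astype(int)] = True

acc = circular_hough(edges, 10)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cy, cx)  # accumulator peak near the true centre (32, 40)
```

In practice one sweeps a range of radii and takes peaks in the 3-D (centre, radius) accumulator; the thesis additionally couples this linear voting operation with sparsity-promoting regularisation.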
299

Komprese videa v obvodu FPGA / Implementation of video compression into FPGA chip

Tomko, Jakub January 2014
This thesis analyses the compression algorithm of the MJPEG format and its implementation in an FPGA chip. Three additional video bitstream reduction methods are evaluated for real-time, low-latency MJPEG applications: noise filtering, inter-frame encoding, and lowering video quality. Based on this analysis, an MJPEG codec has been designed for implementation in the XC6SLX45 FPGA chip from the Spartan-6 family.
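One of the evaluated bitrate-reduction methods, inter-frame encoding, can be sketched as a simple change-detection filter: a frame is only encoded and sent when it differs enough from the last transmitted frame. This Python sketch illustrates the selection logic only (the thesis targets an FPGA); the threshold value is an illustrative assumption:

```python
import numpy as np

def select_frames(frames, threshold: float = 2.0) -> list[int]:
    """Return indices of frames to encode: a frame is kept only when its
    mean absolute difference from the last kept frame exceeds `threshold`;
    otherwise the decoder simply repeats the previous frame."""
    kept, last = [], None
    for i, f in enumerate(frames):
        if last is None or np.mean(np.abs(f.astype(float) - last)) > threshold:
            kept.append(i)
            last = f.astype(float)
    return kept

rng = np.random.default_rng(1)
base = rng.integers(0, 256, (32, 32))
frames = [base,
          base + rng.integers(0, 2, (32, 32)),  # nearly identical: skipped
          base + 50]                            # large change: kept
print(select_frames(frames))  # → [0, 2]
```

Skipping near-duplicate frames reduces the bitstream at the cost of a per-frame comparison, which maps naturally onto a small FPGA pipeline stage.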
300

Compressão Seletiva de Imagens Coloridas com Detecção Automática de Regiões de Interesse / Selective Compression of Color Images with Automatic Detection of Regions of Interest

Gomes, Diego de Miranda 05 January 2006
There has been an increasing tendency toward the use of selective image compression, since many applications make use of digital images and, in some cases, loss of information in certain regions is not acceptable. However, in some applications these images are captured and stored automatically, making it impossible for the user to select the regions of interest to be compressed losslessly. A possible solution is the automatic selection of these regions, a very difficult problem in the general case. Nevertheless, intelligent techniques can be used to detect these regions in specific cases. This work proposes a selective color image compression method in which previously chosen regions of interest are compressed losslessly. The method uses the wavelet transform to decorrelate the pixels of the image, a competitive neural network to perform vector quantization, mathematical morphology, and adaptive Huffman coding. In addition to manual selection, there are two options for automatic detection: a texture segmentation method, in which the highest-frequency texture is selected as the region of interest, and a new face detection method in which the face region is compressed losslessly. The results show that both can be successfully used with the compression method, given the map of the region of interest as input.
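The selective idea — lossless inside the region of interest, lossy elsewhere — can be sketched with a uniform quantiser standing in for the thesis's wavelet/vector-quantisation pipeline; the quantisation step and ROI coordinates below are illustrative assumptions:

```python
import numpy as np

def selective_compress(img: np.ndarray, roi_mask: np.ndarray,
                       step: int = 32) -> np.ndarray:
    """Coarsely quantise the background (lossy) while leaving the
    region of interest untouched (lossless)."""
    out = (img // step) * step + step // 2   # uniform mid-tread quantiser
    out[roi_mask] = img[roi_mask]            # ROI kept bit-exact
    return out

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64))
roi = np.zeros((64, 64), dtype=bool)
roi[20:40, 20:40] = True                     # e.g. a detected face region

rec = selective_compress(img, roi)
print(np.abs(rec - img)[roi].max(),          # 0: ROI is exact
      np.abs(rec - img)[~roi].mean())        # > 0: background is degraded
```

Coarser background values compress far better under any entropy coder (fewer distinct symbols), while the ROI survives bit-exact, which is the contract the thesis's detector feeds into.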
