31

Respiration Pattern Using Amplified Video

Raihani, Nilgoun January 2018 (has links)
No description available.
32

Algorithmic Rectification of Visual Illegibility under Extreme Lighting

Li, Zhenhao January 2018 (has links)
Image and video enhancement, a classical problem of signal processing, has remained a very active research topic for the past decades. The subject will not become obsolete even as the sensitivity and quality of modern image sensors steadily improve: no matter how sophisticated cameras become, there will always be more extreme and complex lighting conditions in which the acquired images are improperly exposed and thus need to be enhanced. The central theme of enhancement is to algorithmically compensate for sensor limitations under ill lighting and make illegible details conspicuous, while maintaining a degree of naturalness. In retrospect, existing contrast enhancement methods focus on heightening spatial details in the luminance channel, with little or no consideration of the colour fidelity of the processed images; as a result, they can introduce highly noticeable distortions in chrominance. This long-overlooked problem is addressed and systematically investigated in this thesis. We propose a novel optimization-based enhancement algorithm that generates an optimal tone mapping which not only maximizes contrast gain but also constrains tone and chrominance distortion, achieving superior perceptual quality of the output under severe underexposure and/or overexposure. In addition, we present a novel solution for restoring images captured in more challenging backlit scenes, combining the above enhancement method with feature-driven, machine-learning-based segmentation, and we demonstrate its superior segmentation accuracy and restoration results over state-of-the-art methods. We also shed light on a common yet largely untreated video restoration problem we call Yin-Yang Phasing (YYP), characterized by involuntary, intense fluctuations in the intensity and chrominance of an object as the video plays.
We propose a novel video restoration technique that suppresses YYP artifacts while retaining the temporal consistency of object appearance via inter-frame, spatially adaptive optimal tone mapping. Experimental results are encouraging, pointing to an effective and practical solution to the problem. / Thesis / Doctor of Philosophy (PhD)
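The constrained tone-mapping idea can be illustrated with a deliberately simplified sketch: clipping the image histogram before building the mapping curve bounds the curve's slope, trading contrast gain against tone distortion. This is not the thesis's algorithm, only a toy stand-in for the same trade-off; the `clip_ratio` parameter and function name are assumptions.

```python
import numpy as np

def constrained_tone_map(gray, clip_ratio=3.0, levels=256):
    """Build a global tone-mapping curve from the image histogram.

    Capping each histogram bin bounds the slope of the resulting
    mapping, which limits contrast over-amplification (and hence
    tone distortion), loosely mirroring the constrained-optimization
    idea described in the abstract.
    """
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    hist = hist.astype(np.float64)
    # Cap each bin at clip_ratio x the uniform height; redistribute excess.
    cap = clip_ratio * hist.sum() / levels
    excess = np.maximum(hist - cap, 0.0).sum()
    hist = np.minimum(hist, cap) + excess / levels
    cdf = np.cumsum(hist) / hist.sum()
    curve = np.round(cdf * (levels - 1)).astype(np.uint8)  # monotone tone curve
    return curve[gray]
```

With a large `clip_ratio` this degenerates to ordinary histogram equalization; a small one approaches the identity mapping, which is the distortion-versus-gain dial the constraint provides.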
33

Rapid Prototyping of an FPGA-Based Video Processing System

Shi, Zhun 20 June 2016 (has links)
Computer vision technology appears in a variety of applications ranging from mobile phones to autonomous vehicles. Many of these applications, such as drones and autonomous vehicles, require real-time processing capability so that commands can be sent to the control unit in real time. Besides real-time processing, it is crucial to keep power consumption low in order to extend battery life, not only for mobile devices but also for drones and autonomous vehicles. FPGAs are desirable platforms that can provide high-performance, low-power solutions for real-time video processing. Since hardware designs are typically more time-consuming than equivalent software designs, this thesis proposes a rapid prototyping flow for FPGA-based video processing system design that takes advantage of the high-performance AXI interface and a high-level synthesis tool, Vivado HLS. Vivado HLS offers the convenience of automatically synthesizing a software implementation into a hardware implementation, but the tool is far from perfect, and users still need embedded hardware knowledge and experience to accomplish a successful design. To effectively create a streaming video processing system and utilize the fastest memory on an FPGA, a sliding-window memory architecture is proposed. This memory architecture can be applied to a range of video processing algorithms while minimizing the latency between an input pixel and the corresponding output pixel. Compared with other works, the optimized memory architecture offers better performance and lower resource usage, and its reconfigurability makes it easier to adapt to other algorithms. In addition, this work includes performance and power analysis of an Intel CPU based design, an ARM based design, and an FPGA-based embedded design. / Master of Science
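In hardware, a sliding-window architecture is typically two row-sized line buffers feeding a 3x3 shift-register window, so a streaming kernel never needs a full frame buffer. The software model below is a sketch of that data movement, not the thesis's actual HLS code; names and the one-pixel-per-iteration framing are illustrative.

```python
from collections import deque

def stream_3x3(pixels, width, height, kernel):
    """Software model of a streaming 3x3 window using two line buffers.

    Pixels arrive one per "clock"; two row-sized FIFOs plus a 3x3
    register window supply the neighbourhood, so each output needs no
    frame buffer and the input-to-output latency is roughly one row
    plus two pixels.
    """
    line0 = deque([0] * width)           # holds row y-2
    line1 = deque([0] * width)           # holds row y-1
    win = [[0] * 3 for _ in range(3)]    # 3x3 shift-register window
    out = []
    for i, p in enumerate(pixels):
        # Shift the window left, then push the new column from the buffers.
        for r in range(3):
            win[r][0], win[r][1] = win[r][1], win[r][2]
        win[0][2] = line0.popleft()      # pixel (y-2, x)
        win[1][2] = line1[0]             # pixel (y-1, x)
        win[2][2] = p                    # pixel (y,   x)
        line0.append(line1.popleft())
        line1.append(p)
        y, x = divmod(i, width)
        if y >= 2 and x >= 2:            # window fully populated
            out.append(sum(win[r][c] * kernel[r][c]
                           for r in range(3) for c in range(3)))
    return out
```

In an HLS flow the two deques would map to BRAM line buffers and the 3x3 window to registers, which is what keeps the per-pixel latency bounded regardless of frame size.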
34

Very low bitrate facial video coding : based on principal component analysis

Söderström, Ulrik January 2006 (has links)
This thesis introduces a coding scheme for very low bitrate video coding with the aid of principal component analysis. Principal information about a person's facial mimic can be extracted and stored in an Eigenspace. Entire video frames of this person's face can then be compressed with the Eigenspace down to only a few projection coefficients. Principal component video coding encodes entire frames at once, and, unlike standard coding schemes, an increased frame size does not increase the bitrate necessary for encoding. This enables video communication with high frame rate, spatial resolution and visual quality at very low bitrates; no standard video coding technique provides all four features at the same time.
Theoretical bounds for using principal components to encode facial video sequences are presented. Two different bounds are derived: one describing the minimal distortion when a certain number of Eigenimages is used, and one describing the minimal distortion when a minimum number of bits is used.
We investigate how the reconstruction quality of the coding scheme is affected when the Eigenspace, mean image and coefficients are compressed to enable efficient transmission. The Eigenspace and mean image are compressed through JPEG compression, while the coefficients are quantized. We show that high compression ratios can be used with almost no decrease in the reconstruction quality of the coding scheme.
Different ways of re-using the Eigenspace extracted from one video sequence of a person to encode other video sequences are examined. The most important factor is the positioning of the facial features in the video frames.
Through a user test we find that it is extremely important to consider secondary workloads, and how users actually make use of video, when experimental setups are designed.
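The encode/decode cycle of principal component video coding can be sketched in a few lines of linear algebra. This is a generic PCA codec under the same idea — project a whole frame onto a learned Eigenspace, transmit only the coefficients — not Söderström's exact implementation; function names and the SVD route are illustrative.

```python
import numpy as np

def build_eigenspace(frames, k):
    """Learn a k-dimensional Eigenspace from flattened training frames."""
    X = np.stack([f.ravel() for f in frames]).astype(np.float64)
    mean = X.mean(axis=0)
    # Rows of Vt are the principal directions (the "Eigenimages").
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def encode(frame, mean, eig):
    """Compress an entire frame to k projection coefficients."""
    return eig @ (frame.ravel() - mean)

def decode(coeffs, mean, eig, shape):
    """Reconstruct the frame from its k coefficients."""
    return (mean + coeffs @ eig).reshape(shape)
```

Note the property the abstract highlights: whatever the frame resolution, only the k coefficients need to be transmitted per frame once the Eigenspace and mean image are at the receiver.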
35

Three-dimensional representation of space from a digital video signal

Μιχαήλ, Αθανάσιος 09 October 2014 (has links)
This thesis was written in exclusive collaboration between the student Athanasios Michail and Assistant Professor Evangelos Dermatas of the University of Patras. The subject was decided jointly by both, in the context of the student's undergraduate studies, for several reasons: not only the rapid growth of entertainment applications using three-dimensional techniques, but also their growing importance in scientific fields such as medicine, caught the student's attention. Recent uses in driver-assistance systems, such as stereoscopic or near-range cameras, and in industrial robotics also make the field of computer vision particularly important. Many problems of depiction and spatial orientation can be solved given a human-like spatial perception. For the representation in three dimensions we rely on passive stereoscopic vision and incremental structure from motion combined with Multi-View Stereo. In computer vision, 3D reconstruction is the process of capturing the shape and appearance of real objects. The various depth estimation methods (recovering the third dimension from a two-dimensional digital image) are presented. Emphasis is placed on the geometric background, with references to several important works where necessary. Both the modelling of the camera and its calibration are covered, a step necessary for a realistic representation. We then turn to digital image processing and the detection of interest points in the captured frames. Next, the basic principles of epipolar geometry are explained, and emphasis is placed on robust estimation methods. Image rectification is presented with representative experimental results, and finally we arrive at sparse and dense reconstruction in three dimensions.
Considerable time was devoted to the theoretical part of this work, so that the reader can easily understand the basic principles and the key stages of representing a three-dimensional space. Alongside the theory, graphical results are shown that substantiate it or lead to useful conclusions.
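One core step of the reconstruction pipeline described above — recovering a 3D point from its projections in two calibrated views — can be sketched with standard linear (DLT) triangulation. This is textbook material illustrating the idea, not code from the thesis.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    Each observation x = (u, v) under a 3x4 projection matrix P
    contributes two rows, u*P[2]-P[0] and v*P[2]-P[1]; the 3D point
    is the null vector of the stacked 4x4 system, taken here as the
    last right-singular vector of the SVD.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise
```

Dense reconstruction repeats this (after rectification and matching) for every correspondence; robust pipelines wrap such minimal solvers in RANSAC, which is the "robust estimation" the abstract refers to.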
36

Semantics and planning based workflow composition and execution for video processing

Nadarajan, Gayathri January 2011 (has links)
Traditional workflow systems have several drawbacks, e.g. their inability to react rapidly to changes, to construct workflows automatically (or with user involvement), and to improve performance autonomously (or with user involvement) in an incremental manner according to specified goals. Overcoming these limitations would be highly beneficial for complex domains where such adversities are exhibited. Video processing is one such domain that increasingly requires attention, as ever larger amounts of images and videos are becoming available to people who are not technically adept at modelling the processes involved in constructing complex video processing workflows. Conventional video and image processing systems, on the other hand, are developed by programmers possessing image processing expertise. These systems are tailored to produce highly specialised, hand-crafted solutions for very specific tasks, making them rigid and non-modular. The knowledge-based vision community has attempted to produce more modular solutions by incorporating ontologies. However, ontologies have not been utilised to their full potential, for instance to encompass application context descriptions (e.g. lighting and clearness effects) and qualitative measures. This thesis aims to tackle some of the research gaps yet to be addressed by the workflow and knowledge-based image processing communities by proposing a novel workflow composition and execution approach within an integrated framework. This framework distinguishes three levels of abstraction: the design, workflow and processing layers. The core technologies that drive the workflow composition mechanism are ontologies and planning. Video processing problems provide a fitting domain for investigating the effectiveness of this integrated method, as such problems have not been fully explored by the workflow, planning and ontology communities despite their combined potential to confront this known hard problem.
In addition, the pervasiveness of video data has amplified the need for automated assistance for image-processing-naive users, but no adequate support has been provided so far. A video and image processing ontology comprising three sub-ontologies was constructed to capture the goals, the video descriptions and the capabilities (video and image processing tools). The sub-ontologies are used for representation and inference; in particular, they are used in conjunction with an enhanced, domain-independent Hierarchical Task Network (HTN) planner to help with performance-based selection of solution steps based on preconditions, effects and postconditions. The planner, in turn, makes use of process models contained in a process library when deliberating on the steps, and then consults the capability ontology to retrieve a suitable tool at each step. Two key features of the planner are its ability to support workflow execution (interleaving planning with execution) and to operate in automatic or semi-automatic (interactive) mode. The first feature is highly desirable for video processing problems because executing image processing steps yields visual results that are intuitive and verifiable by the human user, while automatic validation is non-trivial. In the semi-automatic mode, the planner is interactive and prompts the user to make a tool selection whenever more than one tool is available to perform a task. The user makes the selection based on the recommended descriptions provided by the workflow system. Once planning is complete, the result of applying the chosen tool is presented to the user textually and visually for verification. This plays a pivotal role in giving the user control and the ability to make informed decisions. Hence, the planner extends the capabilities of typical planners by guiding the user towards more optimal solutions.
Video processing problems can also be solved in more modular, reusable and adaptable ways than with conventional image processing systems. The integrated approach was evaluated on a test set of videos of varying quality originating from an open-sea environment. Experiments evaluating the efficiency, the adaptability to users' changing needs and the learnability of this approach were conducted with users who did not possess image processing expertise. The findings indicate that this integrated workflow composition and execution method: 1) provides a speed-up of over 90% in execution time for video classification tasks using fully automatic processing compared to manual methods, without loss of accuracy; 2) is more flexible and adaptable in response to changes in user requests (be it in the task, the constraints on the task or the descriptions of the video) than modifying existing image processing programs when the domain descriptions are altered; and 3) assists the user in selecting optimal solutions by providing recommended descriptions.
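A minimal sketch of HTN-style decomposition may clarify how such a planner expands a compound task into primitive processing steps while checking preconditions. The domain (names like `classify_video`) is invented for illustration, and tool selection per step — the capability-ontology lookup in the thesis — is collapsed to "first applicable method", so this is a greatly simplified model, not the thesis's planner.

```python
def htn_plan(task, methods, operators, state):
    """Tiny HTN decomposition.

    operators: {primitive: (preconditions, effects)} as sets of facts.
    methods:   {compound: [subtask lists]} tried in order.
    The state set is mutated as effects are applied, mimicking
    interleaved planning and execution.
    """
    if task in operators:                     # primitive task
        pre, effects = operators[task]
        if not pre <= state:
            return None                       # precondition failure
        state |= effects
        return [task]
    for subtasks in methods.get(task, []):    # compound task: try methods
        plan, saved = [], set(state)
        for sub in subtasks:
            step = htn_plan(sub, methods, operators, state)
            if step is None:                  # backtrack: restore state
                state.clear()
                state |= saved
                break
            plan += step
        else:
            return plan
    return None
```

A real HTN planner would also rank applicable methods by performance criteria and, in interactive mode, ask the user to pick among candidate tools; both are omitted here.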
37

Recognition of the number of moving objects and tracking of their trajectories using machine vision methods

Κουζούπης, Δημήτριος 05 January 2011 (has links)
This thesis addresses the detection and tracking of human figures in video sequences using machine vision methods. The sequences are assumed to have been captured by a static camera, indoors or outdoors. The problem is divided into three main parts, which are studied, analysed and implemented in separate chapters: motion segmentation; object classification, so that humans are recognised among the moving entities; and tracking of the human silhouettes to record their paths while they remain in the scene. A further chapter is dedicated to optical flow techniques and the related methods that can be employed for the same problem. The developed algorithms performed satisfactorily under a variety of conditions, and their results can serve as input to a wide range of higher-level applications for human activity recognition and behaviour understanding.
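The motion segmentation stage for a static camera can be illustrated with a per-pixel running-Gaussian background model — a single-Gaussian simplification of the mixture models commonly used for this task. Parameter values (`alpha`, `k`, the initial variance) are assumptions for the sketch, not from the thesis.

```python
import numpy as np

def motion_mask(frames, alpha=0.05, k=2.5):
    """Running per-pixel mean/variance background model.

    Pixels farther than k standard deviations from the background
    mean are flagged as moving; mean and variance are then updated
    with exponential forgetting so the model adapts slowly.
    """
    mean = frames[0].astype(np.float64)
    var = np.full_like(mean, 25.0)             # assumed initial variance
    masks = []
    for f in frames[1:]:
        f = f.astype(np.float64)
        d2 = (f - mean) ** 2
        masks.append(d2 > (k ** 2) * var)      # foreground test
        mean = (1 - alpha) * mean + alpha * f  # background update
        var = (1 - alpha) * var + alpha * d2
    return masks
```

Connected regions of the resulting masks would then feed the classification and tracking stages described above.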
38

Detecção e contagem de veículos em vídeos de tráfego urbano / Detecting and counting vehicles in urban traffic video

Barcellos, Pablo Roberlan Manke January 2014 (has links)
This work presents a new method for tracking and counting vehicles in urban traffic videos. Using image processing and particle clustering techniques, the proposed method uses motion coherence and spatial adjacency to group particles so that each group represents a vehicle in the video sequence. A foreground mask is created using a Gaussian Mixture Model and Motion Energy Images to determine the locations where particles must be generated, and the convex hulls of the detected groups are then analysed for the potential detection of vehicles.
This analysis takes into consideration the convex shape of the particle groups (objects) and the foreground mask to merge or split the obtained groups. After a vehicle is identified, it is tracked using the similarity of color histograms in windows centered at the particle locations. Vehicle counting takes place on user-defined virtual loops, through the intersection of tracked vehicles with the loops. Tests were conducted on six different traffic videos, totalling 80,000 frames. The results were compared with similar methods available in the literature and were equivalent or superior.
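Counting on user-defined virtual loops reduces to testing tracked trajectories against a detection region. The sketch below assumes axis-aligned rectangular loops and centroid tracks as simplifications; the actual method intersects full vehicle regions with the loops.

```python
def count_loop_crossings(tracks, loop):
    """Count each tracked vehicle once when it first enters a virtual loop.

    tracks: {vehicle_id: [(x, y), ...]} centroid trajectories
    loop:   (x0, y0, x1, y1) axis-aligned detection region
    """
    x0, y0, x1, y1 = loop
    counted = set()
    for vid, path in tracks.items():
        for x, y in path:
            if x0 <= x <= x1 and y0 <= y <= y1:
                counted.add(vid)
                break                  # one count per vehicle
    return len(counted)
```

The `counted` set is what prevents a slow vehicle that stays inside the loop for many frames from being counted repeatedly.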
