  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A pipelined parallel processor for real-time edge detection

McIlroy, C. D. January 1984 (has links)
No description available.
2

Foreground detection in specific outdoor scenes : A review of recognized techniques and proposed improvements for a real-time GPU-based implementation in C++

Sandström, Gustav January 2016 (has links)
Correct insertion of computer graphics into live-action broadcasts of outdoor sports requires precise knowledge of the foreground, i.e. the players present in the scene. This thesis proposes a foreground detection and segmentation framework with a focus on real-time performance at 1080p resolution. A dataset consisting of four scenes (single-, multi-segment-, and transcending-foreground, plus a light-switch scene, all with dynamic backgrounds) was constructed together with 26 ground-truths. Results show that the framework should run internally at 288p using GPU acceleration with geometric nearest-neighbour interpolation to attain real-time capability. To maximize accuracy, the framework runs two instances of OpenCV MOG2 in parallel on differently downsampled frames whose masks are bitwise-joined to increase robustness. A set of morphological operations provides post-processing for spatial coherence, and a specific turf consideration gives accurate contours. Thanks to additional camera-operator input, a crude distance estimate lets foreground segments fade into the background at a predetermined depth. The framework suffers from inaccurate segmentation during rapid light switches, but recovers in a matter of seconds, like the 'vanilla' MOG algorithm. For the specific scenes the framework performs excellently, especially on the light-switch scene in comparison to the MOG algorithm. For the non-specific scenes of 'BMC 2012', performance does not exceed the current state of the art.
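The two-resolution MOG2 fusion described in this abstract can be illustrated with a much simpler per-pixel running-Gaussian background model standing in for OpenCV's MOG2; the frame sizes, downsampling factors, learning rate, and threshold below are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

class RunningGaussianBG:
    """Per-pixel running Gaussian background model (a toy stand-in for MOG2)."""
    def __init__(self, shape, lr=0.05, k=2.5):
        self.mean = np.zeros(shape)
        self.var = np.full(shape, 1.0)
        self.lr, self.k = lr, k

    def apply(self, frame):
        d = frame - self.mean
        fg = d * d > (self.k ** 2) * self.var        # foreground where deviation is large
        self.mean += self.lr * d                     # slowly absorb the scene into the model
        self.var += self.lr * (d * d - self.var)
        return fg

def downsample(frame, f):
    """Nearest-neighbour decimation by an integer factor."""
    return frame[::f, ::f]

def fused_foreground(frame, bg_a, bg_b):
    """Run two models on differently downsampled frames and bitwise-join
    the masks (frame dimensions assumed divisible by 4)."""
    m_a = bg_a.apply(downsample(frame, 2))
    m_b = bg_b.apply(downsample(frame, 4))
    # upsample both masks back to full resolution and OR them for robustness
    up_a = np.repeat(np.repeat(m_a, 2, 0), 2, 1)
    up_b = np.repeat(np.repeat(m_b, 4, 0), 4, 1)
    return up_a | up_b
```

In the thesis's framework the two masks come from full MOG2 instances and are followed by morphological post-processing; the bitwise join shown here mirrors only the fusion step.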
3

Real time image processing : algorithm parallelization on multicore multithread architecture / Imagerie temps réel : parallélisation d’algorithmes sur plate-forme multi-processeurs

Mahmoudi, Ramzi 13 December 2011 (has links)
Topological features of an object are fundamental in image processing. In many applications, including medical imaging, it is important to maintain or control the topology of the image. However, designing transformations that preserve both the topology and the geometric characteristics of the input image is a complex task, especially in the case of parallel processing. Parallel processing is applied to accelerate computation by sharing the workload among multiple processors. In terms of algorithm design, parallel computing strategies profit from the natural parallelism (also called the partial order of the algorithm) present in the algorithm, which provides two main sources of parallelism: data parallelism and functional parallelism. Concerning architectural design, it is essential to link the spectacular evolution of parallel architectures to parallel processing: if parallelization strategies have become necessary, it is thanks to considerable improvements in multiprocessing systems and the rise of multi-core processors. In the case of shared-memory (SMP) machines, immediate sharing of data provides additional flexibility in designing such strategies and in exploiting data and functional parallelism, notably with the evolution of the interconnection systems between processors.

In this perspective, we propose a new parallelization strategy, called SD&M (Split, Distribute and Merge), that covers a large class of topological operators. SD&M has been developed to provide parallel processing for any operator based on a topological transformation. Based on this strategy, we propose a series of parallel topological algorithms (new or adapted versions). Our main contributions are:

(1) A new approach to compute the watershed transform based on the MSF transform. The proposed algorithm is parallel, preserves the topology, does not need prior minima extraction, and is suited to shared-memory machines. It uses Jean Cousty's streaming approach and requires neither a sorting step nor a hierarchical queue. This contribution followed an intensive study of existing watershed transform algorithms in the discrete case.

(2) A similar study of thinning algorithms, covering sixteen topology-preserving parallel thinning algorithms. In addition to performance criteria, we introduce two qualitative criteria, based on the relationship between the medial axis and the obtained homotopic skeleton, to compare and classify them. After this classification, we obtained better results with an adapted version of Couprie's filtered thinning algorithm, derived by applying our strategy.

(3) An enhanced computation method for topological smoothing, combining parallel computation of the Euclidean distance transform (using the Meijster algorithm) with parallel thinning/thickening processes (using the adapted version of Couprie's algorithm already mentioned).
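The Split, Distribute and Merge pattern named in this abstract can be sketched generically; the horizontal-strip split, the thread pool, and the per-strip threshold operator below are illustrative assumptions standing in for the thesis's topology-preserving operators.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def sdm_apply(image, op, n_strips=4):
    """Split the image into horizontal strips, distribute the per-strip
    operator across a thread pool, then merge the results in order."""
    strips = np.array_split(image, n_strips, axis=0)   # split
    with ThreadPoolExecutor(max_workers=n_strips) as pool:
        results = list(pool.map(op, strips))           # distribute
    return np.vstack(results)                          # merge

def binarize(strip):
    """Stand-in per-strip operator; a real SD&M instance would run a
    topology-preserving transform such as thinning here."""
    return (strip > 0.5).astype(np.uint8)
```

A pixel-wise operator like this needs no border handling; operators with neighbourhood dependencies (thinning, watershed) are exactly where the merge step must reconcile results at strip boundaries.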
4

A REDUNDANT MONITORING SYSTEM FOR HUMAN WELDER OPERATION USING IMU AND VISION SENSORS

Yu, Rui 01 January 2018 (has links)
In manual welding, the welding gun's travel speed can significantly influence the welding results, and critical welding operations usually require welders to concentrate consistently in order to react rapidly and accurately. However, human welders have habitual movements that can subtly influence the welding process, and it takes countless hours to train an experienced welder. A system combining vision and IMU sensors can give a worker the kind of accurate visual feedback an experienced welder relies on. The difficulty is that monitoring and measuring the control process is not always easy in a complex working environment like welding. In this thesis, a new method is developed that uses two different sensing modalities that compensate for each other's weaknesses to obtain accurate monitoring results. The vision sensor and the IMU sensor each capture accurate data from the control process in real time without interfering with the other. Although both the vision and IMU sensors have their own limits, each also has advantages that contribute to the measuring system.
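One common way to make two such sensors compensate each other is a complementary filter: the fast-but-drifting IMU rate is integrated for responsiveness, while the slower, drift-free vision estimate continuously corrects it. The blend factor and the signal model below are illustrative assumptions, not the thesis's actual fusion algorithm.

```python
def complementary_filter(gyro_rates, vision_angles, dt=0.01, alpha=0.98):
    """Fuse a fast-but-drifting gyro rate with a slow-but-absolute vision
    angle. alpha close to 1 trusts the integrated gyro for short-term
    motion; (1 - alpha) pulls the estimate toward the vision measurement,
    removing long-term drift."""
    est = vision_angles[0]
    out = []
    for rate, vis in zip(gyro_rates, vision_angles):
        est = alpha * (est + rate * dt) + (1 - alpha) * vis
        out.append(est)
    return out
```

With a pure gyro-bias input, integration alone drifts without bound, while the fused estimate stays within a bounded offset of the vision reading.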
5

Real time image processing : algorithm parallelization on multicore multithread architecture

Mahmoudi, Ramzi 13 December 2011 (has links) (PDF)
Topological features of an object are fundamental in image processing. In many applications, including medical imaging, it is important to maintain or control the topology of the image. However, designing transformations that preserve both the topology and the geometric characteristics of the input image is a complex task, especially in the case of parallel processing. Parallel processing is applied to accelerate computation by sharing the workload among multiple processors. In terms of algorithm design, parallel computing strategies profit from the natural parallelism (also called the partial order of the algorithm) present in the algorithm, which provides two main sources of parallelism: data parallelism and functional parallelism. Concerning architectural design, it is essential to link the spectacular evolution of parallel architectures to parallel processing: if parallelization strategies have become necessary, it is thanks to considerable improvements in multiprocessing systems and the rise of multi-core processors. In the case of shared-memory (SMP) machines, immediate sharing of data provides additional flexibility in designing such strategies and in exploiting data and functional parallelism, notably with the evolution of the interconnection systems between processors.

In this perspective, we propose a new parallelization strategy, called SD&M (Split, Distribute and Merge), that covers a large class of topological operators. SD&M has been developed to provide parallel processing for many topological transformations. Based on this strategy, we propose a series of parallel topological algorithms (new or adapted versions). Our main contributions are:

(1) A new approach to compute the watershed transform based on the MSF transform. The proposed algorithm is parallel, preserves the topology, does not need prior minima extraction, and is suited to shared-memory machines. It uses Jean Cousty's streaming approach and requires neither a sorting step nor a hierarchical queue. This contribution followed an intensive study of existing watershed transform algorithms in the discrete case.

(2) A similar study of thinning algorithms, covering sixteen topology-preserving parallel thinning algorithms. In addition to performance criteria, we introduce two qualitative criteria, based on the relationship between the medial axis and the obtained homotopic skeleton, to compare and classify them. After this classification, we obtained better results with an adapted version of Couprie's filtered thinning algorithm, derived by applying our strategy.

(3) An enhanced computation method for topological smoothing, combining parallel computation of the Euclidean distance transform (using the Meijster algorithm) with parallel thinning/thickening processes (using the adapted version of Couprie's algorithm already mentioned).
6

Stabilization and Control of a Quad-Rotor Micro-UAV Using Vision Sensors

Fowers, Spencer G. 23 April 2008 (has links) (PDF)
Quad-rotor micro-UAVs have become an important tool in indoor UAV research. Indoor flight poses problems not experienced in outdoor applications: the ability to be location- and movement-aware is paramount because of the close proximity of obstacles (walls, doorways, desks). The Helio-copter, an indoor quad-rotor platform that utilizes a compact FPGA board called Helios, has been developed in the Robotic Vision Lab at Brigham Young University. Helios allows researchers to perform on-board vision processing and feature tracking without the aid of a ground station or wireless transmission. Using this on-board feature tracking system, a drift stabilization control system has been developed that allows indoor flight of the Helio-copter without tethers. The Helio-copter uses an IMU to maintain level attitude while processing camera images on the FPGA, which computes translation, scale, and rotation deviations from camera image feedback. An on-board system has been developed to control yaw, altitude, and drift based solely on the vision sensors. Preliminary testing shows the Helio-copter capable of maintaining level, stable flight within a 6-foot by 6-foot area for over 40 seconds without human intervention, using basic PID loop structures with minor tuning. The integration of the vision system into the control structures is explained.
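The basic PID loop structure mentioned in this abstract can be sketched generically; the gains and the toy integrator plant in the usage below are illustrative assumptions, since the abstract does not give the actual tuning.

```python
class PID:
    """Textbook PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt              # accumulate steady-state error
        deriv = (err - self.prev_err) / self.dt     # damp rapid changes
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

On the Helio-copter, loops of this form would act on the translation, scale, and rotation deviations the vision system reports; here a simple simulated plant shows the controller driving a state to its setpoint.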
7

A Design Methodology for Creating Programmable Logic-based Real-time Image Processing Hardware

Drayer, Thomas Hudson 24 January 1997 (has links)
A new design methodology that produces hardware solutions for performing real-time image processing is presented here. This design methodology provides significant advantages over traditional hardware design approaches by translating real-time image processing tasks into the gate-level resources of programmable logic-based hardware architectures. The use of programmable logic allows high-performance solutions to be realized with very efficient utilization of available logic and interconnection resources. These implementations provide comparable performance at a lower cost than other available programmable logic-based hardware architectures. This new design methodology is based on two components: a programmable logic-based destination hardware architecture and a suite of development system software. The destination hardware architecture is a Custom Computing Machine (CCM) that contains multiple Field Programmable Gate Array (FPGA) chips. FPGA chips provide gate-level programmability for the hardware architecture. Sophisticated software development tools, called the TRAVERSE development system software, are created to overcome the significant amount of time and expertise required to manually utilize this gate-level programmability. The new hardware architecture and development system software combine to establish a unique design methodology. There are several distinct contributions provided by this dissertation. The new flexible MORRPH hardware architecture provides a more efficient solution for creating real-time image processing computing machines than current commercial hardware architectures. The TRAVERSE development system software is the first integrated development system specifically for creating real-time image processing designs with multiple FPGA-based CCMs. New standards and design conventions are defined specifically for creating solutions to low-level image processing tasks, using the MORRPH architecture for verification. 
The circuit partitioning and global routing programs of the TRAVERSE development system software enable automated translation of image processing designs into the resources of multiple FPGA chips in the hardware architecture. In a broad sense, the individual contributions of this dissertation combine to create a new design methodology that will change the way hardware solutions are created for real-time image processing in the future. / Ph. D.
8

An Fpga Based High Performance Optical Flow Hardware Design For Autonomous Mobile Robotic Platforms

Gultekin, Gokhan Koray 01 September 2010 (has links) (PDF)
Optical flow is used in a number of computer vision applications. However, its use in mobile robotic applications is limited by the high computational complexity involved and the limited computational resources available on such platforms. The lack of hardware capable of computing an optical flow vector field in real time prevents the mobile robotics community from efficiently utilizing some successful techniques presented in the computer vision literature. In this thesis work, we design and implement high-performance FPGA hardware with a small footprint and low power consumption that is capable of providing faster-than-real-time optical flow data and is hence suitable for this application domain. The well-known differential optical flow algorithm of Horn & Schunck is selected for this implementation. The complete hardware design of the proposed system is described in detail, together with the design alternatives, the selected approaches, and the selection procedure. We present a performance analysis of the proposed hardware in terms of computation speed, power consumption, and accuracy. The designed hardware is tested with test sequences frequently used for performance evaluation of optical flow techniques in the literature. The proposed hardware is capable of computing the optical flow vector field on 256x256-pixel images in 3.89 ms, which corresponds to a processing speed of 257 fps. The results obtained from the FPGA implementation are compared with a floating-point implementation of the same algorithm realized on PC hardware. The results show that the hardware implementation achieves superior performance in terms of speed, power consumption, and compactness, with minimal loss of accuracy due to the fixed-point implementation.
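The Horn & Schunck algorithm implemented in the FPGA can be sketched in floating point, mirroring the PC reference the thesis compares against; the derivative stencils, the wrap-around neighbourhood average, and the smoothness weight below are standard textbook choices, not the thesis's exact configuration.

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal Horn & Schunck optical flow: iterate toward the (u, v)
    field that satisfies brightness constancy Ix*u + Iy*v + It = 0
    under a global smoothness penalty weighted by alpha."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    Iy, Ix = np.gradient(im1)          # spatial derivatives
    It = im2 - im1                     # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)

    def avg4(f):  # 4-neighbour average (wrap-around borders, for brevity)
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_avg, v_avg = avg4(u), avg4(v)
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_avg - Ix * num / den     # Jacobi update from the HS equations
        v = v_avg - Iy * num / den
    return u, v
```

The fixed-point FPGA design performs the same per-pixel arithmetic; the accuracy loss the thesis measures comes from quantizing these floating-point operations.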
9

Implementation Of A Low-Cost Smart Camera Application On A COTS System

Baykent, Hayri Kerem 01 January 2012 (has links) (PDF)
The objective of this study is to implement a low-cost smart camera application on a Commercial Off-The-Shelf (COTS) system based on Texas Instruments' DM3730 System-on-Chip (SoC) processor. Although there are different architectures for smart camera applications, an ARM-plus-DSP based SoC architecture is selected for implementation because of the complementary abilities of its cores. The BeagleBoard-xM platform, which carries an ARM-plus-DSP based SoC processor, is chosen as the COTS platform. This thesis first describes the design steps for bringing up the COTS platform by porting an embedded Linux to the ARM core of the SoC processor. It then details the design steps necessary to implement smart camera applications on the ARM and DSP cores in parallel. Furthermore, the real-time image processing performance of the BeagleBoard-xM platform for smart camera applications is evaluated with simple implementations.
10

Real-Time Implementation of Vision Algorithm for Control, Stabilization, and Target Tracking for a Hovering Micro-UAV

Tippetts, Beau J. 23 April 2008 (has links) (PDF)
A lightweight, powerful, yet efficient quad-rotor platform was designed and constructed to obtain experimental results of completely autonomous control of a hovering micro-UAV using a complete on-board vision system. The on-board vision and control system is composed of a Helios FPGA board, an Autonomous Vehicle Toolkit daughterboard, and a Kestrel Autopilot. The resulting platform is referred to as the Helio-copter. An efficient algorithm to detect, correlate, and track features in a scene and estimate attitude information was implemented with a combination of hardware and software on the FPGA, and real-time performance was obtained. The algorithms implemented include a Harris feature detector, template matching feature correlator, RANSAC similarity-constrained homography, color segmentation, radial distortion correction, and an extended Kalman filter with a standard-deviation outlier rejection technique (SORT). This implementation was designed specifically for use as an on-board vision solution in determining movement of small unmanned air vehicles that have size, weight, and power limitations. Experimental results show the Helio-copter capable of maintaining level, stable flight within a 6 foot by 6 foot area for over 40 seconds without human intervention.
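The Harris feature detector at the front of the pipeline above can be sketched in software (the thesis realizes it in FPGA hardware); the 3x3 box smoothing and the k constant are common textbook defaults, not the Helios implementation's parameters.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    2x2 structure tensor of image gradients, box-smoothed over a 3x3
    window (wrap-around borders, for brevity)."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)

    def box3(f):  # 3x3 box smoothing via shifted sums
        return sum(np.roll(np.roll(f, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

Corners (both gradient directions present) produce strong positive responses, while edges (one direction) go negative, which is why a threshold on R suffices to pick trackable features.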
