261.
Template Matching on Vector Fields using Clifford Algebra. Ebling, J., Scheuermann, G. (14 December 2018)
Due to the amount of flow simulation and measurement data, automatic detection, classification and visualization of features is necessary for inspection. Many automated feature detection methods have therefore been developed in recent years. However, in most cases only one feature class is visualized afterwards, and many algorithms have problems in the presence of noise or superposition effects. In contrast, image processing and computer vision have robust methods for feature extraction and for computing derivatives of scalar fields. Furthermore, interpolation and other filters can be analyzed in detail. Applying these methods to vector fields would provide a solid theoretical basis for feature extraction. The authors suggest Clifford algebra as a mathematical framework for this task. Clifford algebra provides a unified notation for scalars and vectors, as well as a multiplication of all basis elements. The Clifford product of two vectors captures the complete geometric information about their relative position. Integrating this product yields Clifford correlation and convolution, which can be used for template matching on vector fields. Furthermore, for frequency analysis of vector fields and of the behavior of vector-valued filters, a Clifford Fourier transform has been derived for two and three dimensions. Convolution and other theorems have been proved, and fast algorithms for computing the Clifford Fourier transform exist. The computation of Clifford convolution can therefore be accelerated by carrying it out in the Clifford Fourier domain. Together, Clifford convolution and the Clifford Fourier transform allow a thorough analysis and subsequent visualization of vector fields.
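As a rough illustration of the idea (a hedged sketch, not the authors' implementation), the following Python snippet computes the 2D geometric product and a brute-force Clifford correlation of a template against a vector field; the H x W x 2 array layout is an assumption.

```python
import numpy as np

def clifford_product_2d(a, b):
    """Geometric product of two 2D vectors.

    Returns (scalar, bivector): the scalar part is the dot product and
    the bivector coefficient is the wedge (signed area), so together
    they encode both the alignment and relative orientation of a and b.
    """
    scalar = a[0] * b[0] + a[1] * b[1]
    bivector = a[0] * b[1] - a[1] * b[0]
    return scalar, bivector

def clifford_correlation(field, template):
    """Slide `template` over `field` (both H x W x 2 vector fields) and
    accumulate the Clifford products of corresponding vectors. A large
    scalar response signals alignment with the template; the bivector
    part captures rotation between field and template."""
    fh, fw, _ = field.shape
    th, tw, _ = template.shape
    out = np.zeros((fh - th + 1, fw - tw + 1, 2))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = field[y:y+th, x:x+tw]
            s = np.sum(patch[..., 0]*template[..., 0] + patch[..., 1]*template[..., 1])
            b = np.sum(patch[..., 0]*template[..., 1] - patch[..., 1]*template[..., 0])
            out[y, x] = (s, b)
    return out
```

In the Fourier-accelerated version described above, the nested loops would be replaced by pointwise multiplication in the Clifford Fourier domain.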
262.
Efficient rendering of real-world environments in a virtual reality application, using segmented multi-resolution meshes. Chiromo, Tanaka Alois (January 2020)
Virtual reality (VR) applications are becoming increasingly popular and are being used in a variety of domains. A VR application can simulate a large real-world landscape in a computer program for purposes such as entertainment, education or business.
Typically, 3-dimensional (3D) and VR applications use environments that are made up of meshes of relatively small size. As the size of the meshes increases, the applications start experiencing lag and run-time memory errors, so it is inefficient to load large meshes into a VR application directly. Manually modelling an accurate real-world environment can also be a complicated task, due to the large size and complex nature of the landscapes. In this research, a method is proposed to automatically convert 3D point-clouds of any size and complexity into a format that can be efficiently rendered in a VR application. Apart from reducing the performance cost, the solution also reduces the risk of virtual-reality-induced motion sickness.
The pipeline of the system incorporates three main steps: surface reconstruction, texturing and segmentation. The surface reconstruction step converts the 3D point-clouds into 3D triangulated meshes. Texturing adds a realistic feel to the appearance of the meshes. Segmentation splits large meshes into smaller components that can be rendered individually without overflowing the memory.
A novel mesh segmentation algorithm, the Triangle Pool Algorithm (TPA), is designed to segment the mesh into smaller parts. To avoid relying on the complex geometric and surface features of natural scenes, the TPA uses the colour attribute of the scene for segmentation. The TPA produces results comparable to those of state-of-the-art 3D segmentation algorithms when segmenting regular 3D objects, and outperforms them when segmenting meshes of real-world natural landscapes.
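The abstract does not spell out the TPA's internals, so the following Python sketch only illustrates the general idea of colour-driven mesh segmentation (cluster vertex colours, then split triangles by cluster); it is an assumption-laden stand-in, not the TPA itself.

```python
import numpy as np
from sklearn.cluster import KMeans

def colour_segment_mesh(vertex_colours, faces, n_clusters=8):
    """Illustrative colour-driven mesh segmentation (not the actual TPA).

    vertex_colours: (V, 3) RGB values per vertex; faces: (F, 3) vertex
    indices. Vertices are clustered by colour, and each triangle is
    labelled with the majority cluster of its three vertices.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vertex_colours)
    face_labels = np.array([np.bincount(labels[f]).argmax() for f in faces])
    # Each label defines a pool of triangles that can be stored and
    # rendered as an independent segment.
    return [faces[face_labels == k] for k in range(n_clusters)]
```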
The VR application is designed using the Unreal and Unity 3D engines. Its principle of operation is to render regions close to the user with multiple highly detailed mesh segments, while regions further away are rendered with a lower-detail mesh. The segments that are not rendered at a given time are kept in external storage. This principle of operation frees up memory and reduces the computational power required to render highly detailed meshes. / Dissertation (MEng)--University of Pretoria, 2020. / Electrical, Electronic and Computer Engineering / MEng / Unrestricted
263.
Structuring of image databases for the suggestion of products for online advertising. Yang, Lixuan (10 July 2017)
The topic of the thesis is the extraction and segmentation of clothing items from still images using techniques from computer vision, machine learning and image description, in view of suggesting, non-intrusively, similar items from a database of retail products. We first propose a dedicated object extractor for dress segmentation that combines local information with prior learning. A person detector localizes sites in the image that are likely to contain the object. Then, an intra-image two-stage learning process is developed to roughly separate foreground pixels from the background. Finally, the object is finely segmented by an active contour algorithm that takes into account the previous segmentation and injects specific knowledge about local curvature into the energy function. We then propose a new framework for extracting general deformable clothing items, using a three-stage global-local fitting procedure. A set of templates initiates an object extraction process through a global alignment of the model, followed by a local search minimizing a measure of the misfit with respect to the potential boundaries in the neighborhood. The results provided by each template are aggregated, with a global fitting criterion, to obtain the final segmentation. In our latest work, we extend the output of a Fully Convolutional Network to infer context from local units (superpixels). To achieve this, we optimize an energy function, which combines the large-scale structure of the image with the local low-level visual descriptions of superpixels, over the space of all possible pixel labellings. In addition, we introduce a novel dataset called RichPicture, consisting of 1000 images for clothing extraction from fashion images. The methods are validated on public databases and compare favorably to the other methods according to all the performance measures considered.
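As a hedged sketch of the last step (assuming per-pixel class probabilities from some FCN are available; the actual energy and optimizer in the thesis differ), superpixels can carry the local structure while the FCN provides the large-scale evidence:

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_labels(image, fcn_probs, n_segments=400, smooth=0.5):
    """Hedged sketch: fuse FCN outputs with superpixel structure.

    image: (H, W, 3) float RGB; fcn_probs: (H, W, C) per-pixel class
    probabilities from some FCN (assumed given). Each superpixel takes
    the class maximizing its mean FCN probability, lightly regularized
    toward the labels of neighbouring superpixels.
    """
    sp = slic(image, n_segments=n_segments, start_label=0)
    n_sp, n_cls = sp.max() + 1, fcn_probs.shape[2]
    unary = np.zeros((n_sp, n_cls))
    for s in range(n_sp):
        unary[s] = fcn_probs[sp == s].mean(axis=0)
    labels = unary.argmax(axis=1)
    # Collect pairs of superpixels that touch horizontally or vertically.
    right, down = sp[:, :-1] != sp[:, 1:], sp[:-1, :] != sp[1:, :]
    pairs = set(zip(sp[:, :-1][right], sp[:, 1:][right])) | \
            set(zip(sp[:-1, :][down], sp[1:, :][down]))
    # One sweep of iterated conditional modes as a crude stand-in for
    # the full search over labellings described in the thesis.
    for s in range(n_sp):
        votes = unary[s].copy()
        for a, b in pairs:
            if a == s: votes[labels[b]] += smooth
            if b == s: votes[labels[a]] += smooth
        labels[s] = votes.argmax()
    return labels[sp]  # back to a per-pixel label map
```

A single ICM sweep is used here for brevity; graph cuts or belief propagation would be the usual heavier-weight choices for this kind of energy.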
264.
Speaker Diarization of Meeting Data: Who Speaks When. Tůma, Radovan (date unknown)
This work proposes a diarization system based on the Bayesian Information Criterion (BIC). The thesis describes the background theory and briefly reviews previously used systems. The idea of this work is to apply methods proposed earlier in a faster and more reliable way. The proposed system was tested on several recordings to establish its error rate. The test results are not very good, but several possible improvements are proposed.
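A minimal sketch of BIC-based speaker-change detection, the core operation such a diarization system repeats over a sliding window (the feature choice and the penalty weight lambda are assumptions):

```python
import numpy as np

def delta_bic(X, t, lam=1.0):
    """Hedged sketch of BIC-based speaker change detection.

    X: (N, d) array of acoustic feature vectors (e.g. MFCCs);
    t: candidate change point, 2 <= t <= N - 2. Compares modelling the
    whole window with one full-covariance Gaussian against two
    Gaussians split at t. A positive value favours a change at t.
    """
    n, d = X.shape
    def logdet(cov):
        # Regularize slightly so short segments stay well-conditioned.
        sign, val = np.linalg.slogdet(cov + 1e-6 * np.eye(d))
        return val
    full = (n / 2) * logdet(np.cov(X, rowvar=False))
    left = (t / 2) * logdet(np.cov(X[:t], rowvar=False))
    right = ((n - t) / 2) * logdet(np.cov(X[t:], rowvar=False))
    # Model-complexity penalty: mean plus symmetric covariance parameters.
    penalty = lam * 0.5 * (d + d * (d + 1) / 2) * np.log(n)
    return full - left - right - penalty

# Sliding t across the window and keeping maxima with delta_bic > 0
# yields the hypothesised speaker-change boundaries.
```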
265.
ENHANCING FUZZY CLUSTERING METHODS FOR IMAGE SEGMENTATION USING SPATIAL INFORMATION. Chen, Shangye (30 April 2019)
No description available.
266.
Longitudinal variation in the axial muscles of snakes. Nicodemo, Philip, Jr. (January 2012)
No description available.
267.
Professional bodies in the context of health care system reform: the case of Quebec general practitioners (Les corps professionnels dans un contexte de réforme d'un système de soins : le cas des omnipraticiens québécois). Tucci, Carole (January 1999)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
268.
Market segmentation, motivations, attitudes, and preferences of Virginia resident freshwater anglers. O'Neill, Brendan Michael (21 June 2001)
For many years, the Virginia Department of Game and Inland Fisheries (VDGIF) has managed freshwater fisheries without fully understanding its stakeholders. To increase its knowledge and improve management, the VDGIF commissioned a market segmentation study to collect baseline information about its constituents and serve as a model for future studies. I developed a 16-page mail questionnaire that was sent to a stratified random sample of 5,378 Virginia resident freshwater fishing license holders. The questionnaire was used to collect information on the characteristics, motivations, attitudes, and preferences of Virginia resident freshwater anglers. The response rate was 52%.
I examined the descriptive characteristics of resident freshwater anglers, of anglers who purchased different types of licenses, and of anglers from different management regions. Fishing behaviors, motivations for fishing, attitudes, and preferences for management differed among anglers by license type and region. Although satisfaction with freshwater fishing was high in most cases, many anglers believed that fishing quality had declined. By adopting a marketing approach and providing the desired experiences to each segment of anglers, the Fisheries Division may improve its relationship with anglers, as well as increase participation and satisfaction.
I also segmented the Virginia anglers by species preference, by specialization, and by a multi-level approach combining species preference and specialization. Anglers are not a homogeneous group; they seek different experiences. Multi-level segmentation was the most useful method because it identified differences within species preference groups. Within each species preference group I found several segments of anglers. Segments differed in their orientations (trophy or consumptive), preferred fishing methods and information sources, and support for regulations. Specialist anglers in each species preference group were trophy oriented, and some were consumptive oriented as well; specialists were also the most supportive of restrictive regulations. Less specialized anglers in each species preference group generally were less trophy oriented, more consumptive, and less supportive of regulations than specialist anglers. My results provide a better understanding of the different segments of anglers within each species preference group, which will allow managers to provide a more satisfying experience for their stakeholders. / Master of Science
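As a toy illustration of the multi-level idea (all column names and category codings below are hypothetical, not the survey's actual scheme), crossing species preference with specialization yields the within-group segments described above:

```python
import pandas as pd

# Hypothetical angler records: species preference, specialization
# level, and a binary trophy-orientation indicator.
anglers = pd.DataFrame({
    "species_pref":    ["bass", "trout", "bass", "catfish", "trout"],
    "specialization":  ["high", "low", "low", "high", "high"],
    "trophy_oriented": [1, 0, 0, 1, 1],
})

# First level: species preference; second level: specialization.
segments = anglers.groupby(["species_pref", "specialization"])
print(segments["trophy_oriented"].mean())  # trophy orientation per segment
```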
269.
Fast Head-and-Shoulder Segmentation. Deng, Xiaowei (January 2016)
Many tasks in visual computing and communications, such as object recognition, matting and compression, need to extract and encode the outer boundary of an object in a digital image or video. In this thesis, we focus on a particular video segmentation task and propose an efficient method for segmenting the head and shoulders of humans through video frames. The key innovations of our work are as follows: (1) a novel head descriptor in polar coordinates is proposed (sketched below), which characterizes the intrinsic head shape well and is easy for a computer to process, classify and recognize; (2) a learning-based method is proposed that provides highly precise and robust head-and-shoulder segmentation in applications where the head-and-shoulder object in question is known a priori and the background is complex.
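A minimal sketch of what a polar-coordinate head descriptor could look like (the thesis does not give the descriptor's exact construction, so the binning and normalization below are assumptions):

```python
import numpy as np

def polar_head_descriptor(contour, n_bins=36):
    """Hedged sketch of a polar-coordinate shape descriptor.

    contour: (N, 2) array of (x, y) boundary points of the head region.
    Measures the mean distance from the centroid to the boundary in
    n_bins angular sectors and normalizes for scale, giving a
    fixed-length vector a classifier can consume.
    """
    center = contour.mean(axis=0)
    rel = contour - center
    radii = np.hypot(rel[:, 0], rel[:, 1])
    angles = np.arctan2(rel[:, 1], rel[:, 0])            # in [-pi, pi)
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    desc = np.zeros(n_bins)
    for b in range(n_bins):
        if np.any(bins == b):
            desc[b] = radii[bins == b].mean()
    return desc / (desc.max() + 1e-9)                    # scale invariance
```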
The efficacy of our method is demonstrated on a number of challenging experiments. / Thesis / Master of Applied Science (MASc)
270.
CNN MODEL FOR RECOGNITION OF TEXT-BASED CAPTCHAS AND ANALYSIS OF LEARNING-BASED ALGORITHMS' VULNERABILITIES TO VISUAL DISTORTION. Amiri Golilarz, Noorbakhsh (1 May 2023)
Due to the rapid progress and advancements in deep learning and neural networks, many approaches and much state-of-the-art research have been developed in these fields, enabling various learning-based attacks that leave websites and portals vulnerable. Such attacks decrease the security of websites and can result in the release of sensitive personal information. These days, preserving the security of websites is one of the most challenging tasks. A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a test, deployed on many websites, that distinguishes humans from robots in order to protect the site from automated attacks. In this dissertation, we propose a CNN-based approach to attack and break text-based CAPTCHAs. The proposed method has been compared with several state-of-the-art approaches in terms of recognition accuracy (RA); based on the results, the developed method can break and recognize CAPTCHAs with high accuracy. Additionally, to examine how these CAPTCHAs can be made harder to break, we applied five types of distortion to them and measured the recognition accuracy in the presence of each. The results indicate that adversarial noise can make CAPTCHAs much more difficult to break, and this analysis, compared against several state-of-the-art approaches, can help CAPTCHA developers account for these noises in their designs. This dissertation also presents a hybrid CNN-SVM model for solving text-based CAPTCHAs. The developed method contains four main steps: segmentation, feature extraction, feature selection, and recognition. For segmentation, we suggest using histogram analysis and k-means clustering. For feature extraction, we develop a new CNN structure. The extracted features are passed through the mRMR algorithm to select the most effective features, which are then fed into an SVM for classification and recognition. The results have been compared with several state-of-the-art methods to show the superiority of the developed approach. In general, this dissertation presents deep-learning-based methods for solving text-based CAPTCHAs; the developed techniques can break CAPTCHAs with high accuracy and in a short time. We used Peak Signal to Noise Ratio (PSNR), ROC, accuracy, sensitivity, specificity, and precision to evaluate the performance of the different methods, and the results indicate the superiority of the developed methods.
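As a hedged sketch of the hybrid pipeline's two ends (the histogram step, the actual CNN feature extractor and the exact mRMR configuration are omitted; the function names and RBF kernel choice are assumptions, not the dissertation's):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def segment_characters(binary_img, n_chars=5):
    """Sketch of the segmentation step: cluster the x-coordinates of
    foreground pixels with k-means to isolate each character column.
    The dissertation combines histogram analysis with k-means; this
    shows only the k-means half."""
    ys, xs = np.nonzero(binary_img)
    km = KMeans(n_clusters=n_chars, n_init=10).fit(xs.reshape(-1, 1))
    order = np.argsort(km.cluster_centers_.ravel())  # left-to-right
    return [binary_img[:, xs[km.labels_ == c].min():xs[km.labels_ == c].max() + 1]
            for c in order]

# Downstream (assumed, not the exact pipeline): CNN features for each
# character crop are filtered with mRMR, then classified with an SVM.
# With precomputed feature vectors this reduces to:
def classify(train_feats, train_labels, test_feats):
    svm = SVC(kernel="rbf").fit(train_feats, train_labels)
    return svm.predict(test_feats)
```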