11

Feature-Oriented Design Pattern Detection in Object-Oriented Systems

Hu, Lei 07 1900 (has links)
Identifying design pattern instances within an existing software system can support understanding and reuse of the system's functionality. Moreover, incorporating behavioral features, exercised through task scenarios, into design pattern recovery enhances both the scalability of the process and the usefulness of the recovered design pattern instances. In this context, we present a novel method for recovering design pattern instances from the implementation of system behavioral features through a semi-automatic, multi-phase reverse engineering process. The proposed method consists of a feature-oriented dynamic analysis and a two-phase design pattern detection process. The feature-oriented dynamic analysis works on run-time information collected while the system's behavioral features are exercised and produces a mapping between features and their realization at the class level. In the two-phase design pattern detection process, we employ approximate matching and structural matching to detect instances of the target design pattern described in our proposed Pattern Description Language (PDL), an XML-based design pattern description language. The correspondence between system features and the identified design pattern instances can facilitate the construction of more reusable and configurable software components. Our target application domain is the evolutionary development of software product lines, which emphasizes reusing software artifacts to construct a reference architecture for several similar products. We have implemented a prototype toolkit and conducted experiments on three versions of the JHotDraw system to evaluate our approach. / Thesis / Master of Applied Science (MASc)
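As a rough illustration of the feature-to-class mapping that such a feature-oriented dynamic analysis produces, the following Python sketch builds a mapping from per-scenario execution traces. The trace format, feature names, and class names are hypothetical; this is not the authors' toolkit or their PDL-based matching.

```python
from collections import defaultdict

def feature_to_class_map(scenario_traces):
    """Map each behavioral feature to the classes that realize it.

    `scenario_traces` maps a feature name to a list of trace events, where each
    event is assumed to be a (class_name, method_name) pair recorded while the
    corresponding task scenario was exercised.
    """
    mapping = defaultdict(set)
    for feature, events in scenario_traces.items():
        for class_name, _method in events:
            mapping[feature].add(class_name)
    return dict(mapping)

# Hypothetical traces for two JHotDraw-like features.
traces = {
    "draw_rectangle": [("RectangleFigure", "draw"), ("DrawingView", "repaint")],
    "undo_command":   [("UndoManager", "push"), ("Command", "undo")],
}

print(feature_to_class_map(traces))
# e.g. {'draw_rectangle': {'RectangleFigure', 'DrawingView'},
#       'undo_command': {'UndoManager', 'Command'}}  (set order may vary)
```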
12

[en] A MODEL-CENTRIC SEQUENTIAL APPROACH TO OUTLIER ENSEMBLES IN A MARKETING SCIENCE CONTEXT / [pt] ENSEMBLE SEQUENCIAL CENTRADO EM MODELOS PARA DETECÇÃO DE OUTLIERS NO CONTEXTO DE MARKETING SCIENCE

REBECCA PORPHIRIO DA COSTA DE AZEVEDO 19 February 2019 (has links)
[en] The evolution of mobile devices in recent years has dramatically increased the amount of data and information available to advertisers around the world. The computational cost and the time needed to process these data and distinguish true users from anomalies or noise have only grown. A method for outlier detection could therefore better support Marketing researchers and increase their precision in understanding digital behavior. Recent studies show that, so far, meta-algorithms have seen little use in outlier detection. Meta-algorithms tend to bring benefits because they reduce the dependence that a single algorithm can create. This dissertation proposes a meta-algorithm design that uses different algorithms to obtain better outlier detection results than those obtained by any single algorithm: a sequential, model-centric ensemble. The novelty of the approach consists in (i) exploring a sequential technique, in which algorithms are applied one after another, each one influencing the next, and the final result is a combination of the results obtained; (ii) centering performance on the model rather than on the data, meaning that the ensemble is applied to the whole dataset at once rather than to different subsamples; and (iii) supporting Marketing researchers who need to operate data science in a more robust and coherent way.
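A minimal sketch of the sequential, model-centric idea (not the dissertation's exact ensemble design) might chain two off-the-shelf detectors so that the first stage's scores influence the second stage, with the final result combining both. The data here are synthetic stand-ins for behavioral features.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Hypothetical behavioral features: mostly regular users plus a few anomalies.
X = np.vstack([rng.normal(0, 1, size=(500, 3)),
               rng.normal(6, 1, size=(10, 3))])

# Stage 1: a first detector scores every point of the whole dataset.
iso = IsolationForest(random_state=0).fit(X)
s1 = -iso.score_samples(X)                 # higher means more anomalous

# Stage 2: the first stage impacts the next one by augmenting the feature space
# with its scores before the second detector is fit, again on the whole dataset.
X_aug = np.column_stack([X, s1])
lof = LocalOutlierFactor(n_neighbors=20).fit(X_aug)
s2 = -lof.negative_outlier_factor_         # higher means more anomalous

# Final result: a combination of the (rank-normalized) stage scores.
def rank_normalize(s):
    return np.argsort(np.argsort(s)) / (len(s) - 1)

final_score = 0.5 * rank_normalize(s1) + 0.5 * rank_normalize(s2)
print("top suspected outliers:", np.argsort(final_score)[-10:])
```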
13

Orientation Invariant Pattern Detection in Vector Fields with Clifford Algebra and Moment Invariants

Bujack, Roxana 19 December 2014 (has links)
The goal of this thesis is the development of a fast and robust algorithm that is able to detect patterns in flow fields independently of their orientation and to adequately visualize the results for a human user. This thesis is an interdisciplinary work between the fields of vector field visualization and pattern recognition. A vector field can best be imagined as an area or a volume containing a lot of arrows. The direction of an arrow describes the direction of a flow or force at the point where it starts, and its length the velocity or strength. This builds a bridge to vector field visualization, because drawing these arrows is one of the fundamental techniques for illustrating a vector field. The main challenge of vector field visualization is to decide which of them should be drawn. If you do not draw enough arrows, you may miss the feature you are interested in. If you draw too many arrows, your image will be black all over. We assume that the user is interested in a certain feature of the vector field: a certain pattern. To prevent clutter and occlusion of the interesting parts, we first look for this pattern and then apply a visualization that emphasizes its occurrences. In general, the user wants to find all instances of the interesting pattern, no matter whether they are smaller or bigger, weaker or stronger, or oriented in a different direction than the reference input pattern. But looking for all these transformed versions would take far too long. That is why we look for an algorithm that detects the occurrences of the pattern independently of these transformations. In the second part of this thesis, we work with moment invariants. Moments are the projections of a function onto a function space basis. In order to compare the functions, it is sufficient to compare their moments. Normalization is the act of transforming a function into a predefined standard position. Moment invariants are characteristic numbers, like fingerprints, that are constructed from moments and do not change under certain transformations. They can be produced by normalization, because if all the functions are in one standard position, their prior position has no influence on their normalized moments. With this technique, we were able to solve the pattern detection task for 2D and 3D flow fields by mathematically proving the invariance of the moments with respect to translation, rotation, and scaling. In practical applications, this invariance is disturbed by the discretization. We applied our method to several analytic and real-world data sets and showed that it works on discrete fields in a robust way.
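The 2D core of this idea can be sketched numerically: treat the vector field as a complex-valued function, compute a few complex moments on a grid, and cancel the unknown rotation with the phase of one moment. The sketch below is only an illustration of moment normalization on a discretized field (the field, grid, and chosen moments are made up for the example); the thesis itself develops the full Clifford algebra treatment, including 3D fields and the invariance proofs.

```python
import numpy as np

def complex_moment(field, Z, dA, p, q):
    """Approximate c_{p,q} = integral of z^p * conj(z)^q * f(z) dA on a grid.
    `field` holds the vector field as complex values f = fx + i*fy."""
    return np.sum(Z**p * np.conj(Z)**q * field) * dA

# Grid over the unit disk (a rotation-invariant domain).
xs = ys = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(xs, ys, indexing="ij")
Z = X + 1j * Y
mask = np.abs(Z) <= 1.0
dA = (xs[1] - xs[0]) * (ys[1] - ys[0])

# A hypothetical polynomial field and a copy rotated by 40 degrees
# (domain and vector values are rotated together).
f = lambda z: 0.4 + 1j * z + 0.25 * np.conj(z)**2
alpha = np.deg2rad(40)
f_rot = lambda z: np.exp(1j * alpha) * f(np.exp(-1j * alpha) * z)

for g in (f, f_rot):
    field = g(Z) * mask
    c11 = complex_moment(field, Z, dA, 1, 1)  # picks up a factor e^{i*alpha} under rotation
    c20 = complex_moment(field, Z, dA, 2, 0)  # picks up a factor e^{3i*alpha} under rotation
    # Normalization: cancel the unknown rotation using the phase of c11,
    # leaving a rotation-invariant descriptor (up to discretization error).
    invariant = c20 * np.exp(-3j * np.angle(c11))
    print(np.round(invariant, 3))   # both fields yield approximately the same value
```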
14

Uma abordagem para detecção de padrões emergentes. / An approach for detecting emerging patterns.

JOB, Ricardo de Sousa. 12 June 2018 (has links)
Design patterns are consolidated solutions to recurring software design problems. They are widely used in object-oriented software design and serve as a device for communicating well-known solutions within development teams. It is important that the designer be able to detect and identify design patterns in a code base, in order to understand the relationships between classes and to provide useful suggestions for the comprehension and evolution of the system. For automatic pattern detection there are essentially two techniques: static and dynamic analysis. In the first step, structural relations and collaborations are extracted. In the second step, the program execution is monitored, tracking the set of patterns selected in the previous step to identify which of them behave as expected. Current detection techniques, however, are limited to restrictive structural analyses, omitting cases in which the behavior of a pattern is present even though it does not follow the structural organization described in the literature. We call these cases emerging patterns: the behavior of a given pattern is present even though the corresponding region of code has a free structure. For example, the essence of the Singleton design pattern is present in any class that has only a single instance during the executions of a program, even without any syntactic restriction that makes this so; that is, the Singleton pattern emerges from the behavior of that program element. By assisting developers in detecting design situations like this, we can enrich their knowledge of the consequences of their decisions and encourage making the pattern's structure explicit in its well-known form, thereby facilitating the documentation and communication of the design. This work explores the concept of emerging patterns through the following contributions: (i) a systematic review of automatic design pattern detection approaches, (ii) emerging-pattern concepts for several well-known design patterns, (iii) a proposed semi-automatic approach for detecting emerging patterns, and (iv) its use in an analysis of existing detection tools with respect to their ability to identify emerging patterns in some open-source Java projects.
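The run-time intuition behind an emerging Singleton can be illustrated with a small dynamic analysis: count constructor calls per class while a scenario executes and flag classes with exactly one instance. This is only a sketch of the idea, not the semi-automatic approach proposed in the dissertation; the classes and the scenario are hypothetical.

```python
import sys
from collections import Counter

instantiations = Counter()

def count_constructors(frame, event, arg):
    # Record every call to a Python-level __init__, keyed by the class of `self`.
    if event == "call" and frame.f_code.co_name == "__init__":
        self_obj = frame.f_locals.get("self")
        if self_obj is not None:
            instantiations[type(self_obj).__name__] += 1

# Hypothetical system under analysis: no syntactic Singleton anywhere.
class Configuration:              # behaves like a Singleton in this particular run
    def __init__(self):
        self.debug = False

class Request:                    # clearly not a Singleton
    def __init__(self, url):
        self.url = url

def scenario():
    cfg = Configuration()
    for i in range(3):
        Request(f"https://example.org/{i}")

sys.setprofile(count_constructors)
scenario()
sys.setprofile(None)

emerging_singletons = [cls for cls, n in instantiations.items() if n == 1]
print("classes with exactly one instance in this run:", emerging_singletons)
# e.g. ['Configuration']: a candidate emerging Singleton, to be confirmed
# over many executions, as the dissertation's approach requires.
```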
15

Reducing Occlusion in Cinema Databases through Feature-Centric Visualizations

Bujack, Roxana, Rogers, David H., Ahrens, James 25 January 2019 (has links)
In modern supercomputer architectures, the I/O capabilities do not keep up with the computational speed. Image-based techniques are one very promising approach to a scalable output format for visual analysis, in which a reduced output that corresponds to the visible state of the simulation is rendered in-situ and stored to disk. These techniques can support interactive exploration of the data through image compositing and other methods, but automatic methods of highlighting data and reducing clutter can make these methods more effective. In this paper, we suggest a method of assisted exploration through the combination of feature-centric analysis with image space techniques and show how the reduction of the data to features of interest reduces occlusion in the output for a set of example applications.
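A toy illustration of how restricting compositing to a feature of interest reduces occlusion follows; this is not the Cinema database format or the paper's pipeline, and the layers, depths, and feature mask are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, L = 64, 64, 4                                # image size and number of in-situ layers

depth   = rng.uniform(0.0, 1.0, size=(L, H, W))    # per-layer depth buffers
scalar  = rng.uniform(0.0, 1.0, size=(L, H, W))    # per-layer data values
feature = scalar > 0.9                              # pixels belonging to the feature of interest

rows, cols = np.indices((H, W))

# Plain depth compositing: at every pixel, keep whichever sample is nearest.
nearest = np.argmin(depth, axis=0)
plain_img = scalar[nearest, rows, cols]

# Feature-centric compositing: wherever any layer contains the feature,
# keep the nearest *feature* sample instead, removing occluders in front of it.
has_feature = feature.any(axis=0)
feature_depth = np.where(feature, depth, np.inf)
nearest_feature = np.argmin(feature_depth, axis=0)
feature_img = np.where(has_feature, scalar[nearest_feature, rows, cols], plain_img)
# feature_img is the image one would store or display for assisted exploration.

print("feature pixels visible with plain compositing:   ", int(feature[nearest, rows, cols].sum()))
print("feature pixels visible with feature-centric view:", int(has_feature.sum()))
```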
16

[pt] DETECÇÃO DE PADRÕES EM IMAGENS BIDIMENSIONAIS: ESTUDO DE CASOS / [en] PATTERN DETECTION IN BIDIMENSIONAL IMAGES: CASE STUDIES

GUILHERME LUCIO ABELHA MOTA 10 November 2005 (has links)
[en] This dissertation studies two pattern detection problems in images with a complex background, cases in which conventional segmentation algorithms cannot provide good results: the localization of structural units (SUs) in images obtained by high-resolution transmission electron microscopy, and the detection of upright frontal faces in images. Although the two problems are different, the methodologies employed to solve them are similar: a neighborhood operator, composed of pre-processing, dimensionality reduction, and classification steps, scans the input image in search of the patterns of interest. For SU detection, three dimensionality reduction methods were employed (Principal Component Analysis (PCA); PCA of the balanced training set (PCAEq); and a new method, axes that maximize the distance to a class centroid (MAXDIST)) together with two classifiers (a Euclidean-distance classifier (EUC) and a back-propagation neural network (RN)). The PCAEq/RN combination provided a detection rate of 88% with 25 components, while the MAXDIST/EUC combination, with a single attribute, provided 82% detection with fewer false detections. For face detection, a new approach was employed that uses a back-propagation neural network as the classifier; its input receives the representation in the so-called face space together with the reconstruction error (DFFS). In comparison with benchmark results from the literature, the proposed method reached similar detection rates.
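The PCA-plus-Euclidean-distance pipeline described above can be sketched as follows. The data are synthetic patches rather than electron-microscopy or face images, and the PCAEq, MAXDIST, and neural-network variants of the dissertation are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic training set: 16x16 patches, positives (pattern) and negatives (background).
pos = rng.normal(1.0, 0.3, size=(200, 256))
neg = rng.normal(0.0, 1.0, size=(400, 256))
X_train = np.vstack([pos, neg])

# Dimensionality reduction with plain PCA, keeping 25 components.
pca = PCA(n_components=25).fit(X_train)
pos_centroid = pca.transform(pos).mean(axis=0)

def detect(image, window=16, stride=4, threshold=3.0):
    """Slide a window over `image` and flag windows whose PCA projection lies close,
    in Euclidean distance (the EUC classifier of the abstract), to the positive centroid."""
    hits = []
    H, W = image.shape
    for i in range(0, H - window + 1, stride):
        for j in range(0, W - window + 1, stride):
            patch = image[i:i + window, j:j + window].reshape(1, -1)
            z = pca.transform(patch)[0]
            if np.linalg.norm(z - pos_centroid) < threshold:
                hits.append((i, j))
    return hits

# Toy test image with one embedded pattern region at a window-aligned position.
img = rng.normal(0.0, 1.0, size=(64, 64))
img[20:36, 28:44] = rng.normal(1.0, 0.3, size=(16, 16))
print(detect(img))    # expect a hit at (20, 28)
```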
17

Detecting flight patterns using deep learning

Carlsson, Victor January 2023 (has links)
With more aircraft in the air than ever before, there is a need to automate the surveillance of the airspace. It is widely known that aircraft with different intentions fly in different flight patterns, so support systems for finding such patterns are needed. In this thesis, we investigate the possibility of detecting circular flight patterns using deep learning models. The basis for detection is ADS-B data, which is continuously transmitted by aircraft and contains information about the aircraft's status. Two deep learning models are constructed to solve the binary classification problem of detecting circular flight patterns. The first model is a Long Short-Term Memory (LSTM) model and uses techniques such as a sliding window and bidirectional LSTM layers. The second model is a Convolutional Neural Network (CNN) and uses transfer learning: the trajectory data is converted into image representations, which are fed into a pre-trained model with a custom final dense layer. While ADS-B data is openly available, finding a specific flight pattern and producing a labeled data set for it is hard and time-consuming. The data set is therefore expanded with two additional sources of trajectory data: radar and simulated data. Training a model on data from a different distribution than the one it is evaluated on can be problematic and introduces a new source of error known as training-validation mismatch. One of the main goals of this thesis is to quantify the size of this error in order to decide whether using data from other sources is a viable option. The results show that the CNN model outperforms the LSTM model and achieves an accuracy of 98.2%. They also show that there is a cost, in terms of accuracy, associated with not training exclusively on ADS-B data: for the CNN model this cost was a 1-4% loss in accuracy depending on the training data used, and for the LSTM model it was 2-10%.
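A minimal sketch of the sliding-window, bidirectional-LSTM branch described above follows. The architecture, window size, and features are illustrative assumptions rather than the thesis's actual configuration, and the data are random stand-ins for ADS-B windows.

```python
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 64, 4    # e.g. latitude, longitude, altitude, heading per ADS-B sample

def make_windows(track, window=WINDOW, stride=16):
    """Cut one trajectory of shape [T, FEATURES] into overlapping sliding windows."""
    return np.stack([track[i:i + window]
                     for i in range(0, len(track) - window + 1, stride)])

# A small bidirectional LSTM for the binary task "circling" vs. "not circling".
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in trajectory; real inputs would be ADS-B, radar, or simulated tracks.
rng = np.random.default_rng(0)
track = rng.normal(size=(256, FEATURES)).astype("float32")
X = make_windows(track)
y = rng.integers(0, 2, size=(len(X), 1)).astype("float32")

model.fit(X, y, epochs=1, batch_size=8, verbose=0)
print(model.predict(X[:4], verbose=0).ravel())
```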
18

Dynamic Laryngo-Tracheal Control for Airway Management in Dysphagia

Hadley, Aaron John 23 August 2013 (has links)
No description available.
19

Mise en correspondance robuste et détection de modèles visuels appliquées à l'analyse de façades / Robust feature correspondence and pattern detection for façade analysis

Ok, David 25 March 2013 (has links)
For a few years now, with the emergence of large image databases such as Google Street View, the ability to process data massively and automatically, data that are often heavily contaminated by false positives and massively ambiguous, has become a strategic issue, notably for property management and for diagnosing the condition of building façades. Scientifically, this concern pushes the state of the art in fundamental computer vision problems. In particular, this thesis addresses the following problems: (1) robust, computationally efficient matching of visual features, and (2) grammar-based analysis (parsing) of façade images. The challenge is to develop methods that also scale to large problems. First, we propose a mathematical formalization of geometric consistency, which plays an essential role in robust feature matching. From this formalization we derive a match-propagation algorithm that is computationally efficient, accurate, and robust to heavily contaminated and massively ambiguous data. Experimentally, the proposed algorithm proves well suited to matching deformed objects and to large-scale, accurate matching problems arising in camera calibration. Building on our matching algorithm, we then derive a method for finding repeated elements, such as windows. It proves experimentally to be very effective and robust under difficult conditions such as the strong photometric variability of the repeated elements and occlusions, and it produces few hallucinations. Finally, we propose methodological contributions that efficiently exploit the repeated-element detection results for grammar-based façade analysis, which becomes substantially more accurate and robust.
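The role of geometric consistency in filtering correspondences can be illustrated with a deliberately simple stand-in: keep a match only if the distances to its neighboring matches are preserved, up to a common scale, between the two images. This is not the thesis's match-propagation algorithm, only a sketch of the underlying idea on synthetic correspondences.

```python
import numpy as np

def consistency_score(P, Q, i, k=8, tol=0.2):
    """Fraction of the k nearest matches around match i whose pairwise distances
    are preserved between the two images, up to a tolerance on the scale ratio."""
    d1 = np.linalg.norm(P - P[i], axis=1)
    d2 = np.linalg.norm(Q - Q[i], axis=1)
    neighbors = np.argsort(d1)[1:k + 1]
    ratios = d2[neighbors] / np.maximum(d1[neighbors], 1e-9)
    scale = np.median(ratios)
    return np.mean(np.abs(ratios - scale) <= tol * scale)

rng = np.random.default_rng(0)
# Hypothetical correspondences: 80 correct matches related by a similarity transform...
P_good = rng.uniform(0, 100, size=(80, 2))
angle, s, t = 0.3, 1.2, np.array([15.0, -4.0])
R = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
Q_good = s * P_good @ R.T + t + rng.normal(0, 0.3, size=P_good.shape)
# ...plus 40 spurious matches pointing to random locations.
P_bad = rng.uniform(0, 100, size=(40, 2))
Q_bad = rng.uniform(0, 100, size=(40, 2))

P = np.vstack([P_good, P_bad])
Q = np.vstack([Q_good, Q_bad])
scores = np.array([consistency_score(P, Q, i) for i in range(len(P))])
kept = scores > 0.5
print("kept", int(kept.sum()), "matches;", int(kept[:80].sum()), "of them are correct")
```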
20

Détection de points d'intérêts dans une image multi ou hyperspectral par acquisition compressée / Feature detection in a multispectral image by compressed sensing

Rousseau, Sylvain 02 July 2013 (has links)
Multi- and hyperspectral sensors generate a huge stream of data. One way around this difficulty is to perform a compressive acquisition of the multi- or hyperspectral object: the data are compressed directly at acquisition and the object is reconstructed only when needed. The next step is to avoid this reconstruction altogether and to work directly with the compressed data in order to carry out classical processing on an object of this kind. After introducing a first approach that uses Riemannian tools to perform edge detection in a multispectral image, we present the principles of compressed sensing and the different algorithms used to solve the problems it raises. We then devote an entire chapter to the detailed study of one family of them, Bregman-type algorithms, whose flexibility and efficiency allow us to solve the minimizations encountered later. We then turn to the detection of signatures in a multispectral image, and in particular to an original algorithm by Guo and Osher based on an L1 minimization. This algorithm is generalized to the compressed sensing setting. A second generalization makes it possible to perform pattern detection in a multispectral image. Finally, we introduce new measurement matrices that greatly simplify the computations while preserving good measurement quality.
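As an illustration of the Bregman family discussed above, the following sketch implements the linearized Bregman iteration for sparse recovery from compressed measurements. It is a generic textbook variant with illustrative parameters, not the thesis's generalizations of the Guo and Osher detector.

```python
import numpy as np

def linearized_bregman(A, b, mu=10.0, delta=None, iters=5000):
    """Linearized Bregman iteration for min ||u||_1 subject to A u = b,
    one simple member of the Bregman family of algorithms."""
    m, n = A.shape
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm of A
    shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(iters):
        v = v + A.T @ (b - A @ u)
        u = delta * shrink(v, mu)
    return u

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                           # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)       # random Gaussian measurement matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(0, 1, size=k)
b = A @ x                                      # compressed measurements of the sparse signal

x_hat = linearized_bregman(A, b)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```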
