  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Imputação de dados em experimentos multiambientais: novos algoritmos utilizando a decomposição por valores singulares / Data imputation in multi-environment trials: new algorithms using the singular value decomposition

Sergio Arciniegas Alarcon 02 February 2016 (has links)
As análises biplot que utilizam os modelos de efeitos principais aditivos com interação multiplicativa (AMMI) requerem matrizes de dados completas, mas, frequentemente, os ensaios multiambientais apresentam dados faltantes. Nesta tese são propostas novas metodologias de imputação simples e múltipla que podem ser usadas para analisar dados desbalanceados em experimentos com interação genótipo por ambiente (G×E). A primeira é uma nova extensão do método de validação cruzada por autovetor (Bro et al., 2008). A segunda corresponde a um novo algoritmo não-paramétrico obtido por meio de modificações no método de imputação simples desenvolvido por Yan (2013). Também é incluído um estudo que considera sistemas de imputação recentemente relatados na literatura e os compara com o procedimento clássico recomendado para imputação em ensaios (G×E), ou seja, a combinação do algoritmo de Esperança-Maximização com os modelos AMMI ou EM-AMMI. Por último, são fornecidas generalizações da imputação simples descrita por Arciniegas-Alarcón et al. (2010) que mistura regressão com aproximação de posto inferior de uma matriz. Todas as metodologias têm como base a decomposição por valores singulares (DVS), portanto, são livres de pressuposições distribucionais ou estruturais. Para determinar o desempenho dos novos esquemas de imputação foram realizadas simulações baseadas em conjuntos de dados reais de diferentes espécies, com valores retirados aleatoriamente em diferentes porcentagens e a qualidade das imputações avaliada com distintas estatísticas. Concluiu-se que a DVS constitui uma ferramenta útil e flexível na construção de técnicas eficientes que contornem o problema de perda de informação em matrizes experimentais. / Biplot analyses using additive main effects and multiplicative interaction (AMMI) models require a complete data matrix, but multi-environment trials often have missing values. 
This thesis proposes new single and multiple imputation methods that can be used to analyze unbalanced data in experiments with genotype-by-environment (G×E) interaction. The first is a new extension of the eigenvector cross-validation method (Bro et al., 2008). The second corresponds to a new non-parametric algorithm obtained through modifications of the simple imputation method developed by Yan (2013). A study is also included that considers imputation systems recently reported in the literature and compares them with the classic procedure recommended for imputation in G×E trials, namely the combination of the Expectation-Maximization (EM) algorithm with the additive main effects and multiplicative interaction (AMMI) model, or EM-AMMI. Finally, generalizations are provided of the simple imputation described by Arciniegas-Alarcón et al. (2010), which combines regression with a lower-rank approximation of a matrix. All the methodologies are based on the singular value decomposition (SVD) and are therefore free of distributional or structural assumptions. To determine the performance of the new imputation schemes, simulations were performed based on real data sets from different species, with values deleted at random in different percentages, and the quality of the imputations was evaluated with different statistics. It was concluded that the SVD provides a useful and flexible tool for constructing efficient techniques that circumvent the problem of missing data in experimental matrices.
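The SVD-imputation idea shared by these algorithms can be sketched as follows. This is an illustrative EM-style iteration, not the thesis' exact algorithms; the function name, rank, tolerance, and starting value are assumptions:

```python
import numpy as np

def svd_impute(X, rank=2, tol=1e-6, max_iter=500):
    """Fill missing entries (NaN) of a genotype-by-environment matrix by
    iterating a low-rank SVD reconstruction: fill, decompose, replace the
    missing cells with the low-rank fit, and repeat until convergence."""
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    filled = np.where(mask, np.nanmean(X), X)   # start from the grand mean
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        new = np.where(mask, approx, X)          # observed cells stay fixed
        if np.max(np.abs(new - filled)) < tol:
            filled = new
            break
        filled = new
    return filled
```

On an exactly low-rank matrix with one deleted cell, the iteration recovers the missing value; on real G×E data the choice of rank would be driven by cross-validation, as the surveyed methods do.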
92

Análise modal operacional: métodos de identificação baseados em transmissibilidade / Operational modal analysis: identification methods based on transmissibility

Iván Darío Gómez Araújo 25 February 2015 (has links)
O presente trabalho tem como objetivo desenvolver novas alternativas de identificação modal para estruturas sob excitações em condição de operação baseadas em funções de transmissibilidade. Recentes metodologias formuladas sobre conceitos de transmissibilidade têm surgido como alternativa para a identificação de parâmetros modais de estruturas. A identificação nestas metodologias é independente do espectro da excitação, sendo uma vantagem importante com respeito a metodologias anteriores no domínio da frequência que supõem a excitação como ruído branco. Dessa forma, aproveitando os diferentes trabalhos dirigidos a avaliar parâmetros modais com uso da transmissibilidade, são propostas três novas alternativas. A primeira delas propõe a decomposição de valores singulares sobre matrizes de funções de transmissibilidade escalar com densidade espectral para estimar frequências naturais e modos de vibração. A segunda alternativa propõe o conceito de funções de transmissibilidade multivariável com diferente referência para a identificação modal. E a terceira introduz uma melhora na primeira alternativa incluindo a possibilidade da estimação de taxas de amortecimento. Uma ferramenta computacional para a análise modal é desenvolvida como apoio para as simulações numéricas de verificação das metodologias de identificação modal propostas. Diferentes exemplos numéricos com uma viga submetida a excitações de ruído colorido mostram que os métodos propostos são capazes de identificar parâmetros modais sem a introdução das frequências adicionais devido às excitações de ruído colorido utilizadas. Além disso, os dados de um teste de vibrações sobre uma ponte em operação foram utilizados para verificar os métodos. / This research aims to develop new alternatives for the modal identification of structures under operational excitation, based on transmissibility functions. 
Recent methodologies based on transmissibility concepts have emerged as alternatives for the modal parameter identification of structures. Identification in this type of methodology is independent of the input spectrum, an important advantage over previous frequency-domain methods that assume white-noise excitation. Three alternatives for modal identification based on transmissibility functions are proposed in this work. The first applies the singular value decomposition to matrices of scalar transmissibility functions weighted by spectral density in order to estimate natural frequencies and vibration modes (the PSDTM-SVD method). The second proposes the concept of multivariable transmissibility functions with different transferring outputs for modal parameter identification. The third is an enhanced PSDTM-SVD method that also allows modal damping ratios to be identified. A computational tool for modal analysis is developed to support the numerical verification of the proposed identification methodologies. Numerical examples of a beam model subjected to colored-noise excitations show that the proposed methods can identify modal parameters without introducing the additional frequencies due to the excitations used. Furthermore, data from a vibration test on a bridge in operation were used to verify the methods.
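The rank-drop property that such SVD-on-transmissibility methods exploit can be illustrated with a toy model. This is not the thesis' implementation: the 2-DOF system, mode shapes, natural frequencies, damping, and load cases below are all invented. Near a resonance the operating responses are dominated by a single mode, so the matrix of responses collected under different loading conditions becomes numerically rank one:

```python
import numpy as np

def response_matrix(w, modes, freqs, damp, load_cols):
    """Outputs x load-cases matrix of steady-state responses at circular
    frequency w for a proportionally damped 2-DOF system (synthetic)."""
    H = np.zeros((modes.shape[0], load_cols.shape[1]), dtype=complex)
    for r, wr in enumerate(freqs):
        g = 1.0 / (wr**2 - w**2 + 2j * damp * wr * w)   # modal FRF
        H += g * np.outer(modes[:, r], modes[:, r] @ load_cols)
    return H

modes = np.array([[1.0, 1.0], [1.0, -1.0]])   # assumed mode shapes (columns)
freqs = np.array([10.0, 25.0])                 # assumed natural freqs (rad/s)
loads = np.array([[1.0, 0.3], [0.2, 1.0]])     # two independent load cases

def rank1_ratio(w):
    """Second-to-first singular value ratio: near zero at a resonance."""
    s = np.linalg.svd(response_matrix(w, modes, freqs, 0.01, loads),
                      compute_uv=False)
    return s[1] / s[0]
```

Scanning `rank1_ratio` over frequency and picking the dips is the essence of locating natural frequencies this way; the corresponding leading left singular vector approximates the mode shape.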
93

Détection des changements de plans et extraction d'images représentatives dans une séquence vidéo / Video shot boundary detection and key-frame extraction using mathematical models

Bendraou, Youssef 16 November 2017 (has links)
Les technologies multimédias ont récemment connu une grande évolution, surtout avec la croissance rapide d'internet ainsi que la création quotidienne de grands volumes de données vidéo. Tout ceci nécessite de nouvelles méthodes performantes permettant d'indexer, de naviguer, de rechercher et de consulter les informations stockées dans de grandes bases de données multimédia. La récupération de données basée sur le contenu vidéo, qui est devenue un domaine de recherche très actif durant cette décennie, regroupe les différentes techniques conçues pour le traitement de la vidéo. Dans le cadre de cette thèse de doctorat, nous présentons des applications permettant la segmentation temporelle d'une vidéo ainsi que la récupération d'information pertinente dans une séquence vidéo. Une fois le processus de classification effectué, il devient possible de rechercher l'information utile en ajoutant de nouveaux critères, et aussi de visualiser l'information d'une manière appropriée permettant d'optimiser le temps et la mémoire. Dans une séquence vidéo, le plan est considéré comme l'unité élémentaire de la vidéo. Un plan est défini comme une suite d'images capturées par une même caméra représentant une action dans le temps. Pour composer une vidéo, plusieurs plans sont regroupés en utilisant des séquences de transitions. Ces transitions se catégorisent en transitions brusques et transitions progressives. Détecter les transitions présentes dans une séquence vidéo a fait l'objet de nos premières recherches. Plusieurs techniques, basées sur différents modèles mathématiques, ont été élaborées pour la détection des changements de plans. L'utilisation de la décomposition en valeurs singulières (SVD) ainsi que la norme de Frobenius ont permis d'obtenir des résultats précis en un temps de calcul réduit. Le résumé automatique des séquences vidéo est actuellement un sujet d'une très grande actualité. 
Comme son nom l'indique, il s'agit d'une version courte de la vidéo qui doit contenir l'essentiel de l'information, tout en étant le plus concis possible. Il existe deux grandes familles de résumés : le résumé statique et le résumé dynamique. Sélectionner une image représentative de chaque plan permet de créer un scénarimage. Ceci est considéré comme étant un résumé statique et local. Dans notre travail, une méthode de résumé globale est proposée. / With the recent advancement of multimedia technologies, in conjunction with the rapid increase in the volume of digital video data and the growth of the internet, it has become essential to be able to browse and search through the information stored in large multimedia databases. For this purpose, content-based video retrieval (CBVR) has become an active area of research during the last decade. The objective of this thesis is to present applications for temporal video segmentation and video retrieval based on different mathematical models. A shot is considered the elementary unit of a video and is defined as a continuous sequence of frames taken from a single camera, representing an action over time. The transitions that may occur in a video sequence are categorized as abrupt or gradual. In this work, through statistical analysis, we segment a video into its constituent units by identifying transitions between adjacent shots. The first proposed algorithm aims to detect abrupt shot transitions only, by measuring the similarity between consecutive frames. Since all the values in the vector of distances are positive, it can be modeled by a log-normal distribution. Gradual shot transition identification is a more difficult task than cut detection: a gradual transition may share characteristics with a dynamic segment containing camera or object motion. 
In this work, the singular value decomposition (SVD) is performed to project features from the spatial domain into the singular space. The resulting features are reduced and more refined, which makes the remaining tasks easier. The proposed system, designed to detect both abrupt and gradual transitions, has led to reliable performance with high detection rates, and its acceptable computational time allows real-time processing. Once a video is partitioned into its elementary units, higher-level applications such as key-frame extraction can be carried out. Selecting representative frames from each shot to form a storyboard is considered a static and local video summarization. In our research, we opted for a global method based on local extraction. Using refined CENTRIST features from the singular space, we select representative frames with a modified k-means clustering based on important scenes. This captures pertinent frames without redundancy in the final storyboard.
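A minimal sketch of the abrupt-cut step described above. This is illustrative only: the thesis models the distance vector with a log-normal law, while this sketch substitutes a simple mean-plus-k-standard-deviations threshold, and the histogram distance is just one of several possible frame similarity measures:

```python
import numpy as np

def detect_cuts(frames, bins=16, k=3.0):
    """Toy abrupt-cut detector: L1 distances between consecutive frame
    intensity histograms, thresholded adaptively. Returns the indices of
    the first frame of each new shot."""
    hists = [np.histogram(f, bins=bins, range=(0, 256))[0] / f.size
             for f in frames]
    d = np.array([np.abs(hists[i + 1] - hists[i]).sum()
                  for i in range(len(hists) - 1)])
    thr = d.mean() + k * d.std()       # stand-in for the log-normal fit
    return [i + 1 for i in np.flatnonzero(d > thr)]
```

On two synthetic "shots" with clearly different intensity statistics, the single boundary stands far above the within-shot distances and is the only frame reported.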
94

Implementation av webbsida för rekommendationssystem med användaruppbyggd databas / Implementation of a recommendation system webservice with a usergenerated database

Brundin, Michelle, Morris, Peter, Åhlman, Gustav, Rosén, Emil January 2012 (has links)
The goal of this project was to create a web-based, crowd-sourced correlational database that easily allowed users to submit objects and receive correlated objects as results. The web service was created with the web development languages HTML, CSS, PHP and JavaScript, with MySQL to handle the database. Simultaneous development was kept in check with the aid of the source code management system Git. Upon completion, the service contained several HTML views, the ability to add and rate objects, a dedicated page per object with information parsed from Wikipedia.org, and a view with objects ranked according to the preferences of the current user. Roughly a month after the beginning of development, the website was publicly launched and promoted in order to collect data, and improvements were added to the website as needed. Two weeks after the public launch, the collected data was measured and analyzed. The algorithm proved effective and scalable, especially with the introduction of tags and simultaneous computation of object features.
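A toy version of the correlational idea might look like this. It is illustrative only; the function names and the co-occurrence scoring are assumptions, not the project's actual algorithm:

```python
from collections import Counter
from itertools import combinations

def recommend(user_lists, liked, top=3):
    """Tiny co-occurrence recommender: objects that frequently appear
    together with the user's liked items rank highest."""
    co = Counter()
    for items in user_lists:
        for a, b in combinations(sorted(set(items)), 2):
            co[(a, b)] += 1      # count the pair in both directions so
            co[(b, a)] += 1      # lookups below need only one ordering
    scores = Counter()
    for obj in liked:
        for (a, b), n in co.items():
            if a == obj and b not in liked:
                scores[b] += n
    return [o for o, _ in scores.most_common(top)]
```

For example, with submission lists `[["a","b","c"], ["a","b"], ["b","d"]]`, a user who liked `"a"` would be shown `"b"` first (co-occurs twice) and then `"c"`.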
95

[en] ON THE APPLICATION OF SIGNAL ANALYSIS TECHNIQUES TO REAL TIME COMMUNICATION AND CLASSIFICATION / [pt] TÉCNICAS APLICADAS À COMUNICAÇÃO EM TEMPO REAL E À SUA CLASSIFICAÇÃO

BRUNO COSENZA DE CARVALHO 12 March 2003 (has links)
[pt] A técnica de análise de sinais corrompidos por ruído baseada no comportamento de subespaços vetoriais foi tema de alguns trabalhos publicados desde o início da década de 80. Esta nova técnica passou a ter grande importância no processamento de sinais digitais devido a fatores como robustez e precisão. Porém, o maior problema associado a este novo método é o seu elevado custo computacional. Esta característica limitou o emprego da técnica a sistemas off-line. A preocupação então passou a ser rastrear a variação do comportamento dos subespaços vetoriais de modo eficiente. O objetivo deste rastreamento seria o emprego da técnica em alguns sistemas que operam em tempo real. Este trabalho de tese propõe um novo algoritmo de rastreamento de subespaços vetoriais. O objetivo é apresentar um algoritmo que demonstre um bom desempenho, com relação aos demais já existentes, permitindo eventual aplicação em sistemas que atuem em tempo real. Como contribuição adicional, são apresentadas uma nova análise e caracterização de sistemas que se assemelham aos circulantes, sendo para isto reinterpretada a decomposição de matrizes circulantes. O conjunto de contribuições é aplicado a um novo sistema automático de classificação de sinais de comunicação, quanto ao tipo de modulação. / [en] The signal subspace analysis technique, usually applied to signals corrupted by noise, has been the subject of several papers since the beginning of the 1980s. This technique presents important features, such as robustness and precision, and became widely employed in digital signal processing. However, the main problem associated with the method is its high computational cost, which has restricted the use of signal subspace analysis to off-line systems. A possible way to overcome this burden is to track the behavior of the signal and noise subspaces in the time domain. 
The main objective of such tracking methods is to allow the application of signal subspace analysis to real-time systems, sometimes at the expense of limiting the precision or scope of the analysis. This work proposes a new subspace tracking procedure. The goal is to describe a new algorithm with a good precision-speed trade-off, allowing application in some real-time systems. A new analysis and characterization of almost-circulant systems is also introduced by reinterpreting the decomposition of circulant matrices. The set of contributions is applied to a new structure for automatic recognition of the modulation type of analogue communication signals.
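The batch computation that a subspace tracker is meant to approximate cheaply can be sketched as follows. This is illustrative: the data, rank, and noise level below are invented, and a real tracker would update the basis recursively instead of recomputing the full SVD:

```python
import numpy as np

def signal_subspace(X, rank):
    """Batch estimate of the signal subspace: the leading left singular
    vectors of the (channels x snapshots) data matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank]

# Synthetic 4-channel data: a rank-1 signal buried in light noise.
rng = np.random.default_rng(1)
t = np.arange(200)
clean = np.outer([1.0, 0.5, -0.5, 1.0], np.sin(0.1 * t))
noisy = clean + 0.01 * rng.standard_normal(clean.shape)
U1 = signal_subspace(noisy, 1)   # should align with the true direction
```

The estimated one-dimensional subspace aligns with the true signal direction up to sign; the cost of redoing this SVD per snapshot is exactly what tracking algorithms avoid.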
96

Beam position diagnostics with higher order modes in third harmonic superconducting accelerating cavities

Zhang, Pei January 2013 (has links)
Higher order modes (HOMs) are electromagnetic resonant fields. They can be excited by an electron beam entering an accelerating cavity and constitute a component of the wakefield. This wakefield has the potential to dilute the beam quality and, in the worst case, cause a beam break-up instability. It is therefore important to ensure that these fields are well suppressed by extracting energy through special couplers. In addition, the effect of the transverse wakefield can be reduced by aligning the beam on the cavity axis, because its strength depends on the transverse offset of the excitation beam. For suitably small offsets the dominant components of the transverse wakefield are dipole modes, whose amplitudes depend linearly on the transverse offset of the excitation bunch. This enables the transverse beam position inside the cavity to be determined by measuring the dipole modes extracted from the couplers, similar to a cavity beam position monitor (BPM), but requires no additional vacuum instrumentation. At the FLASH facility at DESY, 1.3 GHz (known as TESLA) and 3.9 GHz (third harmonic) cavities are installed. Wakefields in the 3.9 GHz cavities are significantly larger than in the 1.3 GHz cavities. It is therefore important to mitigate the adverse effects of HOMs on the beam by aligning the beam on the electric axis of the cavities. This alignment requires accurate beam position diagnostics inside the 3.9 GHz cavities, which is the focus of this thesis. Although the principle of beam diagnostics with HOMs has been demonstrated on 1.3 GHz cavities, its realization in 3.9 GHz cavities is considerably more challenging, owing to the dense HOM spectrum and the relatively strong coupling of most HOMs amongst the four cavities of the third harmonic cryo-module. A comprehensive series of simulations and HOM spectrum measurements has been performed in order to study the modal band structure of the 3.9 GHz cavities. 
The dependence of various dipole modes on the offset of the excitation beam was subsequently studied using a spectrum analyzer. Several data analysis methods were used: modal identification, direct linear regression, singular value decomposition and k-means clustering. These studies led to three modal options promising for beam position diagnostics, upon which a set of test electronics was built. Experiments with these electronics suggest a resolution of 50 microns in predicting the local beam position in a cavity and a global resolution of 20 microns over the complete module. This constitutes the first demonstration of HOM-based beam diagnostics in a third harmonic 3.9 GHz superconducting cavity module. These studies have finalized the design of the online HOM-BPM for the 3.9 GHz cavities at FLASH.
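The linear-regression step of such a calibration can be sketched as follows. All numbers are invented and the code is an illustration of the idea, not the FLASH analysis: dipole-mode amplitudes depend linearly on the transverse offset, so a linear map fitted on known positions (here with numpy's SVD-based least-squares solver) predicts the position from new HOM readings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed linear response: 3 HOM channels vs. (x, y) offset in mm.
true_M = np.array([[2.0, 0.1],
                   [-0.3, 1.5],
                   [0.5, -2.0]])

pos = rng.uniform(-1, 1, size=(50, 2))               # calibration offsets
homs = pos @ true_M.T + 0.001 * rng.standard_normal((50, 3))  # noisy readings

# Fit the HOM -> position map by least squares (SVD-based under the hood).
M_fit, *_ = np.linalg.lstsq(homs, pos, rcond=None)
pred = homs @ M_fit                                  # predicted positions
```

With the small measurement noise assumed here the fitted map reproduces the calibration offsets to well under the resolutions quoted above; in practice the singular value decomposition of the readings also helps isolate the dipole components from the dense mode spectrum before this fit.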
97

Forecasting Mortality Rates using the Weighted Hyndman-Ullah Method

Ramos, Anthony Kojo January 2021 (has links)
The performance of three methods of mortality modelling and forecasting is compared: the basic Lee–Carter model and two functional demographic models, the basic Hyndman–Ullah and the weighted Hyndman–Ullah. Using age-specific data from the Human Mortality Database for two developed countries, France and the UK (England & Wales), the methods are compared through within-sample forecasting for the years 1999-2018. The weighted Hyndman–Ullah method is judged superior among the three through a comparison of mean forecast errors and qualitative inspection of each country's dataset. The weighted HU method is then used to conduct a 32-year-ahead forecast to the year 2050.
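The basic Lee–Carter decomposition that all three methods build on can be sketched as follows: log mortality is modelled as log m(x,t) ≈ a_x + b_x k_t, with (b, k) taken from the leading SVD term of the centered log-rate matrix. This is an illustrative sketch using the usual normalisation (the b_x weights sum to one), not the thesis' code:

```python
import numpy as np

def lee_carter(log_m):
    """Basic Lee-Carter fit: a_x is the age pattern (row means), and
    (b_x, k_t) come from the leading singular triple of the centered
    ages-by-years matrix, normalised so that b sums to one."""
    a = log_m.mean(axis=1)
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()          # assumes sum of U[:, 0] != 0
    k = s[0] * Vt[0] * U[:, 0].sum()
    return a, b, k
```

Forecasting then reduces to extrapolating the single time index k_t (classically with a random walk with drift); the Hyndman–Ullah variants generalise this by smoothing the curves and keeping several principal components instead of one.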
98

Hierarchische Tensordarstellung

Kühn, Stefan 07 November 2012 (has links)
In der vorliegenden Arbeit wird ein neues Tensorformat vorgestellt und eingehend analysiert. Das hierarchische Format verwendet einen binären Baum, um den Tensorraum der Ordnung d mit einer geschachtelten Unterraumstruktur zu versehen. Der Speicheraufwand für diese Darstellung ist von der Größenordnung O(dnr + dr^3), wobei n den Speicheraufwand in den Ansatzräumen kennzeichnet und r ein Rangparameter ist, der durch die Dimensionen der geschachtelten Unterräume bestimmt wird. Das hierarchische Format umfasst verschiedene Standardformate zur Tensordarstellung wie das kanonische oder r-Term-Format und die Unterraum-/Tucker-Darstellung. Die in dieser Arbeit entwickelte zugehörige Arithmetik inklusive mehrerer Approximationsmethoden basiert auf stabilen Methoden der Linearen Algebra, insbesondere die Singulärwertzerlegung und die QR-Zerlegung sind von zentraler Bedeutung. Die rechnerische Komplexität ist hierbei O(dnr^2+dr^4). Die lineare Abhängigkeit von der Ordnung d des Tensorraumes ist hervorzuheben. Für die verschiedenen Approximationsmethoden, deren Effizienz und Effektivität für die Anwendbarkeit des neuen Formates entscheidend sind, werden qualitative und quantitative Fehlerabschätzungen gezeigt. Umfassende numerische Experimente mit einem Fokus auf den Approximationsmethoden bestätigen zum einen die theoretischen Resultate und belegen die Stärken der neuen Tensordarstellung, zeigen aber zum anderen auch weitere, eher überraschende positive Eigenschaften der mit FastHOSVD bezeichneten schnellsten Kürzungsmethode. / In this dissertation we present a new format for the representation of tensors and analyse its properties. The hierarchical format uses a binary tree to define a hierarchical structure of nested subspaces in a tensor space of order d. The storage requirements are O(dnr + dr^3), where n is determined by the storage requirements in the ansatz spaces and r is a rank parameter determined by the dimensions of the nested subspaces. 
The hierarchical representation contains the standard representations, such as the canonical (r-term) representation and the subspace (Tucker) representation. The arithmetic operations developed in this work, including several approximation methods, are based on stable linear algebra methods; in particular, the singular value decomposition (SVD) and the QR decomposition are of central importance. The computational complexity is O(dnr^2 + dr^4); the linear dependence on the order d of the tensor space is noteworthy. The approximation methods are one of the key ingredients for the applicability of the new format, and we present qualitative and quantitative error estimates for them. Numerical experiments confirm the theoretical results and reveal some additional, rather surprising, positive properties of the fastest truncation method, called FastHOSVD.
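For comparison, the Tucker-style truncation that the hierarchical format refines can be sketched via the truncated higher-order SVD. This is illustrative background, not the dissertation's algorithm: the hierarchical format nests these subspaces along a binary tree instead of keeping one basis per mode:

```python
import numpy as np

def hosvd_truncate(T, ranks):
    """Truncated HOSVD: for each mode, keep the leading left singular
    vectors of the mode unfolding, then contract them onto T to form
    the (smaller) core tensor."""
    Us = []
    for mu in range(T.ndim):
        unf = np.moveaxis(T, mu, 0).reshape(T.shape[mu], -1)
        U, _, _ = np.linalg.svd(unf, full_matrices=False)
        Us.append(U[:, :ranks[mu]])
    core = T
    for mu, U in enumerate(Us):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mu, 0), axes=1), 0, mu)
    return core, Us

def tucker_reconstruct(core, Us):
    """Expand a (core, factor matrices) pair back to a full tensor."""
    T = core
    for mu, U in enumerate(Us):
        T = np.moveaxis(
            np.tensordot(U, np.moveaxis(T, mu, 0), axes=1), 0, mu)
    return T
```

On a tensor of exact multilinear rank (1,1,1) the truncation is lossless; the hierarchical format replaces the O(r^d) core that this construction produces for large d with the nested O(dr^3) structure described above.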
99

Vyhledávání osob ve fotografii / Recognizing Faces within Image

Svoboda, Pavel January 2009 (has links)
Face recognition within an image is essentially a computer vision problem, and computer vision provides the methods and algorithms for its implementation; some of them are described in this work. The whole process is split into three main phases: detection, alignment of the detected faces, and finally recognition. For each phase, algorithms that are applied to this problem and are still being developed today are discussed. The implementation builds on three main algorithms: AdaBoost to obtain the classifier for detection, alignment of faces by principal features, and the eigenfaces method for recognition. In addition to the algorithms already mentioned, neural networks for detection, Active Shape Models (ASM) for alignment, and Active Appearance Models (AAM) for recognition are described theoretically. Finally, tables of results produced by the implemented system evaluate the implementation.
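The eigenfaces step can be sketched as follows: principal components are obtained from the SVD of the centered matrix of flattened face images, and a query face is matched by nearest neighbour in the projected space. This is an illustrative sketch; the function names are assumptions, not the thesis' implementation:

```python
import numpy as np

def eigenfaces(images, n_components):
    """PCA via SVD of the centered image matrix: the rows of `comps`
    are the 'eigenfaces' (leading principal directions)."""
    X = np.asarray(images, dtype=float)     # rows = flattened face images
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(face, mean, comps):
    """Coordinates of a face in the eigenface subspace."""
    return comps @ (np.asarray(face, dtype=float) - mean)

def nearest(face, gallery, mean, comps):
    """Index of the gallery face closest to `face` in the subspace."""
    q = project(face, mean, comps)
    d = [np.linalg.norm(project(g, mean, comps) - q) for g in gallery]
    return int(np.argmin(d))
```

A slightly perturbed copy of a gallery face is matched back to its original, which is the recognition behaviour the tables in the thesis evaluate at scale.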
100

An Estimation Technique for Spin Echo Electron Paramagnetic Resonance

Golub, Frank 29 August 2013 (has links)
No description available.
