About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Analysis of surface ships engineering readiness and training

Landreth, Brant T. 03 1900 (has links)
Approved for public release, distribution is unlimited / This thesis analyzes engineering readiness and training onboard United States Navy surface ships. On the west coast, the major contributor to training is the Afloat Training Group, Pacific (ATGPAC). The primary objective is to determine whether the readiness standards provide pertinent insight to the surface force Commander and to generate alternatives that may assist in better characterizing force-wide engineering readiness. The Type Commander has many questions that should be answered; some of these are addressed with Poisson and binomial models. The results include: first, the age of a ship has no association with drill performance, but the number of material discrepancies does; second, drill performance decreased from the first initial assessment (IA) to the second IA; third, on average, the number of material discrepancies decreases from the IA to the underway demonstration (UD) for ships observed over two cycles; fourth, ships that perform well tend to do well across four programs; finally, training is effective. A table characterizing ships as above average, average, or below average in drill effectiveness at the IA and UD is supplied. / Lieutenant, United States Navy
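This record mentions Poisson and binomial models relating ship characteristics to drill performance and discrepancy counts. As a rough, hypothetical illustration of the kind of Poisson model involved (the variables, data, and specification below are invented and are not the thesis's), one could fit a Poisson regression with statsmodels:

```python
# Hypothetical sketch: Poisson regression of material-discrepancy counts on ship age
# and drill performance. Data and column names are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
ships = pd.DataFrame({
    "age_years": rng.uniform(1, 30, size=50),        # ship age
    "drill_score": rng.uniform(0.5, 1.0, size=50),   # fraction of drills passed
})
# Simulated counts: discrepancies loosely tied to drill performance, not age.
ships["discrepancies"] = rng.poisson(lam=np.exp(3.0 - 2.0 * ships["drill_score"]))

X = sm.add_constant(ships[["age_years", "drill_score"]])
model = sm.GLM(ships["discrepancies"], X, family=sm.families.Poisson()).fit()
print(model.summary())   # inspect which coefficients are significantly non-zero
```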
2

Machine learning methods for discrete multi-scale flows: application to finance / Méthodes d'apprentissage pour des flots discrets multi-échelles : application à la finance

Mahler, Nicolas 05 June 2012 (has links)
This research work studies the problem of identifying and predicting the trends of a single financial target variable in a multivariate setting. The machine learning point of view on this problem is presented in chapter I. The efficient market hypothesis, which stands in contradiction with the objective of trend prediction, is first recalled. The different schools of thought in market analysis, which disagree to some extent with the efficient market hypothesis, are reviewed as well. The tenets of fundamental analysis, technical analysis and quantitative analysis are made explicit. We particularly focus on the use of machine learning techniques for computing predictions on time series. The challenges of dealing with dependent and/or non-stationary features while avoiding the usual traps of overfitting and data snooping are emphasized. Extensions of the classical statistical learning framework, particularly transfer learning, are presented. The main contribution of this chapter is the introduction of a research methodology for developing trend-predictive numerical models. It is based on an experimentation protocol made of four interdependent modules. The first module, entitled Data Observation and Modeling Choices, is a preliminary module devoted to the statement of very general modeling choices, hypotheses and objectives. The second module, Database Construction, turns the target and explanatory variables into features and labels in order to train trend-predictive numerical models. The purpose of the third module, entitled Model Construction, is the construction of trend-predictive numerical models. The fourth and last module, entitled Backtesting and Numerical Results, evaluates the accuracy of the trend-predictive numerical models over a "significant" test set via two generic backtesting plans. The first plan computes recognition rates of upward and downward trends. The second plan designs trading rules using predictions made over the test set. Each trading rule yields a profit and loss account (P&L), the cumulative gains and losses over the test period. These backtesting plans are complemented by interpretation functionalities, which help to analyze the decision mechanism of the numerical models. These functionalities can be measures of feature prediction ability and measures of model and prediction reliability. They decisively contribute to formulating better data hypotheses and enhancing the time-series representation, database and model construction procedures. This is made explicit in chapter IV: numerical models, aiming at predicting the trends of the target variables introduced in chapter II, are computed for the model construction methods described in chapter III and thoroughly backtested. The switch from one model construction approach to another is particularly motivated. The dramatic influence of the choice of parameters - at each step of the experimentation protocol - on the formulation of conclusion statements is also highlighted. The RNN procedure, which does not require any parameter tuning, has thus been used to reliably study the efficient market hypothesis. New research directions for designing trend-predictive models are finally discussed.
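The two backtesting plans described in this abstract (recognition rates of upward/downward trends, plus the P&L of trading rules driven by the predictions) can be sketched very compactly. The snippet below is an illustrative assumption of how such an evaluation might be wired together, with invented arrays for realized returns and predicted trend signs; it is not the author's actual protocol.

```python
# Hedged sketch of the two generic backtesting plans described in the abstract:
# (1) recognition rates of upward/downward trends, (2) cumulative P&L of a simple
# trading rule that follows the predicted trend. Inputs are hypothetical.
import numpy as np

returns = np.array([0.4, -0.2, 0.1, -0.5, 0.3, 0.2, -0.1])   # realized returns on the test set
predicted = np.array([1, -1, 1, 1, 1, 1, -1])                 # predicted trend signs (+1 up, -1 down)
actual = np.sign(returns)

# Plan 1: recognition rates per trend direction.
up_rate = np.mean(predicted[actual > 0] == 1)
down_rate = np.mean(predicted[actual < 0] == -1)
print(f"upward trends recognized:   {up_rate:.0%}")
print(f"downward trends recognized: {down_rate:.0%}")

# Plan 2: P&L of the rule "hold +1 unit if an upward trend is predicted, -1 otherwise".
pnl = np.cumsum(predicted * returns)
print("cumulative P&L:", pnl)
```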
3

Problèmes inverses et analyse en ondelettes adaptées / Inverse problems and adapted wavelet analysis

Pham Ngoc, Thanh Mai 27 November 2009 (has links) (PDF)
We study two inverse problems, the Hausdorff moment problem and deconvolution on the sphere, as well as a regression problem with random design. The Hausdorff moment problem consists in estimating a probability density from a sequence of noisy moments. We establish an upper bound for our estimator and a lower bound for the rate of convergence, thereby showing that our estimator converges at the optimal rate over Sobolev-type regularity classes. For the deconvolution problem on the sphere, we propose a new algorithm that combines the traditional SVD method with a thresholding procedure in the basis of spherical needlets. We give an upper bound for the Lp loss and carry out a numerical study showing very promising results. The random-design regression problem is addressed from a Bayesian point of view, based on warped wavelets. We consider two prior scenarios involving Gaussians with small and large variance, and provide upper bounds for the posterior median estimator. We also carry out a numerical study revealing good numerical performance.
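The deconvolution approach above combines an SVD inversion with thresholding in a well-localized basis (spherical needlets). As a heavily simplified, hedged illustration of the general SVD-plus-thresholding idea - on a toy one-dimensional convolution rather than on the sphere, and with ordinary soft thresholding of SVD coefficients instead of needlet coefficients - one could write:

```python
# Toy sketch of SVD-based inversion with thresholding for a noisy linear inverse problem.
# This is NOT the spherical needlet estimator of the thesis, just the generic idea.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0, 1, n)
f = np.exp(-80 * (x - 0.3) ** 2) + 0.5 * np.exp(-120 * (x - 0.7) ** 2)   # unknown signal

# Convolution (blurring) operator as a matrix, plus observation noise.
kernel = lambda s, t: np.exp(-((s - t) ** 2) / (2 * 0.02 ** 2))
A = kernel(x[:, None], x[None, :]) / n
y = A @ f + 0.001 * rng.standard_normal(n)

# SVD inversion: keep only well-conditioned components, then soft-threshold the coefficients.
U, s, Vt = np.linalg.svd(A)
coef = (U.T @ y)[:40] / s[:40]                               # spectral cut-off at 40 components
lam = 0.05
coef = np.sign(coef) * np.maximum(np.abs(coef) - lam, 0.0)   # soft thresholding
f_hat = Vt[:40].T @ coef

print("relative error:", np.linalg.norm(f_hat - f) / np.linalg.norm(f))
```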
4

Estimação não paramétrica da trajetória percorrida por um veículo autônomo / Non-parametric curve estimation of an autonomous vehicle trajectory

Zambom, Adriano Zanin, 1982- 03 June 2008 (has links)
Advisor: Nancy Lopes Garcia / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract (translated from the Portuguese): The objective of this study is to find the best trajectory for an autonomous vehicle that has to move from a point A to a point B over the shortest possible distance while avoiding the fixed obstacles that may lie between these points. In addition, we assume there is a safe distance r to be kept between the vehicle and the obstacles. The vehicle's motion is constrained: it cannot make abrupt movements, and the trajectory must follow a smooth curve. Obviously, if there are no obstacles, the best route is a straight line between A and B. In this work we propose a non-parametric method for finding the best path. If there is measurement error, a consistent stochastic estimator is proposed, in the sense that as the number of observations increases, the stochastic trajectory converges to the deterministic one. / Abstract: The objective of this study is to find a smooth function joining two points A and B with minimum length constrained to avoid fixed subsets. A penalized nonparametric method of finding the best path is proposed. The method is generalized to the situation where stochastic measurement errors are present. In this case, the proposed estimator is consistent, in the sense that as the number of observations increases the stochastic trajectory converges to the deterministic one. Two applications are immediate: searching for the optimal path of an autonomous vehicle that avoids all fixed obstacles between two points, and flight planning that avoids threat or turbulence zones. / Master's degree / Master in Statistics
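A rough sketch of the penalized path idea described above, under invented assumptions: represent the path by a few interior control points, then minimize its length plus a penalty for coming within the safety radius r of circular obstacles. The points, obstacles, and penalty weight are hypothetical; this illustrates the flavour of the approach, not the thesis's estimator.

```python
# Hedged sketch of a penalized smooth-path idea: a path from A to B, parameterized by
# interior control points, minimizing length plus a penalty for entering the safety
# radius r around fixed circular obstacles. Illustrative only; not the thesis's method.
import numpy as np
from scipy.optimize import minimize

A, B = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacles = np.array([[4.0, 0.2], [7.0, -0.3]])    # hypothetical obstacle centers
r = 1.0                                            # safety radius
k = 8                                              # number of interior control points

def objective(z):
    pts = np.vstack([A, z.reshape(k, 2), B])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    # Penalty: quadratic in the violation of the safety distance at each control point.
    d = np.linalg.norm(pts[:, None, :] - obstacles[None, :, :], axis=2)
    penalty = np.sum(np.maximum(r - d, 0.0) ** 2)
    return length + 100.0 * penalty

z0 = np.linspace(A, B, k + 2)[1:-1].ravel()        # straight-line initial guess
res = minimize(objective, z0, method="L-BFGS-B")
path = np.vstack([A, res.x.reshape(k, 2), B])
print(path.round(2))
```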
5

STATISTICS IN THE BILLERA-HOLMES-VOGTMANN TREESPACE

Weyenberg, Grady S. 01 January 2015 (has links)
This dissertation is an effort to adapt two classical non-parametric statistical techniques, kernel density estimation (KDE) and principal components analysis (PCA), to the Billera-Holmes-Vogtmann (BHV) metric space for phylogenetic trees. This adaptation gives a more general framework for developing and testing various hypotheses about apparent differences or similarities between sets of phylogenetic trees than currently exists. For example, while the majority of gene histories found in a clade of organisms are expected to be generated by a common evolutionary process, numerous other coexisting processes (e.g. horizontal gene transfers, gene duplication and subsequent neofunctionalization) will cause some genes to exhibit a history quite distinct from the histories of the majority of genes. Such “outlying” gene trees are considered to be biologically interesting, and identifying these genes has become an important problem in phylogenetics. The R software package kdetrees, developed in Chapter 2, contains an implementation of the kernel density estimation method. The primary theoretical difficulty involved in this adaptation concerns the normalization of the kernel functions in the BHV metric space. This problem is addressed in Chapter 3. In both chapters, the software package is applied to both simulated and empirical datasets to demonstrate the properties of the method. A few first theoretical steps in the adaptation of principal components analysis to the BHV space are presented in Chapter 4. It becomes necessary to generalize the notion of a set of perpendicular vectors in Euclidean space to the BHV metric space, but there is some ambiguity about how best to proceed. We show that convex hulls are one reasonable approach to the problem. The Nye PCA algorithm provides a method of projecting onto arbitrary convex hulls in BHV space, providing the core of a modified PCA-type method.
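The outlier-detection idea behind kdetrees - score each tree by a kernel sum over its BHV distances to the other trees and flag low-density trees - can be caricatured with a generic distance matrix. The sketch below uses a random symmetric matrix in place of actual BHV distances and an unnormalized Gaussian kernel; it only illustrates the scoring logic, not the package or the normalization issue the thesis addresses.

```python
# Hedged sketch of density-based outlier scoring over a set of trees, given only a
# pairwise distance matrix (standing in for BHV distances). Not the kdetrees package.
import numpy as np

rng = np.random.default_rng(2)
n = 30
D = rng.uniform(0.5, 2.0, size=(n, n))
D = (D + D.T) / 2.0
np.fill_diagonal(D, 0.0)           # fake symmetric "tree-to-tree" distances
D[-1, :-1] = D[:-1, -1] = 5.0      # make the last tree far from everything else

h = 1.0                            # bandwidth
K = np.exp(-(D / h) ** 2 / 2.0)    # Gaussian kernel on distances (normalization ignored)
scores = (K.sum(axis=1) - 1.0) / (n - 1)   # leave-one-out average kernel score

outliers = np.argsort(scores)[:3]  # lowest-density trees are outlier candidates
print("outlier candidates (indices):", outliers)
```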
6

Algoritmos não-paramétricos para detecção de pontos de mudança em séries temporais de alta frequência / Non-parametric change-point detection algorithms for high-frequency time series

Cardoso, Vitor Mendes 05 July 2018 (has links)
The field of econometric studies aimed at predicting the behavior of financial markets increasingly proves itself to be a dynamic and comprehensive research area. Within this universe, the models developed can broadly be separated into parametric and non-parametric ones. The present work investigates non-parametric techniques derived from CUSUM, a graphical tool based on the cumulative-sum concept originally developed for production and quality control. The techniques are applied to the modeling of a high-frequency exchange-rate series (USD/EUR) with many trading points within a single day.
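For orientation, a textbook one-sided CUSUM detector on a return series looks like the sketch below. The data, reference value k and decision threshold h are hypothetical, and the thesis's non-parametric variants differ in how the statistic and threshold are constructed.

```python
# Hedged sketch of a basic one-sided CUSUM change-point detector on a return series.
import numpy as np

rng = np.random.default_rng(3)
returns = np.concatenate([rng.normal(0.0, 1.0, 300),
                          rng.normal(0.8, 1.0, 200)])   # mean shift at t = 300

k, h = 0.4, 8.0            # allowance (reference value) and decision threshold
s_pos = 0.0
for t, x in enumerate(returns):
    s_pos = max(0.0, s_pos + x - k)   # one-sided (upward) CUSUM recursion
    if s_pos > h:
        print(f"upward change signaled at observation {t}")
        break
```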
7

Sobre coleções e aspectos de centralidade em dados multidimensionais / On collections and centrality aspects of multidimensional data

Oliveira, Douglas Cedrim 14 June 2016 (has links)
Analysis of multidimensional data has been a topic of continuous research for many years, one reason being that this kind of data is found in several different areas of science. A common task when analyzing such data is to investigate patterns by interacting with spatializations of the data in the visual space. Understanding the relation between the underlying dataset characteristics and the technique used to provide a visual representation of that dataset is of fundamental importance, since it can give a better intuition of what to expect from the spatialization. Motivated by this, in this work we investigate some aspects of data centrality in two different scenarios: document collections with co-authorship graphs, and general multidimensional data. In the first scenario, the multidimensional data encoding the documents carries more specific information, which makes it possible to combine different aspects into a summarized analysis, as well as to define notions of centrality and relevance among the documents in the collection. This is taken into account to propose a combined visual metaphor that supports exploration of the whole document collection as well as of individual documents. In the second scenario, of general multidimensional data, we assume that such additional information is not available. Nevertheless, using the concept of data-depth functions from non-parametric statistics, we analyze the action of multidimensional projection techniques on the data, making it possible to understand how depth (centrality) measures computed on the data are modified along the projection process, which also defines a quality measure for multidimensional projections.
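One way to picture the data-depth idea above: compute a depth value for every point in the original space and in the projected space, then compare the two rankings; a projection that preserves the depth ordering scores higher. The snippet below uses Mahalanobis depth and a plain PCA projection purely as stand-ins - the depth functions and quality measure actually used in the thesis are not necessarily these.

```python
# Hedged sketch: compare data-depth rankings before and after a projection as a crude
# projection-quality signal. Mahalanobis depth and PCA are stand-ins for illustration.
import numpy as np
from scipy.stats import spearmanr

def mahalanobis_depth(X):
    """Depth of each row of X: 1 / (1 + squared Mahalanobis distance to the mean)."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return 1.0 / (1.0 + d2)

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))                    # hypothetical 10-dimensional data

# A simple linear projection to 2D via PCA (SVD of the centered data).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Y = Xc @ Vt[:2].T

depth_high = mahalanobis_depth(X)
depth_low = mahalanobis_depth(Y)
quality, _ = spearmanr(depth_high, depth_low)     # rank agreement of depths
print(f"depth-rank preservation: {quality:.2f}")
```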
