471

Time fields: chamber concerto #3 for fifteen instruments (2004)

Baker, Robert A., 1970- January 2004
Time Fields: Chamber Concerto #3 for fifteen players (2004) is a composition for five woodwinds, three brass, one percussionist, piano, string quartet and double bass with an approximate duration of 14 minutes. This work addresses the nature of musical time and its role in the organisation of the large-scale structure of the piece. Four differing states of musical time, which I refer to as Temporal Textures (Static, Obscure, Temporal Counterpoint and Clear), are employed in particular alternations to form a single movement work in four distinct sections.
472

On the algebra and combinatorics of the subgraphs of a graph

Buchwalder, Xavier 30 November 2009
We introduce a new algebraic structure that naturally formalizes reconstruction problems, together with a conjecture that would allow symmetries to be handled directly. The framework provided by this study also makes it possible to generate relations that hold between the numbers of substructures, and, in a certain sense, the conjecture states that all such relations are obtained this way. Moreover, generalizing the results previously obtained for reconstruction makes it possible to probe their limits by searching for cases where these relations are optimal. In this way, we show that the theorems of V. Müller and L. Lovász are best possible by exhibiting extremal cases. This generalization to invariant algebras, already carried out by P. J. Cameron and V. B. Mnukhin, squeezes reconstruction problems between, on one side, (supplied) relations that one wants to exploit and, on the other, examples establishing the optimality of the result. Thus, with no information on the group, the result of L. Lovász is best possible, and when the order of the group is taken into account, the result of V. Müller is best possible.
473

Rank statistics of forecast ensembles

Siegert, Stefan 08 March 2013
Ensembles are today routinely applied to estimate uncertainty in numerical predictions of complex systems such as the weather. Instead of initializing a single numerical forecast, using only the best guess of the present state as initial conditions, a collection (an ensemble) of forecasts whose members start from slightly different initial conditions is calculated. By varying the initial conditions within their error bars, the sensitivity of the resulting forecasts to these measurement errors can be accounted for. The ensemble approach can also be applied to estimate forecast errors that are due to insufficiently known model parameters by varying these parameters between ensemble members. An important (and difficult) question in ensemble weather forecasting is how well an ensemble of forecasts reproduces the actual forecast uncertainty. A widely used criterion to assess the quality of forecast ensembles is statistical consistency, which demands that the ensemble members and the corresponding measurement (the "verification") behave like independent random draws from the same underlying probability distribution. Since this forecast distribution is generally unknown, such an analysis is nontrivial. An established criterion to assess statistical consistency of a historical archive of scalar ensembles and verifications is uniformity of the verification rank: if the verification falls between the (k-1)-st and k-th largest ensemble member, it is said to have rank k. Statistical consistency implies that the average frequency of occurrence should be the same for each rank. A central result of the present thesis is that, in a statistically consistent K-member ensemble, the (K+1)-dimensional vector of rank probabilities is a random vector that is uniformly distributed on the K-dimensional probability simplex. This behavior is universal for all possible forecast distributions.
It thus provides a way to describe forecast ensembles in a nonparametric way, without making any assumptions about the statistical behavior of the ensemble data. The physical details of the forecast model are eliminated, and the notion of statistical consistency is captured in an elementary way. Two applications of this result to ensemble analysis are presented. Ensemble stratification, the partitioning of an archive of ensemble forecasts into subsets using a discriminating criterion, is considered in the light of the above result. It is shown that certain stratification criteria can make the individual subsets of ensembles appear statistically inconsistent, even though the unstratified ensemble is statistically consistent. This effect is explained by considering statistical fluctuations of rank probabilities. A new hypothesis test is developed to assess statistical consistency of stratified ensembles while taking these potentially misleading stratification effects into account. The distribution of rank probabilities is further used to study the predictability of outliers, which are defined as events where the verification falls outside the range of the ensemble, being either smaller than the smallest, or larger than the largest ensemble member. It is shown that these events are better predictable than by a naive benchmark prediction, which unconditionally issues the average outlier frequency of 2/(K+1) as a forecast. Predictability of outlier events, quantified in terms of probabilistic skill scores and receiver operating characteristics (ROC), is shown to be universal in a hypothetical forecast ensemble. An empirical study shows that in an operational temperature forecast ensemble, outliers are likewise predictable, and that the corresponding predictability measures agree with the analytically calculated ones.
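The rank and outlier constructions above can be sketched in a few lines. The simulation below uses a hypothetical Gaussian forecast distribution (not the thesis data) and counts ranks from below, an equivalent convention; for a statistically consistent ensemble, each rank should occur with frequency about 1/(K+1) and outliers with frequency 2/(K+1):

```python
import random

def verification_rank(ensemble, verification):
    """Rank of the verification among K ensemble members, counted
    from below: rank 1 means it is below every member, rank K+1
    means it is above every member."""
    return 1 + sum(1 for m in ensemble if m < verification)

def is_outlier(ensemble, verification):
    """Outlier: the verification falls outside the ensemble range."""
    return verification < min(ensemble) or verification > max(ensemble)

random.seed(42)
K = 9            # ensemble size
trials = 20000
counts = [0] * (K + 1)
outliers = 0
for _ in range(trials):
    # statistically consistent case: members and verification are
    # independent draws from the same distribution
    draws = [random.gauss(0.0, 1.0) for _ in range(K + 1)]
    ensemble, verification = draws[:K], draws[K]
    counts[verification_rank(ensemble, verification) - 1] += 1
    outliers += is_outlier(ensemble, verification)

# each of the K+1 ranks should occur with frequency ~ 1/(K+1) = 0.1
print([round(c / trials, 3) for c in counts])
# the outlier frequency should be close to 2/(K+1) = 0.2
print(round(outliers / trials, 3))
```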
474

Investigation of training data issues in ensemble classification based on margin concept: application to land cover mapping

Feng, Wei 19 July 2017
Classification has been widely studied in machine learning. Ensemble methods, which build a classification model by integrating multiple component learners, achieve higher performance than a single classifier. The classification accuracy of an ensemble is directly influenced by the quality of the training data used. However, real-world data often suffer from class noise and class imbalance problems. The ensemble margin is a key concept in ensemble learning. It has been applied to both the theoretical analysis and the design of machine learning algorithms. Several studies have shown that the generalization performance of an ensemble classifier is related to the distribution of its margins on the training examples. This work focuses on exploiting the margin concept to improve the quality of the training set, and therefore the classification accuracy of noise-sensitive classifiers, and to design effective ensemble classifiers that can handle imbalanced datasets. A novel ensemble margin definition is proposed: an unsupervised version of a popular ensemble margin that does not involve the class labels. Mislabeled training data is a challenge to face in order to build a robust classifier, whether it is an ensemble or not. To handle the mislabeling problem, we propose an ensemble margin-based class noise identification and elimination method built on an existing margin-based class noise ordering. This method can achieve a high mislabeled-instance detection rate while keeping the false detection rate as low as possible. It relies on the margin values of misclassified data, considering four different ensemble margins, including the newly proposed one. The method is extended to tackle class noise correction, which is a more challenging issue. Instances with low margins are more important than safe (high-margin) samples for building a reliable classifier. A novel bagging algorithm based on a data importance evaluation function, again relying on the ensemble margin, is proposed to deal with the class imbalance problem; the emphasis is placed on the lowest-margin samples. This method is evaluated, again using four different ensemble margins, on its ability to address the imbalance problem, especially on multi-class imbalanced data. In remote sensing, where training data are typically ground-based, mislabeled training data are inevitable. Imbalanced training data are another problem frequently encountered in remote sensing. Both proposed ensemble methods, each involving the margin definition best suited to one of these two major training data issues, are applied to the mapping of land covers.
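A label-free ensemble margin of the kind described above can be sketched as follows. The vote-fraction form below is an assumption about the general shape of such margins, not the thesis's exact definition; the class labels ("water", "forest", "urban") are hypothetical:

```python
from collections import Counter

def unsupervised_margin(votes):
    """Unsupervised ensemble margin: difference between the vote
    fractions of the two most-voted classes. Needs no true label,
    so it can be computed on unlabeled data."""
    ranked = Counter(votes).most_common()
    top = ranked[0][1]
    second = ranked[1][1] if len(ranked) > 1 else 0
    return (top - second) / len(votes)

def supervised_margin(votes, true_label):
    """A popular supervised margin: votes for the true class minus
    the maximum votes for any other class, over the ensemble size.
    Negative when the ensemble misclassifies the example."""
    counts = Counter(votes)
    v_true = counts.get(true_label, 0)
    v_other = max((v for c, v in counts.items() if c != true_label),
                  default=0)
    return (v_true - v_other) / len(votes)

votes = ["water", "water", "forest", "water", "urban"]  # 5 base learners
print(unsupervised_margin(votes))          # (3 - 1) / 5 = 0.4
print(supervised_margin(votes, "forest"))  # (1 - 3) / 5 = -0.4
```

Low-margin examples (ensemble members disagree) are exactly the ones the noise-filtering and imbalance-handling methods above focus on.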
475

Evaluation and application of virtual screening methods

Guillemain, Hélène 25 October 2012
Since the introduction of virtual screening in the drug discovery process, the number of virtual screening methods has been increasing, and the available methods have to be evaluated. In this work, eight virtual screening methods were evaluated on the DUD database, showing adequate efficiency. This also revealed some shortcomings of the DUD database, as the binding site conformation used in the DUD was not relevant for all the actives. As computational power now permits addressing this issue, classical docking runs have been performed on several X-ray structures, used to represent the binding site flexibility. The evaluation also showed that the usual evaluation metrics suffer from biases; new metrics such as BEDROC and RIE have thus been proposed. An alternative is also proposed here, using predictiveness curves based on compound activity probability. Finally, a virtual screening procedure was applied to TNFα, and a small-molecule inhibitor showing in vitro activity and in vivo activity in mice was identified. This demonstrates that, although virtual screening methods still need improvement, they can be an important aid in identifying new therapeutic molecules.
476

Study of the application of artificial neural networks for the prediction of financial time series

Dametto, Ronaldo César 06 August 2018
Different segments of the financial area, such as the forecasting of stock prices, the foreign exchange market, market indices and the composition of investment portfolios, use machine learning. This work aims to compare and combine three types of machine learning algorithms: the Artificial Neural Network Ensemble method with Multilayer Perceptron (MLP), auto-regressive with exogenous inputs (NARX) and Long Short-Term Memory (LSTM) networks, for prediction of the Bovespa Index. The Bovespa time series samples were obtained daily, using Yahoo! Finance, from January 4th, 2010 to December 28th, 2017. The Dollar quotation and numerical indicators from Technical Analysis were used as independent variables to compose the prediction. The algorithms were developed in Python using the Keras framework. To evaluate the algorithms, the MSE, RMSE and MAPE performance metrics were used, as well as a comparison between the obtained predictions and the actual values. The results indicate good prediction performance by the proposed Ensemble model, which obtained 70% accuracy on the index movement, but it failed to achieve better results than the MLP and NARX networks, both with 80% accuracy.
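The evaluation metrics named above (MSE, RMSE, MAPE) are standard and can be computed as below. The `hit_rate` helper is a hypothetical stand-in for how the 70%/80% movement accuracy might be measured; the price series is illustrative, not the Ibovespa data:

```python
import math

def mse(actual, predicted):
    """Mean squared error."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error."""
    return math.sqrt(mse(actual, predicted))

def mape(actual, predicted):
    """Mean absolute percentage error (assumes no zero actuals)."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def hit_rate(actual, predicted):
    """Fraction of steps where the predicted direction of movement
    (relative to the previous actual value) matches the real one."""
    hits = sum(
        1 for i in range(1, len(actual))
        if (predicted[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0
    )
    return hits / (len(actual) - 1)

actual = [100.0, 102.0, 101.0, 105.0]     # toy closing prices
predicted = [100.5, 101.0, 102.0, 104.0]  # toy model outputs
print(round(rmse(actual, predicted), 4))
print(round(mape(actual, predicted), 4))
print(round(hit_rate(actual, predicted), 4))
```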
477

Data assimilation for initialisation and parameter estimation of an ice sheet evolution model

Bonan, Bertrand 15 November 2013
Ice sheet evolution is driven both by complex flow dynamics and by physical mechanisms such as basal sliding, ice temperature and surface mass balance. In addition, many feedback loops are observed between the phenomena involved, which makes this evolution complex to model. Nevertheless, several models have been developed for that purpose. These models depend on influential parameters which are often, unfortunately, poorly known, so they need to be correctly specified. Data assimilation can give a better estimation of these parameters thanks to observations, which are quite rare in glaciology. In this thesis, we work on the setup of efficient data assimilation systems for two inverse problems involving ice sheet evolution. We work with a simplified ice sheet evolution model called Winnie in order to focus on that setup; Winnie nevertheless captures the major complex processes of ice dynamics and can be used for studies on different time scales. The first part of the thesis develops a 4D-Var approach to retrieve the evolution of a climatic parameter over a typical time scale of 20,000 years. This approach requires the implementation of the adjoint code of the evolution model. In the second part, we focus on the spin-up problem. This calibration problem for short-term (at most 100 years) simulations involves jointly retrieving the initial state, the bedrock topography and the basal sliding parameters. To solve it, we develop an ensemble Kalman filter approach.
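A minimal stochastic ensemble Kalman filter analysis step illustrates the spin-up idea. This is a toy sketch under strong simplifications (a single directly observed scalar parameter, identity observation operator, one assimilation step) and is not the Winnie setup:

```python
import random

def enkf_update(ensemble, observation, obs_error_var):
    """One stochastic EnKF analysis step for a scalar state observed
    directly, using perturbed observations. The Kalman gain is
    estimated from the ensemble spread."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_error_var)  # scalar gain, H = identity
    sd = obs_error_var ** 0.5
    return [x + gain * (observation + random.gauss(0.0, sd) - x)
            for x in ensemble]

random.seed(1)
truth = 2.0  # e.g. an unknown basal sliding parameter (hypothetical)
# first-guess ensemble, deliberately centred far from the truth
prior = [random.gauss(0.0, 2.0) for _ in range(200)]
observation = truth + random.gauss(0.0, 0.1)  # one noisy measurement
posterior = enkf_update(prior, observation, obs_error_var=0.01)

prior_mean = sum(prior) / len(prior)
post_mean = sum(posterior) / len(posterior)
# the analysis mean is pulled close to the truth, and the
# ensemble spread shrinks accordingly
print(round(prior_mean, 3), round(post_mean, 3))
```

In the thesis's joint spin-up problem the state vector is much larger (initial state, bedrock topography, sliding parameters), but each component is updated by the same gain-weighted innovation.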
478

Strategies for Combining Tree-Based Ensemble Models

Zhang, Yi 01 January 2017
Ensemble models have proved effective in a variety of classification tasks. These models combine the predictions of several base models to achieve higher out-of-sample classification accuracy than the base models. Base models are typically trained using different subsets of training examples and input features. Ensemble classifiers are particularly effective when their constituent base models are diverse in terms of their prediction accuracy in different regions of the feature space. This dissertation investigated methods for combining ensemble models, treating them as base models. The goal is to develop a strategy for combining ensemble classifiers that results in higher classification accuracy than the constituent ensemble models. Three of the best performing tree-based ensemble methods – random forest, extremely randomized tree, and eXtreme gradient boosting model – were used to generate a set of base models. Outputs from classifiers generated by these methods were then combined to create an ensemble classifier. This dissertation systematically investigated methods for (1) selecting a set of diverse base models, and (2) combining the selected base models. The methods were evaluated using public domain data sets which have been extensively used for benchmarking classification models. The research established that applying random forest as the final ensemble method to integrate selected base models and factor scores of multiple correspondence analysis turned out to be the best ensemble approach.
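The payoff from combining diverse base models can be illustrated with plurality voting. The dissertation's final combiner is a learned random forest over base-model outputs and factor scores, so the vote below is only a simplified stand-in, and the base predictions are hypothetical:

```python
from collections import Counter

def plurality_vote(base_predictions):
    """Combine per-example predictions from several base ensembles
    by plurality vote (ties go to the first-seen label)."""
    return [Counter(preds).most_common(1)[0][0]
            for preds in zip(*base_predictions)]

def accuracy(predicted, truth):
    return sum(p == t for p, t in zip(predicted, truth)) / len(truth)

truth = [0, 1, 1, 0, 1, 0, 1, 1]
# hypothetical outputs of three tree-based ensembles that err on
# different examples -- diversity is what makes combining pay off
rf_preds  = [0, 1, 1, 0, 0, 0, 1, 1]  # wrong on example 4
ext_preds = [0, 1, 0, 0, 1, 0, 1, 1]  # wrong on example 2
xgb_preds = [1, 1, 1, 0, 1, 0, 1, 0]  # wrong on examples 0 and 7

combined = plurality_vote([rf_preds, ext_preds, xgb_preds])
print(accuracy(rf_preds, truth))   # 0.875
print(accuracy(combined, truth))   # 1.0: every error is outvoted
```

If the base models made correlated errors (all wrong on the same examples), the vote would inherit those errors, which is why selecting a diverse subset of base models matters.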
479

Feature extraction and selection for background modeling and foreground detection

Pacheco Do Espirito Silva, Caroline 10 May 2017
In this thesis, we present a robust descriptor for background subtraction which is able to describe texture from an image sequence. The descriptor is less sensitive to noisy pixels and produces a short histogram, while preserving robustness to illumination changes. A descriptor for dynamic texture recognition is also proposed; it extracts not only color information but also more detailed information from video sequences. Finally, we present an ensemble-based feature selection approach that is able to select suitable features for each pixel to distinguish foreground objects from the background. Our proposal uses a mechanism to update the relative importance of each feature over time, and a heuristic approach is used to reduce the complexity of background model maintenance while preserving its robustness. However, this method only reaches its highest accuracy when the number of features is huge, and each base classifier learns a feature set instead of individual features. To overcome these limitations, we extended our previous approach by proposing a new methodology for selecting features based on wagging. We also adopted a superpixel-based approach instead of a pixel-level one. This not only increases efficiency in terms of computation time and memory consumption, but also improves the segmentation performance for moving objects.
480

Graduate recital

Carlson, Kirsten Therese 11 1900
This document contains scores for the works performed at a recital of compositions by Kirsten Carlson at 8:00 p.m. April 21, 1995 at the University of British Columbia Recital Hall. Model-Deviation is written for solo flute and was composed in 1992. Those That Follow is written for two flutes and was composed in 1993. The Distance is written for soprano voice, spoken voice, violin and cello and was composed in 1993. The text is by Kirsten Carlson. For will is written for flute and trumpet and was originally composed in 1994 and revised in 1995. This is a photograph of me is written for soprano, clarinet, bassoon and violin. It was composed in 1995 with text by Margaret Atwood. The Swimmer is written for two soprano voices, flute, clarinet, two violins, viola, and cello. It was composed during 1994-95 with text by the composer. We Are Still One is written for 10 flutes and was composed in 1994. Sound recording included.
