61 |
Análise de custo-eficácia dos pagamentos por serviços ambientais em paisagens fragmentadas: estudo de caso de São Paulo / Cost-effectiveness analysis of payments for environmental services in fragmented landscapes: case study in the State of São Paulo
Arthur Nicolaus Fendrich, 14 November 2017 (has links)
Mesmo com o crescimento da dependência da vida humana em relação aos serviços ecossistêmicos, a taxa de perda de diversidade genética no planeta tem alcançado níveis semelhantes à de grandes eventos de extinção, evidenciando a necessidade de ações para a conservação dos recursos naturais. Em adição aos tradicionais instrumentos de comando e controle para a conservação, os instrumentos econômicos têm tido crescente atenção no mundo nos últimos anos, com especial destaque para os Pagamentos por Serviços Ambientais (PSA). A abordagem de pagamentos de incentivos tem crescido na última década e, apesar das potencialidades que o PSA apresenta, muitos programas falham em incorporar o conhecimento científico em sua execução, sendo esse um dos aspectos que podem acarretar baixo desempenho ambiental e econômico. Neste contexto, o presente projeto buscou avaliar a custo-eficácia do PSA em paisagens fragmentadas. A área de estudo é o estado de São Paulo, cuja fragmentação historicamente ocorre pela expansão agropecuária e pelos diversos impactos decorrentes do grande crescimento populacional em seu território. Foram distribuídos questionários para a obtenção das preferências dos proprietários rurais paulistas em relação aos programas de PSA para restauração de vegetação nativa. Os dados coletados foram relacionados a características socioeconômicas e ambientais e um modelo beta inflacionado de zero misto dentro da classe GAMLSS foi utilizado. Em seguida, o modelo foi utilizado para predizer os resultados para os proprietários não entrevistados e a curva de investimento para diferentes retornos para conservação foi construída. Os resultados apontaram que o PSA é uma alternativa muito custosa frente aos orçamentos ambientais paulistas e que traz poucos benefícios para a restauração no estado de São Paulo. 
A pesquisa possui uma vertente teórica, pois contribui para a compreensão da adequabilidade do PSA em paisagens fragmentadas, e uma vertente prática, pois explicita a quantidade de recursos necessária para a execução dos programas analisados. / Although the dependence of human activities on ecosystem services has risen in the past decades, genetic diversity continues to be lost at alarming rates, comparable to those of major extinction events. In addition to the traditional command and control approach to conservation, growing attention has been given to economic instruments, especially Payments for Environmental Services (PES). Despite all the potentialities of the PES instrument, many programs fail to incorporate scientific knowledge into their execution. Such a lack of foundation may result in low environmental and economic performance. The present research aims at evaluating the cost-effectiveness of PES in fragmented landscapes. The study area is the state of São Paulo, which has been fragmented by agricultural and pasture expansion and by the impacts linked to large population growth. A survey covering different PES programs was sent to rural landowners, and responses were analyzed and linked to socioeconomic and environmental characteristics through a zero-inflated beta mixed model within the GAMLSS framework. The model was used to predict enrollment of non-respondents in different PES programs. Finally, the relationship between total area for restoration and the amount of resources needed for each program was compared to the environmental budget of the state of São Paulo. Results show that PES is a very costly alternative that can provide only limited results for restoration. The present work has a theoretical orientation, as it contributes to the comprehension of the feasibility of PES programs in fragmented landscapes, and a practical orientation, as it quantifies the amount of resources required by the programs analyzed.
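The zero-inflated beta mixed model named in this abstract is typically fit with the GAMLSS framework in R. As an illustrative stand-in (not the author's code), the two-part structure can be sketched in Python: a logistic model for the probability of zero enrolment, plus a beta density for the strictly positive enrolled fractions. All data, covariates and parameter values below are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(0)

# Hypothetical survey responses: fraction of the property a landowner would
# enrol for restoration, in [0, 1), with a point mass at exactly zero.
n = 500
x = rng.normal(size=n)                      # stand-in covariate (e.g. farm income)
p_zero = expit(-0.5 - 1.2 * x)              # assumed probability of refusing outright
is_zero = rng.random(n) < p_zero
frac = np.where(is_zero, 0.0, rng.beta(2.0, 5.0, size=n))

# Part 1: logistic regression for P(response == 0), fit by maximum likelihood.
def nll_logit(w):
    p = expit(w[0] + w[1] * x)
    return -np.sum(np.where(is_zero, np.log(p + 1e-12), np.log(1 - p + 1e-12)))

w_hat = minimize(nll_logit, x0=[0.0, 0.0]).x

# Part 2: beta distribution fitted to the strictly positive fractions only.
a_hat, b_hat, _, _ = beta_dist.fit(frac[~is_zero], floc=0, fscale=1)

# Expected enrolled fraction for a new landowner with covariate x_new.
def expected_fraction(x_new):
    p0 = expit(w_hat[0] + w_hat[1] * x_new)
    return (1 - p0) * a_hat / (a_hat + b_hat)
```

Predictions of this form, aggregated over non-respondents, are what would drive the investment curve described in the abstract.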
|
62 |
Caractérisation et modélisation de la variabilité au niveau du dispositif dans les MOSFET FD-SOI avancés / Characterization and modelling of device level variability in advanced FD-SOI MOSFETs
Pradeep, Krishna, 08 April 2019 (has links)
Alors que la technologie s'est efforcée de maintenir l'esprit de la « loi de Moore » à l'aide de techniques innovantes telles que l'intégration 3D et de nouvelles architectures de dispositifs, le marché a également évolué pour commencer à imposer des exigences spécifiques aux composants : des dispositifs à faible consommation et à faible fuite, requis par les applications de l'Internet des objets (IoT), et des dispositifs hautes performances, demandés par les applications 5G et les centres de données. Ainsi, le secteur des semi-conducteurs n'est plus guidé uniquement par les avancées technologiques, mais aussi par les applications. La réduction de la tension d'alimentation est encore plus importante pour les applications à faible puissance, comme dans l'IoT, mais elle est limitée par la variabilité des dispositifs. L'abaissement de la tension d'alimentation implique une marge réduite pour que les concepteurs gèrent la variabilité du dispositif. Cela nécessite un accès à des outils améliorés permettant aux concepteurs de prévoir la variabilité des dispositifs et d'évaluer son effet sur les performances de leur conception, ainsi que des innovations technologiques permettant de réduire la variabilité des dispositifs. Cette thèse se concentre sur le premier point et examine comment la variabilité du dispositif peut être modélisée avec précision et comment sa prévision peut être incluse dans les modèles compacts utilisés par les concepteurs dans leurs simulations SPICE. La thèse analyse d'abord la variabilité du dispositif dans les transistors FD-SOI avancés à l'aide de mesures directes. À l'échelle spatiale, en fonction de la distance entre les deux dispositifs considérés, la variabilité peut être classée en variabilité intra-matrice, inter-matrice, inter-tranche, inter-lot, voire entre différentes usines de fabrication. Par souci de simplicité, toute la variabilité au sein d'une même matrice peut être regroupée en tant que variabilité locale, et le reste en tant que variabilité globale.
Enfin, entre deux dispositifs arbitraires, il y aura des contributions de la variabilité locale et globale, auquel cas il est plus simple de parler de variabilité totale. Des stratégies de mesure dédiées sont développées à l'aide de structures de test spécialisées pour évaluer directement la variabilité à différentes échelles spatiales à l'aide de caractérisations C-V et I-V. L'effet de la variabilité est d'abord analysé sur des facteurs de qualité (FOM) sélectionnés et des paramètres de procédés extraits des courbes C-V et I-V, pour lesquels des méthodologies d'extraction de paramètres sont développées ou des méthodes existantes améliorées. Cette analyse aide à identifier la distribution des paramètres et les corrélations possibles présentes entre les paramètres. Ensuite, nous analysons la variabilité dépendante de la polarisation dans les courbes I-V et C-V. Pour cela, une métrique universelle, qui fonctionne quelle que soit l'échelle spatiale de la variabilité, est définie sur la base de l'analyse des appariements précédemment rapportée pour la variabilité locale. Cette thèse étend également cette approche à la variabilité globale et totale. L'analyse de l'ensemble des courbes permet de ne pas manquer certaines informations critiques dans une plage de polarisation particulière, qui n'apparaissaient pas dans les FOM sélectionnés. Une approche de modélisation statistique est utilisée pour modéliser la variabilité observée et identifier les sources de variations, en termes de sensibilité à chaque source de variabilité, en utilisant un modèle physique compact comme Leti-UTSOI. Le modèle compact est d'abord étalonné sur les courbes C-V et I-V dans différentes conditions de polarisation et géométries. L'analyse des FOM et de leurs corrélations a permis d'identifier les dépendances manquantes dans le modèle compact. Celles-ci ont également été incluses en apportant de petites modifications au modèle compact.
/ "Moore's Law" has defined the advancement of the semiconductor industry for almost half a century. Device dimensions have shrunk with each new technology node, and the design community and the semiconductor market have followed this advancement, creating applications that took better advantage of each generation of devices. But during the past decade, with device dimensions approaching the fundamental limits imposed by the materials, the pace of this scaling has decreased. While the technology struggled to keep alive the spirit of "Moore's Law" using innovative techniques like 3D integration and new device architectures, the market also evolved to start making specific demands on the devices: low-power, low-leakage devices demanded by Internet of Things (IoT) applications, and high-performance devices demanded by 5G and data-centre applications. So the semiconductor industry has slowly moved away from being driven purely by technology advancement; it is now driven by applications as well. Increasing power dissipation is an unavoidable outcome of scaling while also targeting higher-frequency applications. Historically, this issue has been handled by replacing the basic transistors (BJTs by MOSFETs), freezing the operating frequency of the system, lowering the supply voltage, etc. The reduction of supply voltage is even more important for low-power applications like IoT, but it is limited by device variability. Lowering the supply voltage leaves designers a reduced margin in which to handle device variability. This calls for improved tools that let designers predict the variability in the devices and evaluate its effect on the performance of their designs, as well as technology innovations that reduce the variability of the devices themselves.
This thesis concentrates on the first of these needs, and evaluates how device variability can be accurately modelled and how its prediction can be included in the compact models used by designers in their SPICE simulations. At first the thesis analyses device variability in advanced FD-SOI transistors using direct measurements. On the spatial scale, depending on the distance between the two devices being considered, variability can be classified into intra-die, inter-die, inter-wafer, inter-lot, or even between different fabs. For the sake of simplicity, all the variability within a single die can be grouped together as local variability, and the rest as global variability. Finally, between two arbitrary devices there will be contributions from both local and global variability, in which case it is easier to term it the total variability. Dedicated measurement strategies are developed using specialized test structures to directly evaluate the variability at different spatial scales using C-V and I-V characterisations. The effect of variability is first analysed on selected figures of merit (FOMs) and process parameters extracted from the C-V and I-V curves, for which parameter extraction methodologies are developed or existing methods improved. This analysis helps identify the distribution of the parameters and the possible correlations present between them. A very detailed analysis of the device variability in advanced FD-SOI transistors is undertaken in this thesis, and a novel characterisation and modelling methodology for the different types of variability is presented in detail. The dominant sources of variability in the device behaviour, in terms of C-V and I-V and also in terms of parasitics (like gate leakage current), are identified and quantified.
This work paves the way to a greater understanding of the device variability in FD-SOI transistors and can be easily adopted to improve the predictability of the commercial SPICE compact models for device variability.
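The local/global/total classification described above maps onto a standard one-way variance decomposition. The sketch below illustrates it on synthetic threshold-voltage data; all device counts and spreads are invented for the example, not measurements from this thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical threshold-voltage measurements: n_dies dies, n_dev devices per die.
n_dies, n_dev = 40, 25
sigma_global, sigma_local = 8e-3, 4e-3       # assumed die-to-die and within-die spread (V)
die_offset = rng.normal(0.0, sigma_global, size=(n_dies, 1))
vt = 0.35 + die_offset + rng.normal(0.0, sigma_local, size=(n_dies, n_dev))

# Local variability: pooled variance of devices around their own die mean.
var_local = vt.var(axis=1, ddof=1).mean()

# Global variability: variance of die means, corrected for the local term
# that leaks into each die mean (classic one-way ANOVA decomposition).
var_global = vt.mean(axis=1).var(ddof=1) - var_local / n_dev

# Total variability between two arbitrary devices on different dies.
var_total = var_local + var_global
```

The same decomposition extends to inter-wafer and inter-lot levels by nesting further grouping factors.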
|
63 |
MECHANISMS AND APPLICATIONS OF SOLID-STATE HYDROGEN DEUTERIUM EXCHANGE
Rishabh Tukra (10900263), 17 August 2021 (has links)
<div><div><div><p>To preserve their long-term stability, protein molecules are commonly dispensed as lyophilized powders to be reconstituted before use. Evaluating the stability of these biomolecules in the solid state is routinely done with analytical measurements such as glass transition temperature, residual moisture content and various spectroscopic techniques. However, these measurements often correlate poorly with long-term storage stability studies. As a result, time-intensive long-term storage stability studies are still the gold standard for evaluating protein formulations in the solid state. Over the past few years, our lab has developed solid-state hydrogen-deuterium exchange mass spectrometry (ssHDX-MS) as an analytical tool that probes the backbone of a protein molecule in the solid state. ssHDX-MS gives a snapshot of protein-matrix interactions in the solid state and has a turnaround of a few weeks, as opposed to a few months for accelerated stability testing. Additionally, various past studies have demonstrated that ssHDX-MS can be used for a wide range of biomolecules and correlates strongly with the long-term stability studies routinely employed.</p><p>The main aim of this dissertation is to provide an initial understanding of the mechanism behind ssHDX-MS in structured protein formulations. Specifically, this dissertation studies the effects of various experimental variables on the ssHDX-MS of myoglobin formulations and demonstrates the utility of the technique. First, the effects of varying temperature and relative humidity on ssHDX-MS of myoglobin formulations are studied with the help of statistical modeling. Second, the effects of pressure on ssHDX-MS of myoglobin formulations are evaluated at the intact and peptide-digest levels.
Finally, ssHDX-MS is used as a characterization tool to evaluate the effects of two different lyophilization methods on the structure and stability of myoglobin formulations. The results of studies described in this dissertation show ssHDX-MS to be sensitive to changes in experimental parameters, namely temperature, relative humidity, pressure, and excipients. Additionally, ssHDX-MS results were in good agreement with other routinely employed analytical and stability testing techniques when used to compare the effects of two lyophilization methods on myoglobin formulations.</p></div></div></div>
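As an illustration of the kind of kinetic analysis commonly applied to ssHDX-MS data, deuterium-uptake time courses are often summarized by fitting an empirical exponential model. The sketch below fits a stretched-exponential form to invented uptake values; the model choice and all numbers are assumptions for the example, not data from this dissertation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical deuterium-uptake time course for one peptide (hours, Da).
t = np.array([0.5, 1, 2, 4, 8, 24, 48, 96, 168.0])
uptake = np.array([1.1, 1.8, 2.9, 4.2, 5.6, 7.4, 8.1, 8.5, 8.7])

# Stretched-exponential model often used for solid-state exchange, where an
# exponent beta < 1 reflects the broad distribution of exchange rates in an
# amorphous matrix: D(t) = D_max * (1 - exp(-(k t)^beta)).
def stretched_exp(t, d_max, k, beta):
    return d_max * (1.0 - np.exp(-np.power(k * t, beta)))

popt, _ = curve_fit(stretched_exp, t, uptake, p0=[9.0, 0.1, 0.7],
                    bounds=([0, 0, 0], [20, 10, 1]))
d_max, k, beta = popt
```

Fitted parameters such as `d_max` (the exchange plateau) are the kind of summary statistic that can then be compared across temperature, humidity or pressure conditions.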
|
64 |
Machine Learning Approaches to Reveal Discrete Signals in Gene Expression
Changlin Wan (12450321), 24 April 2022 (has links)
<p>Gene expression is an intricate process that determines different cell types and functions in metazoans, and much of its regulation is communicated through discrete signals, such as whether the DNA helix is open or whether an enzyme binds its target. Understanding the regulatory signals of this selective expression process is essential to a full comprehension of biological mechanisms and complex biological systems. In this research, we seek to reveal the discrete signals in gene expression by utilizing novel machine learning approaches. Specifically, we focus on two types of data: chromatin conformation capture (3C) and single-cell RNA sequencing (scRNA-seq). To identify potential regulators, we utilize a new hypergraph neural network to predict genome interactions, where we find that gene co-regulation may result from shared enhancer elements. To reveal the discrete expression states in scRNA-seq data, we propose a novel model called LTMG that accounts for biological noise and shows better goodness of fit than existing models. Next, we apply Boolean matrix factorization to find co-regulation modules from the identified expression states, revealing properties shared by cancer cells across different patients. Lastly, to find more reliable modules, we analyze the bias in the data and propose BIND, the first algorithm to quantify column- and row-wise bias in a binary matrix.</p>
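As a generic illustration of the Boolean matrix factorization step (not the LTMG or BIND code), a crude greedy factorizer on a synthetic binary expression-state matrix might look like the sketch below; the matrix sizes, densities and seeding heuristic are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical binary expression-state matrix: genes x cells, where a 1 means
# the gene is in its "on" state in that cell. Three modules are planted.
n_genes, n_cells, rank = 60, 80, 3
U_true = rng.random((n_genes, rank)) < 0.25
V_true = rng.random((rank, n_cells)) < 0.25
X = (U_true.astype(int) @ V_true.astype(int)) > 0    # Boolean matrix product

def bmf_greedy(X, rank, n_restarts=50, rng=rng):
    """Crude greedy Boolean factorization: repeatedly pick the rank-1 binary
    block (seeded from a random column) covering the most uncovered 1s,
    and OR it into the reconstruction."""
    covered = np.zeros_like(X, dtype=bool)
    factors = []
    for _ in range(rank):
        best, best_gain = None, -1
        for _ in range(n_restarts):
            j = rng.integers(X.shape[1])
            u = X[:, j].copy()                        # genes on in the seed cell
            if not u.any():
                continue
            v = X[u].mean(axis=0) > 0.5               # cells where most of those genes are on
            gain = np.sum(np.outer(u, v) & X & ~covered)
            if gain > best_gain:
                best, best_gain = (u, v), gain
        factors.append(best)
        covered |= np.outer(*best)
    return factors, covered

factors, recon = bmf_greedy(X, rank)
coverage = (recon & X).sum() / X.sum()               # fraction of 1s explained
```

Each recovered factor pairs a gene set with a cell set, which is exactly the "co-regulation module" reading of a rank-1 Boolean block.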
|
65 |
Methods and algorithms to learn spatio-temporal changes from longitudinal manifold-valued observations / Méthodes et algorithmes pour l'apprentissage de modèles d'évolution spatio-temporels à partir de données longitudinales sur une variété
Schiratti, Jean-Baptiste, 23 January 2017 (has links)
Dans ce manuscrit, nous présentons un modèle à effets mixtes, présenté dans un cadre Bayésien, permettant d'estimer la progression temporelle d'un phénomène biologique à partir d'observations répétées, à valeurs dans une variété Riemannienne, et obtenues pour un individu ou groupe d'individus. La progression est modélisée par des trajectoires continues dans l'espace des observations, que l'on suppose être une variété Riemannienne. La trajectoire moyenne est définie par les effets mixtes du modèle. Pour définir les trajectoires de progression individuelles, nous avons introduit la notion de variation parallèle d'une courbe sur une variété Riemannienne. Pour chaque individu, une trajectoire individuelle est construite en considérant une variation parallèle de la trajectoire moyenne et en reparamétrisant en temps cette parallèle. Les transformations spatio-temporelles sujet-spécifiques, que sont la variation parallèle et la reparamétrisation temporelle, sont définies par les effets aléatoires du modèle et permettent de quantifier les changements de direction et la vitesse à laquelle les trajectoires sont parcourues. Le cadre de la géométrie Riemannienne permet d'utiliser ce modèle générique avec n'importe quel type de données définies par des contraintes lisses. Une version stochastique de l'algorithme EM, le Monte Carlo Markov Chains Stochastic Approximation EM (MCMC-SAEM), est utilisée pour estimer les paramètres du modèle au sens du maximum a posteriori. L'utilisation du MCMC-SAEM avec un schéma numérique permettant de calculer le transport parallèle est discutée dans ce manuscrit. De plus, le modèle et le MCMC-SAEM sont validés sur des données synthétiques, ainsi qu'en grande dimension. Enfin, nous présentons des résultats obtenus sur différents jeux de données liés à la santé.
/ We propose a generic Bayesian mixed-effects model to estimate the temporal progression of a biological phenomenon from manifold-valued observations obtained at multiple time points for an individual or group of individuals. The progression is modeled by continuous trajectories in the space of measurements, which is assumed to be a Riemannian manifold. The group-average trajectory is defined by the fixed effects of the model. To define the individual trajectories, we introduce the notion of « parallel variations » of a curve on a Riemannian manifold. For each individual, the individual trajectory is constructed by considering a parallel variation of the average trajectory and reparametrizing this parallel in time. The subject-specific spatiotemporal transformations, namely parallel variation and time reparametrization, are defined by the individual random effects and make it possible to quantify the changes in direction and pace at which the trajectories are followed. The framework of Riemannian geometry allows the model to be used with any kind of measurements with smooth constraints. A stochastic version of the Expectation-Maximization algorithm, the Monte Carlo Markov Chains Stochastic Approximation EM algorithm (MCMC-SAEM), is used to produce maximum a posteriori estimates of the parameters. The use of the MCMC-SAEM together with a numerical scheme for the approximation of parallel transport is discussed. In addition, the method is validated on synthetic data and in high-dimensional settings. We also provide experimental results obtained on health data.
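As a toy illustration of the MCMC-SAEM machinery described above (and not the authors' implementation), the sketch below estimates a linear mixed-effects model with scalar random slopes, replacing the Riemannian trajectories with straight lines in Euclidean space. All model settings and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy longitudinal model standing in for the manifold-valued one:
# y_ij = (a + b_i) * t_j + eps_ij,  b_i ~ N(0, omega^2),  eps_ij ~ N(0, sigma^2).
n_subj, t = 50, np.linspace(0.0, 5.0, 8)
a_true, omega_true, sigma_true = 1.5, 0.4, 0.2
b_true = rng.normal(0.0, omega_true, n_subj)
y = (a_true + b_true)[:, None] * t[None, :] + rng.normal(0.0, sigma_true, (n_subj, len(t)))

tt = np.sum(t * t)
a = (y * t[None, :]).sum() / (n_subj * tt)       # pooled OLS slope as starting point
b = (y * t[None, :]).sum(axis=1) / tt - a        # per-subject OLS random effects
omega, sigma = b.std() + 1e-3, 1.0
s1, s2, s3 = b.sum(), (b ** 2).sum(), 1.0        # smoothed sufficient statistics

for it in range(400):
    # "MCMC" E-step: one Metropolis-Hastings sweep over the random effects b_i.
    def logpost(bb):
        resid = y - (a + bb)[:, None] * t[None, :]
        return -0.5 * (resid ** 2).sum(axis=1) / sigma ** 2 - 0.5 * bb ** 2 / omega ** 2
    prop = b + rng.normal(0.0, 0.1, n_subj)
    accept = np.log(rng.random(n_subj)) < logpost(prop) - logpost(b)
    b = np.where(accept, prop, b)
    # Stochastic-approximation step: step size 1 during burn-in, then decreasing.
    gam = 1.0 if it < 200 else 1.0 / (it - 199)
    resid2 = ((y - (a + b)[:, None] * t[None, :]) ** 2).sum()
    s1 += gam * (b.sum() - s1)
    s2 += gam * ((b ** 2).sum() - s2)
    s3 += gam * (resid2 - s3)
    # M-step: closed-form updates of (a, omega, sigma) from the smoothed statistics.
    a = (y * t[None, :]).sum() / (n_subj * tt) - s1 / n_subj
    omega = np.sqrt(s2 / n_subj)
    sigma = np.sqrt(s3 / y.size)
```

The full model replaces the MH sweep with sampling of the parallel-variation and time-reparametrization random effects, and the closed-form M-step with updates on the manifold model's sufficient statistics.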
|
66 |
近代以降の日本小説の文体変化に関する計量的研究 / A Quantitative Study of Stylistic Change in Japanese Novels since the Modern Era
李 広微, 22 March 2022 (has links)
本論文では,近現代日本語小説を分析対象とし,社会的文体の経時的変化と個人文体の意図的な変化をめぐって統計的分析法及び機械学習法のアプローチで計量分析を行った。まず,口語体が確立されてから日本の近現代小説の文体にどのような変化が起こっているかについて考察した。そして,現代作家の水村美苗が近代文学への憧れを原点として創作した一連の作品を取り上げ,水村の意図的な文体変化について分析した。 / This dissertation takes modern and contemporary Japanese novels as its object of analysis and carries out a quantitative study, using statistical methods and machine-learning approaches, of both the diachronic change of shared (social) style and deliberate change in individual style. It first examines what stylistic changes have occurred in modern and contemporary Japanese novels since the colloquial written style was established. It then takes up a series of works that the contemporary author Minae Mizumura created out of her admiration for modern literature, and analyzes Mizumura's deliberate stylistic shifts. / 博士(文化情報学) / Doctor of Culture and Information Science / 同志社大学 / Doshisha University
|
67 |
Segmentation of high frequency 3D ultrasound images for skin disease characterization
Anxionnat, Adrien, January 2017 (has links)
This work is rooted in a need for dermatologists to explore skin characteristics in depth. The influence of skin disease such as acne on dermal tissues is still a complex task to assess. Among the possibilities, high frequency ultrasound imaging is a paradigm shift to probe and characterize upper and deep dermis. For this purpose, a cohort of 58 high-frequency 3D images has been acquired by the French laboratory Pierre Fabre in order to study acne vulgaris disease. This common skin disorder is a societal challenge and burden affecting late adolescents across the world. The medical protocol developed by Pierre Fabre was to screen a lesion every day during 9 days for different patients with ultrasound imaging. The provided data features skin epidermis and dermis structure with a fantastic resolution. The strategy we led to study these data can be explained in three steps. First, the epidermis surface is detected among artifacts and noise thanks to a robust level-set algorithm. Secondly, acne spots are located on the resulting height map and associated to each other among the data by computing and thresholding a local variance. And eventually, potential inflammatory dermal cavities related to each lesion are geometrically and statistically characterized in order to assess the evolution of the disease. The results present an automatic algorithm which permits dermatologists to screen acne vulgaris lesions and to characterize them in a complete data set. It can hence be a powerful toolbox to assess the efficiency of a treatment. / Detta arbete är grundat i en dermatologs behov att undersöka hudens egenskaper på djupet. Påverkan av hudsjukdomar så som acne på dermala vävnader är fortfarande svårt att bedöma. Bland möjligheterna är högfrekvent ultraljudsavbildning ett paradigmskifte för undersökning och karakterisering av övre och djupa dermis.
I detta syfte har en kohort av 58 högfrekventa 3D-bilder förvärvats av det franska laboratoriet Pierre Fabre för att studera sjukdomen acne vulgaris. Denna vanliga hudsjukdom är en utmaning för samhället och en börda som påverkar de i slutet av tonåren över hela världen. Protokollet utvecklat av Pierre Fabre innebar att undersöka en lesion varje dag över 9 dagar för olika patienter med ultraljudsavbildning. Den insamlade datan visar hudens epidermis- och dermisstruktur med en fantastiskt hög upplösning. Strategin vi använde för att studera denna data kan förklaras i tre steg. För det första hittas epidermis yta bland artefakter och brus tack vare en robust level-set-algoritm. För det andra hittas acnefläckar på höjdkartan och associeras till varandra bland mätdatan genom en tröskeljämförelse över lokala variationer. Även potentiellt inflammatoriska dermala hålrum relaterade till varje lesion blir geometriskt och statistiskt kännetecknade för att bedöma sjukdomens förlopp. Resultaten framför en automatisk algoritm som gör det möjligt för dermatologer att undersöka acne vulgaris-lesioner och utmärka dem i ett dataset. Detta kan därmed vara en kraftfull verktygslåda för att undersöka inverkan av en behandling till denna sjukdom.
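The local-variance thresholding step described in both summaries can be sketched generically. The height map below is synthetic (two Gaussian depressions standing in for lesions plus speckle-like noise), and the window size and threshold are arbitrary choices for the example, not the thesis's calibrated values.

```python
import numpy as np
from scipy.ndimage import label, uniform_filter

rng = np.random.default_rng(4)

# Hypothetical epidermis height map (mm) with two lesion-like depressions.
h = 0.05 * rng.standard_normal((128, 128))
yy, xx = np.mgrid[0:128, 0:128]
for cy, cx in [(40, 40), (90, 85)]:
    h -= 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 6.0 ** 2))

# Local variance in a sliding window, var = E[h^2] - E[h]^2, computed with two
# uniform filters; lesion slopes show up as high-variance regions.
def local_variance(img, size=9):
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.clip(mean_sq - mean * mean, 0.0, None)

var_map = local_variance(h)
mask = var_map > 3.0 * np.median(var_map)            # simple adaptive threshold

# Connected components give the candidate acne spots to track across days.
labels, n_spots = label(mask)
```

Tracking the labelled spots across the nine daily acquisitions is then a matter of associating components between consecutive height maps.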
|
68 |
Characterization of Vehicular Exhaust Emissions and Indoor Air Quality of Public Transport Buses Operating on Alternative Diesel Fuels
Vijayan, Abhilash, January 2007 (has links)
No description available.
|
69 |
Exploration de méthodes statistiques pour la modélisation de la relation séquence-activité de protéines d'intérêt industriel / Exploration of statistical methods for the modeling of sequence to activity relationship of proteins of industrial interest
Berland, Magali, 29 October 2013 (has links)
Par l'accumulation de mutations bénéfiques lors de cycles successifs de mutagénèse, l'évolution dirigée offre un cadre rationnel pour l'amélioration des protéines à vocation industrielle. Elle permet une exploration large de l'espace possible des séquences ainsi que leurs capacités fonctionnelles. Elle est cependant lourde à mettre en oeuvre et nécessite des moyens importants. Des approches in silico font usage d'un jeu minimal de données expérimentales et utilisent la modélisation statistique combinée à des algorithmes d'apprentissage machine. Elles ont été développées pour explorer de façon heuristique l'espace possible des séquences et de la fitness et d'identifier les mutations et interactions entre résidus les plus intéressantes. C'est l'objet de cette thèse qui explore la construction et l'application de modèles statistiques s'appuyant sur des jeux minimaux de données expérimentales pour relier fitness, ou activité, à la séquence biologique des variants. L'étude s'articule autour d'un choix crucial d'une méthode de numérisation, de descripteurs de la séquence et de méthodes de régression. La méthode ProSAR de R. Fox (2005) et les limites de son applicabilité sur des jeux de données expérimentales ont été étudiées. De nouvelles méthodes ont aussi été développées, prenant en compte les propriétés physico-chimiques des acides aminés et leurs périodicités. Elle a permis de découvrir de nouveaux descripteurs reliant la séquence à l'activité et propose des approches innovantes qui ont la capacité de traiter des cadres biologiques très divers, même lorsque peu de données biologiques sont disponibles. / Via the accumulation of beneficial mutations through successive rounds of mutations, directed evolution offers a rational framework for the amelioration of protein of industrial interest. It enables the large exploration of the sequence space and fitness. However, they are wet-lab intensive and may reveal to be time consuming and costly. 
In silico approaches using minimal sets of experimental data and statistical models combined with machine learning algorithms have been developed to explore heuristically the sequence space and to identify the effect of the potential epistatic interactions between residues on protein fitness. This work focused on the construction and application of statistical models relying on minimal experimental datasets to study protein sequence to activity relationships (ProSAR). In particular, the choices of appropriate numerical encoding methods, of descriptors extracted from protein sequences and of regression methods were investigated. The original ProSAR method from R. Fox (2005) and the limits of its applicability on experimental datasets have been studied. New methods that consider physico-chemical features of amino acids and their periodicities have been explored. This study unveils novel descriptors of the sequence-activity relationship and provides innovative approaches that can deal with very diverse biological datasets, even when few biological data are available.
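A minimal ProSAR-style pipeline, encoding each variant by physico-chemical descriptors per position and fitting a regularized linear model, can be sketched as below. The property values, alphabet, dataset and the choice of ridge regression are illustrative stand-ins, not the thesis's actual encodings or data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical sequence-activity dataset: variants of a 10-residue segment,
# encoded per position by two property scales (values are illustrative
# stand-ins for published hydrophobicity / volume scales).
hydro = {"A": 1.8, "D": -3.5, "K": -3.9, "L": 3.8, "S": -0.8}
volume = {"A": 88.6, "D": 111.1, "K": 168.6, "L": 166.7, "S": 89.0}
alphabet = list(hydro)

def encode(seq):
    # Each residue contributes its two property values -> 2 * len(seq) features.
    return np.array([f(aa) for aa in seq for f in (hydro.get, volume.get)])

seqs = ["".join(rng.choice(alphabet, 10)) for _ in range(120)]
X = np.array([encode(s) for s in seqs])
w_true = rng.normal(0.0, 1.0, X.shape[1])
y = X @ w_true + rng.normal(0.0, 0.5, len(seqs))     # synthetic "activity"

# Ridge regression (closed form): the simplest linear sequence-activity model.
lam = 1.0
Xc, yc = X - X.mean(0), y - y.mean()
w = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ yc)
pred = Xc @ w + y.mean()
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Adding pairwise products of position features to `encode` is the usual way such models are extended to capture the epistatic interactions mentioned above.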
|
70 |
Automatic Music Transcription based on Prior Knowledge from Musical Acoustics. Application to the repertoires of the Marovany zither of Madagascar / Transcription automatique de musique basée sur des connaissances a priori issues de l'acoustique musicale. Application aux répertoires de la cithare marovany de Madagascar
Cazau, Dorian, 12 October 2015 (has links)
L'ethnomusicologie est l'étude de la musique en mettant l'accent sur les aspects culturels, sociaux, matériels, cognitifs et/ou biologiques. Ce sujet de thèse, motivé par Pr. Marc Chemillier, ethnomusicologue au laboratoire CAMS-EHESS, traite du développement d'un système automatique de transcription dédié aux répertoires de musique de la cithare marovany de Madagascar. Ces répertoires sont transmis oralement, résultant d'un processus de mémorisation/transformation de motifs musicaux de base. Ces motifs sont un patrimoine culturel important du pays, et évoluent en permanence sous l'influence d'autres pratiques et genres musicaux. Les études ethnomusicologiques actuelles visent à comprendre l'évolution du répertoire traditionnel et à préserver ce patrimoine. Pour servir cette cause, notre travail consiste à fournir des outils informatiques d'analyse musicale pour organiser et structurer des enregistrements audio de cet instrument. La transcription automatique de musique consiste à estimer les notes d'un enregistrement à travers trois attributs : temps de début, hauteur et durée de note. Notre travail sur cette thématique repose sur l'incorporation de connaissances musicales a priori dans les systèmes informatiques. Une première étape de cette thèse fut donc de générer cette connaissance et de la formaliser en vue de cette incorporation. Cette connaissance explore les caractéristiques multimodales du signal musical, incluant le timbre, le langage musical et les techniques de jeu. La recherche effectuée dans cette thèse s'articule autour de deux axes : un premier, plus appliqué, consistant à développer un système de transcription de musique dédié à la marovany, et un second, plus fondamental, consistant à fournir une analyse plus approfondie des contributions de la connaissance dans la transcription automatique de musique. Notre premier axe de recherche requiert une très bonne précision de transcription (c'est-à-dire
une F-measure supérieure à 95 % avec des tolérances d'erreur standard) pour faire office de supports analytiques dans des études musicologiques. Pour cela, nous utilisons une technologie de captation multicanale appliquée aux instruments à cordes pincées. Les systèmes développés à partir de cette technologie utilisent un capteur par corde, permettant de décomposer un signal polyphonique en une somme de signaux monophoniques respectifs à chaque corde, ce qui simplifie grandement la tâche de transcription. Différents types de capteurs (optiques, piézoélectriques, électromagnétiques) ont été testés. Après expérimentation, les capteurs piézoélectriques, bien qu'invasifs, se sont avérés avoir les meilleurs rapports signal-sur-bruit et la meilleure séparabilité inter-capteurs. Cette technologie a aussi permis le développement d'une base de données dite « ground truth » (vérité de terrain), indispensable pour l'évaluation quantitative des systèmes de transcription de musique. Notre second axe de recherche propose des investigations plus approfondies concernant l'incorporation de connaissance a priori dans les systèmes automatiques de transcription de musique. Deux méthodes statistiques ont été utilisées comme socle théorique, à savoir le PLCA (Probabilistic Latent Component Analysis) pour l'estimation multi-pitch et les HMM (Hidden Markov Models). / Ethnomusicology is the study of musics around the world, with an emphasis on their cultural, social, material, cognitive and/or biological aspects. This PhD subject, initiated by Pr. Marc Chemillier, ethnomusicologist at the laboratory CAMS-EHESS, deals with the development of an automatic transcription system dedicated to the repertoires of the traditional marovany zither from Madagascar. These repertoires are orally transmitted, resulting from a process of memorization/transformation of original base musical motives.
These motives represent an important cultural heritage, and are evolving continually under the influence of other musical practices and genres, mainly due to globalization. Current ethnomusicological studies aim at understanding the evolution of the traditional repertoire through the transformation of its original base motives, and at preserving this heritage. Our objectives serve this cause by providing computational tools of musical analysis to organize and structure audio recordings of this instrument. Automatic Music Transcription (AMT) consists in automatically estimating the notes in a recording, through three attributes: onset time, duration and pitch. In the long run, AMT systems, with the purpose of retrieving meaningful information from complex audio, could be used in a variety of user scenarios such as searching and organizing music collections with barely any human labor. One common denominator of our different approaches to the task of AMT lies in the use of explicit music-related prior knowledge in our computational systems. A step of this PhD thesis was then to develop tools to generate this information automatically. We chose not to restrict ourselves to a specific class of prior knowledge, and rather explore the multi-modal characteristics of musical signals, including both timbre (i.e. modeling of the generic "morphological" features of the sound related to the physics of an instrument, e.g. intermodulation, sympathetic resonances, inharmonicity) and musicological (e.g. harmonic transitions, playing dynamics, tempo and rhythm) classes. This prior knowledge can then be used in computational transcription systems.
The research work on AMT performed in this PhD can be divided into a more "applied research" axis (axis 1), with the development of ready-to-use operational transcription tools meeting the current needs of ethnomusicologists for reliable automatic transcriptions, and a more "basic research" axis (axis 2), providing deeper insight into the functioning of these tools. Our first axis of research requires a transcription accuracy high enough (i.e. an average F-measure above 95 % with standard error tolerances) to provide analytical supports for musicological studies. Despite great enthusiasm for AMT challenges, and several audio-to-MIDI converters available commercially, perfect polyphonic AMT systems are out of reach of today's algorithms. In this PhD, we explore the use of multichannel capturing sensory systems for AMT of several acoustic plucked string instruments, including the following traditional African zithers: the marovany (Madagascar), the Mvet (Cameroon) and the N'Goni (Mali). These systems use multiple string-dependent sensors to retrieve, string by string, some physical features of their vibrations. Such a system has an obvious advantage for the AMT task, as it allows breaking down a polyphonic musical signal into the sum of monophonic signals respective to each string.
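The PLCA estimation named above can be illustrated with a minimal EM loop on a synthetic spectrogram. This is a generic sketch of the latent-component updates, not the thesis's marovany-specific system; all dimensions, templates and activations are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical magnitude spectrogram: 3 "strings", each with a fixed spectral
# template, active in different time frames.
n_freq, n_time, n_z = 64, 40, 3
W_true = rng.random((n_freq, n_z)) ** 3              # sparse-ish spectral templates
W_true /= W_true.sum(0)
H_true = rng.random((n_z, n_time)) * (rng.random((n_z, n_time)) < 0.4)
V = W_true @ H_true + 1e-6

# PLCA via EM: the latent-component model V(f,t) ~ sum_z P(f|z) P(z,t),
# with multiplicative updates derived from the posterior P(z|f,t).
W = rng.random((n_freq, n_z)); W /= W.sum(0)
H = rng.random((n_z, n_time))
for _ in range(200):
    R = V / (W @ H + 1e-12)                          # E-step posterior ratio
    W_new = W * (R @ H.T)
    H *= W.T @ R                                     # both updates use the same posterior
    W = W_new / W_new.sum(0)                         # keep P(f|z) normalized

recon_err = np.abs(V - W @ H).sum() / V.sum()        # relative L1 reconstruction error
```

In a multi-pitch setting, the columns of `W` would hold per-note (or per-string) spectral templates and the rows of `H` their activations over time, which is where acoustics-based prior knowledge can be injected as constraints on `W`.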
|