  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Event-based failure prediction: an extended hidden Markov model approach

Salfner, Felix January 2008 (has links)
Also published as: Berlin, Humboldt-Univ., doctoral dissertation, 2008
222

Speech Recognition under Stress

Wang, Yonglian 01 December 2009 (has links)
ABSTRACT OF THE DISSERTATION OF Yonglian Wang, for the Doctor of Philosophy degree in Electrical and Computer Engineering, presented on May 19, 2009, at Southern Illinois University Carbondale. TITLE: SPEECH RECOGNITION UNDER STRESS. MAJOR PROFESSOR: Dr. Nazeih M. Botros. In this dissertation, three techniques, Dynamic Time Warping (DTW), Hidden Markov Models (HMM), and the Hidden Control Neural Network (HCNN), are utilized to realize talker-independent isolated word recognition. DTW measures the distance between two input patterns or vectors; HMM models speech signals as a five-state stochastic process to compare the similarity between signals; and HCNN calculates the errors between actual and target outputs and is built mainly for stress-compensated speech recognition. When stress (Angry, Question, or Soft) is induced into normal speech, recognition performance degrades greatly. Therefore, a hypothesis-driven stress compensation technique is introduced to cancel the distortion caused by stress. The database for this research is SUSAS (Speech under Simulated and Actual Stress), which includes five domains encompassing a wide variety of stress, with 16,000 isolated-word speech samples available from 44 speakers. Another database, TIMIT (10 speakers and 6,300 sentences in total), is used in a supporting role in the DTW algorithm. The words used for recognition are speaker-independent. Characteristic feature analysis was carried out in three domains: pitch, intensity, and glottal spectrum. The results showed that speech spoken under Angry and Question stress exhibits extremely wide fluctuations, with higher average pitch, higher RMS intensity, and more energy than neutral speech. In contrast, the Soft talking style has lower pitch, lower RMS intensity, and less energy than neutral.
Linear Predictive Coding (LPC) cepstral feature analysis is used to obtain the observation vectors and input vectors for DTW, HMM, and stress compensation. Both HMM and HCNN consist of training and recognition stages: the training stage forms reference models, while the recognition stage compares an unknown word against all the reference models, and the unknown word is assigned to the model with the highest similarity. Our results showed that the HMM technique can achieve a 91% recognition rate for Normal speech; however, the rate dropped to 60% under the Angry stress condition, 65% under Question, and 76% under Soft. After compensation was applied for the cepstral tilts, the recognition rate increased by 10% for Angry, 8% for Question, and 4% for Soft. Finally, the HCNN technique increased the recognition rate to 90% for the Angry stress condition and also differentiated Angry stress from the other stress groups.
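The DTW template matching described in this abstract can be sketched in a few lines. The sketch below uses a simple absolute-difference local cost on toy 1-D feature sequences (the dissertation's actual features are LPC cepstra), and the word templates are invented for illustration only.

```python
# A minimal dynamic-time-warping (DTW) distance sketch. The local cost
# function and the toy reference templates below are assumptions for
# the example, not the dissertation's configuration.

def dtw_distance(a, b):
    """Return the DTW alignment cost between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# An unknown word is assigned the label of the nearest reference template.
references = {"yes": [1.0, 3.0, 4.0, 3.0], "no": [0.0, 1.0, 1.0, 0.5]}
unknown = [1.1, 2.9, 4.2, 2.8]
best = min(references, key=lambda w: dtw_distance(unknown, references[w]))
```

In a real recognizer each template entry would be a vector of cepstral coefficients per frame rather than a scalar, with a vector norm as the local cost.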
223

Real-time gesture recognition using MEMS acceleration sensors. / 基於MEMS加速度傳感器的人體姿勢實時識別系統 / Ji yu MEMS jia su du chuan gan qi de ren ti zi shi shi shi shi bie xi tong

January 2009 (has links)
by Zhou, Shengli. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 70-75). Abstract also in Chinese.
Contents:
  Chapter 1  Introduction (p.1)
    1.1  Background of Gesture Recognition (p.1)
    1.2  HCI System (p.2)
      1.2.1  Vision Based HCI System (p.2)
      1.2.2  Accelerometer Based HCI System (p.4)
    1.3  Pattern Recognition Methods (p.6)
    1.4  Thesis Outline (p.7)
  Chapter 2  2D Hand-Written Character Recognition (p.8)
    2.1  Introduction to Accelerometer Based Hand-Written Character Recognition (p.8)
      2.1.1  Character Recognition Based on Trajectory Reconstruction (p.9)
      2.1.2  Character Recognition Based on Classification (p.10)
    2.2  Neural Network (p.11)
      2.2.1  Mathematical Model of Neural Network (NN) (p.11)
      2.2.2  Types of Neural Network Learning (p.13)
      2.2.3  Self-Organizing Maps (SOMs) (p.14)
      2.2.4  Properties of Neural Network (p.16)
    2.3  Experimental Setup (p.17)
    2.4  Configuration of Sensing Mote (p.18)
    2.5  Data Acquisition Methods (p.19)
    2.6  Data Preprocessing Methods (p.20)
      2.6.1  Fast Fourier Transform (FFT) (p.21)
      2.6.2  Discrete Cosine Transform (DCT) (p.23)
      2.6.3  Problem Analysis (p.25)
    2.7  Hand-written Character Classification using SOMs (p.26)
      2.7.1  Recognition of All Characters in the Same Group (p.27)
      2.7.2  Recognition of Numbers and Letters Separately (p.28)
    2.8  Conclusion (p.29)
  Chapter 3  Human Gesture Recognition (p.32)
    3.1  Introduction to Human Gesture Recognition (p.32)
      3.1.1  Dynamic Gesture Recognition (p.32)
      3.1.2  Hidden Markov Models (HMMs) (p.33)
        3.1.2.1  Applications of HMMs (p.34)
        3.1.2.2  Training Algorithm (p.35)
        3.1.2.3  Recognition Algorithm (p.35)
    3.2  System Architecture (p.36)
      3.2.1  Experimental Devices (p.36)
      3.2.2  Data Acquisition Methods (p.38)
      3.2.3  System Work Flow (p.39)
    3.3  Real-Time Gesture Spotting (p.40)
      3.3.1  Introduction (p.40)
      3.3.2  Gesture Segmentation Based on Standard Deviation Calculation (p.42)
      3.3.3  Evaluation of Gesture Spotting Program (p.47)
    3.4  Comparison of Data Processing Methods (p.48)
      3.4.1  Discrete Cosine Transform (DCT) (p.48)
      3.4.2  Discrete Wavelet Transform (DWT) (p.49)
      3.4.3  Zero Bias Compensation and Filtering (ZBC&F) (p.51)
      3.4.4  Comparison of Experimental Results (p.52)
    3.5  Database Setup (p.53)
    3.6  Experimental Results Based on the Database Obtained from Ten Test Subjects (p.53)
      3.6.1  Experimental Results when Gestures are Manually and Automatically "cut" (p.54)
      3.6.2  The Influence of Number of Dominant Frequencies on Recognition (p.55)
      3.6.3  The Influence of Sampling Frequencies on Recognition (p.59)
      3.6.4  Influence of Number of Test Subjects on Recognition (p.62)
        3.6.4.1  Experimental Results When Training and Testing Subjects Overlap (p.61)
        3.6.4.2  Experimental Results When Training and Testing Subjects Do Not Overlap (p.62)
        3.6.4.3  Discussion (p.65)
  Chapter 4  Conclusion (p.68)
  Bibliography (p.70)
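Section 3.3.2's gesture segmentation based on standard deviation can be illustrated with a toy sliding-window spotter. The window size, threshold, and signal below are invented for the example, not the thesis values.

```python
# A sketch of std-deviation-based gesture spotting: a gesture is assumed
# active wherever the standard deviation of acceleration in a sliding
# window exceeds a threshold. Window size and threshold are assumptions.
import statistics

def spot_gesture(accel, window=4, threshold=0.5):
    """Return (start, end) sample-index pairs of high-variance segments."""
    # mark each sliding window as active when its std-dev exceeds the threshold
    active = [statistics.pstdev(accel[i:i + window]) > threshold
              for i in range(len(accel) - window + 1)]
    segments, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i                                 # first active window
        elif not on and start is not None:
            segments.append((start, i + window - 2))  # last sample of the run
            start = None
    if start is not None:
        segments.append((start, len(accel) - 1))
    return segments

# quiet - burst - quiet acceleration trace (one axis)
signal = [0.0] * 8 + [1.0, -1.0, 1.5, -1.2, 0.9] + [0.0] * 8
segments = spot_gesture(signal)
```

Real data would use three axes and a threshold calibrated on rest periods; the thesis additionally evaluates the spotter against manual segmentation.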
224

Segmentação de nome e endereço por meio de modelos escondidos de Markov e sua aplicação em processos de vinculação de registros / Segmentation of names and addresses through hidden Markov models and its application in record linkage

Rita de Cássia Braga Gonçalves 11 December 2013 (has links)
A segmentação dos nomes nas suas partes constitutivas é uma etapa fundamental no processo de integração de bases de dados por meio das técnicas de vinculação de registros. Esta separação dos nomes pode ser realizada de diferentes maneiras. Este estudo teve como objetivo avaliar a utilização do Modelo Escondido de Markov (HMM) na segmentação de nomes e endereços de pessoas e a eficiência desta segmentação no processo de vinculação de registros. Foram utilizadas as bases do Sistema de Informações sobre Mortalidade (SIM) e do Subsistema de Informação de Procedimentos de Alta Complexidade (APAC) do estado do Rio de Janeiro no período de 1999 a 2004. Uma metodologia composta por oito fases foi proposta para a segmentação de nome e endereço, utilizando rotinas implementadas em PL/SQL e a biblioteca JAHMM, uma implementação na linguagem Java de algoritmos de HMM. Uma amostra aleatória de 100 registros de cada base foi utilizada para verificar a correção do processo de segmentação por meio do modelo HMM. Para verificar o efeito da segmentação do nome por meio do HMM, três processos de vinculação foram aplicados sobre uma amostra das duas bases citadas acima, cada um deles utilizando uma estratégia de segmentação diferente, a saber: 1) divisão dos nomes pela primeira parte, última parte e iniciais do nome do meio; 2) divisão do nome em cinco partes; 3) segmentação segundo o HMM. A aplicação do modelo HMM como mecanismo de segmentação obteve boa concordância quando comparada com o observador humano. As diferentes estratégias de segmentação geraram resultados bastante similares na vinculação de registros, tendo a estratégia 1 obtido um desempenho pouco melhor que as demais. Este estudo sugere que a segmentação de nomes brasileiros por meio do modelo escondido de Markov não é mais eficaz do que métodos tradicionais de segmentação. / The segmentation of names into their constituent parts is a fundamental step in the integration of databases by means of record linkage techniques.
This segmentation can be accomplished in different ways. This study aimed to evaluate the use of Hidden Markov Models (HMM) in the segmentation of names and addresses of people, and the efficiency of this segmentation in the record linkage process. Databases of the Information System on Mortality (SIM, in Portuguese) and the Information Subsystem for High Complexity Procedures (APAC, in Portuguese) of the state of Rio de Janeiro between 1999 and 2004 were used. A method composed of eight stages was proposed for segmenting names and addresses, using routines implemented in PL/SQL and the JAHMM library, a Java implementation of HMM algorithms. A random sample of 100 records from each database was used to verify the correctness of the segmentation process using the hidden Markov model. To verify the effect of segmenting names with the HMM, three record linkage processes were applied to a sample of the aforementioned databases, each using a different segmentation strategy, namely: 1) dividing the name into first part, last part, and middle initials; 2) dividing the name into five parts; 3) segmentation by the HMM. The HMM segmentation mechanism showed good agreement with a human observer. The three linkage processes produced very similar results, with the first strategy performing slightly better than the others. This study suggests that the segmentation of Brazilian names by means of HMM is no more effective than traditional segmentation methods.
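Segmentation strategy 1 above (first part, last part, and middle initials) is easy to sketch. The helper below is an illustrative guess at that splitting rule, not the PL/SQL routines used in the study; in particular it treats Brazilian name particles (de, da, dos, ...) naively as middle parts.

```python
# A naive sketch of segmentation strategy 1: first part, middle
# initials, last part. The handling of name particles is an assumption;
# the study's actual routines are not reproduced here.

def split_name(full_name):
    """Strategy 1: (first part, middle initials, last part)."""
    parts = full_name.split()
    first, last = parts[0], parts[-1]
    middle_initials = "".join(p[0].upper() for p in parts[1:-1])
    return first, middle_initials, last

first, mid, last = split_name("Rita de Cassia Braga Goncalves")
```

The study's finding is precisely that such a simple rule links records about as well as the HMM-based segmentation, and slightly better in their experiments.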
225

Avaliando um rotulador estatístico de categorias morfo-sintáticas para a língua portuguesa / Evaluating a stochastic part-of-speech tagger for the portuguese language

Villavicencio, Aline January 1995 (has links)
O Processamento de Linguagem Natural (PLN) é uma área da Ciência da Computação que vem tentando, ao longo dos anos, aperfeiçoar a comunicação entre o homem e o computador. Várias técnicas têm sido utilizadas para aperfeiçoar esta comunicação, entre elas a aplicação de métodos estatísticos. Estes métodos têm sido usados por pesquisadores de PLN com um crescente sucesso, e uma de suas maiores vantagens é a possibilidade do tratamento de textos irrestritos. Em particular, a aplicação dos métodos estatísticos na marcação automática de "corpus" com categorias morfo-sintáticas tem se mostrado bastante promissora, obtendo resultados surpreendentes. Assim sendo, este trabalho descreve o processo de marcação automática de categorias morfo-sintáticas. Inicialmente, são apresentados e comparados os principais métodos aplicados à marcação automática: os métodos baseados em regras e os métodos estatísticos. São descritos os principais formalismos e técnicas usadas para esta finalidade pelos métodos estatísticos. É introduzida a marcação automática para a Língua Portuguesa, algo até então inédito. O objetivo deste trabalho é fazer um estudo detalhado e uma avaliação do sistema rotulador de categorias morfo-sintáticas, a fim de que se possa definir um padrão no qual o sistema apresente a mais alta precisão possível. Para efetuar esta avaliação, são especificados alguns critérios: a qualidade do "corpus" de treinamento, o seu tamanho e a influência das palavras desconhecidas. A partir dos resultados obtidos, espera-se poder aperfeiçoar o sistema rotulador, de forma a aproveitar, da melhor maneira possível, os recursos disponíveis para a Língua Portuguesa. / Natural Language Processing (NLP) is an area of Computer Science that has been trying to improve communication between human beings and computers. A number of different techniques have been used to improve this communication, among them stochastic methods.
These methods have been used successfully by NLP researchers, and one of their most remarkable advantages is that they can deal with unrestricted texts. In particular, the use of stochastic methods for part-of-speech tagging has achieved some extremely good results. Thus, this work describes the process of part-of-speech tagging. First, we present and compare the main tagging methods: the rule-based methods and the stochastic ones. We describe the main stochastic tagging formalisms and techniques for part-of-speech tagging. We also introduce part-of-speech tagging for the Portuguese language. The main purpose of this work is to study and evaluate a part-of-speech tagger in order to establish a configuration in which it achieves the greatest possible accuracy. To perform this evaluation, several parameters were examined: the quality of the training corpus, its size, and the relation between unknown words and accuracy. The results obtained will be used to improve the tagger, in order to make better use of the available Portuguese language resources.
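The stochastic tagging approach evaluated here rests on decoding the most probable tag sequence under an HMM. A minimal Viterbi decoder follows; the toy tags and probabilities are invented for the example rather than estimated from a corpus.

```python
# A minimal Viterbi decoder for a bigram HMM tagger. The tagset,
# transition, and emission probabilities below are toy assumptions,
# not estimates from any Portuguese corpus.
import math

def viterbi(words, tags, start_p, trans_p, emit_p):
    """Return the most probable tag sequence for `words` (log-space)."""
    # each column maps tag -> (best log-score, best path ending in that tag)
    V = [{t: (math.log(start_p[t]) + math.log(emit_p[t].get(words[0], 1e-6)), [t])
          for t in tags}]
    for w in words[1:]:
        col = {}
        for t in tags:
            best_prev = max(tags, key=lambda p: V[-1][p][0] + math.log(trans_p[p][t]))
            score = (V[-1][best_prev][0] + math.log(trans_p[best_prev][t])
                     + math.log(emit_p[t].get(w, 1e-6)))  # tiny floor for unknown words
            col[t] = (score, V[-1][best_prev][1] + [t])
        V.append(col)
    return max(V[-1].values())[1]

tags = ["DET", "N"]
start_p = {"DET": 0.8, "N": 0.2}
trans_p = {"DET": {"DET": 0.1, "N": 0.9}, "N": {"DET": 0.4, "N": 0.6}}
emit_p = {"DET": {"o": 0.9, "casa": 0.0001}, "N": {"o": 0.05, "casa": 0.6}}
path = viterbi(["o", "casa"], tags, start_p, trans_p, emit_p)
```

The unknown-word floor (1e-6) stands in for the unknown-word handling the dissertation evaluates as one of its criteria.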
228

Spatio-Temporal Data Mining to Detect Changes and Clusters in Trajectories

January 2012 (has links)
abstract: With the rapid development of mobile sensing technologies such as GPS, RFID, and smartphone sensors, capturing position data in the form of trajectories has become easy. Moving-object trajectory analysis is a growing area of interest owing to its applications in domains such as marketing, security, and traffic monitoring and management. To better understand movement behaviors from raw mobility data, this doctoral work provides analytic models for analyzing trajectory data. As a first contribution, a model is developed to detect changes in trajectories over time. If the taxis moving in a city are viewed as sensors that provide real-time information about city traffic, a change in these trajectories over time can reveal that the road network has changed. To detect changes, trajectories are modeled with a Hidden Markov Model (HMM). A modified training algorithm for parameter estimation in HMMs, called m-BaumWelch, is used to develop likelihood estimates under assumed changes and to detect changes in trajectory data over time. Data from vehicles are used to test the change detection method. Secondly, sequential pattern mining is used to develop a model that detects changes in the frequent patterns occurring in trajectory data. The aim is to answer two questions: Are the frequent patterns still frequent in the new data? If they are, has the time-interval distribution in the pattern changed? Two approaches are considered for change detection: a frequency-based approach and a distribution-based approach. The methods are illustrated with vehicle trajectory data. Finally, a model is developed for clustering and outlier detection in semantic trajectories. A challenge with clustering semantic trajectories is that both numeric and categorical attributes are present. Another problem to be addressed while clustering is that trajectories can be of different lengths and can have missing values.
A tree-based ensemble is used to address these problems. The approach is extended to outlier detection in semantic trajectories. / Dissertation/Thesis / Ph.D. Industrial Engineering 2012
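The likelihood-based change detection idea above (score new trajectory data under the trained HMM and flag a drop in likelihood) can be sketched with a scaled forward algorithm. The two-state model, observation alphabet, and threshold below are invented for illustration, and the m-BaumWelch training step is not reproduced.

```python
# Sketch of HMM likelihood-based change detection: compute log P(obs)
# with the scaled forward algorithm under a previously trained model and
# flag a change when the per-step log-likelihood drops. All model
# parameters and the threshold are assumptions for the example.
import math

def forward_loglik(obs, states, start_p, trans_p, emit_p):
    """Scaled forward algorithm: log P(obs | model)."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    c = sum(alpha.values())
    loglik = math.log(c)
    alpha = {s: a / c for s, a in alpha.items()}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in states)
                 for s in states}
        c = sum(alpha.values())               # rescale to avoid underflow
        loglik += math.log(c)
        alpha = {s: a / c for s, a in alpha.items()}
    return loglik

def change_detected(obs, model, threshold=-0.8):
    """Flag a change when average log-likelihood per step falls below threshold."""
    return forward_loglik(obs, *model) / len(obs) < threshold

# toy two-state traffic model: sustained slow ("s") or fast ("f") movement
states = ["slow", "fast"]
start_p = {"slow": 0.5, "fast": 0.5}
trans_p = {"slow": {"slow": 0.9, "fast": 0.1},
           "fast": {"slow": 0.1, "fast": 0.9}}
emit_p = {"slow": {"s": 0.9, "f": 0.1},
          "fast": {"s": 0.1, "f": 0.9}}
model = (states, start_p, trans_p, emit_p)
```

A sequence with the sustained regimes the model expects scores well; rapid alternation, which the sticky transition matrix makes unlikely, scores poorly and is flagged.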
229

Adaptive Methods within a Sequential Bayesian Approach for Structural Health Monitoring

January 2013 (has links)
abstract: Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion and observations are obtained using the predicted, optimal observation modes based on these characteristics. Calculation of the predicted mean square error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed. 
With this approach, the real time computational burden is decreased significantly and the number of possible observation modes can be increased. Using sensor measurements from real experiments, the overall sequential Bayesian estimation approach, with the adaptive capability of varying the state dynamics and observation modes, is demonstrated for tracking crack damage. / Dissertation/Thesis / Ph.D. Electrical Engineering 2013
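The predict/update cycle of the sequential Bayesian monitoring described above can be illustrated with a toy discrete Bayes filter over coarse crack-length bins. The transition and observation models below are invented for the example and are far simpler than the dissertation's (no interacting multiple models, no time-frequency features).

```python
# A toy discrete Bayes filter in the spirit of sequential crack-length
# estimation: predict the damage state forward, then update with a noisy
# measurement. All probabilities here are illustrative assumptions.

def predict(belief):
    """Propagate the belief one load cycle through the growth dynamics."""
    new = [0.0] * len(belief)
    for s, p in enumerate(belief):
        if s + 1 < len(belief):
            new[s] += 0.7 * p        # crack stays in its bin
            new[s + 1] += 0.3 * p    # crack grows one bin
        else:
            new[s] += p              # largest bin is absorbing
    return new

def update(belief, z, noise=0.2):
    """Bayes update with a noisy crack-length measurement z (a bin index)."""
    post = []
    for s, p in enumerate(belief):
        if s == z:
            lik = 1 - 2 * noise      # sensor reports the true bin
        elif abs(s - z) == 1:
            lik = noise              # or an adjacent bin
        else:
            lik = 1e-9               # gross errors (almost) ruled out
        post.append(lik * p)
    total = sum(post)
    return [p / total for p in post]

belief = [1.0, 0.0, 0.0, 0.0]        # start certain there is no growth
for z in [0, 1, 1, 2]:               # a stream of noisy measurements
    belief = update(predict(belief), z)
```

The dissertation's contribution sits largely in what this sketch omits: switching the dynamics among loading modes and choosing the observation mode that minimizes predicted mean squared error.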
230

Analyse probabiliste, étude combinatoire et estimation paramétrique pour une classe de modèles de croissance de plantes avec organogenèse stochastique / Probability analysis, combinatorial study and parametric estimation for a class of growth models of plants with stochastic development

Loi, Cédric 31 May 2011 (has links)
Dans cette thèse, nous nous intéressons à une classe particulière de modèles stochastiques de croissance de plantes structure-fonction à laquelle appartient le modèle GreenLab. L'objectif est double. En premier lieu, il s'agit d'étudier les processus stochastiques sous-jacents à l'organogenèse. Un nouveau cadre de travail combinatoire reposant sur l'utilisation de grammaires formelles a été établi dans le but d'étudier la distribution des nombres d'organes ou, plus généralement, des motifs dans la structure des plantes. Ce travail a abouti à la mise en place d'une méthode symbolique permettant le calcul de distributions associées à l'occurrence de mots dans des textes générés aléatoirement par des L-systèmes stochastiques. La deuxième partie de la thèse se concentre sur l'estimation des paramètres liés au processus de création de biomasse par photosynthèse et à son allocation. Le modèle de plante est alors écrit sous la forme d'un modèle de Markov caché et des méthodes d'inférence bayésienne sont utilisées pour résoudre le problème. / This PhD focuses on a particular class of stochastic functional-structural plant growth models to which the GreenLab model belongs. First, the stochastic processes underlying the organogenesis phenomenon were studied. A new combinatorial framework based on formal grammars was built to study the distributions of the numbers of organs, or more generally of patterns, in plant structures. This work led to a symbolic method which allows the computation of the distributions associated with word occurrences in random texts generated by stochastic L-systems. The second part of the PhD tackles the estimation of the parameters of the functional submodel (linked to the creation of biomass by photosynthesis and its allocation). For this purpose, the plant model is described by a hidden Markov model and Bayesian inference methods are used to solve the problem.
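The object of study in the first part (distributions of organ counts under a stochastic L-system) can be illustrated by Monte-Carlo simulation. The thesis computes such distributions symbolically via formal grammars, so the rule set and simulation below are only an illustrative stand-in.

```python
# Monte-Carlo sketch of organ-count distributions under a stochastic
# L-system. The toy grammar below (a meristem M becomes an internode I
# plus one or two meristems) is an assumption for the example, not a
# GreenLab rule set.
import random

RULES = {"M": [(0.5, "IM"), (0.5, "IMM")]}

def rewrite(word, rng):
    """Apply one parallel derivation step of the stochastic L-system."""
    out = []
    for ch in word:
        if ch in RULES:
            r, acc = rng.random(), 0.0
            for p, rhs in RULES[ch]:          # sample a production rule
                acc += p
                if r <= acc:
                    out.append(rhs)
                    break
        else:
            out.append(ch)                    # terminal symbols copied as-is
    return "".join(out)

def organ_count(n_steps, rng):
    word = "M"
    for _ in range(n_steps):
        word = rewrite(word, rng)
    return word.count("I")                    # occurrences of the pattern "I"

rng = random.Random(42)
samples = [organ_count(3, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)            # analytic expectation: 1 + 1.5 + 1.5**2 = 4.75
```

After three steps the internode count lies between 3 (always one new meristem) and 7 (always two); the symbolic method of the thesis would deliver this whole distribution exactly instead of estimating it.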
