
L'électrophysiologie temps-réel en neuroscience cognitive : vers des paradigmes adaptatifs pour l'étude de l'apprentissage et de la prise de décision perceptive chez l'homme / Real-time electrophysiology in cognitive neuroscience : towards adaptive paradigms to study perceptual learning and decision making in humans

Sanchez, Gaëtan 27 June 2014 (has links)
Aujourd’hui, les modèles computationnels de l'apprentissage et de la prise de décision chez l'homme se sont raffinés et complexifiés pour prendre la forme de modèles génératifs des données psychophysiologiques de plus en plus réalistes d’un point de vue neurobiologique et biophysique. Dans le même temps, le nouveau champ de recherche des interfaces cerveau-machine (ICM) s’est développé de manière exponentielle. L'objectif principal de cette thèse était d'explorer comment le paradigme de l'électrophysiologie temps-réel peut contribuer à élucider les processus d'apprentissage et de prise de décision perceptive chez l’homme. Au niveau expérimental, j'ai étudié les décisions perceptives somatosensorielles grâce à des tâches de discrimination de fréquence tactile. En particulier, j'ai montré comment un contexte sensoriel implicite peut influencer nos décisions. Grâce à la magnétoencéphalographie (MEG), j'ai pu étudier les mécanismes neuronaux qui sous-tendent cette adaptation perceptive. L’ensemble de ces résultats renforce l'hypothèse de la construction implicite d’un a priori ou d'une référence interne au cours de l'expérience. Aux niveaux théoriques et méthodologiques, j'ai proposé une vue générique de la façon dont l'électrophysiologie temps-réel pourrait être utilisée pour optimiser les tests d'hypothèses, en adaptant le dessin expérimental en ligne. J'ai pu fournir une première validation de cette démarche adaptative pour maximiser l'efficacité du dessin expérimental au niveau individuel. Ce travail révèle des perspectives en neurosciences fondamentales et cliniques ainsi que pour les ICM / Today, psychological as well as physiological models of perceptual learning and decision-making processes have recently become more biologically plausible, leading to more realistic (and more complex) generative models of psychophysiological observations. 
In parallel, the young but exponentially growing field of Brain-Computer Interfaces (BCI) provides new tools and methods to analyze (mostly) electrophysiological data online. The main objective of this PhD thesis was to explore how the BCI paradigm could contribute to a better understanding of perceptual learning and decision-making processes in humans. At the empirical level, I studied decisions based on tactile stimuli, namely somatosensory frequency discrimination. More specifically, I showed how an implicit sensory context biases our decisions. Using magnetoencephalography (MEG), I was able to decipher some of the neural correlates of those perceptual adaptive mechanisms. These findings support the hypothesis that an internal perceptual reference builds up over the course of the experiment. At the theoretical and methodological levels, I proposed a generic view and method of how real-time electrophysiology could be used to optimize hypothesis testing by adapting the experimental design online. I demonstrated the validity of this online adaptive design optimization (ADO) approach for maximizing design efficiency at the individual level. I also discussed the implications of this work for basic and clinical neuroscience as well as for BCI itself.
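The online adaptive design optimization idea described in this abstract can be sketched as a Bayesian loop: maintain a posterior over the quantity of interest and, on each trial, present the stimulus that maximizes the expected information gain of the next response. The sketch below is illustrative only, not the thesis's MEG pipeline; the grid of candidate internal reference frequencies, the logistic response model, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretized parameter grid: the subject's internal reference frequency (Hz)
refs = np.linspace(18.0, 26.0, 41)
prior = np.full(refs.size, 1.0 / refs.size)       # flat prior over references

def p_higher(stim, ref, slope=1.5):
    """Probability the subject answers 'higher' for a given stimulus frequency."""
    return 1.0 / (1.0 + np.exp(-(stim - ref) / slope))

def expected_info_gain(stim, belief):
    """Mutual information between the next binary response and the parameter."""
    like = p_higher(stim, refs)                   # P(resp=1 | ref) on the grid
    marg = np.sum(belief * like)                  # P(resp=1) under current belief
    def H(p):                                     # binary entropy, safe at 0 and 1
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return H(marg) - np.sum(belief * H(like))

candidates = np.linspace(16.0, 28.0, 25)
true_ref = 22.0
posterior = prior.copy()
for _ in range(40):                               # simulated adaptive run
    gains = [expected_info_gain(s, posterior) for s in candidates]
    stim = candidates[int(np.argmax(gains))]      # most informative stimulus
    resp = rng.random() < p_higher(stim, true_ref)
    like = p_higher(stim, refs) if resp else 1.0 - p_higher(stim, refs)
    posterior = posterior * like
    posterior /= posterior.sum()

estimate = float(np.sum(posterior * refs))        # posterior mean of the reference
```

With an informative design chosen at every trial, the posterior concentrates near the true reference after a few dozen trials, which is the sense in which online ADO "maximizes design efficiency at the individual level".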

"Testes de hipótese e critério bayesiano de seleção de modelos para séries temporais com raiz unitária" / "Hypothesis testing and bayesian model selection for time series with a unit root"

Silva, Ricardo Gonçalves da 23 June 2004 (has links)
A literatura referente a testes de hipótese em modelos auto-regressivos que apresentam uma possível raiz unitária é bastante vasta e engloba pesquisas oriundas de diversas áreas. Nesta dissertação, inicialmente, buscou-se realizar uma revisão dos principais resultados existentes, oriundos tanto da visão clássica quanto da bayesiana de inferência. No que concerne ao ferramental clássico, o papel do movimento browniano foi apresentado de forma detalhada, buscando-se enfatizar a sua aplicabilidade na dedução de estatísticas assintóticas para a realização dos testes de hipótese relativos à presença de uma raiz unitária. Com relação à inferência bayesiana, foi inicialmente conduzido um exame detalhado do status corrente da literatura. A seguir, foi realizado um estudo comparativo em que se testa a hipótese de raiz unitária com base na probabilidade da densidade a posteriori do parâmetro do modelo, considerando as seguintes densidades a priori: Flat, Jeffreys, Normal e Beta. A inferência foi realizada com base no algoritmo Metropolis-Hastings, usando a técnica de simulação de Monte Carlo por Cadeias de Markov (MCMC). Poder, tamanho e confiança dos testes apresentados foram computados com o uso de séries simuladas. Finalmente, foi proposto um critério bayesiano de seleção de modelos, utilizando as mesmas distribuições a priori do teste de hipótese. Ambos os procedimentos foram ilustrados com aplicações empíricas a séries temporais macroeconômicas. / Testing the unit root hypothesis in nonstationary autoregressive models has been a research topic spanning many academic areas. As a first step in approaching this issue, this dissertation includes an extensive review highlighting the main results provided by classical and Bayesian inference methods.
Concerning the classical approach, the role of Brownian motion is discussed in detail, emphasizing its application in deriving asymptotic statistics for testing the existence of a unit root in a time series. For the Bayesian approach, a detailed discussion is likewise provided. Turning to the empirical side of the dissertation, we implemented a comparative study testing for a unit root based on the posterior density of the model's parameter, taking into account the following prior densities: flat, Jeffreys, normal and beta. Inference is based on the Metropolis-Hastings algorithm and the Markov chain Monte Carlo (MCMC) technique. Simulated time series are used to compute the size, power and confidence of the proposed unit root hypothesis tests. Finally, we propose a Bayesian model selection criterion based on the same prior distributions used in the hypothesis tests. Both procedures are illustrated empirically through applications to macroeconomic time series.

Inferência estatística em métodos de análise de ressonância magnética funcional / Statistical Inference in Methods of Analysis of Functional Magnetic Resonance

Cabella, Brenno Caetano Troca 11 April 2008 (has links)
No presente trabalho, conceitos de inferência estatística são utilizados para aplicação e comparação de diferentes métodos de análise de sinais de ressonância magnética funcional. A ideia central baseia-se na obtenção da distribuição de probabilidade da variável aleatória de interesse, para cada método estudado e sob diferentes valores da relação sinal-ruído (SNR). Este objetivo é atingido através de simulações numéricas da função resposta hemodinâmica (HRF) acrescida de ruído gaussiano. Tal procedimento nos permite avaliar a sensibilidade e a especificidade dos métodos empregados através da construção das curvas ROC (receiver operating characteristic) para diferentes valores de SNR. Sob específicas condições experimentais, aplicamos métodos clássicos de análise (teste t de Student e correlação), medidas de informação (distância de Kullback-Leibler e sua forma generalizada) e um método bayesiano (método do pixel independente). Em especial, mostramos que a distância de Kullback-Leibler (D) (ou entropia relativa) e sua forma generalizada são medidas úteis para análise de sinais dentro do cenário de teoria da informação. Estas entropias são usadas como medidas da "distância" entre as funções de probabilidade p1 e p2 dos níveis do sinal relacionados a estímulo e repouso. Para prevenir a ocorrência de valores divergentes de D, introduzimos um pequeno parâmetro d nas definições de p1 e p2. Estendemos a análise, apresentando um estudo original da distância de Kullback-Leibler generalizada Dq (q é o parâmetro de Tsallis). Neste caso, a escolha apropriada do intervalo 0 < q < 1 permite assegurar que Dq seja finito. Obtemos as densidades de probabilidade f(D) e f(Dq) das médias amostrais das variáveis D e Dq, respectivamente, calculadas ao longo das N épocas de todo o experimento. Para pequenos valores de N (N < 30), mostramos que f(D) e f(Dq) são muito bem aproximadas por distribuições Gama (χ² < 0,0009).
Em seguida, estudamos o método (bayesiano) do pixel independente, considerando a probabilidade a posteriori como variável aleatória e obtendo sua distribuição para várias SNRs e probabilidades a priori. Os resultados das simulações apontam para o fato de que a correlação e o método do pixel independente apresentam melhor desempenho do que os demais métodos empregados (para SNR > -20 dB). Contudo, deve-se ponderar que o teste t e os métodos entrópicos compartilham da vantagem de não se utilizarem de um modelo para a HRF na análise de dados reais. Finalmente, para os diferentes métodos, obtemos os mapas funcionais correspondentes a séries de dados reais de um voluntário assintomático submetido a estímulo motor de evento relacionado, os quais demonstram ativação nas áreas cerebrais motoras primária e secundária. Enfatizamos que o procedimento adotado no presente estudo pode, em princípio, ser utilizado em outros métodos e sob diferentes condições experimentais. / In the present work, concepts of statistical inference are used to apply and compare different methods of signal analysis in functional magnetic resonance imaging. The central idea is to obtain the probability distribution of the random variable of interest, for each method studied, under different values of the signal-to-noise ratio (SNR). This is achieved by means of numerical simulations of the hemodynamic response function (HRF) with added Gaussian noise. This procedure allows us to assess the sensitivity and specificity of the methods employed through the construction of ROC (receiver operating characteristic) curves for different values of SNR. Under specific experimental conditions, we apply classical methods of analysis (Student's t test and correlation), information measures (the Kullback-Leibler distance and its generalized form) and a Bayesian method (the independent pixel method).
In particular, we show that the Kullback-Leibler distance D (or relative entropy) and its generalized form are useful measures for signal analysis within the information theory scenario. These entropies are used as measures of the "distance" between the probability functions p1 and p2 of the signal levels related to stimulus and non-stimulus. In order to avoid undesirable divergences of D, we introduce a small parameter d in the definitions of p1 and p2. We extend this analysis by presenting an original study of the generalized Kullback-Leibler distance Dq (q is the Tsallis parameter). In this case, the appropriate choice of the range 0 < q < 1 ensures that Dq is finite. We obtain the probability densities f(D) and f(Dq) of the sample averages of the variables D and Dq, respectively, calculated over the N epochs of the entire experiment. For small values of N (N < 30), we show that f(D) and f(Dq) are well approximated by Gamma distributions (χ² < 0.0009). Afterward, we study the independent pixel Bayesian method, considering the posterior probability as a random variable and obtaining its distribution for various SNRs and prior probabilities. The simulation results point to the fact that correlation and the independent pixel method perform better than the other methods used (for SNR > -20 dB). However, one should consider that Student's t test and the entropic methods share the advantage of not requiring a model for the HRF in real data analysis. Finally, for the different methods, we obtain the functional maps corresponding to real data series from an asymptomatic volunteer submitted to an event-related motor stimulus, which show activation in the primary and secondary motor brain areas. We emphasize that the procedure adopted in this study may, in principle, be used with other methods and under different experimental conditions.
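The two information measures described above are easy to state in code. The sketch below builds regularized signal-level histograms p1 and p2 (with the small parameter d keeping D finite) and computes both the Kullback-Leibler distance and one common form of its Tsallis generalization with 0 < q < 1. The simulated "stimulus" and "rest" signals and all constants are assumptions for illustration, and the exact generalized form used in the thesis may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def level_histogram(signal, bins, d=1e-3):
    """Probability function of signal levels, regularized by a small d
    so that the KL distance below stays finite (as in the abstract)."""
    p, _ = np.histogram(signal, bins=bins)
    p = p.astype(float) + d
    return p / p.sum()

def kl(p1, p2):
    """Kullback-Leibler distance D (relative entropy)."""
    return float(np.sum(p1 * np.log(p1 / p2)))

def kl_tsallis(p1, p2, q=0.5):
    """A Tsallis-generalized KL distance; 0 < q < 1 keeps it finite."""
    return float((1.0 - np.sum(p1**q * p2**(1.0 - q))) / (1.0 - q))

# Hypothetical stimulus vs rest signal levels (HRF-like mean shift plus noise)
rest = rng.normal(0.0, 1.0, 2000)
stim = rng.normal(1.0, 1.0, 2000)
bins = np.linspace(-4, 5, 40)
p1, p2 = level_histogram(stim, bins), level_histogram(rest, bins)

D = kl(p1, p2)
Dq = kl_tsallis(p1, p2, q=0.5)
```

Both measures are zero when the two level distributions coincide and strictly positive otherwise, which is what makes them usable as activation statistics.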

Application of random matrix theory to future wireless flexible networks. / Application des matrices aléatoires aux futurs réseaux flexibles de communications sans fil

Couillet, Romain 12 November 2010 (has links)
Il est attendu que les radios flexibles constituent un tournant technologique majeur dans le domaine des communications sans fil. Le point de vue adopté en radios flexibles est de considérer les canaux de communication comme un ensemble de ressources qui peuvent être accédées sur demande par un réseau primaire sous licence ou de manière opportuniste par un réseau secondaire à plus faible priorité. Du point de vue de la couche physique, le réseau primaire n’a aucune information sur l’existence de réseaux secondaires, de sorte que ces derniers doivent explorer l’environnement aérien de manière autonome à la recherche d’opportunités spectrales et exploiter ces ressources de manière optimale. Les phases d’exploration et d’exploitation, qui impliquent la gestion de nombreux agents, doivent être très fiables, rapides et efficaces. L’objectif de cette thèse est de modéliser, d’analyser et de proposer des solutions efficaces et quasi optimales pour ces dernières opérations. En ce qui concerne la phase d’exploration, nous calculons le test optimal de Neyman-Pearson de détection de plusieurs sources primaires via un réseau de capteurs. Cette procédure permet à un réseau secondaire d’établir la présence de ressources spectrales disponibles. La complexité calculatoire de l’approche optimale appelle cependant la mise en place de méthodes moins onéreuses, que nous rappelons et discutons. Nous étendons alors le test de détection en l’estimation aveugle de la position de sources multiples, qui permet l’acquisition d’informations détaillées sur les ressources spectrales disponibles. Le second volet de cette thèse est consacré à la phase d’exploitation optimale des ressources au niveau du réseau secondaire. Pour ce faire, nous obtenons une approximation fine du débit ergodique d’un canal multi-antennes à accès multiples et proposons des solutions peu coûteuses en termes de feedback afin que les réseaux secondaires s’adaptent rapidement aux évolutions rapides du réseau primaire.
/ Future cognitive radio networks are expected to come as a disruptive technological advance in the currently saturated field of wireless communications. The idea behind cognitive radios is to think of the wireless channels as a pool of communication resources, which can be accessed on demand by a primary licensed network or opportunistically preempted (or overlaid) by a secondary network with lower access priority. From a physical layer point of view, the primary network is ideally oblivious of the existence of co-localized secondary networks. The latter are therefore required to autonomously explore the air in search of resource left-overs, and then to optimally exploit the available resources. The exploration and exploitation procedures, which involve multiple interacting agents, are required to be highly reliable, fast and efficient. The objective of this thesis is to model, analyse and propose computationally efficient and close-to-optimal solutions to the above operations. Regarding the exploration phase, we first resort to the maximum entropy principle to derive communication models with many unknowns, from which we derive the optimal multi-source multi-sensor Neyman-Pearson signal sensing procedure. The latter allows a secondary network to detect the presence of spectral left-overs. The computational complexity of the optimal approach however calls for simpler techniques, which are recollected and discussed. We then extend the signal sensing approach to the more advanced problem of blind user localization, which provides further valuable information for overlaying occupied spectral resources. The second part of the thesis is dedicated to the exploitation phase, that is, the optimal sharing of available resources. To this end, we derive an asymptotically accurate approximation of the uplink ergodic sum rate of a multi-antenna multiple-access channel and propose solutions for cognitive radios to adapt rapidly to the evolution of the primary network at minimum feedback cost for the secondary networks.
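The abstract contrasts the optimal Neyman-Pearson sensing test with simpler, low-complexity alternatives. The sketch below illustrates the simpler end of that trade-off: a classical multi-sensor energy detector, evaluated by Monte Carlo at a fixed false-alarm rate. The scenario parameters (number of sensors, samples, SNR) are assumptions; this is not the detector derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setting: K sensors, N samples each; decide "primary user present?"
K, N, snr = 4, 64, 0.5
trials = 2000

def energy(y):
    """Energy detector: a classical low-complexity alternative to the
    optimal Neyman-Pearson test discussed in the abstract."""
    return np.sum(np.abs(y)**2)

# Monte Carlo under H0 (noise only) and H1 (signal plus noise)
e0 = np.array([energy(rng.normal(size=(K, N))) for _ in range(trials)])
sig = lambda: np.sqrt(snr) * rng.normal(size=(K, N))
e1 = np.array([energy(sig() + rng.normal(size=(K, N))) for _ in range(trials)])

# Threshold set for a 5% false-alarm rate, then detection probability measured
thr = np.quantile(e0, 0.95)
p_fa = float(np.mean(e0 > thr))
p_d = float(np.mean(e1 > thr))
```

Sweeping the threshold traces out the detector's ROC curve, which is the standard way such suboptimal-but-cheap schemes are compared against the Neyman-Pearson optimum.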

Modélisation statistique de la mortalité maternelle et néonatale pour l'aide à la planification et à la gestion des services de santé en Afrique Sub-Saharienne / Statistical modeling of maternal and neonatal mortality for help in planning and management of health services in sub-Saharan Africa

Ndour, Cheikh 19 May 2014 (has links)
L'objectif de cette thèse est de proposer une méthodologie statistique permettant de formuler une règle de classement capable de surmonter les difficultés qui se présentent dans le traitement des données lorsque la distribution a priori de la variable réponse est déséquilibrée. Notre proposition est construite autour d'un ensemble particulier de règles d'association appelées "class association rules". Dans le chapitre II, nous avons exposé les bases théoriques qui sous-tendent la méthode. Nous avons utilisé les indicateurs de performance usuels existant dans la littérature pour évaluer un classifieur. À chaque règle "class association rule" est associé un classifieur faible engendré par l'antécédent de la règle, que nous appelons profil. L'idée de la méthode est alors de combiner un nombre réduit de classifieurs faibles pour constituer une règle de classement performante. Dans le chapitre III, nous avons développé les différentes étapes de la procédure d'apprentissage statistique lorsque les observations sont indépendantes et identiquement distribuées. On distingue trois grandes étapes : (1) une étape de génération d'un ensemble initial de profils, (2) une étape d'élagage de profils redondants et (3) une étape de sélection d'un ensemble optimal de profils. Pour la première étape, nous avons utilisé l'algorithme "apriori", reconnu comme l'un des algorithmes de base pour l'exploration des règles d'association. Pour la deuxième étape, nous avons proposé un test stochastique. Et pour la dernière étape, un test asymptotique est effectué sur le rapport des valeurs prédictives positives des classifieurs lorsque les profils générateurs respectifs sont emboîtés. Il en résulte un ensemble réduit et optimal de profils dont la combinaison produit une règle de classement performante. Dans le chapitre IV, nous avons proposé une extension de la méthode d'apprentissage statistique lorsque les observations ne sont pas identiquement distribuées.
Il s'agit précisément d'adapter la procédure de sélection de l'ensemble optimal lorsque les données ne sont pas identiquement distribuées. L'idée générale consiste à faire une estimation bayésienne de toutes les valeurs prédictives positives des classifieurs faibles. Par la suite, à l'aide du facteur de Bayes, on effectue un test d'hypothèse sur le rapport des valeurs prédictives positives lorsque les profils sont emboîtés. Dans le chapitre V, nous avons appliqué la méthodologie mise en place dans les chapitres précédents aux données du projet QUARITE concernant la mortalité maternelle au Sénégal et au Mali. / The aim of this thesis is to design a supervised statistical learning methodology that can overcome the weakness of standard methods when the prior distribution of the response variable is unbalanced. The proposed methodology is built using class association rules. Chapter II covers the theoretical basis of the statistical learning method by relating various classifier performance metrics to class association rules. Since the classifier corresponding to a class association rule is a weak classifier, we propose to select a small number of such weak classifiers and to combine them in order to build an efficient classifier. In Chapter III, we develop the different steps of the statistical learning method when observations are independent and identically distributed. There are three main steps: in the first step, an initial set of patterns correlated with the target class is generated using the "apriori" algorithm. In the second step, we propose a hypothesis test to prune redundant patterns. In the third step, a hypothesis test is performed based on the ratio of the positive predictive values of the classifiers when the respective generating patterns are nested. This results in a reduced and optimal set of patterns whose combination provides an efficient classifier.
In Chapter IV, we extend the classification method proposed in Chapter III to handle the case where observations are not identically distributed. The aim here is to adapt the procedure for selecting the optimal set of patterns when the data are grouped. In this setting, we compute the estimate of the positive predictive values as the mean of the posterior distribution of the target class probability, using the empirical Bayes method. Thereafter, using the Bayes factor, a hypothesis test based on the ratio of the positive predictive values is carried out when patterns are nested. Chapter V is devoted to applying the proposed methodology to a real-world dataset: we studied the QUARITE project dataset on maternal mortality in Senegal and Mali in order to provide a decision-making tree that health care professionals can refer to when managing patients delivering in their health facilities.
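The core idea above, that each class association rule's antecedent yields a weak classifier scored by its positive predictive value (PPV), and that a more specific nested pattern is kept only if it clearly improves the PPV, can be illustrated on synthetic data. Everything below is assumed for illustration (the data, the patterns, and the simple ratio rule standing in for the thesis's formal hypothesis tests); it is not the QUARITE analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical binary dataset: two risk-factor patterns and an unbalanced target
n = 5000
a = rng.random(n) < 0.3                 # pattern A (antecedent of one rule)
b = rng.random(n) < 0.4                 # pattern B
base = 0.02                             # low baseline positive rate outside A
p_pos = np.where(a & b, 0.50, np.where(a, 0.15, base))
y = rng.random(n) < p_pos

def ppv(pattern, target):
    """Positive predictive value of the weak classifier 'predict 1 on pattern'."""
    return float(target[pattern].mean())

ppv_a = ppv(a, y)
ppv_ab = ppv(a & b, y)                  # nested, more specific pattern

# Keep the nested pattern only if it clearly improves the PPV
# (a crude stand-in for the asymptotic / Bayes-factor tests of the thesis)
keep_nested = ppv_ab / ppv_a > 1.5

# The final rule predicts the positive class on the selected profile(s)
pred = (a & b) if keep_nested else a
```

On this synthetic example the nested pattern roughly doubles the PPV of the broader one, so it survives the selection step; the thesis replaces this ad-hoc ratio with proper tests on nested generating patterns.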

Apprenabilité dans les problèmes de l'inférence séquentielle / Learnability in problems of sequential inference

Ryabko, Daniil 19 December 2011 (has links) (PDF)
The presented work concerns the possibility of statistical inference from sequential data. The problem is as follows: given a sequence of observations x_1,...,x_n,..., one wishes to make inferences about the random process that produced the sequence. Several problems, with numerous applications across different areas of mathematics and computer science, can be formulated in this way. For instance, one may wish to predict the probability of the next observation x_{n+1} (the sequential prediction problem); to decide whether the random process producing the sequence belongs to a given set H_0 versus a different set H_1 (hypothesis testing); or to take actions so as to maximize some utility function. In each of these problems, some assumptions on the process producing the data must be made for inference to be possible. The central question addressed in this work is: under which assumptions is inference possible? This question is posed and analysed for several different inference problems, including sequential prediction, hypothesis testing, classification and reinforcement learning.
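One concrete instance of the sequential prediction problem above is predicting the next symbol of a binary sequence; under the i.i.d. hypothesis (one possible set of assumptions on the process), the classical Laplace rule-of-succession predictor is consistent. A minimal sketch with an assumed Bernoulli source:

```python
import numpy as np

def laplace_predict(history):
    """Laplace (rule of succession) predictor for binary sequences:
    P(x_{n+1} = 1 | x_1..x_n) = (#ones + 1) / (n + 2)."""
    return (sum(history) + 1) / (len(history) + 2)

rng = np.random.default_rng(8)
theta = 0.7                                      # true Bernoulli parameter
seq = (rng.random(5000) < theta).astype(int)

# Predictions improve as more of the sequence is observed
preds = [laplace_predict(seq[:n]) for n in range(0, 5000, 500)]
final = laplace_predict(seq)
```

With no data the predictor outputs 1/2, and as n grows it converges to the source probability; for dependent or non-stationary processes this guarantee fails, which is exactly the kind of "under which assumptions?" question the thesis studies.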

A Study of Gamma Distributions and Some Related Works

Chou, Chao-Wei 11 May 2004 (has links)
Characterization of distributions has been an important topic in statistical theory for decades. Although many well-known results have already been developed, it is still of great interest to find new characterizations of commonly used distributions, such as the normal or gamma distribution. In practice, we sometimes guess the distribution to be fitted to the observed data, and sometimes use the characteristic properties of those distributions to choose it. In this work we restrict our attention to characterizations of the gamma distribution, as well as some related studies on the corresponding parameter estimation based on the characterization properties. Some simulation studies are also given.
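To make the flavor of such results concrete, here is an illustrative sketch (not from the thesis): method-of-moments estimation of gamma parameters from simulated data, together with a numerical check of the Lukacs property, the independence of X/(X+Y) and X+Y for independent gamma variables sharing a scale, which is a classical characterization of the gamma law.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated sample from a gamma distribution (shape k, scale theta)
k_true, theta_true = 3.0, 2.0
x = rng.gamma(k_true, theta_true, size=20000)

# Method-of-moments estimators: E[X] = k*theta, Var[X] = k*theta^2
m, v = x.mean(), x.var()
theta_hat = v / m
k_hat = m / theta_hat

# Lukacs-style check: for independent gammas with a common scale,
# X/(X+Y) is independent of X+Y (here checked via sample correlation)
y = rng.gamma(1.5, theta_true, size=20000)
r, s = x / (x + y), x + y
corr = float(np.corrcoef(r, s)[0, 1])   # should be near zero
```

The near-zero correlation is only a necessary symptom of the independence in the characterization; the thesis works with the exact distributional statements and uses them to motivate estimators.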

An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation.

Lin, TsungPo 26 June 2008 (has links)
Performance engineers face a major challenge in modeling and simulation of after-market power systems, owing to system degradation and measurement errors. Currently, most of the power generation industry uses deterministic data matching to calibrate models and cascade system degradation, which introduces significant calibration uncertainty and thus risk in providing performance guarantees. In this research, maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing deterministic data matching with SDRMC, one can reduce calibration uncertainty and mitigate error propagation into the performance simulation. A modeling and simulation environment has been developed for a complex power system with a certain degree of degradation. In this environment, multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by comparing the performance of the calibrated model with its expected new-and-clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated in the second stage, in which serial bias compensation or a robust M-estimator is employed. To achieve better efficiency in the combined scheme of least-squares based data reconciliation and hypothesis-testing based GED, the Levenberg-Marquardt (LM) algorithm is used as the optimizer.
To reduce computation time and stabilize problem solving for a complex power system such as a combined-cycle power plant, meta-modeling using response surface equations (RSE) and system/process decomposition are incorporated into the simultaneous SDRMC scheme. The goal of this research is to reduce calibration uncertainties and, thus, the risks in providing performance guarantees that arise from uncertainties in performance simulation.
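The reconciliation step itself can be illustrated on a toy linear balance, where the weighted least-squares problem has a closed form (the nonlinear plant models of the thesis require an iterative optimizer such as Levenberg-Marquardt instead). The flow network, constraint and noise levels below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical steady-state flow network: f1 + f2 = f3 (mass balance A x = 0)
A = np.array([[1.0, 1.0, -1.0]])
true_x = np.array([10.0, 5.0, 15.0])

rng = np.random.default_rng(6)
sigma = np.array([0.2, 0.1, 0.3])                 # measurement std devs
x_meas = true_x + rng.normal(0, sigma)

# Weighted least-squares reconciliation (linear constraints => closed form):
#   minimize (x - x_meas)' V^{-1} (x - x_meas)  subject to  A x = 0
V = np.diag(sigma**2)
gain = V @ A.T @ np.linalg.inv(A @ V @ A.T)
x_rec = x_meas - gain @ (A @ x_meas)

residual_before = float(np.abs(A @ x_meas)[0])    # balance violation, raw data
residual_after = float(np.abs(A @ x_rec)[0])      # should be ~0 after reconciliation
```

The reconciled values satisfy the balance exactly while staying as close as possible (in the measurement-variance metric) to the raw data, which is the statistical core that SDRMC extends with simultaneous model calibration and gross error detection.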

A comparative study of permutation procedures

Van Heerden, Liske 30 November 1994 (has links)
The unique problems encountered when analyzing weather data sets - that is, measurements taken while conducting a meteorological experiment - have forced statisticians to reconsider conventional analysis methods and investigate permutation test procedures. The problems encountered when analyzing weather data sets are simulated in a Monte Carlo study, and the results of the parametric and permutation t-tests are compared with regard to significance level, power, and average confidence interval length. Seven population distributions are considered: three are variations of the normal distribution, and the others are the gamma, lognormal, rectangular and empirical distributions. The normal distribution contaminated with zero measurements is also simulated. In the simulated situations in which the variances are unequal, the permutation test procedure was performed using other test statistics, namely the Scheffé, Welch and Behrens-Fisher test statistics. / Mathematical Sciences / M. Sc. (Statistics)
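The Monte Carlo comparison described above can be illustrated with a small two-sample permutation test on the difference of means. The gamma populations (skewed, weather-like data), sample sizes and permutation count below are assumptions for illustration, not the study's actual simulation settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def perm_test(x, y, n_perm=4000):
    """Two-sided permutation test on the difference of sample means."""
    obs = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random relabeling of the pooled data
        d = pooled[:x.size].mean() - pooled[x.size:].mean()
        if abs(d) >= abs(obs):
            count += 1
    return (count + 1) / (n_perm + 1)            # permutation p-value

# Skewed, weather-like populations (gamma), where normality is doubtful
x = rng.gamma(2.0, 1.0, 50)                      # mean 2
y = rng.gamma(2.0, 3.0, 50)                      # mean 6: a genuine difference
p_diff = perm_test(x, y)

x2 = rng.gamma(2.0, 1.0, 50)                     # identical populations
y2 = rng.gamma(2.0, 1.0, 50)
p_null = perm_test(x2, y2)
```

Because the null distribution is built by relabeling the observed data, the test's significance level holds without any normality assumption, which is precisely its appeal for the skewed, zero-contaminated distributions the study simulates.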
