About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Semiparametric Poisson-Gamma Models: A Roughness Penalty Approach

WASHINGTON LEITE JUNGER 19 February 2004 (has links)
This work extends Poisson-Gamma models to a more general specification in which the linear predictor of covariates is replaced by an additive predictor of generic functions of those covariates. As in generalized additive models (GAM), linear functions of the covariates are a particular case of the additive model, and natural cubic splines are used as smoothing functions. The semiparametric specification broadens the range of applications of this class of models. The semiparametric models are fitted by an iterative procedure that combines likelihood maximization with the backfitting algorithm. All routines for model fitting and diagnostics are implemented in the R and C programming languages.
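The fitting procedure this abstract describes (cycling a smoother over partial residuals) can be sketched as follows. This is an illustrative toy version only, not the thesis's R/C implementation: a running-mean smoother stands in for the natural cubic splines, and Gaussian rather than Poisson-Gamma errors are assumed.

```python
# Toy backfitting for an additive model y ~ alpha + f1(x1) + f2(x2).
# The smoother and all settings here are illustrative assumptions.
import numpy as np

def smooth(x, r, window=31):
    """Running-mean smoother of residuals r, ordered by x."""
    order = np.argsort(x)
    kernel = np.ones(window) / window
    s = np.convolve(r[order], kernel, mode="same")
    out = np.empty_like(s)
    out[order] = s
    return out

def backfit(y, xs, n_iter=20):
    """Estimate additive components f_j by cycling over partial residuals."""
    alpha = y.mean()
    fs = [np.zeros_like(y) for _ in xs]
    for _ in range(n_iter):
        for j, xj in enumerate(xs):
            partial = y - alpha - sum(f for k, f in enumerate(fs) if k != j)
            fs[j] = smooth(xj, partial)
            fs[j] -= fs[j].mean()          # center for identifiability
    return alpha, fs

rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.uniform(-2, 2, n), rng.uniform(-2, 2, n)
y = 1.0 + np.sin(x1) + 0.5 * x2**2 + rng.normal(0, 0.1, n)
alpha, (f1, f2) = backfit(y, [x1, x2])
```

After a few sweeps, `f1` tracks the sine component and `f2` the quadratic one, up to the centering constants absorbed into `alpha`.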

Least squares estimation for binary decision trees

Albrecht, Nadine 14 December 2020 (has links)
In this thesis, a binary decision tree is used as an approximation of a nonparametric regression curve. The best-fitting decision tree is estimated from data by the least squares method. It is investigated how, and under which conditions, the estimator converges; these asymptotic results are then used to construct asymptotic convergence regions.
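A minimal sketch of the least squares principle behind such an estimator (not the thesis's own construction): each split is chosen to minimize the residual sum of squares of piecewise-constant predictions, grown greedily and recursively.

```python
# Greedy least squares regression tree; all settings are illustrative.
import numpy as np

def best_split(x, y):
    """Return (threshold, sse) of the best single split of y on x."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (None, np.inf)
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best[1]:
            best = ((xs[i-1] + xs[i]) / 2, sse)
    return best

def fit_tree(x, y, depth=3, min_leaf=10):
    """Recursively grow a binary tree; leaves store sample means."""
    if depth == 0 or len(y) < 2 * min_leaf:
        return y.mean()
    thr, _ = best_split(x, y)
    mask = x <= thr
    return (thr,
            fit_tree(x[mask], y[mask], depth - 1, min_leaf),
            fit_tree(x[~mask], y[~mask], depth - 1, min_leaf))

def predict(tree, x):
    if not isinstance(tree, tuple):
        return tree
    thr, left, right = tree
    return predict(left, x) if x <= thr else predict(right, x)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 400)
y = np.where(x < 0.5, 0.0, 1.0) + rng.normal(0, 0.05, 400)
tree = fit_tree(x, y)
```

On this step-function example, the root split lands near 0.5 and the leaf means approximate the two levels; the consistency questions the thesis studies concern how such fitted trees behave as the sample grows.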

Contribution to nonparametric regression estimation with general autocovariance error process and application to pharmacokinetics

Benelmadani, Djihad 18 September 2019 (has links)
In this thesis, we consider the fixed design regression model with repeated measurements, where the errors form a process with a general autocovariance function, i.e. a second order process (stationary or nonstationary) whose covariance function is non-differentiable along the diagonal. We are interested, among other problems, in the nonparametric estimation of the regression function of this model.

We first consider the well-known kernel regression estimator proposed by Gasser and Müller. We study its asymptotic performance when the number of experimental units and the number of observations tend to infinity. For a regular sequence of designs, we improve the higher-order rates of convergence of the variance and the bias, and we prove the asymptotic normality of this estimator in the case of correlated errors.

Second, we propose a new kernel estimator of the regression function based on a projection property. This estimator is constructed through the autocovariance function of the errors and a specific function belonging to the Reproducing Kernel Hilbert Space (RKHS) associated with the autocovariance function. We study its asymptotic performance using RKHS properties, which allow us to obtain the optimal convergence rate of the variance. We also prove its asymptotic normality and show that this new estimator has a smaller asymptotic variance than that of Gasser and Müller; a simulation study confirms this theoretical result.

Third, we propose a new kernel estimator for the regression function, constructed through the trapezoidal numerical approximation of the kernel regression estimator based on continuous observations. We study its asymptotic performance and prove its asymptotic normality. Moreover, this estimator allows us to obtain the asymptotically optimal sampling design for the estimation of the regression function. We run a simulation study to test the performance of the proposed estimator in finite samples, where we observe its good behavior in terms of Integrated Mean Squared Error (IMSE), and we show the reduction in IMSE obtained by using the optimal sampling design instead of the uniform design.

Finally, we consider an application of regression function estimation to pharmacokinetics. We propose to use nonparametric kernel methods for concentration-time curve estimation instead of the classical parametric ones, and we demonstrate their good performance via a simulation study and real data analysis. We also investigate the problem of estimating the Area Under the concentration Curve (AUC), for which we introduce a new kernel estimator obtained by integrating the regression function estimator. We show, in a simulation study, that the proposed estimator outperforms the classical one in terms of Mean Squared Error. The crucial problem of finding the optimal sampling design for AUC estimation is investigated using the Generalized Simulated Annealing algorithm.
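For readers unfamiliar with the Gasser-Müller estimator studied above, a compact sketch follows. Each observation is weighted by the integral of the kernel over a cell around its design point; with a Gaussian kernel the cell integrals are explicit through the error function. Bandwidth and data below are illustrative assumptions, not the thesis's settings.

```python
# Gasser-Mueller kernel regression estimate at a point t.
import numpy as np
from math import erf, sqrt

def gasser_mueller(t, x, y, h):
    """Estimate m(t) from sorted design points x and responses y."""
    # cell boundaries: midpoints between ordered design points
    s = np.concatenate(([x[0] - (x[1] - x[0]) / 2],
                        (x[:-1] + x[1:]) / 2,
                        [x[-1] + (x[-1] - x[-2]) / 2]))
    Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
    # weight_i = integral of K_h(t - u) over cell i = Phi(.) - Phi(.)
    w = np.array([Phi((s[i + 1] - t) / h) - Phi((s[i] - t) / h)
                  for i in range(len(x))])
    return float(w @ y)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 300)
est = gasser_mueller(0.25, x, y, h=0.05)
```

At an interior point the weights sum to (nearly) one, so the estimator is a proper weighted average; `est` approximates sin(π/2) = 1 up to smoothing bias and noise.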

Non- and semiparametric models for conditional probabilities in two-way contingency tables

Geenens, Gery 04 July 2008 (has links)
This thesis is mainly concerned with the estimation of conditional probabilities in two-way contingency tables, that is, probabilities of the type P(R=i, S=j | X=x), for (i,j) in {1, . . . , r} × {1, . . . , s}, where R and S are the two categorical variables forming the contingency table, with r and s levels respectively, and X is a vector of explanatory variables possibly associated with R, S, or both. Analyzing such a conditional distribution is often of interest, as it allows one to go further than the usual unconditional study of the behavior of the variables R and S. First, one can check for a possible effect of these covariates on the distribution of individuals across the cells of the table, and second, one can carry out the usual analyses of contingency tables, such as independence tests, while taking into account, and in some sense removing, this effect. This helps, for instance, to identify external factors that could be responsible for a possible association between R and S. It also makes it possible to adjust for heterogeneity in the population of interest when analyzing the table.
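One simple nonparametric route to such conditional cell probabilities (shown here as an illustration, not as the thesis's estimator) is Nadaraya-Watson smoothing of the cell indicator in the covariate: a kernel-weighted relative frequency of cell (i, j) near X = x.

```python
# Kernel estimate of P(R=i, S=j | X=x); all settings are illustrative.
import numpy as np

def cond_cell_prob(x0, X, R, S, i, j, h=0.2):
    """Kernel-weighted relative frequency of cell (i, j) at X = x0."""
    K = np.exp(-0.5 * ((X - x0) / h) ** 2)   # Gaussian kernel weights
    ind = (R == i) & (S == j)                # cell indicator
    return float((K * ind).sum() / K.sum())

rng = np.random.default_rng(3)
n = 2000
X = rng.uniform(0, 1, n)
# simulated 2x2 table: the association between R and S strengthens with X
R = (rng.uniform(size=n) < 0.5).astype(int)
S = np.where(rng.uniform(size=n) < X, R, (rng.uniform(size=n) < 0.5).astype(int))
p = cond_cell_prob(0.9, X, R, S, i=1, j=1)
```

Because every observation falls in exactly one cell, the estimated probabilities over all cells sum to one at any x, which is exactly the property that lets conditional independence tests be built on top of such estimates.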

Three Essays on Estimation and Testing of Nonparametric Models

Ma, Guangyi August 2012 (has links)
In this dissertation, I focus on the development and application of nonparametric methods in econometrics. First, a constrained nonparametric regression method is developed to estimate a function and its derivatives subject to shape restrictions implied by economic theory. The constrained estimators can be viewed as a set of empirical likelihood-based reweighted local polynomial estimators. They are shown to be weakly consistent and to have the same first order asymptotic distribution as the unconstrained estimators. When the shape restrictions are correctly specified, the constrained estimators can achieve a large degree of finite sample bias reduction and thus outperform the unconstrained estimators. The constrained nonparametric regression method is applied to the estimation of the daily option pricing function and the state-price density function. Second, a modified Cumulative Sum of Squares (CUSQ) test is proposed to test for structural changes in the unconditional volatility in a time-varying coefficient model. The proposed test is based on nonparametric residuals from local linear estimation of the time-varying coefficients. Asymptotic theory shows that the new CUSQ test has a standard null distribution and diverges at the standard rate under the alternatives. Compared with a test based on least squares residuals, the new test enjoys correct size and good power properties, because estimating the model nonparametrically circumvents the size distortion caused by potential structural changes in the mean. Empirical results from both simulation experiments and real data applications demonstrate the test's size and power properties. Third, an empirical study testing the Purchasing Power Parity (PPP) hypothesis is conducted in a functional-coefficient cointegration model, which is consistent with equilibrium models of exchange rate determination in the presence of transaction costs in international trade. Supporting evidence for PPP is found in the recent floating exchange rate era. The cointegration relation between the nominal exchange rate and price levels varies with real exchange rate volatility; the cointegration coefficients are more stable and numerically close to the value implied by PPP theory when real exchange rate volatility is relatively low.
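The CUSQ idea in the second essay can be illustrated with a toy statistic (this is a plain cumulative-sum-of-squares sketch for i.i.d. residuals, not the essay's modified test): large excursions of the centered partial sums of squared residuals signal a break in unconditional volatility.

```python
# Toy CUSQ statistic; the normalization assumes i.i.d. residuals.
import numpy as np

def cusq_stat(e):
    """Sup-norm of the normalized CUSQ process of residuals e."""
    n = len(e)
    e2 = e ** 2
    partial = np.cumsum(e2)
    k = np.arange(1, n + 1)
    drift = k / n * partial[-1]          # expected path under no break
    omega = np.sqrt(np.var(e2) * n)      # simple scale estimate (iid case)
    return np.max(np.abs(partial - drift)) / omega

rng = np.random.default_rng(4)
calm = rng.normal(0, 1, 500)                         # constant volatility
stat_null = cusq_stat(calm)
burst = np.concatenate([rng.normal(0, 1, 250),       # volatility triples
                        rng.normal(0, 3, 250)])
stat_break = cusq_stat(burst)
```

Under constant volatility the statistic stays near its Brownian-bridge scale, while a mid-sample variance shift pushes it far above; the essay's contribution is making this comparison valid when the residuals come from nonparametric estimation of time-varying coefficients.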

Flexibility, Robustness and Discontinuities in Nonparametric Regression Approaches

Maciak, Matúš January 2011 (has links)
Thesis title: Flexibility, Robustness and Discontinuity in Nonparametric Regression Approaches. Author: Mgr. Matúš Maciak, M.Sc. Department: Department of Probability and Mathematical Statistics, Charles University in Prague. Supervisor: Prof. RNDr. Marie Hušková, DrSc., huskova@karlin.mff.cuni.cz. Abstract: In this thesis we focus on local polynomial estimation approaches for an unknown regression function, while also taking into account robustness issues such as the presence of outlying observations or heavy-tailed error distributions. We discuss the most common method for such settings, the so-called local polynomial M-smoothers, and present the main statistical properties and asymptotic inference for this method. The M-smoother method is especially suitable for such cases because of its naturally robust flavour, which handles outliers as well as heavy-tailed random errors. Another important issue we focus on in this thesis is discontinuity, where we allow for sudden changes (discontinuity points) in the unknown regression function or its derivatives. We propose a discontinuity model with different variability structures for both independent and dependent random errors, while the discontinuity points will be treated in a...
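A local polynomial M-smoother of the kind described above can be sketched as an iteratively reweighted local linear fit: at each target point, kernel weights localize the fit and Huber weights downweight outlying residuals. This is a minimal illustration under assumed settings, not the thesis's estimator.

```python
# Robust local linear (Huber) M-smoother at a single point t.
import numpy as np

def huber_weights(r, c=1.345):
    """Huber psi(r)/r weights: 1 for small residuals, c/|r| for large."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def m_smoother(t, x, y, h=0.05, n_iter=10):
    """Robust local linear estimate of m(t)."""
    sel = np.abs(x - t) < 3 * h              # restrict to the local window
    xs, ys = x[sel], y[sel]
    K = np.exp(-0.5 * ((xs - t) / h) ** 2)   # kernel weights
    X = np.column_stack([np.ones_like(xs), xs - t])
    w = np.ones_like(ys)
    for _ in range(n_iter):
        W = K * w
        beta = np.linalg.lstsq(X * np.sqrt(W)[:, None],
                               ys * np.sqrt(W), rcond=None)[0]
        r = ys - X @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust MAD scale
        w = huber_weights(r / scale)
    return beta[0]                            # fitted value at t

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 300)
y[::25] += 5.0                                # gross outliers
est = m_smoother(0.25, x, y)
```

Despite the injected outliers, the estimate stays close to the true value sin(π/2) = 1, which is precisely the robustness property the abstract emphasizes.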

Optimum Savitzky-Golay Filtering for Signal Estimation

Krishnan, Sunder Ram January 2013 (has links) (PDF)
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one and two dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared error loss being considered for the other scenarios. Since the true risks depend on the unknown parameter of interest, we circumvent this roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein's lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing Stein's unbiased risk estimator (SURE). The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely the order and the bandwidth/smoothing parameter. This is a classic problem in statistics, and algorithms relying on the derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (note that all of these works consider the mean squared error (MSE) objective). We show that a SURE-based formalism is well suited to the regression parameter selection problem, and that the optimum solution guarantees near-minimum-MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially adaptive regression.

We observe that the parameters are chosen so as to trade off the squared bias and variance quantities that constitute the MSE. We also indicate the advantages of incorporating a regularization term in the cost function in addition to the data error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider applications of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) is a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data. The denoising algorithms are compared with other standard, well-performing methods from the literature, both in terms of estimation error and computational complexity. A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of the optimum parameters of a linear, shift-invariant filter. This interpretation draws motivation from the landmark paper of Savitzky and Golay and from Schafer's recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay showed, in their original Analytical Chemistry article, that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be precomputed. They provided tables of impulse response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing the filter's impulse response length / 3 dB bandwidth, which are inversely related.

We observe that the MMSE solution is such that the chosen S-G filter has a longer impulse response (equivalently, a smaller cutoff frequency) at relatively flat portions of the noisy signal, so as to smooth noise, and vice versa at locally fast-varying portions, so as to capture the signal patterns. We also provide a generalized S-G filtering viewpoint in the case of kernel regression. Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, the filter coefficients can be precomputed, and the whole problem of delta feature computation becomes efficient; indeed, we observe a speedup by a factor of 10^4 from using S-G filtering instead of actual LS polynomial fitting and evaluation. Thereafter, we study the properties of first- and second-order derivative S-G filters of certain orders and lengths experimentally. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme recognition experiment comparing the recognition accuracies obtained using S-G filters with the conventional approach followed in HTK, which uses Furui's regression formula. The recognition accuracies for both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement; the accuracies also improve with longer filter lengths for a particular order.

In terms of computation latency, we note that S-G filtering computes the delta and delta-delta features in parallel by linear filtering, whereas they must be obtained sequentially with the standard regression formulas used in the literature. Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually motivated loss functions such as Itakura-Saito (IS). We propose to perform enhancement in the discrete cosine transform domain using risk minimization. The cost functions considered are non-quadratic, and the derivation of the unbiased estimator of the risk corresponding to the IS distortion is achieved using an approximate Taylor-series analysis under a high signal-to-noise-ratio assumption. The exposition is general, since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most of the common densities. The denoising function is assumed to be pointwise linear (a modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
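The Savitzky-Golay observation highlighted above (that least squares fitting of a fixed-order polynomial on a fixed window reduces to convolution with precomputable coefficients) can be verified in a few lines. The implementation below is a generic sketch built from the pseudoinverse of a Vandermonde matrix; it is not the thesis's code.

```python
# Savitzky-Golay smoothing/differentiation as a fixed convolution.
import numpy as np
from math import factorial

def savgol_coeffs(window, order, deriv=0):
    """Impulse response of the S-G filter (window odd, deriv <= order)."""
    half = window // 2
    t = np.arange(-half, half + 1)
    A = np.vander(t, order + 1, increasing=True)   # columns 1, t, t^2, ...
    # row `deriv` of the pseudoinverse gives the fitted deriv-th
    # derivative of the LS polynomial at the window center t = 0
    return np.linalg.pinv(A)[deriv] * factorial(deriv)

def savgol_filter(y, window, order, deriv=0):
    """Apply the S-G filter by convolution (edges are approximate)."""
    c = savgol_coeffs(window, order, deriv)
    return np.convolve(y, c[::-1], mode="same")    # flip for convolution

y = np.arange(20.0) ** 2                            # quadratic test signal
smooth = savgol_filter(y, window=5, order=2)        # exact in the interior
slope = savgol_filter(y, window=5, order=2, deriv=1)
```

Because the test signal is itself a quadratic, an order-2 S-G filter reproduces it exactly away from the edges, and the `deriv=1` filter returns its exact derivative 2x; this is the property that makes delta-feature computation a pure filtering operation.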

Quantifying Gait Characteristics and Neurological Effects in People with Spinal Cord Injury Using Data-Driven Techniques

Truong, Minh January 2024 (has links)
Spinal cord injury, whether traumatic or nontraumatic, can partially or completely damage sensorimotor pathways between the brain and the body, leading to heterogeneous gait abnormalities. Mobility impairments also depend on other factors such as age, weight, time since injury, pain, and the walking aids used. The ASIA Impairment Scale is recommended for classifying injury severity, but it is not designed to characterize individual ambulatory capacity, and other standardized tests based on subjective or timing/distance assessments have only a limited ability to determine an individual's capacity. Data-driven techniques have proven effective at analyzing complexity in many domains and may provide additional perspectives on the complexity of gait performance in persons with spinal cord injury. The studies in this thesis aimed to address the complexity of gait and functional abilities after spinal cord injury using data-driven approaches.

The first manuscript aimed to characterize the heterogeneous gait patterns of persons with incomplete spinal cord injury. Dissimilarities among gait patterns in the study population were quantified with multivariate dynamic time warping, and the patterns were classified into six distinct clusters using hierarchical agglomerative clustering, four of which walked more slowly than the controls. Through random forest classifiers with explainable AI, peak ankle plantarflexion during swing was identified as the feature that most often distinguished clusters from the controls. By combining clinical evaluation with the proposed methods, it was possible to provide a comprehensive analysis of the six gait clusters.

The second manuscript aimed to quantify sensorimotor effects on walking performance in persons with spinal cord injury. The relationships between 11 input features and 2 walking outcome measures (distance walked in 6 minutes and net energy cost of transport) were captured using 2 Gaussian process regression models. Explainable AI revealed the importance of muscle strength for both outcome measures; the use of walking aids also influenced distance walked, and cardiovascular capacity influenced energy cost. Per-person analyses also gave useful insights into individual performance.

The findings from these studies demonstrate the large potential of advanced machine learning and explainable AI to address the complexity of gait function in persons with spinal cord injury.
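The dynamic time warping dissimilarity used in the first study compares gait curves whose timing differs from cycle to cycle. A compact univariate sketch of the dynamic program (the study uses a multivariate variant; this toy version and its signals are illustrative):

```python
# Classic DTW distance between two 1-D sequences via dynamic programming.
import numpy as np

def dtw(a, b):
    """DTW distance under absolute cost and monotone warping."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 50)
cycle = np.sin(2 * np.pi * t)               # reference gait-like cycle
shifted = np.sin(2 * np.pi * (t - 0.05))    # same shape, slight time shift
different = np.cos(2 * np.pi * t)           # genuinely different shape
```

Because the warping path can absorb small timing differences, the shifted cycle ends up much closer to the reference than the genuinely different curve, which is why DTW-based dissimilarities cluster gait patterns by shape rather than by timing.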
