331

Essays in international trade and energy / Essais dans le commerce international et l'énergie

Monastyrenko, Evgenii 24 September 2018 (has links)
In Chapter 1, I investigate firm-level efficiency outcomes of mergers between European energy producers. I compute eco-efficiency using data envelopment analysis and the Malmquist-Luenberger productivity index. I find that carefully regulated domestic horizontal mergers do not have a statistically significant impact. Cross-border horizontal mergers hamper eco-efficiency in the short run but stimulate it two years after completion. Vertical mergers are detrimental to eco-efficiency. I put forward policy suggestions regarding the regulation of mergers. Chapter 2 is joint work with Julian Hinz. We investigate the effects of the self-imposed Russian embargo on food imports from Western countries. We build a Ricardian model with sectoral linkages, trade in intermediate goods and sectoral heterogeneity in production. Calibrating the model with real data allows us to simulate the outcomes of the embargo in terms of changes in welfare and prices. We further quantify the impact on consumer prices in Russia with a difference-in-differences estimator. Chapter 3 is based on a paper co-written with Cristina Herghelegiu. We investigate the use of International Commercial Terms (Incoterms), pre-defined schemes for allocating costs and risks between buyers and sellers that serve to mitigate uncertainty. We rely on a highly detailed dataset on Russian exports over the 2012-2015 period. We find that big firms are more likely to take on responsibilities. Big buyers bear more responsibilities regardless of the seller's size, whereas big sellers do so only when their partner is small. Risks and costs are more likely to fall on buyers in transactions involving intermediate and capital goods.
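A minimal sketch of the difference-in-differences logic used in Chapter 2 to quantify the embargo's effect on Russian consumer prices; the product panel, the coding of the post-embargo period, and the 0.08 "effect" are invented for illustration and are not the thesis' data or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented panel of log consumer prices for embargoed vs. non-embargoed product groups,
# 12 months before and after the embargo start; the 0.08 treatment effect is made up.
rng = np.random.default_rng(7)
rows = []
for product in range(200):
    treated = product < 100                      # first half: embargoed food categories
    for month in range(-12, 13):
        post = month >= 0
        effect = 0.08 if (treated and post) else 0.0
        rows.append({"log_price": 2.0 + 0.002 * month + effect + rng.normal(scale=0.02),
                     "treated": int(treated), "post": int(post), "product": product})
df = pd.DataFrame(rows)

# The coefficient on treated:post is the difference-in-differences estimate,
# with standard errors clustered by product.
did = smf.ols("log_price ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["product"]})
print(did.params["treated:post"], did.bse["treated:post"])
```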
332

O modelo de regressão odd log-logística gama generalizada com aplicações em análise de sobrevivência / The odd log-logistic generalized gamma regression model with applications in survival analysis

Fábio Prataviera 11 July 2017 (has links)
Providing a wider and more flexible family of probability distributions is of great importance in statistical studies. In this work a new method of adding a parameter to a continuous distribution is used, with the generalized gamma (GG) distribution as the baseline. The GG distribution has as special cases the Weibull, exponential, gamma and chi-square distributions, among others, which makes it a flexible choice for data modelling. The new four-parameter model is called the odd log-logistic generalized gamma (OLLGG) distribution. One of its interesting characteristics is that it can be bimodal. In addition, a regression model called the log-odd log-logistic generalized gamma (LOLLGG) model, based on the GG distribution of Stacy and Mihram (1965), is introduced. This model can be very useful when, for example, the sampled data come from a mixture of two statistical populations. Another advantage of the OLLGG distribution is its ability to accommodate several shapes of the hazard function: increasing, decreasing, U-shaped (bathtub) and bimodal, among others. Explicit expressions for the moments, generating function and mean deviations are obtained. Considering non-censored and randomly censored data, the estimates of the parameters of interest were obtained by the maximum likelihood method. Simulation studies, considering different parameter values, censoring percentages and sample sizes, were conducted in order to verify the flexibility of the distribution and the adequacy of the residuals in the regression model. To illustrate, applications to real data sets are carried out.
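As a rough illustration of the "odd" construction behind the OLLGG distribution, the sketch below applies the odd log-logistic transform to a generalized gamma baseline via scipy; the parameter names and values are illustrative assumptions, not the parameterization used in the thesis.

```python
import numpy as np
from scipy.stats import gengamma

def ollgg_pdf(x, alpha, a, c, scale=1.0):
    """Odd log-logistic generalized gamma density.

    alpha        : additional odd log-logistic shape parameter (alpha = 1 gives back the GG)
    a, c, scale  : parameters of scipy's gengamma baseline (illustrative naming)
    """
    G = gengamma.cdf(x, a, c, scale=scale)       # baseline cdf
    g = gengamma.pdf(x, a, c, scale=scale)       # baseline pdf
    num = alpha * g * (G * (1.0 - G)) ** (alpha - 1.0)
    den = (G ** alpha + (1.0 - G) ** alpha) ** 2
    return num / den

def ollgg_cdf(x, alpha, a, c, scale=1.0):
    G = gengamma.cdf(x, a, c, scale=scale)
    return G ** alpha / (G ** alpha + (1.0 - G) ** alpha)

# Bimodal density shapes can arise for alpha < 1 with a suitable baseline.
xs = np.linspace(0.05, 5.0, 200)
dens = ollgg_pdf(xs, alpha=0.4, a=2.0, c=1.5)
```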
333

LDHBx and MDH1x are controlled by physiological translational readthrough in Homo sapiens

Schüren, Fabian 07 April 2016 (has links)
No description available.
334

複迴歸係數排列檢定方法探討 / Methods for testing significance of partial regression coefficients in regression model

闕靖元, Chueh, Ching Yuan Unknown Date (has links)
Under the traditional regression framework, statistical inference requires the error terms to be mutually independent and normally distributed. When these assumptions cannot be met, permutation tests, a nonparametric approach, are often a feasible alternative. In the existing literature, permutation tests for the coefficients of a multiple regression model have mainly used a pivotal quantity, such as t = b/se(b), as the test statistic, and compared different permutation schemes on that basis. In this study, permutation tests are constructed not only from the pivotal t statistic but also from the non-pivotal coefficient estimate b2 itself. Their performance is compared in terms of the probability of committing a type I error and the power through Monte Carlo simulation. The simulations show that when the explanatory variables are uncorrelated and the error distribution is not strongly skewed, the permutation schemes of Freedman and Lane (1983), Levin and Robbins (1983) and Kennedy (1995) work well with the b2 statistic in large samples, with power slightly higher than that obtained with the t2 statistic. When the explanatory variables are highly correlated, regardless of the skewness of the errors, the schemes of Freedman and Lane (1983) and Kennedy (1995) again work well with b2 in large samples, and their power advantage over t2 becomes more pronounced. Overall, the t2 statistic is applicable in a wider range of situations, whereas results based on b2 depend on the sample size and on the correlation among the explanatory variables.
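A minimal sketch, under assumed variable names, of a Freedman and Lane (1983)-style permutation test that uses the raw coefficient estimate b2 as the test statistic; it illustrates the general scheme rather than the exact procedures compared in the thesis.

```python
import numpy as np

def freedman_lane_pvalue(y, x1, x2, n_perm=2000, seed=0):
    """Permutation p-value for the coefficient of x2 in y ~ 1 + x1 + x2,
    using the coefficient estimate b2 itself as the test statistic.

    Freedman-Lane scheme: permute residuals of the reduced model y ~ 1 + x1,
    rebuild pseudo-responses, and refit the full model for each permutation.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    X_red = np.column_stack([np.ones(n), x1])
    X_full = np.column_stack([np.ones(n), x1, x2])

    beta_red, *_ = np.linalg.lstsq(X_red, y, rcond=None)
    fitted_red = X_red @ beta_red
    resid_red = y - fitted_red

    def b2(response):
        beta, *_ = np.linalg.lstsq(X_full, response, rcond=None)
        return beta[2]

    observed = abs(b2(y))
    exceed = sum(abs(b2(fitted_red + rng.permutation(resid_red))) >= observed
                 for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)

# Toy usage with correlated regressors and heavy-tailed errors
rng = np.random.default_rng(1)
x1 = rng.normal(size=60)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=60)
y = 1.0 + 0.5 * x1 + 0.3 * x2 + rng.standard_t(df=3, size=60)
print(freedman_lane_pvalue(y, x1, x2))
```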
335

Statistical inference for joint modelling of longitudinal and survival data

Li, Qiuju January 2014 (has links)
In longitudinal studies, data collected within a subject or cluster are correlated by their very nature, and special care is needed to account for such correlation in the analysis. Within this framework, three topics are discussed in this thesis. In chapter 2, the joint modelling of a multivariate longitudinal process consisting of different types of outcomes is discussed. In the large cohort study of the UK North Staffordshire osteoarthritis project, trivariate longitudinal outcomes of continuous, binary and ordinal data are observed at baseline, year 3 and year 6. Instead of analysing each process separately, joint modelling is proposed for the trivariate outcomes to account for their inherent association by introducing random effects and a covariance matrix G. The influence of the covariance matrix G on statistical inference for the fixed-effects parameters is investigated within the Bayesian framework. The study shows that jointly modelling the multivariate longitudinal process reduces bias and provides more reliable results than modelling each process separately. Alongside longitudinal measurements taken intermittently, a counting process of events in time is often observed during a longitudinal study. It is of interest to investigate the relationship between time to event and the longitudinal process; on the other hand, measurements of the longitudinal process may be truncated by terminating events such as death. It may therefore be crucial to jointly model the survival and longitudinal data. A popular approach is to combine a linear mixed-effects model for the continuous longitudinal outcomes with a Cox regression model for the survival data, under some standard assumptions. In chapter 3, we investigate the influence on statistical inference for the survival data when the assumption of mutual independence of the random errors in the linear mixed-effects model is violated. The study uses the conditional score estimation approach, which provides robust estimators and has computational advantages. A generalised sufficient statistic for the random effects is proposed to account for the correlation remaining among the random errors, characterised by the data-driven method of modified Cholesky decomposition. The simulation study shows that this approach provides nearly unbiased estimation as well as efficient statistical inference. Chapter 4 seeks to incorporate both current and past information from the longitudinal process into the survival part of the joint model. In the last 15 to 20 years it has been popular, even standard, to assume that the longitudinal process affects the counting process of events only through its current value, which, as recognised in more recent studies, need not always hold. An integral over the trajectory of the longitudinal process, combined with a weight curve, is proposed to account for both current and past information, improving inference and reducing the underestimation of the effect of the longitudinal process on the hazard. A plausible approach to statistical inference for the proposed models is presented, along with a real data analysis and a simulation study.
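As a rough sketch of the chapter 4 idea, the function below computes a weighted integral of one subject's longitudinal trajectory, which could then enter the survival model as a covariate; the exponential weight curve and trapezoidal approximation are illustrative assumptions rather than the thesis' specification.

```python
import numpy as np

def weighted_cumulative_effect(times, values, t, decay=0.5):
    """Approximate integral_0^t w(t - s) m(s) ds for one subject's trajectory m(s).

    times, values : observation times and longitudinal measurements
    t             : time at which the hazard is evaluated
    decay         : rate of an assumed exponential weight w(u) = exp(-decay * u)
    """
    mask = times <= t
    s, m = times[mask], values[mask]
    if len(s) < 2:
        return 0.0
    integrand = np.exp(-decay * (t - s)) * m        # recent values receive more weight
    # trapezoidal rule on the observed grid
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s)))

# Example: a biomarker measured at four visits for one subject
obs_t = np.array([0.0, 1.0, 2.5, 4.0])
obs_m = np.array([1.2, 1.5, 2.1, 2.8])
print(weighted_cumulative_effect(obs_t, obs_m, t=4.0))
```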
336

Exploring advanced forecasting methods with applications in aviation

Riba, Evans Mogolo 02 1900 (has links)
Abstracts in English, Afrikaans and Northern Sotho / Many time series forecasting methods have been researched and made available in recent years, mainly owing to the emergence of machine learning methods that are also applicable to time series forecasting. The variety of methods and their variants presents a challenge when choosing an appropriate forecasting method. This study explored the performance of four advanced forecasting methods: autoregressive integrated moving averages (ARIMA); artificial neural networks (ANN); support vector machines (SVM); and regression models with ARIMA errors. To improve their performance, bagging was also applied. The performance of the different methods was illustrated using South African air passenger data collected for planning purposes by the Airports Company South Africa (ACSA). The dissertation discusses the different forecasting methods at length, exploring characteristics such as their strengths, weaknesses and applicability. Some of the most popular forecast accuracy measures are discussed in order to understand how they can be used to evaluate the methods. The regression model with ARIMA errors outperformed all the other methods, followed by the ARIMA model; these findings are in line with the general findings in the literature. The ANN method is prone to overfitting, which was evident from the results on the training and test data sets. The bagged models showed mixed results, with marginal improvement for some methods on some performance measures. It can be concluded that, on this data set and with the accuracy measures used, the traditional statistical forecasting methods (ARIMA and the regression model with ARIMA errors) performed better than the machine learning methods (ANN and SVM). This calls for more research on the applicability of machine learning methods to time series forecasting, to help understand and improve their performance relative to the traditional statistical methods. / Decision Sciences / M. Sc. (Operations Research)
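A minimal sketch of a regression model with ARIMA errors, the approach that performed best in the study, using statsmodels' SARIMAX with an exogenous regressor; the simulated passenger series, the trend regressor and the (1, 1, 1)(1, 1, 1, 12) order are assumptions for illustration, not ACSA data or the study's chosen specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Simulated monthly passenger counts with trend and yearly seasonality (not ACSA data).
rng = np.random.default_rng(1)
n = 120
t = np.arange(n)
y = pd.Series(1000 + 8 * t + 50 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 30, n),
              index=pd.date_range("2010-01", periods=n, freq="MS"))
exog = pd.DataFrame({"trend": t}, index=y.index)

# Regression with ARIMA errors: exogenous regressors plus a seasonal ARIMA error process.
fit = SARIMAX(y, exog=exog, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)

# Forecast 12 months ahead, supplying the future values of the regressor.
future_exog = pd.DataFrame({"trend": np.arange(n, n + 12)},
                           index=pd.date_range("2020-01", periods=12, freq="MS"))
print(fit.forecast(steps=12, exog=future_exog))
```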
337

Závislost hodnoty stavebního závodu na velikosti vlastního kapitálu / Dependence of the value of the construction enterprise on the size of the equity

Bahenský, Miloš January 2019 (has links)
The doctoral thesis deals with valuation issues of businesses with construction production under the conditions of the Czech economy. Business valuation is, and always will be, highly relevant in a market economy, with regard to both methodological and practical approaches. The main aim of the doctoral thesis is to demonstrate this dependence by constructing an empirical regression model that determines the value of a construction enterprise, obtained by the chosen income valuation method, from its equity (book value of equity at historical cost). The first part of the doctoral thesis is a research study describing, using the principles of systems methodology, the authors' approaches to the current state of knowledge on business valuation and on aspects of equity. Based on these findings, a space is defined in which it is possible to propose a solution to a partial problem: selecting the category of enterprise value and the associated income valuation methods suitable for extensive time-series analysis. An integral part of the doctoral thesis is the determination of the sample size of construction enterprises according to the assumptions and limitations of the chosen methodology. The empirical data collection is based on the Justice.cz database. Another important part, in the spirit of systems-approach principles, is the choice and application of the method of system discipline to the problem addressed in the thesis. The result is an empirical regression model which, after subsequent validation in multiple case studies, could also be recommended for wider verification in valuation practice. The thesis also includes a discussion, in a broader context, of its potential benefits for practical, theoretical and pedagogical use.
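A minimal sketch of the kind of single-regressor model at the heart of the thesis, regressing an income-method enterprise value on book equity; the figures are made up and the specification (plain OLS on levels) is an assumption for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Made-up figures (CZK millions): book value of equity and income-method enterprise value.
equity = np.array([12.0, 25.0, 40.0, 55.0, 80.0, 120.0, 150.0, 200.0])
value = np.array([18.0, 30.0, 55.0, 70.0, 95.0, 160.0, 185.0, 260.0])

X = sm.add_constant(equity)                 # value = a + b * equity + error
fit = sm.OLS(value, X).fit()
print(fit.params, fit.rsquared)

# Predicted value for a construction enterprise with book equity of 100 CZK millions.
print(fit.predict([[1.0, 100.0]]))
```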
338

Metody pro predikci s vysokodimenzionálními daty genových expresí / Methods for class prediction with high-dimensional gene expression data

Šilhavá, Jana Unknown Date (has links)
The dissertation deals with class prediction from high-dimensional gene expression data. The amount of available genomic data has grown significantly over the last decade. Combining gene expression data with other data finds application in many areas; for example, in clinical cancer management it can contribute to a more accurate prognosis of disease. The main part of this dissertation focuses on combining gene expression data with clinical data. We use logistic regression models built with various regularization techniques. Generalized linear models make it possible to combine models with different data structures. The dissertation shows that combining a model of gene expression data with clinical data can improve prediction accuracy compared with a model built from gene expression data or clinical data alone, and the proposed procedures are not computationally demanding. Testing is carried out first on simulated data sets under various settings and then on real benchmark data. We also address the assessment of the added value of microarray data. The dissertation includes a comparison of the features selected by a gene expression classifier on five different breast cancer data sets. We also propose a feature selection procedure that combines gene expression data with knowledge from gene ontologies.
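A minimal sketch, on simulated stand-in data, of the central idea of combining clinical covariates with high-dimensional gene expression features in a penalized logistic regression; the L2 penalty, the dimensions and the pipeline are illustrative assumptions, not the regularization techniques evaluated in the dissertation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated stand-in data: 100 patients, 1000 expression features, 5 clinical covariates.
rng = np.random.default_rng(0)
n, p_genes, p_clin = 100, 1000, 5
X_genes = rng.normal(size=(n, p_genes))
X_clin = rng.normal(size=(n, p_clin))
y = (0.8 * X_clin[:, 0] + X_genes[:, :3].sum(axis=1) + rng.normal(size=n) > 0).astype(int)

# One simple form of penalization: L2-regularized logistic regression on the
# clinical covariates concatenated with the expression features.
clf = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2", C=0.1, max_iter=5000))

for name, X in [("clinical only", X_clin), ("genes only", X_genes),
                ("combined", np.hstack([X_clin, X_genes]))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```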
339

Developing Artificial Neural Networks (ANN) Models for Predicting E. Coli at Lake Michigan Beaches

Mitra Khanibaseri (9045878) 24 July 2020 (has links)
A neural network model was developed to predict E. coli levels and classes at six (6) selected Lake Michigan beaches. Water quality observations at the time of sampling and discharge information from two nearby tributaries were used as inputs to predict E. coli. This research was funded by the Indiana Department of Environmental Management (IDEM). A user-friendly Excel-sheet-based tool was developed based on the best model for making future predictions of E. coli classes. This tool will help beach managers take real-time decisions.

The nowcast model was developed from historical tributary flows and water quality measurements (physical, chemical and biological). The model uses experimentally available information such as total dissolved solids, total suspended solids, pH, electrical conductivity, and water temperature to estimate whether the E. coli count would exceed the acceptable standard. To set up this model, field data collection was carried out during the 2019 beachgoers' season.

IDEM recommends posting an advisory at the beach indicating that swimming and wading are not recommended when E. coli counts exceed advisory standards. Under the advisory limit, a single water sample shall not exceed an E. coli count of 235 colony forming units per 100 milliliters (cfu/100 ml). Advisories are removed when bacterial levels fall within the acceptable standard. However, because E. coli results were available only after a time lag, beach closures were based on the previous day's results. Nowcast models allow beach managers to make real-time beach advisory decisions instead of waiting a day or more for laboratory results to become available.

Using the historical data, an extensive experiment was carried out to obtain suitable input variables and an optimal neural network architecture. The best feed-forward neural network model was developed using the Bayesian Regularization Neural Network (BRNN) training algorithm. The developed ANN model showed an average prediction accuracy of around 87% in predicting the E. coli classes.
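A minimal sketch of a nowcast-style classifier on made-up data: a small feed-forward network predicting whether E. coli exceeds the 235 cfu/100 ml advisory limit. scikit-learn has no Bayesian Regularization (BRNN) trainer, so the L2 weight penalty alpha stands in for it here; the features, network size and data are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Made-up features: [total dissolved solids, total suspended solids, pH,
# electrical conductivity, water temperature, tributary discharge]; label = 1
# when E. coli exceeds the 235 cfu/100 ml advisory limit.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 6))
y = (X[:, 1] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=300) > 0.8).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# alpha (an L2 weight penalty) stands in for Bayesian regularization here.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), alpha=1e-2, max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("advisory-class accuracy:", model.score(X_te, y_te))
```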
340

Régression non-paramétrique pour variables fonctionnelles / Non parametric regression for functional data

Elamine, Abdallah Bacar 23 March 2010 (has links)
This thesis is divided into four parts plus an introductory presentation. The first part presents the mathematical tools essential to understanding the subsequent chapters. The second part addresses local nonparametric regression for functional data belonging to a Hilbert space. We first propose an estimator of the regression operator, whose construction is related to the resolution of a linear inverse problem. Using a classical decomposition, we establish bounds for the mean squared error (MSE) of this estimator. The bound depends on the small ball probability function of the regressor, on which Gamma-variation-type hypotheses are imposed. In the third part, we revisit the work of the preceding chapter in the setting of functional data belonging to an infinite-dimensional semi-normed space, and we establish bounds for the MSE of the regression operator estimator; this MSE can be seen as a function of the small ball probability function. In the last part, we consider the estimation of the auxiliary function associated with the small ball probability. We first propose an estimator of this auxiliary function, then establish its mean-square convergence and asymptotic normality. Finally, simulations are used to study the behaviour of this estimator in a neighbourhood of zero.
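A rough sketch of a standard functional kernel (Nadaraya-Watson-type) regression estimate, in the same spirit as the local nonparametric regression with functional inputs studied in the thesis; the L2 semi-metric, Gaussian kernel and toy curves are illustrative assumptions rather than the estimator analysed in the chapters.

```python
import numpy as np

def functional_nw_estimate(curves, y, new_curve, h):
    """Kernel regression estimate for a functional regressor.

    curves    : (n, m) array, each row a curve observed on a common grid
    y         : (n,) responses
    new_curve : (m,) curve at which the regression operator is estimated
    h         : bandwidth
    """
    d = np.sqrt(np.mean((curves - new_curve) ** 2, axis=1))   # L2-type semi-metric
    w = np.exp(-0.5 * (d / h) ** 2)                           # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

# Toy example: the response depends on the mean level of each curve.
rng = np.random.default_rng(3)
grid = np.linspace(0, 1, 50)
curves = rng.normal(size=(200, 1)) + np.sin(2 * np.pi * grid)   # vertically shifted sine curves
y = curves.mean(axis=1) + rng.normal(scale=0.05, size=200)
print(functional_nw_estimate(curves, y, curves[0], h=0.2))
```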
