261

Modelagem estatística para a determinação de resultados de dados esportivos. / Statistical modeling for the determination of results from sports data.

Suzuki, Adriano Kamimura 27 June 2007 (has links)
Made available in DSpace on 2016-06-02T20:05:59Z (GMT). No. of bitstreams: 1 DissAKS.pdf: 566811 bytes, checksum: b01be331b665ab0824c5ab32218e4354 (MD5) Previous issue date: 2007-06-27 / Financiadora de Estudos e Projetos / The basic result of a soccer match is its final score, which can be seen as a bivariate random vector. Theoretically, and based on the existing literature, we can argue that the number of goals scored by a team in a game follows a (univariate) Poisson distribution. Bivariate Poisson distributions are therefore studied, with special attention to the Holgate (1964) class. Using as information the recent results of the teams whose confrontation we want to model, several methods were applied to estimate the parameters of the Holgate Bivariate Poisson class. The idea is to use procedures that supply the probabilities of occurrence of each scoreline, so that the probability of a given outcome (home team's victory, draw or defeat) can be properly calculated. The parameters of the Holgate Bivariate Poisson distribution are assumed to depend on factors, such as attack, defense and home field, that possibly explain the numbers of goals. / O resultado básico de uma partida de futebol é o seu placar final, que pode ser visto como um vetor aleatório bivariado. Teoricamente e baseando-se na literatura existente podemos argumentar que o número de gols marcados por um time em uma dada partida obedeça a uma distribuição (univariada) de Poisson. Assim, são estudadas as distribuições de Poisson Bivariadas, com destaque para a classe "de Holgate" (1964). Utilizando como informações os resultados recentes dos times, cujo confronto se queira modelar, foram utilizados vários métodos para a estimação de parâmetros da densidade da classe Poisson Bivariada "de Holgate". 
A ideia é considerar procedimentos que forneçam as probabilidades de ocorrência de placares, para que assim a probabilidade da ocorrência de um determinado resultado (vitória do time mandante, empate ou derrota) possa ser obtida. Assume-se que os parâmetros da distribuição de Poisson Bivariada "de Holgate" dependem de fatores, tais como ataque, defesa e campo, que possivelmente explicam os números de gols feitos.
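The score model described in this record lends itself to a compact numerical sketch. The following is a hedged illustration, not the author's code: it evaluates the Holgate (1964) bivariate Poisson pmf and sums over a grid of scorelines to obtain outcome probabilities. The rates `l1`, `l2`, `l3` are invented for the example; in practice they would come from the estimated attack, defense and home-field factors.

```python
# Sketch of match-outcome probabilities under Holgate's (1964) bivariate
# Poisson model: X = A + C, Y = B + C with A~Pois(l1), B~Pois(l2), C~Pois(l3).
from math import exp, factorial

def biv_poisson_pmf(x, y, l1, l2, l3):
    """Joint pmf P(X=x, Y=y) of the Holgate bivariate Poisson."""
    total = 0.0
    for k in range(min(x, y) + 1):
        total += (l1 ** (x - k) / factorial(x - k)
                  * l2 ** (y - k) / factorial(y - k)
                  * l3 ** k / factorial(k))
    return exp(-(l1 + l2 + l3)) * total

def outcome_probs(l1, l2, l3, max_goals=15):
    """P(home win), P(draw), P(away win) by summing over the score grid."""
    win = draw = loss = 0.0
    for x in range(max_goals + 1):
        for y in range(max_goals + 1):
            p = biv_poisson_pmf(x, y, l1, l2, l3)
            if x > y:
                win += p
            elif x == y:
                draw += p
            else:
                loss += p
    return win, draw, loss

# Illustrative rates only (home attack 1.4, away attack 1.1, shared 0.2).
win, draw, loss = outcome_probs(1.4, 1.1, 0.2)
```

Under this model the home team's marginal score is Poisson(l1 + l3), the away team's is Poisson(l2 + l3), and the covariance between the two scores is l3, which is what makes the bivariate class attractive for match scores.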
262

Contributions à la théorie des valeurs extrêmes : Détection de tendance pour les extrêmes hétéroscédastiques / Contributions to extreme value theory : Trend detection for heteroscedastic extremes

Mefleh, Aline 26 June 2018 (has links)
Nous présentons dans cette thèse en premier lieu la méthode de Bootstrap par permutation appliquée à la méthode des blocs maxima utilisée en théorie des valeurs extrêmes (TVE) univariée. La méthode est basée sur un échantillonnage particulier des données en utilisant les rangs des blocs maxima dont la distribution est présentée et introduite dans les simulations. Elle amène à une réduction de la variance des paramètres de la loi GEV et des quantiles estimés. En second lieu, on s’intéresse au cas où les observations sont indépendantes mais non identiquement distribuées en TVE. Cette variation dans la distribution est quantifiée en utilisant une fonction dite « skedasis function » notée c qui représente la fréquence des extrêmes. Ce modèle a été introduit par Einmahl et al. dans le papier « Statistics of heteroscedastic extremes ». On étudie plusieurs modèles paramétriques pour c (log-linéaire, linéaire, log-linéaire discret) ainsi que les résultats de consistance et de normalité asymptotique du paramètre θ représentant la tendance. Le test θ =0 contre θ ≠0 est interprété alors comme un test de détection de tendance dans les extrêmes. Nous illustrons nos résultats dans une étude par simulation qui montre en particulier que les tests paramétriques sont en général plus puissants que les tests non paramétriques pour la détection de la tendance, d’où l’utilité de notre travail. Nous discutons en plus le choix du seuil k en appliquant la méthode de Lepski. Enfin, nous appliquons la méthodologie sur les données de températures minimales et maximales dans la région de Fort Collins, Colorado durant le 20ème siècle afin de détecter la présence d’une tendance dans les extrêmes sur cette période. En troisième lieu, on dispose d’un jeu de données de précipitation journalière maximale sur 24 ans dans 40 stations. On réalise une prédiction spatio-temporelle des quantiles correspondants à un niveau de retour de 20 ans pour les précipitations mensuelles dans chaque station. 
Nous utilisons des modèles de GEV en introduisant des covariables dans les paramètres. Le meilleur modèle est choisi en termes d’AIC et par la méthode de validation croisée. Pour chacun des deux modèles choisis, nous estimons les quantiles extrêmes. Finalement, on applique la TVE univariée et bivariée sur les vitesses du vent et la hauteur des vagues dans une région au Liban en vue de protéger la plateforme pétrolière qui y sera installée de ces risques environnementaux. On applique d’abord la théorie univariée sur la vitesse du vent et la hauteur des vagues séparément en utilisant la méthode des blocs maxima pour estimer les paramètres de la GEV et les niveaux de retour associés à des périodes de retour de 50, 100 et 500 années. Nous passons ensuite à l’application de la théorie bivariée afin d’estimer la dépendance entre les vents et les vagues extrêmes et d’estimer des probabilités jointes de dépassement des niveaux de retour univariés. Nous associons ces probabilités jointes de dépassement à des périodes de retour jointes et nous les comparons aux périodes de retour marginales. / We firstly present in this thesis the permutation Bootstrap method applied to the block maxima (BM) method in extreme value theory. The method is based on BM ranks, whose distribution is presented and simulated. It performs well and leads to a variance reduction in the estimation of the GEV parameters and the extreme quantiles. Secondly, we build upon the heteroscedastic extremes framework of Einmahl et al. (2016), where the observations are assumed independent but not identically distributed and the variation in their tail distributions is modeled by the so-called skedasis function. While the original paper focuses on non-parametric estimation of the skedasis function, we consider here parametric models and prove the consistency and asymptotic normality of the parameter estimators. A parametric test for trend detection in the case where the skedasis function is monotone is introduced. 
A short simulation study shows that the parametric test can be more powerful than the non-parametric Kolmogorov-Smirnov type test, even for misspecified models. We also discuss the choice of threshold based on Lepski's method. The methodology is finally illustrated on a dataset of minimal/maximal daily temperatures in Fort Collins, Colorado, during the 20th century. Thirdly, we have a training sample of daily maximum precipitation over 24 years in 40 stations. We make a spatio-temporal prediction of the quantiles corresponding to extreme monthly precipitation over the next 20 years at every station. We use generalized extreme value models incorporating covariates. After selecting the best model based on the Akaike information criterion and the k-fold cross-validation method, we present the estimated quantiles for the selected models. Finally, we study the wind speed and wave height risks in the Beddawi region in northern Lebanon during the winter season in order to protect the oil rig that will be installed there. We estimate the return levels associated with return periods of 50, 100 and 500 years for each risk separately using univariate extreme value theory. Then, using multivariate extreme value theory, we estimate the dependence between extreme wind speed and wave height as well as joint exceedance probabilities and joint return levels, in order to take into consideration the risk of these two environmental factors simultaneously.
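As a minimal illustration of the block-maxima step used throughout the univariate analyses in this record, the sketch below extracts annual maxima from a synthetic daily series and fits a Gumbel distribution (the GEV with shape ξ = 0) by the method of moments, then reads off a return level. The thesis itself fits the full GEV with likelihood-based methods on real data, so this is only a rough stand-in under invented data.

```python
# Block maxima -> Gumbel moment fit -> return level (illustrative sketch).
import math
import random
import statistics

def block_maxima(series, block_size):
    """Maximum of each complete, non-overlapping block."""
    n = len(series) // block_size
    return [max(series[i * block_size:(i + 1) * block_size]) for i in range(n)]

def gumbel_fit(maxima):
    """Moment estimators: scale = sd*sqrt(6)/pi, loc = mean - gamma*scale."""
    euler_gamma = 0.5772156649015329
    scale = statistics.stdev(maxima) * math.sqrt(6) / math.pi
    loc = statistics.mean(maxima) - euler_gamma * scale
    return loc, scale

def return_level(loc, scale, period):
    """Level exceeded on average once per `period` blocks (Gumbel quantile)."""
    return loc - scale * math.log(-math.log(1 - 1 / period))

random.seed(0)
daily = [random.gauss(0, 1) for _ in range(365 * 30)]   # 30 synthetic "years"
maxima = block_maxima(daily, 365)                       # annual maxima
loc, scale = gumbel_fit(maxima)
level_50 = return_level(loc, scale, 50)                 # 50-year return level
```

The permutation bootstrap of the thesis would resample the block maxima (via their ranks) and refit, giving an uncertainty band around `level_50`.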
263

Applications of the Extremal Functional Bootstrap / Aplicações do Bootstrap Funcional Extremo

Alexander Meinke 13 November 2018 (has links)
The study of conformal symmetry is motivated through an example in statistical mechanics and then rigorously developed in quantum field theories in general spatial dimensions. In particular, primary fields are introduced as the fundamental objects of such theories and then studied in the formalism of radial quantization. The implications of conformal invariance on the functional form of correlation functions are studied in detail. Conformal blocks are defined and various approaches to their analytical and numerical calculation are presented with a special emphasis on the one-dimensional case. Building on these preliminaries, a modern formulation of the conformal bootstrap program and its various extensions are discussed. Examples are given in which bounds on the scaling dimensions in a one-dimensional theory are derived numerically. Using these results I motivate the technique of using the extremal functional bootstrap which I then develop in more detail. Many technical details are discussed and examples shown. After a brief discussion of conformal field theories with a boundary I apply numerical methods to find constraints on the spectrum of the 3D Ising model. Another application is presented in which I study the 4-point function on the boundary of a particular theory in Anti-de-Sitter space in order to approximate the mass spectrum of the theory. / O estudo da simetria conforme é motivado através de um exemplo em mecânica estatística e em seguida rigorosamente desenvolvido em teorias de campos quânticos em dimensões espaciais gerais. Em particular, os campos primários são introduzidos como os objetos fundamentais de tais teorias e então estudados através do formalismo de quantização radial. As implicações da invariância conforme na forma funcional das funções de correlação são estudadas em detalhe. Blocos conformes são definidos e várias abordagens para seu cálculo analítico e numérico são apresentadas com uma ênfase especial no caso unidimensional. 
Com base nessas preliminares, uma formulação moderna do programa de bootstrap conforme e suas várias extensões são discutidas. Exemplos são dados em que limites nas dimensões de escala em uma teoria unidimensional são derivados numericamente. Usando esses resultados, motivo a técnica de usar o bootstrap funcional extremo, que depois desenvolvo em mais detalhes. Diversos detalhes técnicos são discutidos e exemplos são apresentados. Após uma breve discussão das teorias de campo conformes com fronteiras, eu aplico métodos numéricos para encontrar restrições no espectro do modelo de Ising em 3D. Outra aplicação é apresentada em que eu estudo a função de 4 pontos na fronteira de uma teoria particular no espaço Anti-de-Sitter, a fim de aproximar o espectro de massa da teoria.
264

Energia metabolizável de alimentos energéticos para suínos: predição via meta-análise, determinação e validação por simulação bootstrap / Metabolizable energy of energetic food for swine: prediction via meta-analysis, determination and validation by bootstrap simulation

Langer, Carolina Natali 19 July 2013 (has links)
Made available in DSpace on 2017-07-10T17:47:55Z (GMT). No. of bitstreams: 1 Carolina_Natali_Langer.pdf: 3347830 bytes, checksum: e0c654ce879cbbab0be9d34b37c74f3d (MD5) Previous issue date: 2013-07-19 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The objectives proposed in this study were the prediction of the metabolizable energy (ME) of corn, sorghum and wheat bran from the chemical and energy composition of these foods in national and international literature data; the validation of the stepwise procedure for regressor selection by bootstrap simulation; the determination of the ME of these foods for growing pigs; and the subsequent validation of the estimated equations against the ME values observed in the experiment, using the bootstrap resampling procedure. For the prediction of ME as a function of chemical composition, we used data from pig metabolism trials and from the chemical composition of corn, sorghum and wheat bran available in the national and international scientific literature. Five multiple linear regression models were adjusted to estimate the ME. In the validation of the stepwise procedure for regressor selection, the non-parametric bootstrap resampling method was used, with replacement within each sample, from the database formed via meta-analysis. The significance percentage per regressor (SPR) and the joint occurrence percentage of the model regressors (JOPMR) were observed. In the complete model and in the model without the inclusion of digestible energy (DE), the DE and the gross energy (GE) were the regressors with the highest SPR (DE = 100% and GE = 95.7%), respectively, suggesting the importance of these regressors for explaining the ME of energetic foods for pigs. However, the JOPMR values were low, between 2.6 and 23.4%, indicating low reliability of the prediction models for estimating the ME of corn, sorghum and wheat bran for pigs. 
Based on the SPR, the regressors of the models ME4 = 3824.440 - 105.294Ash + 45.008EE - 37.257DA1*CP (R2 = 0.90) and ME5 = 3982.994 - 79.970Ash - 44.778DA1*CP - 43.416DA2*Ash (R2 = 0.92) are valid for estimating the ME of energetic foods for pigs. In the field trial, we used 44 crossbred pigs, male and castrated, with an average initial weight of 24.3 ± 1.12 kg, in a randomized block experimental design, with ten treatments and a reference ration (RR). The ten treatments consisted of six corn and two sorghum cultivars, which replaced 30% of the RR, and two wheat brans, which replaced 20% of the RR. The method of total collection of feces and urine was used to determine the ME of the foods, using ferric oxide as a fecal marker to define the beginning and end of the collection period. The ME values of corn, sorghum and wheat bran for pigs range from 3,161 to 3,275, from 3,317 to 3,457 and from 2,767 to 2,842 kcal kg-1 of natural matter, respectively. The validation of the ME prediction models was performed by fitting first-degree linear regression models of the experimentally observed values as a function of the predicted ME values, calculated by substituting the chemical and energy composition values of the foods, determined in the laboratory, into the models estimated via meta-analysis, using the ordinary least squares method. The validation of the first-degree models and of the ME prediction models was verified by testing the joint null hypothesis for the linear regression parameters (H0: β0 = 0 and β1 = 1). The cross-validation percentage of each estimated model was evaluated with the same validation tests described for the single-trial validation. The model ME1 generated predicted ME values similar (p>0.05) to the experimentally observed ME values for national corn and sorghum cultivars in the single-trial validation and had the highest validation percentage (68%) in 200 bootstrap samples. 
The other models had a low cross-validation percentage (0 to 29.5%); the model validated by both procedures, which can therefore be used for national corn and sorghum, is ME1 = 2.547 + 0.969DE. / Os objetivos propostos neste trabalho foram a predição da energia metabolizável (EM) do milho, sorgo e farelo de trigo a partir da composição química e energética desses alimentos em dados de literatura nacional e internacional; a validação do procedimento stepwise de seleção de regressoras por simulação bootstrap; a determinação da EM desses alimentos para suínos em crescimento e a subsequente validação das equações estimadas nos valores de EM observados no experimento, com utilização do método de reamostragem bootstrap. Para a predição da EM em função de composição química, foram utilizados dados de ensaios de metabolismo de suínos e de composição química do milho, sorgo e farelo de trigo, disponibilizados na literatura científica nacional e internacional. Foram ajustados cinco modelos de regressão linear múltipla para estimar a EM. Na validação do procedimento stepwise de seleção de regressoras, utilizou-se o método de reamostragem bootstrap não paramétrico, com reposição de cada amostra, a partir do banco de dados formado via meta-análise. Foi observado o percentual de significância por regressora (PSR) e o percentual de ocorrência conjunta de regressoras do modelo (POCRM). No modelo completo e no modelo sem inclusão de energia digestível (ED), a ED e a energia bruta (EB) foram as regressoras que apresentaram os maiores PSR (ED = 100% e EB = 95,7%), respectivamente, sugerindo a importância de tais regressoras para explicar a EM de alimentos energéticos para suínos. Entretanto, os POCRM apresentaram-se baixos, com valores entre 2,6 e 23,4%, indicando uma baixa confiabilidade dos modelos preditos para estimar a EM do milho, sorgo e farelo de trigo para suínos. 
Com base no PSR, as regressoras dos modelos EM4 = 3824,44 - 105,29MM + 45,01EE - 37,26DA1*PB (R2 = 0,90) e EM5 = 3982,99 - 79,97MM - 44,78DA1*PB - 43,42DA2*MM (R2 = 0,92) são válidas para estimar a EM de alimentos energéticos para suínos. No experimento de campo, foram utilizados 44 suínos mestiços, machos e castrados, com peso médio inicial de 24,3 ± 1,12 kg, em delineamento experimental de blocos ao acaso, com dez tratamentos e uma ração referência (RR). Os dez tratamentos consistiram em seis cultivares de milho e dois de sorgo, que substituíram em 30% a RR, e dois farelos de trigo, que substituíram em 20% a RR. O método da coleta total de fezes e urina foi utilizado para determinação da EM dos alimentos. Os valores de EM dos milhos, sorgos e farelos de trigo para suínos variam de 3.161 a 3.275, de 3.317 a 3.457 e de 2.767 a 2.842 kcal kg-1 de matéria natural, respectivamente. A validação dos modelos de predição da EM foi realizada por meio do ajuste de modelos de regressão linear de 1º grau dos valores observados determinados em ensaio sobre os valores preditos de EM, calculados por substituição dos valores de composição química e energética dos alimentos, determinados em laboratório, nos modelos estimados via meta-análise, utilizando-se do método dos mínimos quadrados ordinários. A validação dos modelos de 1º grau e dos modelos de predição da EM foi verificada por meio de teste da hipótese de nulidade conjunta para os parâmetros da regressão linear (H0: β0 = 0 e β1 = 1). O percentual de validação cruzada de cada modelo estimado foi avaliado por meio dos mesmos testes de validação descritos no teste único de validação. O modelo EM1 gerou valores de EM predita semelhantes (p>0,05) aos valores de EM observados em experimento para milhos e sorgos nacionais em teste único de validação e apresentou o maior percentual de validação (68%) em 200 amostras bootstrap. 
Os demais modelos tiveram baixo percentual de validação cruzada (0 a 29,5%) e o modelo validado por ambos os procedimentos, e que pode ser utilizado para o milho e sorgo nacionais, é o EM1 = 2,547 + 0,969ED.
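The validation step this record describes, regressing observed ME on predicted ME and checking H0: β0 = 0 and β1 = 1, can be sketched as follows. This is an illustrative reconstruction with synthetic (predicted, observed) pairs, not the thesis dataset; the bootstrap resamples pairs with replacement and reads a percentile interval off the refitted slopes.

```python
# OLS of observed on predicted ME, plus a pairs bootstrap for the slope.
import random

def ols(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

def bootstrap_slopes(x, y, n_boot=500, seed=1):
    """Slopes refitted on (x, y) pairs resampled with replacement."""
    rng = random.Random(seed)
    n = len(x)
    slopes = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        slopes.append(ols([x[i] for i in idx], [y[i] for i in idx])[1])
    return sorted(slopes)

# Synthetic ME pairs (kcal/kg); in the thesis these come from the metabolism
# trial (observed) and the meta-analysis models (predicted).
rng = random.Random(0)
predicted = [3000.0 + 40.0 * i for i in range(20)]
observed = [p + rng.gauss(0.0, 30.0) for p in predicted]
b0, b1 = ols(predicted, observed)
slopes = bootstrap_slopes(predicted, observed)
ci = (slopes[12], slopes[487])   # ~95% percentile interval for the slope
```

A model passes this check when the interval for the slope covers 1 (and the intercept interval covers 0), which mirrors the joint test H0: β0 = 0 and β1 = 1 used in the thesis.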
265

Modelos estocásticos com heterocedasticidade para séries temporais em finanças / Stochastic models with heteroscedasticity for time series in finance

Sandra Cristina de Oliveira 20 May 2005 (has links)
Neste trabalho desenvolvemos um estudo sobre modelos auto-regressivos com heterocedasticidade (ARCH) e modelos auto-regressivos com erros ARCH (AR-ARCH). Apresentamos os procedimentos para a estimação dos modelos e para a seleção da ordem dos mesmos. As estimativas dos parâmetros dos modelos são obtidas utilizando duas técnicas distintas: a inferência Clássica e a inferência Bayesiana. Na abordagem de Máxima Verossimilhança obtivemos intervalos de confiança usando a técnica Bootstrap e, na abordagem Bayesiana, adotamos uma distribuição a priori informativa e uma distribuição a priori não-informativa, considerando uma reparametrização dos modelos para mapear o espaço dos parâmetros no espaço real. Este procedimento nos permite adotar distribuição a priori normal para os parâmetros transformados. As distribuições a posteriori são obtidas através dos métodos de simulação de Monte Carlo em Cadeias de Markov (MCMC). A metodologia é exemplificada considerando séries simuladas e séries do mercado financeiro brasileiro. / In this work we present a study of autoregressive conditional heteroskedasticity (ARCH) models and autoregressive models with ARCH errors (AR-ARCH). We also present procedures for the estimation and order selection of these models. The parameter estimates are obtained using both Maximum Likelihood estimation and Bayesian estimation. In the Maximum Likelihood approach we obtain confidence intervals using the Bootstrap resampling method, and in the Bayesian approach we present informative and non-informative prior distributions, considering a reparametrization of the models in order to map the parameter space into the real space. This procedure allows us to adopt normal prior distributions for the transformed parameters. The posterior distributions are obtained using Markov Chain Monte Carlo (MCMC) methods. The methodology is exemplified considering simulated series and Brazilian financial market series.
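A crude sketch of the Bootstrap confidence-interval idea for an ARCH(1) model follows. The least-squares fit of e_t² on e_{t-1}² and the pairs bootstrap below are rough stand-ins for the maximum-likelihood estimation and interval construction the thesis uses; the parameter values and sample size are invented.

```python
# ARCH(1) simulation, a least-squares proxy estimate of a1, and a pairs
# bootstrap percentile interval (ignores serial dependence; crude on purpose).
import random

def simulate_arch1(a0, a1, n, seed=0):
    """e_t = sigma_t * z_t with sigma_t^2 = a0 + a1 * e_{t-1}^2, z_t ~ N(0,1)."""
    rng = random.Random(seed)
    series, prev = [], 0.0
    for _ in range(n):
        sigma = (a0 + a1 * prev ** 2) ** 0.5
        prev = sigma * rng.gauss(0.0, 1.0)
        series.append(prev)
    return series

def slope(x, y):
    """Least-squares slope of y on x; with x = e_{t-1}^2, y = e_t^2 it targets a1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def bootstrap_ci_a1(series, n_boot=300, seed=1):
    """Percentile interval for a1 from resampled (e_{t-1}^2, e_t^2) pairs."""
    x = [v ** 2 for v in series[:-1]]
    y = [v ** 2 for v in series[1:]]
    rng = random.Random(seed)
    n = len(x)
    draws = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        draws.append(slope([x[i] for i in idx], [y[i] for i in idx]))
    draws.sort()
    return draws[int(0.025 * n_boot)], draws[int(0.975 * n_boot)]

series = simulate_arch1(a0=0.5, a1=0.3, n=4000, seed=42)
a1_hat = slope([v ** 2 for v in series[:-1]], [v ** 2 for v in series[1:]])
lo, hi = bootstrap_ci_a1(series)
```

The regression of squared values works because E[e_t² | e_{t-1}] = a0 + a1·e_{t-1}²; a likelihood-based fit, as in the thesis, would be more efficient.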
266

Planejamento energético da operação de médio prazo conjugando as técnicas de PDDE, PAR(p) e Bootstrap / Medium-term energy operation planning combining the SDDP, PAR(p) and Bootstrap techniques

Castro, Cristina Márcia Barros de 27 December 2012 (has links)
Submitted by Renata Lopes (renatasil82@gmail.com) on 2016-06-22T12:09:45Z; approved for entry into archive by Adriana Oliveira (adriana.oliveira@ufjf.edu.br) on 2016-07-13T15:29:14Z; made available in DSpace on 2016-07-13T15:29:14Z (GMT). No. of bitstreams: 1 cristinamarciabarrosdecastro.pdf: 9219339 bytes, checksum: 92fbbaf80500b5c629a4e62bcd9aa49d (MD5) Previous issue date: 2012-12-27 / CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Com o objetivo de atendimento à demanda de energia elétrica, buscando um baixo custo na geração de energia, é imprescindível o desenvolvimento do planejamento da operação do setor elétrico brasileiro. O planejamento da operação no horizonte de médio prazo leva em consideração a alta estocasticidade das afluências e é avaliado através da série histórica de Energia Natural Afluente (ENA). No modelo homologado pelo setor, o estudo da ENA tem sido feito por meio da metodologia Box e Jenkins, para determinar os modelos autorregressivos periódicos (PAR(p)), bem como sua ordem p. Aos resíduos gerados na modelagem do PAR(p) é aplicada uma distribuição lognormal de três parâmetros, como forma de gerar séries sintéticas hidrológicas semelhantes à série histórica original. Contudo, a transformação lognormal incorpora não linearidades que afetam o processo de convergência da Programação Dinâmica Dual Estocástica (PDDE). Este trabalho incorpora a técnica de bootstrap para a geração de cenários sintéticos que servirão de base para a aplicação da PDDE. A técnica estatística Bootstrap é um método alternativo a ser empregado no problema de planejamento e que permite tanto determinar a ordem (p) do modelo PAR(p), quanto gerar novas séries sintéticas hidrológicas. 
Assim, o objetivo do trabalho é analisar os impactos existentes com o uso do Bootstrap no planejamento da operação dos sistemas hidrotérmicos e, em seguida, estabelecer uma comparação com a metodologia que tem sido aplicada no setor. Diante dos resultados foi possível concluir que a técnica bootstrap permite a obtenção de séries hidrológicas bem ajustadas e gera resultados confiáveis quanto ao planejamento da operação de sistemas hidrotérmicos, podendo ser usada como uma técnica alternativa ao problema em questão. / Aiming to match the long-term load demand with a low cost in power generation, it is very important to continually improve the operation planning of the Brazilian electric sector. The medium/long-term operation planning takes into account the water inflows, which are strongly stochastic, and must be evaluated using the series of Natural Energy Inflows (NEI). In the current computational model applied to the Brazilian medium/long-term operation planning, the study of the NEI has been done with the Box and Jenkins methodology, which determines the periodic autoregressive model (PAR(p)) as well as its order p. A lognormal distribution with three parameters is applied to the residues created by the PAR(p) model, as a way to generate synthetic hydrologic series similar to the original series. However, this lognormal transformation brings nonlinearities which can disturb the stability and convergence of Stochastic Dual Dynamic Programming (SDDP). This thesis incorporates the bootstrap technique to create synthetic scenarios which are taken as a basis for the SDDP implementation. This statistical technique, called bootstrap, is an alternative method used both to determine the order (p) of the PAR(p) model and to produce synthetic hydrological series. Thus, the objective of this thesis is to analyze the impact of the Bootstrap technique compared to the current methodology. 
The results showed that the bootstrap technique is suitable for obtaining well-fitted hydrological series and for creating reliable scenarios for the operation planning of hydrothermal systems. This new methodology can therefore be used as an alternative technique for long-term hydrothermal planning problems.
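The combination of a periodic autoregressive model with bootstrap scenario generation can be sketched in a simplified form: a PAR(1) with per-month standardization, whose standardized residuals are resampled with replacement to build synthetic series. Real ENA records and the order-selection step are replaced here by invented seasonal data and a fixed order p = 1, so this is a sketch of the idea, not the thesis procedure.

```python
# Simplified PAR(1) fit + residual bootstrap for synthetic inflow scenarios.
import random
import statistics

def fit_par1(series, period=12):
    """Per-month mean/std and one lag-1 coefficient per month (standardized)."""
    means = [statistics.mean(series[m::period]) for m in range(period)]
    stds = [statistics.stdev(series[m::period]) for m in range(period)]
    z = [(v - means[t % period]) / stds[t % period] for t, v in enumerate(series)]
    phi, residuals = [], []
    for m in range(period):
        pairs = [(z[t - 1], z[t]) for t in range(1, len(z)) if t % period == m]
        num = sum(a * b for a, b in pairs)
        den = sum(a * a for a, _ in pairs)
        phi.append(num / den)
        residuals.extend(b - phi[m] * a for a, b in pairs)
    return means, stds, phi, residuals

def synthetic_series(model, n_years, seed=0, period=12):
    """Generate a scenario by resampling the fitted residuals with replacement."""
    means, stds, phi, residuals = model
    rng = random.Random(seed)
    z_prev, out = 0.0, []
    for t in range(n_years * period):
        m = t % period
        z = phi[m] * z_prev + rng.choice(residuals)   # bootstrap innovation
        out.append(means[m] + stds[m] * z)
        z_prev = z
    return out

# Invented seasonal history: 30 "years" with a wet half and a dry half.
random.seed(42)
hist = [100 + 30 * (m % 12 < 6) + random.gauss(0, 5) for m in range(12 * 30)]
model = fit_par1(hist)
scenario = synthetic_series(model, n_years=10, seed=7)
```

Because the innovations are drawn from the empirical residuals rather than a fitted lognormal, the scenario generator avoids the distributional transformation that the thesis identifies as a source of nonlinearity for the SDDP step.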
267

Approche pour la construction de modèles d'estimation réaliste de l'effort/coût de projet dans un environnement incertain : application au domaine du développement logiciel / Approach to build realistic models for estimating project effort/cost in an uncertain environment : application to the software development field

Laqrichi, Safae 17 December 2015 (has links)
L'estimation de l'effort de développement logiciel est l'une des tâches les plus importantes dans le management de projets logiciels. Elle constitue la base pour la planification, le contrôle et la prise de décision. La réalisation d'estimations fiables en phase amont des projets est une activité complexe et difficile du fait, entre autres, d'un manque d'informations sur le projet et son avenir, de changements rapides dans les méthodes et technologies liées au domaine logiciel et d'un manque d'expérience avec des projets similaires. De nombreux modèles d'estimation existent, mais il est difficile d'identifier un modèle performant pour tous les types de projets et applicable à toutes les entreprises (différents niveaux d'expérience, technologies maitrisées et pratiques de management de projet). Globalement, l'ensemble de ces modèles formule l'hypothèse forte que (1) les données collectées sont complètes et suffisantes, (2) les lois reliant les paramètres caractérisant les projets sont parfaitement identifiables et (3) que les informations sur le nouveau projet sont certaines et déterministes. Or, dans la réalité du terrain cela est difficile à assurer. Deux problématiques émergent alors de ces constats : comment sélectionner un modèle d'estimation pour une entreprise spécifique ? et comment conduire une estimation pour un nouveau projet présentant des incertitudes ? Les travaux de cette thèse s'intéressent à répondre à ces questions en proposant une approche générale d'estimation. Cette approche couvre deux phases : une phase de construction du système d'estimation et une phase d'utilisation du système pour l'estimation de nouveaux projets. 
La phase de construction du système d'estimation est composée de trois processus : 1) évaluation et comparaison fiable de différents modèles d'estimation, et sélection du modèle d'estimation le plus adéquat, 2) construction d'un système d'estimation réaliste à partir du modèle d'estimation sélectionné et 3) utilisation du système d'estimation dans l'estimation d'effort de nouveaux projets caractérisés par des incertitudes. Cette approche intervient comme un outil d'aide à la décision pour les chefs de projets dans l'aide à l'estimation réaliste de l'effort, des coûts et des délais de leurs projets logiciels. L'implémentation de l'ensemble des processus et pratiques développés dans le cadre de ces travaux a donné naissance à un prototype informatique open-source. Les résultats de cette thèse s'inscrivent dans le cadre du projet ProjEstimate FUI13. / Software effort estimation is one of the most important tasks in the management of software projects. It is the basis for planning, control and decision making. Achieving reliable estimates in the upstream phases of projects is a complex and difficult activity because, among other reasons, of the lack of information about the project and its future, the rapid changes in the methods and technologies related to the software field, and the lack of experience with similar projects. Many estimation models exist, but it is difficult to identify a model that performs well for all types of projects and is applicable to all companies (different levels of experience, mastered technologies and project management practices). Overall, all of these models make the strong assumption that (1) the collected data are complete and sufficient, (2) the laws linking the parameters characterizing the projects are fully identifiable, and (3) the information on the new project is certain and deterministic. 
However, in reality this is difficult to ensure. Two problems then emerge from these observations: how to select an estimation model for a specific company? And how to conduct an estimate for a new project that presents uncertainties? The work of this thesis addresses these questions by proposing a general estimation framework. This framework covers two phases: a construction phase of the estimation system and a usage phase for estimating new projects. The construction phase of the estimation system consists of three processes: 1) evaluation and reliable comparison of different estimation models, then selection of the most suitable one, 2) construction of a realistic estimation system from the selected estimation model, and 3) use of the estimation system to estimate the effort of new projects characterized by uncertainties. This framework acts as a decision-support tool for project managers, supporting realistic estimates of the effort, cost and time of their software projects. The implementation of all the processes and practices developed as part of this work has given rise to an open-source software prototype. The results of this thesis fall within the context of the ProjEstimate FUI13 project.
268

Des tests non paramétriques en régression / Of nonparametric testing in regression

Maistre, Samuel 12 September 2014 (has links)
Dans cette thèse, nous étudions des tests du type : (H0) : E [U | X] = 0 p.s. contre (H1) : P {E [U | X] = 0} < 1 où U est le résidu de la modélisation d'une variable Y en fonction de X. Dans ce cadre et pour plusieurs cas particuliers – significativité de variables, régression quantile, données fonctionnelles, modèle single-index –, nous proposons une statistique de test permettant d'obtenir des valeurs critiques issues d'une loi asymptotique pivotale. Dans chaque cas, nous donnons également une méthode de bootstrap appropriée pour les échantillons de petite taille. Nous montrons la consistance envers des alternatives locales – ou à la Pitman – des tests proposés, lorsque ce type d'alternative ne tend pas trop vite vers l'hypothèse nulle. À chaque fois, nous vérifions à partir de simulations sous l'hypothèse nulle et sous une séquence d'hypothèses alternatives que les résultats théoriques sont en accord avec la pratique. / In this thesis, we study tests of the form: (H0): E[U | X] = 0 a.s. against (H1): P{E[U | X] = 0} < 1, where U is the residual of some modeling of Y with respect to covariates X. In this setup and for several particular cases – significance, quantile regression, functional data, single-index model –, we introduce test statistics that have pivotal asymptotic critical values. For each case, we also give a suitable bootstrap procedure for small samples. We prove the consistency against local – or Pitman – alternatives for the proposed test statistics, when such an alternative does not get close to the null hypothesis too fast. Simulation studies are used to check the effectiveness of the theoretical results in applications.
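One common way to obtain bootstrap critical values for a test of E[U | X] = 0 is the wild bootstrap; the sketch below uses a Gaussian-kernel V-type statistic in the residuals with Rademacher weights. The kernel, bandwidth and sample sizes are illustrative choices, not necessarily those of the thesis.

```python
# Kernel statistic for E[U|X] = 0 with wild-bootstrap critical values.
import math
import random

def kernel(u, v, h=0.5):
    """Gaussian kernel; the bandwidth h is an arbitrary illustrative choice."""
    return math.exp(-0.5 * ((u - v) / h) ** 2)

def statistic(x, u):
    """Degenerate V-type statistic: sum_{i != j} u_i u_j K(x_i, x_j) / n."""
    n = len(x)
    return sum(u[i] * u[j] * kernel(x[i], x[j])
               for i in range(n) for j in range(n) if i != j) / n

def wild_bootstrap_pvalue(x, u, n_boot=99, seed=3):
    """Recompute the statistic with residuals multiplied by Rademacher signs."""
    rng = random.Random(seed)
    t_obs = statistic(x, u)
    count = sum(
        statistic(x, [rng.choice((-1.0, 1.0)) * ui for ui in u]) >= t_obs
        for _ in range(n_boot))
    return (count + 1) / (n_boot + 1)

rng = random.Random(0)
xs = [rng.uniform(-1.0, 1.0) for _ in range(80)]
noise = [rng.gauss(0.0, 1.0) for _ in xs]                    # H0 holds
signal = [3 * x * x - 1 + rng.gauss(0.0, 0.2) for x in xs]   # E[U|X] != 0
p_noise = wild_bootstrap_pvalue(xs, noise)
p_signal = wild_bootstrap_pvalue(xs, signal)
```

The Rademacher weights preserve each |u_i| while destroying the systematic alignment between residuals and covariates, which is what makes the resampled statistics behave like draws under the null.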
269

Optimalizace skladových zásob ve společnosti NET4GAS, s.r.o. / Optimization of inventory in NET4GAS, s.r.o.

Hynoušová, Zuzana January 2012 (has links)
This thesis deals with the optimization of the inventory of spare parts and maintenance materials at NET4GAS, s.r.o. Its aim is to classify the items stored by the company and to propose a specific supply methodology for the year 2013. The thesis is divided into two parts. The first, theoretical part covers inventory management and the methods used in the managing process, and introduces the specifics of managing spare parts and maintenance materials. The second, practical part describes NET4GAS, s.r.o. and its current system for managing the inventory of spare parts and maintenance materials, identifies the current problems in that system, proposes a selection of appropriate inventory-optimization methods, and demonstrates their application to real data. The ABC method is selected for classifying the stored items. The supply plan is drawn up primarily with the bootstrap method (bootstrapping), which estimates the future consumption of spare parts and maintenance materials. The final section summarizes all recommendations for improving the current inventory management.
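The bootstrap approach to consumption forecasting can be sketched as follows. This is a hedged illustration, not the thesis's methodology: the monthly consumption figures for a single spare-part item are invented (not NET4GAS data), and the 95% service-level quantile is one arbitrary planning choice.

```python
import random

history = [0, 2, 1, 0, 3, 0, 1, 0, 0, 2, 4, 1]  # monthly consumption of one item

def bootstrap_annual_demand(monthly, n_boot=5000, seed=42):
    """Resample 12 months with replacement to simulate one year, repeatedly."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_boot):
        totals.append(sum(rng.choice(monthly) for _ in range(12)))
    return sorted(totals)

totals = bootstrap_annual_demand(history)
mean = sum(totals) / len(totals)
p95 = totals[int(0.95 * len(totals))]  # stock covering ~95% of simulated years
print(round(mean, 1), p95)
```

Bootstrapping is attractive here because spare-parts demand is intermittent (many zero months), so fitting a standard demand distribution is unreliable; resampling the observed history makes no distributional assumption.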
270

Estimation de paramètres et planification d’expériences adaptée aux problèmes de cinétique - Application à la dépollution des fumées en sortie des moteurs / Parameter estimation and design of experiments adapted to kinetics problems - Application for depollution of exhaust smoke from the output of engines

Canaud, Matthieu 14 September 2011 (has links)
Physico-chemical models designed to represent experimental reality may prove inadequate. This is the case for the nitrogen oxide trap, the application support of this thesis: a catalytic system treating the polluting emissions of diesel engines. The outputs are curves of pollutant concentrations, which are functional data depending on scalar initial concentrations. The initial objective of this thesis is to propose experimental designs that are meaningful to the user. However, since experimental designs rely on models, most of the work has led us to propose a statistical representation that takes expert knowledge into account and allows such a design to be built. Three lines of research were explored. We first considered a non-functional modeling using kriging theory. Then we took into account the functional dimension of the responses, applying and extending varying-coefficient models. Finally, starting again from the original model, we made the kinetic parameters depend on the (scalar) inputs using a nonparametric representation. To compare the methods, it was necessary to conduct an experimental campaign, and we propose an exploratory design approach based on maximum entropy.
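A maximum-entropy exploratory design for a kriging (Gaussian-process) model can be sketched as follows. This is a hedged illustration, not the thesis's procedure: under a Gaussian prior the entropy of the observations grows with log det K, so a greedy heuristic adds, at each step, the candidate point that most increases the determinant of the kernel covariance. The 1-D candidate grid, Gaussian kernel, and length scale are illustrative assumptions.

```python
import numpy as np

def kernel(a, b, ell=0.3):
    """Gaussian covariance kernel on 1-D inputs."""
    d = np.abs(a[:, None] - b[None, :])
    return np.exp(-0.5 * (d / ell) ** 2)

def max_entropy_design(candidates, n_points):
    chosen = [candidates[0]]  # arbitrary starting point
    while len(chosen) < n_points:
        best, best_logdet = None, -np.inf
        for c in candidates:
            if c in chosen:
                continue
            pts = np.array(chosen + [c])
            # jitter keeps the covariance numerically positive definite
            _, logdet = np.linalg.slogdet(kernel(pts, pts)
                                          + 1e-10 * np.eye(len(pts)))
            if logdet > best_logdet:
                best, best_logdet = c, logdet
        chosen.append(best)
    return sorted(chosen)

grid = list(np.linspace(0.0, 1.0, 21))
design = max_entropy_design(grid, 5)
print(design)
```

Because log det K penalizes correlated (i.e. nearby) observations, the selected points spread out across the input range, which is exactly the space-filling behavior wanted from an exploratory design.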
