21 |
Modelo linear parcial generalizado simétrico / Symmetric Generalized Partial Linear Model. Vasconcelos, Julio Cezar Souza, 06 February 2017
Neste trabalho foi proposto o modelo linear parcial generalizado simétrico, com base nos modelos lineares parciais generalizados e nos modelos lineares simétricos, em que a variável resposta segue uma distribuição que pertence à família de distribuições simétricas, considerando um preditor linear que possui uma parte paramétrica e uma não paramétrica. Algumas distribuições que pertencem a essa classe são as distribuições: Normal, t-Student, Exponencial potência, Slash e Hiperbólica, dentre outras. Uma breve revisão dos conceitos utilizados ao longo do trabalho foram apresentados, a saber: análise residual, influência local, parâmetro de suavização, spline, spline cúbico, spline cúbico natural e algoritmo backfitting, dentre outros. Além disso, é apresentada uma breve teoria dos modelos GAMLSS (modelos aditivos generalizados para posição, escala e forma). Os modelos foram ajustados utilizando o pacote gamlss disponível no software livre R. A seleção de modelos foi baseada no critério de Akaike (AIC). Finalmente, uma aplicação é apresentada com base em um conjunto de dados reais da área financeira do Chile. / In this work we propose the symmetric generalized partial linear model, based on generalized partial linear models and symmetric linear models, in which the response variable follows a distribution belonging to the symmetric family and the linear predictor has a parametric and a non-parametric component. Some distributions in this class are the Normal, Student-t, Power Exponential, Slash and Hyperbolic, among others. A brief review of the concepts used throughout the work is presented, namely: residual analysis, local influence, the smoothing parameter, splines, cubic splines, natural cubic splines and the backfitting algorithm, among others. In addition, a brief account of GAMLSS (generalized additive models for location, scale and shape) is given. The models were fitted using the gamlss package available in the free R software, and model selection was based on the Akaike information criterion (AIC). Finally, an application is presented based on a set of real data from Chile's financial sector.
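To make the partial linear predictor and the backfitting step mentioned above concrete, here is a minimal Python sketch (the thesis itself uses the gamlss package in R, not this code); the simulated data, smoothing parameter and variable names are illustrative assumptions.

```python
# Hedged sketch: backfitting for a partial linear model y = X*beta + f(t) + error,
# with f estimated by a cubic smoothing spline. Illustration only, not the gamlss fit.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 2))                  # parametric covariates (assumed)
t = np.sort(rng.uniform(0, 10, size=n))      # covariate entering nonparametrically
beta_true = np.array([1.5, -0.8])
y = X @ beta_true + np.sin(t) + rng.normal(scale=0.3, size=n)

beta = np.zeros(2)
f_hat = np.zeros(n)
for _ in range(50):                          # backfitting iterations
    # 1) regress the partial residual on X by ordinary least squares
    r1 = y - f_hat
    beta, *_ = np.linalg.lstsq(X, r1, rcond=None)
    # 2) smooth the remaining partial residual against t with a cubic spline
    r2 = y - X @ beta
    spline = UnivariateSpline(t, r2, k=3, s=n * 0.1)   # smoothing level is an assumed value
    f_new = spline(t)
    if np.max(np.abs(f_new - f_hat)) < 1e-6:
        break
    f_hat = f_new

print("estimated beta:", beta)
```

In the thesis setting, competing symmetric response distributions would then be compared by refitting the model under each one and choosing the fit with the smallest AIC.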
|
22 |
Modelos mistos aditivos semiparamétricos de contornos elípticos / Elliptical contoured semiparametric additive mixed models. Pulgar, Germán Mauricio Ibacache, 14 August 2009
Neste trabalho estendemos os modelos mistos semiparamétricos propostos por Zhang et al. (1998) para uma classe mais geral de modelos, a qual denominamos modelos mistos aditivos semiparamétricos com erros de contornos elípticos. Com essa nova abordagem, flexibilizamos a curtose da distribuição dos erros possibilitando a escolha de distribuições com caudas mais leves ou mais pesadas do que as caudas da distribuição normal padrão. Funções de verossimilhança penalizadas são aplicadas para a obtenção das estimativas de máxima verossimilhança com os respectivos erros padrão aproximados. Essas estimativas, sob erros de caudas pesadas, são robustas no sentido da distância de Mahalanobis contra observações aberrantes. Curvaturas de influência local são obtidas segundo alguns esquemas de perturbação e gráficos de diagnóstico são propostos. Exemplos ilustrativos são apresentados em que ajustes sob erros normais são comparados, através das metodologias de sensibilidade desenvolvidas no trabalho, com ajustes sob erros de contornos elípticos. / In this work we extend the models proposed by Zhang et al. (1998) to a more general class of models, known as semiparametric additive mixed models with elliptical errors, in order to allow distributions with heavier or lighter tails than the normal ones. Penalized likelihood equations are applied to derive the maximum likelihood estimates, which appear to be robust against outlying observations in the sense of the Mahalanobis distance. In order to study the sensitivity of the penalized estimates under some usual perturbation schemes in the model or data, the local influence curvatures are derived and some diagnostic graphics are proposed. Motivating examples preliminarily analyzed under normal errors are reanalyzed under some appropriate elliptical errors, and the local influence approach is used to compare the sensitivity of the model estimates.
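As a rough illustration of penalized likelihood estimation under heavy-tailed elliptical errors, the following Python sketch fits a smooth curve by maximizing a Student-t log-likelihood with a second-difference roughness penalty. It is not the estimator developed in the thesis; the degrees of freedom, penalty weight and data are assumptions.

```python
# Hedged sketch: penalized maximum likelihood for a nonparametric curve with
# Student-t (heavy-tailed) errors. Illustration only, not the thesis estimator.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 80)
y = np.cos(2 * np.pi * x) + student_t.rvs(df=3, scale=0.2, size=x.size, random_state=1)

D2 = np.diff(np.eye(x.size), n=2, axis=0)    # second-difference penalty matrix
lam, nu = 5.0, 3.0                           # penalty weight and t degrees of freedom (assumed)

def penalized_negloglik(f):
    loglik = student_t.logpdf(y - f, df=nu, scale=0.2).sum()
    penalty = lam * np.sum((D2 @ f) ** 2)
    return -(loglik - penalty)

res = minimize(penalized_negloglik, x0=np.zeros(x.size), method="L-BFGS-B")
f_hat = res.x
print("converged:", res.success)
```

Because the t log-likelihood grows slowly for large residuals, outlying points pull the fitted curve less than they would under Gaussian errors, which is the robustness property (in the Mahalanobis-distance sense) that the abstract refers to.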
|
24 |
Identificação de uma coluna de destilação de metanol-água através de modelos paramétricos e redes neurais artificiais / Identification of a methanol-water distillation column through parametric models and artificial neural networks. Teixeira, Alex Fernandes Rocha, 04 October 2011
This work presents a black-box identification of a continuous methanol-water distillation column in open-loop and closed-loop configurations. Step changes and a Pseudo-Random Binary Signal (PRBS) were used as disturbances to excite the plant. The candidate mathematical models for the identification were Artificial Neural Networks (ANN) and the parametric models ARX (AutoRegressive with eXogenous inputs), ARMAX (AutoRegressive Moving Average with eXogenous inputs), OE (Output Error) and the Box-Jenkins (BJ) structure. The closed-loop configuration was the R-V scheme. The results showed that for the bottom loop the best responses were given by BJ, OE and ANN in both open-loop and closed-loop conditions. For the top loop in closed loop, the best responses were also given by BJ, OE and ANN, while in the open-loop condition the ANN was the one that gave a satisfactory outcome. It was verified that the pseudo-random binary signal is a good choice of excitation signal for the identification of both open-loop and closed-loop dynamic systems. / Foi realizado neste trabalho identificação caixa preta do processo de destilação Metanol-Água nas configurações malha aberta e malha fechada, utilizando como sinais de perturbação a função degrau e o Sinal Binário Pseudo-Aleatório (PRBS) para excitar a planta. Os modelos matemáticos candidatos a identificação foram as Redes Neurais Artificiais (RNA), e os modelos paramétricos discretos lineares autorregressivo com entradas externas (ARX do inglês AutoRegressive with eXogenous Inputs), autorregressivo com média móvel e entradas exógenas (ARMAX do inglês AutoRegressive Moving Average with eXogenous Inputs), modelo do tipo erro na saída (OE do inglês Output Error) e a estrutura Box-Jenkins (BJ). Com a disposição dos modelos, foram comparados quais dos modelos matemáticos candidatos à identificação melhor representa o processo coluna de destilação metanol-água. Comparou-se qual configuração do processo no ensaio de identificação para geração de dados apresenta mais vantagens, se em malha aberta ou em malha fechada, nas condições e metodologias utilizadas. Constatou-se a funcionalidade do sinal binário pseudo-aleatório como uma boa opção de excitação na identificação em malha aberta e fechada para sistemas dinâmicos.
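A minimal sketch of the identification workflow described above: excite a toy process with a pseudo-random binary input and estimate an ARX model by least squares. The simulated process, model orders and signal lengths are assumptions, not the column studied in the thesis.

```python
# Hedged sketch: PRBS-like excitation and least-squares ARX identification for a
# simple single-input single-output process. Illustration only.
import numpy as np

rng = np.random.default_rng(2)
N = 500
u = np.where(rng.random(N) > 0.5, 1.0, -1.0)   # crude pseudo-random binary input (not a true maximum-length PRBS)
# assumed "true" process: y[k] = 1.5 y[k-1] - 0.7 y[k-2] + 0.5 u[k-1] + noise
y = np.zeros(N)
for k in range(2, N):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.05 * rng.normal()

# ARX(2, 1) regression: y[k] = -a1 y[k-1] - a2 y[k-2] + b1 u[k-1]
Phi = np.column_stack([-y[1:N-1], -y[0:N-2], u[1:N-1]])
Y = y[2:N]
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print("estimated [a1, a2, b1]:", theta)        # should be close to [-1.5, 0.7, 0.5]
```

ARMAX, OE and BJ structures differ only in how the noise model is parameterized, which is why they are compared against the same excitation data in the thesis.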
|
26 |
Determining multimedia streaming content / Bestämning av innehåll på multimedia-strömmar. Tano, Richard, January 2011
This Master Thesis report was written by Umeå University Engineering Physics student Richard Tano during his thesis work at Ericsson Luleå. Monitoring network quality is of utmost importance to network providers. This can be done with models evaluating QoS (Quality of Service) and conforming to ITU-T Recommendations. When determining video stream quality it is more meaningful to evaluate the QoE (Quality of Experience), to understand how the user perceives the quality. This is ranked in MOS (Mean Opinion Score) values. An important aspect of determining the QoE is the video content type, which is correlated with the coding complexity and MOS values of the video. In this work the possibilities to improve quality estimation models complying with ITU-T Study Group 12 (q.14) were investigated. Methods were evaluated and an algorithm was developed that applies time series analysis of packet statistics to determine a video stream's MOS score. Methods used in the algorithm include a novel combination of frequent-pattern analysis and regression analysis. A model that incorporates the algorithm for usage from low to high bitrates was defined. The new model resulted in around 20% improved precision in MOS score estimation compared to the existing reference model. Furthermore, an algorithm using only regression statistics and modeling of related statistical parameters was developed. Its improvements in coding estimation were comparable with the earlier algorithm, but its efficiency increased considerably. / Detta examensarbete skrevs av Richard Tano student på Umeå universitet åt Ericsson Luleå. Övervakning av nätets prestanda är av yttersta vikt för nätverksleverantörer. Detta görs med modeller för att utvärdera QoS (Quality of Service) som överensstämmer med ITU-T rekommendationer. Vid bestämning av kvaliten på videoströmmar är det mer meningsfullt att utvärdera QoE (Quality of Experience) för att få insikt i hur användaren uppfattar kvaliten. Detta graderas i värden av MOS (Mean opinion score). En viktig aspekt för att bestämma QoE är typen av videoinnehåll, vilket är korrelerat till videons kodningskomplexitet och MOS värden. I detta arbete undersöktes möjligheterna att förbättra kvalitetsuppskattningsmodellerna under uppfyllande av ITU-T studygroup 12 (q.14). Metoder undersöktes och en algoritm utvecklades som använder tidsserieanalys av paketstatistik för uppskattning av videoströmmars MOS-värden. Metoder som ingår i algoritmen är en nyutvecklad frekventa mönster metod tillsammans med regressionsanalys. En modell som använder algoritmen från låg till hög bithastighet definierades. Den nya modellen gav omkring 20% förbättrad precision i uppskattning av MOS-värden jämfört med existerande referensmodell. Även en algoritm som enbart använder regressionsstatistik och modellerande av statistiska parametrar utvecklades. Denna algoritm levererade jämförbara resultat med föregående algoritm men gav även kraftigt förbättrad effektivitet.
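As a schematic illustration of the kind of model described above, the sketch below regresses a synthetic MOS target on per-window throughput statistics and reports the R2 statistic. The data-generating mechanism is invented for illustration and does not reproduce the thesis algorithm or any ITU-T model.

```python
# Hedged sketch: estimating a MOS-like score from simple throughput statistics
# with ordinary regression and the R^2 statistic. Illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
throughput = rng.uniform(0.3, 8.0, size=(n, 8))   # bitrate statistics in 8 short windows (assumed)
complexity = rng.uniform(0.5, 1.5, size=n)        # hidden content-complexity factor (assumed)
mos = np.clip(1.0 + 1.2 * np.log1p(throughput.mean(axis=1) / complexity)
              + rng.normal(scale=0.2, size=n), 1.0, 5.0)

X_train, X_test, y_train, y_test = train_test_split(throughput, mos, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("test R^2:", r2_score(y_test, model.predict(X_test)))
```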
|
27 |
Predictive models for side effects following radiotherapy for prostate cancer / Modèles prédictifs pour les effets secondaires du traitement du cancer de la prostate par radiothérapie. Ospina Arango, Juan David, 16 June 2014
La radiothérapie externe (EBRT en anglais pour External Beam Radiotherapy) est l'un des traitements référence du cancer de prostate. Les objectifs de la radiothérapie sont, premièrement, de délivrer une haute dose de radiations dans la cible tumorale (prostate et vésicules séminales) afin d'assurer un contrôle local de la maladie et, deuxièmement, d'épargner les organes à risque voisins (principalement le rectum et la vessie) afin de limiter les effets secondaires. Des modèles de probabilité de complication des tissus sains (NTCP en anglais pour Normal Tissue Complication Probability) sont nécessaires pour estimer les risques de présenter des effets secondaires au traitement. Dans le contexte de la radiothérapie externe, les objectifs de cette thèse étaient d'identifier des paramètres prédictifs de complications rectales et vésicales secondaires au traitement; de développer de nouveaux modèles NTCP permettant l'intégration de paramètres dosimétriques et de paramètres propres aux patients; de comparer les capacités prédictives de ces nouveaux modèles à celles des modèles classiques et de développer de nouvelles méthodologies d'identification de motifs de dose corrélés à l'apparition de complications. Une importante base de données de patients traités par radiothérapie conformationnelle, construite à partir de plusieurs études cliniques prospectives françaises, a été utilisée pour ces travaux. Dans un premier temps, la fréquence des symptômes gastro-intestinaux et génito-urinaires a été décrite par une estimation non paramétrique de Kaplan-Meier. Des prédicteurs de complications gastro-intestinales et génito-urinaires ont été identifiés via une autre approche classique : la régression logistique. Les modèles de régression logistique ont ensuite été utilisés dans la construction de nomogrammes, outils graphiques permettant aux cliniciens d'évaluer rapidement le risque de complication associé à un traitement et d'informer les patients. Nous avons proposé l'utilisation de la méthode d'apprentissage de machine des forêts aléatoires (RF en anglais pour Random Forests) pour estimer le risque de complications. Les performances de ce modèle incluant des paramètres cliniques et patients, surpassent celles des modèles NTCP de Lyman-Kutcher-Burman (LKB) et de la régression logistique. Enfin, la dose 3D a été étudiée. Une méthode de décomposition en valeurs populationnelles (PVD en anglais pour Population Value Decomposition) en 2D a été généralisée au cas tensoriel et appliquée à l'analyse d'image 3D. L'application de cette méthode à une analyse de population a été menée afin d'extraire un motif de dose corrélée à l'apparition de complication après EBRT. Nous avons également développé un modèle non paramétrique d'effets mixtes spatio-temporels pour l'analyse de population d'images tridimensionnelles afin d'identifier une région anatomique dans laquelle la dose pourrait être corrélée à l'apparition d'effets secondaires. / External beam radiotherapy (EBRT) is one of the cornerstones of prostate cancer treatment. The objectives of radiotherapy are, firstly, to deliver a high dose of radiation to the tumor (prostate and seminal vesicles) in order to achieve a maximal local control and, secondly, to spare the neighboring organs (mainly the rectum and the bladder) to avoid normal tissue complications.
Normal tissue complication probability (NTCP) models are then needed to assess the feasibility of the treatment and inform the patient about the risk of side effects, to derive dose-volume constraints and to compare different treatments. In the context of EBRT, the objectives of this thesis were to find predictors of bladder and rectal complications following treatment; to develop new NTCP models that allow for the integration of both dosimetric and patient parameters; to compare the predictive capabilities of these new models to the classic NTCP models and to develop new methodologies to identify dose patterns correlated with normal tissue complications following EBRT for prostate cancer treatment. A large cohort of patients treated by conformal EBRT for prostate cancer under several prospective French clinical trials was used for the study. In a first step, the incidence of the main genitourinary and gastrointestinal symptoms has been described. With another classical approach, namely logistic regression, some predictors of genitourinary and gastrointestinal complications were identified. The logistic regression models were then graphically represented to obtain nomograms, a graphical tool that enables clinicians to rapidly assess the complication risks associated with a treatment and to inform patients. This information can be used by patients and clinicians to select a treatment among several options (e.g. EBRT or radical prostatectomy). In a second step, we proposed the use of random forest, a machine-learning technique, to predict the risk of complications following EBRT for prostate cancer. The superiority of the random forest NTCP, assessed by the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, was established. In a third step, the 3D dose distribution was studied. A 2D population value decomposition (PVD) technique was extended to a tensorial framework and applied to 3D volume image analysis. Using this tensorial PVD, a population analysis was carried out to find a pattern of dose possibly correlated to a normal tissue complication following EBRT. Also in the context of 3D image population analysis, a spatio-temporal nonparametric mixed-effects model was developed. This model was applied to find an anatomical region where the dose could be correlated to a normal tissue complication following EBRT.
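For reference, the Lyman-Kutcher-Burman (LKB) NTCP model used above as a baseline can be written in a few lines. The sketch below assumes commonly cited rectal parameter values and a toy dose-volume histogram; these are illustrative assumptions, not data or fitted parameters from the thesis.

```python
# Hedged sketch: the classical LKB NTCP model. Parameter values and the example
# dose-volume histogram are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def lkb_ntcp(doses_gy, volumes, n=0.09, m=0.13, td50_gy=76.9):
    """LKB model: NTCP = Phi((gEUD - TD50) / (m * TD50)),
    with gEUD = (sum_i v_i * D_i^(1/n))^n computed on a normalized DVH."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                                  # normalize fractional volumes
    geud = np.sum(v * np.asarray(doses_gy) ** (1.0 / n)) ** n
    t = (geud - td50_gy) / (m * td50_gy)
    return norm.cdf(t), geud

# toy rectal DVH: dose bins (Gy) and fractional volumes (assumed)
doses = np.array([20, 40, 60, 70, 76])
vols = np.array([0.35, 0.30, 0.20, 0.10, 0.05])
p, geud = lkb_ntcp(doses, vols)
print(f"gEUD = {geud:.1f} Gy, predicted complication probability = {p:.3f}")
```

The thesis compares such dose-only models with logistic regression and random forest NTCP models that also take patient-level covariates into account, evaluating them by the AUC of the ROC curve.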
|
28 |
Apprentissage ciblé et Big Data : contribution à la réconciliation de l'estimation adaptative et de l'inférence statistique / Targeted learning in Big Data: bridging data-adaptive estimation and statistical inference. Zheng, Wenjing, 21 July 2016
Cette thèse porte sur le développement de méthodes semi-paramétriques robustes pour l'inférence de paramètres complexes émergeant à l'interface de l'inférence causale et la biostatistique. Ses motivations sont les applications à la recherche épidémiologique et médicale à l'ère des Big Data. Nous abordons plus particulièrement deux défis statistiques pour réconcilier, dans chaque contexte, estimation adaptative et inférence statistique. Le premier défi concerne la maximisation de l'information tirée d'essais contrôlés randomisés (ECRs) grâce à la conception d'essais adaptatifs. Nous présentons un cadre théorique pour la construction et l'analyse d'ECRs groupes-séquentiels, réponses-adaptatifs et ajustés aux covariables (traduction de l'expression anglaise « group-sequential, response-adaptive, covariate-adjusted », d'où l'acronyme CARA) qui permettent le recours à des procédures adaptatives d'estimation à la fois pour la construction dynamique des schémas de randomisation et pour l'estimation du modèle de réponse conditionnelle. Ce cadre enrichit la littérature existante sur les ECRs CARA notamment parce que l'estimation des effets est garantie robuste même lorsque les modèles sur lesquels s'appuient les procédures adaptatives d'estimation sont mal spécifiés. Le second défi concerne la mise au point et l'étude asymptotique d'une procédure inférentielle semi-paramétrique avec estimation adaptative des paramètres de nuisance. A titre d'exemple, nous choisissons comme paramètre d'intérêt la différence des risques marginaux pour un traitement binaire. Nous proposons une version cross-validée du principe d'inférence par minimisation ciblée de pertes (« Cross-validated Targeted Minimum Loss Estimation » en anglais, d'où l'acronyme CV-TMLE) qui, comme son nom le suggère, marie la procédure TMLE classique et le principe de la validation croisée. L'estimateur CV-TMLE ainsi élaboré hérite de la propriété typique de double-robustesse et aussi des propriétés d'efficacité du TMLE classique. De façon remarquable, le CV-TMLE est asymptotiquement linéaire sous des conditions minimales, sans recourir aux conditions de type Donsker. / This dissertation focuses on developing robust semiparametric methods for complex parameters that emerge at the interface of causal inference and biostatistics, with applications to epidemiological and medical research in the era of Big Data. Specifically, we address two statistical challenges that arise in bridging the disconnect between data-adaptive estimation and statistical inference. The first challenge arises in maximizing the information learned from Randomized Controlled Trials (RCTs) through the use of adaptive trial designs. We present a framework to construct and analyze group-sequential covariate-adjusted response-adaptive (CARA) RCTs that admits the use of data-adaptive approaches in constructing the randomization schemes and in estimating the conditional response model. This framework adds to the existing literature on CARA RCTs by allowing flexible options in both their design and analysis and by providing robust effect estimates even under model mis-specification. The second challenge arises from obtaining a Central Limit Theorem when data-adaptive estimation is used to estimate the nuisance parameters. We consider as target parameter of interest the marginal risk difference of the outcome under a binary treatment, and propose a Cross-validated Targeted Minimum Loss Estimator (TMLE), which augments the classical TMLE with a sample-splitting procedure.
The proposed Cross-Validated TMLE (CV-TMLE) inherits the double robustness and efficiency properties of the classical TMLE, and achieves asymptotic linearity under minimal conditions by avoiding the Donsker class condition.
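The CV-TMLE itself is not reproduced here. As a related illustration of the general idea of combining data-adaptive nuisance estimation with sample splitting for the marginal risk difference, the sketch below implements a cross-fitted doubly robust (AIPW-type) estimator on simulated data; the data-generating mechanism, learners and tuning choices are assumptions.

```python
# Hedged sketch: cross-fitted doubly robust (AIPW-type) estimate of the marginal
# risk difference E[Y(1)] - E[Y(0)]. Illustration only, not the thesis CV-TMLE.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
n = 4000
W = rng.normal(size=(n, 3))                               # baseline covariates
ps = 1 / (1 + np.exp(-(0.5 * W[:, 0] - 0.3 * W[:, 1])))   # assumed true propensity
A = rng.binomial(1, ps)
py = 1 / (1 + np.exp(-(-0.5 + 0.8 * A + 0.6 * W[:, 0])))  # assumed true outcome model
Y = rng.binomial(1, py)

scores = np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(W):
    g = RandomForestClassifier(n_estimators=200, min_samples_leaf=25, random_state=0)
    g.fit(W[train], A[train])                             # propensity model on the training split
    q = RandomForestClassifier(n_estimators=200, min_samples_leaf=25, random_state=0)
    q.fit(np.column_stack([A[train], W[train]]), Y[train])  # outcome model on the training split
    gh = np.clip(g.predict_proba(W[test])[:, 1], 0.01, 0.99)
    q1 = q.predict_proba(np.column_stack([np.ones(len(test)), W[test]]))[:, 1]
    q0 = q.predict_proba(np.column_stack([np.zeros(len(test)), W[test]]))[:, 1]
    a, y = A[test], Y[test]
    scores[test] = q1 - q0 + a / gh * (y - q1) - (1 - a) / (1 - gh) * (y - q0)

est = scores.mean()
se = scores.std(ddof=1) / np.sqrt(n)
print(f"risk difference estimate: {est:.3f} (SE {se:.3f})")
```

The estimate remains consistent if either the propensity model or the outcome model is well estimated, which mirrors the double robustness property claimed for the CV-TMLE, while the sample splitting plays the role that cross-validation plays in avoiding Donsker-type conditions.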
|
29 |
Modeling and Assessment of Emergency Mitigation Preparedness & Vulnerability for External Events in Nuclear Power Plants. Assi, Ahmad, 2014
Current Nuclear Power Plant (NPP) designs do not account for Beyond Design Basis Events (BDBEs) and thus lack the provisions to effectively mitigate a complete loss of AC power and a total loss of heat sink. Furthermore, the parametric models used in Probabilistic Risk Assessment (PRA) studies to assess an NPP's safety risk for BDBEs and External Events (EEs) have significant limitations and have proved ineffective at providing solutions for mitigation in BDBE or EE situations. The Fukushima accident is a good example where PRA assessments did not provide the necessary means to cool or contain the reactors effectively. In this thesis, an Emergency Mitigation Preparedness (EMP) model and assessment is proposed. The EMP model is objective and practical in evaluating an NPP's mitigation readiness in BDBE and EE situations and provides a practical NPP vulnerability indicator gauge that can potentially be used in risk-informed decisions. This will further help the NPP improve emergency planning, enhance site and reactor design, and improve workers' safety and readiness to execute effective mitigation procedures and emergency plans. / Thesis / Master of Engineering (ME)
|
30 |
Inference of buffer queue times in data processing systems using Gaussian Processes: An introduction to latency prediction for dynamic software optimization in high-end trading systems / Inferens av buffer-kötider i dataprocesseringssystem med hjälp av Gaussiska processer. Hall, Otto, January 2017
This study investigates whether Gaussian Process Regression can be applied to evaluate buffer queue times in large-scale data processing systems. It additionally considers whether high-frequency data stream rates can be generalized into a small subset of the sample space. With the aim of providing a basis for dynamic software optimization, a promising foundation for continued research is introduced. The study is intended to contribute to Direct Market Access financial trading systems, which process immense amounts of market data daily. Due to certain limitations, we adopt a naïve approach and model latencies as a function of only the data throughput in eight small historical intervals. The training and test sets are built from raw market data, and we resort to pruning operations to shrink the datasets by a factor of approximately 0.0005 in order to achieve computational feasibility. We further consider four different implementations of Gaussian Process Regression. The resulting algorithms perform well on pruned datasets, with an average R2 statistic of 0.8399 over six test sets of approximately equal size to the training set. Testing on non-pruned datasets indicates shortcomings in the generalization procedure, where input vectors corresponding to low-latency target values are associated with less accuracy. We conclude that, depending on the application, these shortcomings may make the model intractable. However, for the purposes of this study it is found that buffer queue times can indeed be modelled by regression algorithms. We discuss several methods for improvement, with regard to both the pruning procedures and the Gaussian Processes, and open up promising directions for continued research. / Denna studie undersöker huruvida Gaussian Process Regression kan appliceras för att utvärdera buffer-kötider i storskaliga dataprocesseringssystem. Dessutom utforskas ifall dataströmsfrekvenser kan generaliseras till en liten delmängd av utfallsrymden. Med målet att erhålla en grund för dynamisk mjukvaruoptimering introduceras en lovande startpunkt för fortsatt forskning. Studien riktas mot Direct Market Access-system för handel på finansiella marknader, som processerar enorma mängder marknadsdata dagligen. På grund av vissa begränsningar axlas ett naivt tillvägagångssätt och väntetider modelleras som en funktion av enbart datagenomströmning i åtta små historiska tidsintervall. Tränings- och testdataset representeras från ren marknadsdata och pruning-tekniker används för att krympa dataseten med en ungefärlig faktor om 0.0005, för att uppnå beräkningsmässig genomförbarhet. Vidare tas fyra olika implementationer av Gaussian Process Regression i beaktning. De resulterande algoritmerna presterar bra på krympta dataset, med en medel-R2-statistik på 0.8399 över sex testdataset, alla av ungefär samma storlek som träningsdatasetet. Tester på icke krympta dataset indikerar vissa brister från pruning, där inputvektorer motsvarande låga latenstider är associerade med mindre exakthet. Slutsatsen dras att beroende på applikation kan dessa brister göra modellen obrukbar. För studiens syfte finnes emellertid att latenstider kan sannerligen modelleras av regressionsalgoritmer. Slutligen diskuteras metoder för förbättring med hänsyn till både pruning och Gaussian Process Regression, och det öppnas upp för lovande vidare forskning.
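A minimal sketch of Gaussian Process Regression applied to a latency-like target, assuming synthetic throughput features over eight windows and an RBF-plus-noise kernel; it illustrates the modelling approach and the R2 evaluation, not the thesis implementation or Ericsson's data.

```python
# Hedged sketch: Gaussian Process Regression of a latency-like target on
# per-window throughput features. Data, kernel and hyperparameters are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 800
X = rng.uniform(0, 1, size=(n, 8))            # throughput in 8 short historical intervals (assumed)
latency = 0.2 + 1.5 * X.mean(axis=1) ** 2 + 0.05 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(X, latency, random_state=0)
kernel = 1.0 * RBF(length_scale=np.ones(8)) + WhiteKernel(noise_level=1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

y_pred, y_std = gpr.predict(X_test, return_std=True)
print("test R^2:", r2_score(y_test, y_pred))
```

The predictive standard deviation returned alongside the mean is what makes Gaussian Processes attractive here: a dynamic optimizer could act only when the predicted queue time is both high and confidently estimated. Exact GP fitting scales cubically with the number of training points, which is one motivation for the aggressive pruning discussed in the abstract.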
|