441
Changements dans la répartition des décès selon l'âge : une approche non paramétrique pour l'étude de la mortalité adulte. Ouellette, Nadine, 03 1900.
Au cours du siècle dernier, nous avons pu observer une diminution remarquable de la mortalité dans toutes les régions du monde, en particulier dans les pays développés. Cette chute a été caractérisée par des modifications importantes quant à la répartition des décès selon l'âge, ces derniers ne se produisant plus principalement durant les premiers âges de la vie mais plutôt au-delà de l'âge de 65 ans. Notre étude s'intéresse spécifiquement au suivi fin et détaillé des changements survenus dans la distribution des âges au décès chez les personnes âgées. Pour ce faire, nous proposons une nouvelle méthode de lissage non paramétrique souple qui repose sur l'utilisation des P-splines et qui mène à une expression précise de la mortalité, telle que décrite par les données observées. Les résultats de nos analyses sont présentés sous forme d'articles scientifiques, qui s'appuient sur les données de la Human Mortality Database, la Base de données sur la longévité canadienne et le Registre de la population du Québec ancien reconnues pour leur fiabilité. Les conclusions du premier article suggèrent que certains pays à faible mortalité auraient récemment franchi l'ère de la compression de la mortalité aux grands âges, ère durant laquelle les décès au sein des personnes âgées tendent à se concentrer dans un intervalle d'âge progressivement plus court. En effet, depuis le début des années 1990 au Japon, l'âge modal au décès continue d'augmenter alors que le niveau d'hétérogénéité des durées de vie au-delà de cet âge demeure inchangé. Nous assistons ainsi à un déplacement de l'ensemble des durées de vie adultes vers des âges plus élevés, sans réduction parallèle de la dispersion de la mortalité aux grands âges. En France et au Canada, les femmes affichent aussi de tels développements depuis le début des années 2000, mais le scénario de compression de la mortalité aux grands âges est toujours en cours chez les hommes. Aux États-Unis, les résultats de la dernière décennie s'avèrent inquiétants car pour plusieurs années consécutives, l'âge modal au décès, soit la durée de vie la plus commune des adultes, a diminué de manière importante chez les deux sexes. Le second article s'inscrit dans une perspective géographique plus fine et révèle que les disparités provinciales en matière de mortalité adulte au Canada entre 1930 et 2007, bien décrites à l'aide de surfaces de mortalité lissées, sont importantes et méritent d'être suivies de près. Plus spécifiquement, sur la base des trajectoires temporelles de l'âge modal au décès et de l'écart type des âges au décès situés au-delà du mode, les différentiels de mortalité aux grands âges entre provinces ont à peine diminué durant cette période, et cela, malgré la baisse notable de la mortalité dans toutes les provinces depuis le début du XXe siècle. Également, nous constatons que ce sont précisément les femmes issues de provinces de l'Ouest et du centre du pays qui semblent avoir franchi l'ère de la compression de la mortalité aux grands âges au Canada. Dans le cadre du troisième et dernier article de cette thèse, nous étudions la longévité des adultes au XVIIIe siècle et apportons un nouvel éclairage sur la durée de vie la plus commune des adultes à cette époque. À la lumière de nos résultats, l'âge le plus commun au décès parmi les adultes canadiens-français a augmenté entre 1740-1754 et 1785-1799 au Québec ancien. En effet, l'âge modal au décès est passé d'environ 73 ans à près de 76 ans chez les femmes et d'environ 70 ans à 74 ans chez les hommes. 
Les conditions de vie particulières de la population canadienne-française à cette époque pourraient expliquer cet accroissement. / Over the course of the last century, we have witnessed major improvements in the level of mortality in regions all across the globe, in particular in developed countries. This remarkable mortality decrease has also been characterized by fundamental changes in the mortality profile by age. Indeed, deaths are no longer occurring mainly at very young ages but rather at advanced ages such as above age 65. Our research focuses on monitoring and understanding historical changes in the age-at-death distribution among the elderly population. We propose a new flexible nonparametric smoothing approach based on P-splines leading to detailed mortality representations, as described by actual data. The results are presented in three scientific papers, which rest upon reliable data taken from the Human Mortality Database, the Canadian Human Mortality Database, and the Registre de la population du Québec ancien. Findings from the first paper suggest that some low mortality countries may have recently reached the end of the old-age compression of mortality era, during which deaths among the elderly population tend to concentrate into a progressively shorter age interval over time. Indeed, since the early 1990s in Japan, the modal age at death has continued to increase while reductions in the variability of age at death above the mode have stopped. Thus, the distribution of ages at death at older ages has been sliding towards higher ages without changing its shape. In France and Canada, women have shown such developments since the early 2000s, whereas men are still engaged in an old-age mortality compression regime. In the USA, the picture for the latest decade is worrying because for several consecutive years in that timeframe, women and men have both recorded substantial declines in their modal age at death, which corresponds to the most common age at death among adults. The second paper takes a look within national boundaries and examines regional adult mortality differentials in Canada between 1930 and 2007. Smoothed mortality surfaces reveal that provincial disparities among adults in general, and among the elderly population in particular, are substantial in this country and deserve to be monitored closely. More specifically, based on time trends in the modal age at death and in the standard deviation of ages at death above the mode, provincial disparities at older ages have barely narrowed during the period studied, despite the great mortality improvements recorded in all provinces since the early XXth century. Also, we find that the women who appear to have reached the end of the old-age compression of mortality era in Canada are precisely those of the Western and Central provinces. The last paper focuses on adult longevity during the XVIIIth century in historical Quebec and provides new insight into the most common adult age at death. Indeed, our analysis reveals that the modal age at death increased among French-Canadian adults between 1740-1754 and 1785-1799. In 1740-1754, it was estimated at 73 years among females and at about 70 years among males. By 1785-1799, modal age at death estimates were almost 3 years higher for females and 4 years higher for males. Specific living conditions of the French-Canadian population at the time could explain these results.
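To give a concrete sense of the P-spline smoothing the abstract refers to, here is a minimal Python sketch (not the thesis's actual implementation): it fits a penalized Poisson B-spline model to synthetic death counts and reads the modal age at death off the smoothed curve. The death counts, knot spacing and smoothing parameter are all assumed for illustration; the thesis selects such quantities from real data.

```python
import numpy as np
from scipy.interpolate import BSpline

# Synthetic death counts by single year of age 30-110 (illustrative only).
rng = np.random.default_rng(0)
ages = np.arange(30, 111)
true_rate = 2000 * np.exp(-0.5 * ((ages - 84) / 9.0) ** 2) + 5
deaths = rng.poisson(true_rate)

def bspline_basis(x, n_inner=25, degree=3):
    """Cubic B-spline design matrix with equally spaced inner knots."""
    lo, hi = x.min(), x.max()
    knots = np.r_[[lo] * degree, np.linspace(lo, hi, n_inner), [hi] * degree]
    n_basis = len(knots) - degree - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0
        B[:, j] = BSpline(knots, coef, degree)(x)
    return B

B = bspline_basis(ages)
n = B.shape[1]
D = np.diff(np.eye(n), n=2, axis=0)      # second-order difference penalty
lam = 10.0                               # smoothing parameter (hand-picked here)
P = lam * D.T @ D

# Poisson P-spline fit via iteratively reweighted least squares (Eilers & Marx style).
a = np.log(np.full(n, deaths.mean() + 1.0))
for _ in range(50):
    eta = B @ a
    mu = np.exp(eta)
    z = eta + (deaths - mu) / mu         # working response
    BW = B * mu[:, None]                 # rows of B weighted by W = diag(mu)
    a_new = np.linalg.solve(BW.T @ B + P, BW.T @ z)
    if np.max(np.abs(a_new - a)) < 1e-8:
        a = a_new
        break
    a = a_new

# Evaluate the smooth death distribution on a fine grid to locate the modal age.
grid = np.linspace(30, 110, 1601)
fitted = np.exp(bspline_basis(grid, n_inner=25) @ a)
print("modal age at death (smoothed):", grid[np.argmax(fitted)])
```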
442
Taylor-regelns aktualitet och tillämpbarhet : En jämförelse av Taylor-skattningar i Brasilien, Kanada, Polen, Sverige och Sydafrika för åren 2000-2013 / The Taylor rule's relevance and applicability : A comparison of Taylor interest rates in Brazil, Canada, Poland, Sweden and South Africa for the years 2000-2013. Björklund, Pontus; Hegart, Ellinor, January 2014.
John B. Taylor, professor i nationalekonomi vid Stanford University, presenterade år 1993 en penningpolitisk regel som syftade till att vara ett hjälpmedel för centralbanker vid räntebeslut. Taylor-regeln är mycket enkel i sitt utförande och baseras på att styrräntan bör sättas efter två variabler: BNP-gapet och inflationsavvikelsen. Denna styrränteregel fick genomslag inom den vetenskapliga världen men spreds även till makroekonomisk praktik och medförde stora förändringar för penningpolitiken. Flera empiriska studier har publicerats sedan Taylor-regeln tillkom och det råder delade meningar om hur väl Taylor-regeln presterar för olika typer av ekonomier och hur användbar den är idag. Det har även uppkommit nya teorier angående trögheten i effekterna av styrränteförändringar och vid vilken tidpunkt dessa får en inverkan på inflationstakten. Syftet med denna uppsats är att jämföra hur väl den ursprungliga Taylor-modellen och en tidslaggad modell förklarar centralbankernas historiska styrräntesättning i fem länder med inflationsmål under tidsperioden 2000-2013. Analysen av resultaten görs med utgångspunkt i ländernas olika ekonomiska egenskaper samt tidsperioden som studien omfattar. Studien begränsas till jämförelser av de två Taylor-modellernas tillämpbarhet vid styrräntesättningar för länderna Brasilien, Kanada, Polen, Sverige och Sydafrika. De två modellerna modifieras också med en styrränteutjämningsfunktion. Våra resultat tyder på att den ursprungliga Taylor-regeln presterar bättre i förhållande till den tidslaggade modellen när det gäller att förklara den faktiska styrräntesättningen idag för alla länder i studien utom Polen. Den tidslaggade modellen presterar dock bättre än den ursprungliga för de utvecklade ekonomierna Sverige och Kanada under 1990-talet. Båda modellerna gör kraftiga över- och underskattningar som till stor del avhjälps med den utjämningsfunktion som vi tillämpar. Koefficienterna hålls konstanta över hela tidsperioden, vilket inte är rimligt då en viss dynamik bör inkluderas så att regeln justeras efter varje period. För mycket vikt läggs vid BNP-variabeln, som således är en bidragande faktor till regelns över- och underskattningar. Regeln presterar bättre för ekonomier med stabila förhållanden mellan tillväxttakt och inflationstakt än för länder som lider av mer volatila förhållanden mellan dessa två variabler, likt tillväxtländerna i vår studie. Dessutom ger Taylor-regeln skattningar som ligger närmare den faktiska styrräntesättningen under de tidigare delarna av perioden för att sedan till större del börja avvika från den faktiskt satta styrräntan. Slutsatserna som kan dras utifrån våra resultat är att den ursprungliga Taylor-regeln presterar bäst i att beskriva ett lands styrräntesättning sett till kvantitativa mått medan en tidslaggad modell tar större hänsyn till faktiska förhållanden. Över lag presterar modellerna bättre för de utvecklade ekonomierna än för tillväxtekonomierna och huruvida storleken på ekonomin har någon inverkan är svårt att avgöra. Resultaten tyder också på att Taylor-regeln med tidslagg ligger närmare den faktiska styrräntesättningen för de utvecklade ekonomierna under 1990-talet än under perioden 2000-2013 medan den ursprungliga presterar bättre idag. / John B. Taylor, professor of economics at Stanford University, presented a monetary policy rule in 1993 intended to help central banks with their interest rate decisions.
In its design the Taylor rule was very simple and based on only two variables: the GDP gap and the deviation of actual inflation from the inflation target. The Taylor rule had a great impact on academic research and also contributed to changes within monetary policy around the world. Many empirical studies have been published on the Taylor rule, and opinions are divided about its applicability in different kinds of economies and its relevance today. New theories have also been published regarding the time aspect of the impact on inflation of a change in the interest rate. The intention of this study is to make a comparison between the original Taylor rule and a Taylor rule including a time lag with regard to how well they describe the actual interest rates set by the central banks in five countries during the period 2000-2013. The results are analyzed under consideration of the different economies' attributes. The study compares the two kinds of Taylor rules and their applicability in describing the historical interest rates in Brazil, Canada, Poland, Sweden and South Africa. The two rules have also been modified with an interest rate smoothing function. Our results indicate that the original Taylor rule describes the historical interest rate better than the rule including a time lag for the period 2000-2013 for all countries apart from Poland. For the developed economies Canada and Sweden, the time-lagged model shows smaller deviations for the 1990s. However, both rules tend to over- and underestimate the interest rate; the smoothing function corrects this problem to some extent. The coefficients of the variables are held constant throughout the study, which in reality should not be the case: they should instead be adjusted between every period to allow for the changing relationship between the two variables. Mostly, too much weight is put on the GDP variable, which is a contributing cause of the overestimations. The rules do, however, tend to describe the historical interest rate of the developed economies better than that of the developing economies. The performance is greater at the beginning of the period, with less deviation from the actual outcome than later on. The conclusion of our study is that the original Taylor rule generally performs better than the one including a time lag when judged by the deviations from the actual interest rates. However, the Taylor rule including the time lag does allow for actual circumstances which the original Taylor rule does not take into consideration. Mainly, the rules perform better for developed economies than for developing economies. Regarding the impact of the size of the economy on the applicability of the rules, it was difficult to conclude anything specific. The Taylor rule with the time lag is more applicable for the developed economies during the earlier time period, the 1990s, than during the later time period, the 2000s, where the original Taylor rule shows smaller deviations.
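For readers unfamiliar with the rule itself, a small sketch of the original Taylor (1993) prescription and a simple interest rate smoothing step is given below. The 0.5 weights and the 2 % equilibrium real rate are Taylor's textbook values, and the smoothing weight rho is an assumed illustration, not a coefficient estimated in the thesis.

```python
# Illustrative sketch of the original Taylor (1993) rule and a smoothed variant.
# The 0.5 weights and the 2 % equilibrium real rate follow Taylor's original
# parameterization; the smoothing weight rho is an assumed value, not one
# estimated in the thesis.

def taylor_rate(inflation, inflation_target, output_gap,
                real_equilibrium_rate=2.0):
    """Prescribed policy rate (in per cent) from the original Taylor rule."""
    return (real_equilibrium_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

def smoothed_rate(previous_rate, target_rate, rho=0.8):
    """Partial adjustment (interest rate smoothing) toward the rule's target."""
    return rho * previous_rate + (1.0 - rho) * target_rate

# Example: inflation 3 %, target 2 %, output gap -1 %, last period's rate 4 %.
rule = taylor_rate(inflation=3.0, inflation_target=2.0, output_gap=-1.0)
print("Taylor rule rate:     %.2f %%" % rule)                       # 5.00 %
print("Smoothed policy rate: %.2f %%" % smoothed_rate(4.0, rule))   # 4.20 %
```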
443
Proximal Splitting Methods in Nonsmooth Convex Optimization. Hendrich, Christopher, 25 July 2014.
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we are also making use of monotone operator theory as some of the provided algorithms are originally designed to solve monotone inclusion problems.
After introducing basic notations and preliminary results in convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem with some given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach affects the primal optimization problem directly by applying a single regularization to it and is capable of using variable smoothing parameters which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they are providing a complete splitting philosophy in that the resolvents, which arise in the iterative process, are only taken separately from each maximally monotone operator occurring in the problem description. We firstly analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy by reformulating the corresponding inclusion problem reasonably such that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators.
The last part of this thesis deals with different numerical experiments in which we compare our methods against algorithms from the literature. The problems which arise in this part are manifold and reflect the importance of this field of research, as convex optimization problems appear in many applications of interest.
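As a concrete, much simpler relative of the splitting schemes discussed above, the following sketch runs a plain forward-backward (proximal gradient) iteration on a LASSO problem: the smooth term is handled through its gradient and the nonsmooth term only through its proximal operator. It illustrates the splitting idea only, not one of the thesis's primal-dual or Douglas–Rachford algorithms; the data and step size are assumed.

```python
import numpy as np

# Minimal forward-backward (proximal gradient) iteration for the nonsmooth
# convex problem  min_x 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(1)
m, n = 60, 120
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(m)
lam = 0.1

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - b)                  # forward (gradient) step on the smooth part
    x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step on the l1 part

objective = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print("objective value:", round(objective, 4), " nonzeros:", int((x != 0).sum()))
```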
444
Cox模式有時間相依共變數下預測問題之研究 [A study of prediction problems for the Cox model with time-dependent covariates]. 陳志豪 (Chen, Chih-Hao), Unknown Date.
共變數的值會隨著時間而改變時,我們稱之為時間相依之共變數。時間相依之共變數往往具有重複測量的特性,也是長期資料裡最常見到的一種共變數形態;在對時間相依之共變數進行重複測量時,可以考慮每次測量的間隔時間相同或是間隔時間不同兩種情形。在間隔時間相同的情形下,我們可以忽略間隔時間所產生的效應,利用分組的Cox模式或是合併的羅吉斯迴歸模式來分析,而合併的羅吉斯迴歸是一種把資料視為“對象 時間單位”形態的分析方法;此外,分組的Cox模式和合併的羅吉斯迴歸模式也都可以用來預測存活機率。在某些條件滿足下,D’Agostino等六人在1990年已經證明出這兩個模式所得到的結果會很接近。
當間隔時間為不同時，我們可以用計數過程下的Cox模式來分析，在計數過程下的Cox模式中，資料是以“對象 區間”的形態來分析。2001年Bruijne等人則是建議把間隔時間也視為一個時間相依之共變數，並將其以B-spline函數加至模式中分析；在我們論文的實證分析裡也顯示間隔時間在延伸的Cox模式中的確是個很顯著的時間相依之共變數。延伸的Cox模式為間隔時間不同下的時間相依之共變數提供了另一個分析方法。至於在時間相依之共變數的預測方面，我們是以指數趨勢平滑法來預測其未來時間點的數值；利用預測出來的時間相依之共變數值再搭配延伸的Cox模式即可預測未來的存活機率。 / Covariates whose values change over time are called “time-dependent covariates”. Time-dependent covariates are measured repeatedly and often appear in longitudinal data. Time-dependent covariates can be measured regularly or irregularly. In the regular case, we can ignore the TEL (time elapsed since last observation) effect, and the grouped Cox model or the pooled logistic regression model may be employed for the analysis. The pooled logistic regression is an analytic method using the “person-period” approach. The grouped Cox model and the pooled logistic regression model can also be used to predict survival probability. D’Agostino et al. (1990) proved that the pooled logistic regression model is asymptotically equivalent to the grouped Cox model.
If time-dependent covariates are observed irregularly, the Cox model under the counting-process formulation may be considered. Before making the prediction we must turn the original data into “person-interval” form; this data form is also suitable for prediction with the grouped Cox model under regular measurements. de Bruijne et al. (2001) first considered TEL as a time-dependent covariate and used a B-spline function to model it in their proposed extended Cox model. We also show in this paper that TEL is a very significant time-dependent covariate in the extended Cox model. The extended Cox model thus provides an alternative for irregularly measured time-dependent covariates. On the other hand, we use exponential smoothing with trend to predict the future values of the time-dependent covariates. Using the predicted values with the extended Cox model, we can then predict survival probability.
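As an illustration of the forecasting step mentioned above, the following sketch applies exponential smoothing with trend (Holt's linear method) to repeated measurements of a covariate and projects it forward; the smoothing constants and the measurement series are assumed values, not those used in the thesis.

```python
# Minimal sketch of exponential smoothing with trend (Holt's linear method),
# of the kind the abstract describes for projecting a time-dependent covariate
# forward before plugging it into the extended Cox model.  The smoothing
# constants alpha and beta are assumed values.

def holt_forecast(y, horizon=1, alpha=0.5, beta=0.3):
    """Return h-step-ahead forecasts from Holt's linear trend method."""
    level, trend = y[0], y[1] - y[0]          # simple initialization
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# Example: repeated measurements of a covariate (say, systolic blood pressure).
measurements = [132.0, 135.0, 138.0, 137.0, 141.0, 144.0]
print(holt_forecast(measurements, horizon=2))
```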
445
Nouveaux regards sur la longévité : analyse de l'âge modal au décès et de la dispersion des durées de vie selon les principales causes de décès au Canada (1974-2011). Diaconu, Viorela, 08 1900.
No description available.
446
Advances and Applications of Experimental Measures to Test Behavioral Saving Theories and a Method to Increase Efficiency in Binary and Multiple Treatment Assignment. Schneider, Sebastian Olivier, 24 November 2017.
No description available.
447
Quelques contributions à l'estimation des modèles définis par des équations estimantes conditionnelles / Some contributions to the statistical inference in models defined by conditional estimating equations. Li, Weiyu, 15 July 2015.
Dans cette thèse, nous étudions des modèles définis par des équations de moments conditionnels. Une grande partie de modèles statistiques (régressions, régressions quantiles, modèles de transformations, modèles à variables instrumentales, etc.) peuvent se définir sous cette forme. Nous nous intéressons au cas des modèles avec un paramètre à estimer de dimension finie, ainsi qu'au cas des modèles semi paramétriques nécessitant l'estimation d'un paramètre de dimension finie et d'un paramètre de dimension infinie. Dans la classe des modèles semi paramétriques étudiés, nous nous concentrons sur les modèles à direction révélatrice unique qui réalisent un compromis entre une modélisation paramétrique simple et précise, mais trop rigide et donc exposée à une erreur de modèle, et l'estimation non paramétrique, très flexible mais souffrant du fléau de la dimension. En particulier, nous étudions ces modèles semi paramétriques en présence de censure aléatoire. Le fil conducteur de notre étude est un contraste sous la forme d'une U-statistique, qui permet d'estimer les paramètres inconnus dans des modèles généraux. / In this dissertation we study statistical models defined by conditional estimating equations. Many statistical models (mean regression, quantile regression, transformation models, instrumental variable models, etc.) can be stated in this form. We consider models with a finite-dimensional unknown parameter, as well as semiparametric models involving an additional infinite-dimensional parameter. In the latter case, we focus on single-index models, which realize an appealing compromise between parametric specifications, simple and leading to accurate estimates but too restrictive and likely misspecified, and nonparametric approaches, flexible but suffering from the curse of dimensionality. In particular, we study single-index models in the presence of random censoring. The guiding line of our study is a contrast in the form of a U-statistic, which allows us to estimate the unknown parameters in a wide spectrum of models.
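To make the model class more concrete, the sketch below works out the simplest special case mentioned above: a linear instrumental-variable model defined by the moment condition E[Z(Y - X beta)] = 0, solved directly in the just-identified case. It only illustrates the idea of estimating parameters from estimating equations; the thesis's semiparametric single-index setting and its U-statistic contrast are considerably more general, and the simulated data are assumptions.

```python
import numpy as np

# Toy instrumental-variable model defined by the estimating equation
# E[Z (Y - X beta)] = 0.  In the just-identified case this empirical moment
# condition can be solved in closed form.
rng = np.random.default_rng(2)
n = 5000
z = rng.standard_normal(n)                  # instrument
u = rng.standard_normal(n)                  # unobserved confounder
x = 0.8 * z + 0.6 * u + 0.3 * rng.standard_normal(n)    # endogenous regressor
y = 2.0 * x + u + 0.3 * rng.standard_normal(n)          # true slope = 2

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)   # biased: x and u are correlated
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)    # solves the empirical moment equation
print("OLS slope:", round(beta_ols[1], 3), " IV slope:", round(beta_iv[1], 3))
```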
448
Silové a deformační chování duktilních mikropilot v soudržných zeminách / Load-displacement behavior of ductile micropiles in cohesive soils. Stoklasová, Andrea, January 2020.
This thesis focuses on the construction of mobilization curves based on data obtained from standard and detailed monitoring of a load test. The load test was performed on a 9-meter-long ductile micropile. The first part of the thesis explains the methods and principles which were used to construct the mobilization curves. This is followed by a description of ductile micropile technology and of the load test. The next part of the thesis explains, in general terms, the process applied to the evaluated data. The evaluation was carried out in the Microsoft Excel spreadsheet and in the Matlab programming language with a Kernel Smoothing extension. The last chapter of the thesis interprets the load transfer function together with the skin friction and the micropile displacement.
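As a rough illustration of the kernel smoothing step described above, here is a small Nadaraya-Watson smoother in Python applied to made-up load-displacement readings; the thesis itself works in Excel and Matlab, and the data, units and bandwidth below are assumptions.

```python
import numpy as np

# Minimal Nadaraya-Watson kernel smoother applied to noisy load-displacement
# measurements, of the kind used before deriving mobilization curves.
# Data, units (kN, mm) and bandwidth are invented for illustration.

def gaussian_kernel_smooth(x, y, x_eval, bandwidth):
    """Nadaraya-Watson estimate of E[y | x] on the grid x_eval."""
    smoothed = np.empty_like(x_eval)
    for i, x0 in enumerate(x_eval):
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
        smoothed[i] = np.sum(w * y) / np.sum(w)
    return smoothed

# Synthetic micropile load test: displacement grows nonlinearly with load.
rng = np.random.default_rng(3)
load = np.linspace(0, 400, 80)                                     # kN
displacement = 0.02 * load + 1e-7 * load ** 3 + rng.normal(0, 0.3, load.size)  # mm

grid = np.linspace(0, 400, 200)
curve = gaussian_kernel_smooth(load, displacement, grid, bandwidth=25.0)
print("smoothed displacement at 300 kN: %.2f mm" % curve[np.argmin(np.abs(grid - 300))])
```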
449
Predikce časových řad pomocí statistických metod / Prediction of Time Series Using Statistical Methods. Beluský, Ondrej, January 2011.
Many companies consider it essential to obtain forecasts of time series of uncertain variables that influence their decisions and actions. Marketing includes a number of decisions that depend on a reliable forecast. Forecasts are based directly or indirectly on information derived from historical data. These data may exhibit different patterns, such as a trend, a horizontal pattern, and cyclical or seasonal patterns. Most methods are based on recognizing these patterns, projecting them into the future and thus creating a forecast. Other approaches, such as neural networks, are black boxes that rely on learning.
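A minimal sketch of the pattern-recognition-and-projection idea described above is given below: it fits a linear trend, extracts additive seasonal effects from an invented quarterly series, and projects both one step ahead. It is only an illustration; the thesis evaluates more elaborate statistical methods.

```python
import numpy as np

# Fit a linear trend, estimate additive seasonal effects from the detrended
# series, and project both one step ahead.  The quarterly figures are invented.
y = np.array([112, 128, 94, 140, 120, 136, 102, 150, 127, 145, 110, 158], float)
season_length = 4
t = np.arange(len(y))

slope, intercept = np.polyfit(t, y, 1)                 # trend component
detrended = y - (intercept + slope * t)
seasonal = np.array([detrended[s::season_length].mean()
                     for s in range(season_length)])   # seasonal component

next_t = len(y)
forecast = intercept + slope * next_t + seasonal[next_t % season_length]
print("one-step-ahead forecast:", round(forecast, 1))
```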
450
AUTOMATED OPTIMAL FORECASTING OF UNIVARIATE MONITORING PROCESSES : Employing a novel optimal forecast methodology to define four classes of forecast approaches and testing them on real-life monitoring processes. Razroev, Stanislav, January 2019.
This work aims to explore practical one-step-ahead forecasting of structurally changing data, an unstable behaviour that real-life data connected to human activity often exhibit. This setting can be characterized as a monitoring process. Various forecast models, methods and approaches can range from being simple and computationally "cheap" to very sophisticated and computationally "expensive". Moreover, different forecast methods handle different data patterns and structural changes differently: for some particular data types or data intervals some particular forecast methods are better than others, something that is usually not known beforehand. This raises a question: "Can one design a forecast procedure that effectively and optimally switches between various forecast methods, adapting the usage of the forecast methods to the changes in the incoming data flow?" The thesis answers this question by introducing an optimality concept that allows optimal switching between simultaneously executed forecast methods, thus "tailoring" the forecast methods to the changes in the data. It is also shown how another forecast approach, combinational forecasting, where forecast methods are combined using a weighted average, can be utilized by the optimality principle and can therefore benefit from it. Thus, four classes of forecast results can be considered and compared: basic forecast methods, basic optimality, combinational forecasting, and combinational optimality. The thesis shows that the use of optimality gives results where, most of the time, optimality is no worse than or better than the best of the forecast methods that optimality is based on. Optimality also reduces the scatter of a multitude of forecast suggestions to a single number or only a few numbers (in a controllable fashion). Optimality additionally gives a lower bound for optimal forecasting: the hypothetically best achievable forecast result. The main conclusion is that the optimality approach makes more or less obsolete other traditional ways of treating monitoring processes: trying to find the single best forecast method for some structurally changing data. This search can still be carried out, of course, but it is best done within the optimality approach, as its innate component. All this makes the proposed optimality approach for forecasting purposes a valid "representative" of a broader ensemble approach (which likewise motivated the development of the now popular ensemble learning concept as a valid part of the machine learning framework). / Denna avhandling syftar till att undersöka en praktisk ett-steg-i-taget prediktering av strukturmässigt skiftande data, ett icke-stabilt beteende som verkliga data kopplade till människoaktiviteter ofta demonstrerar. Denna uppsättning kan alltså karakteriseras som övervakningsprocess eller monitoringsprocess. Olika prediktionsmodeller, metoder och tillvägagångssätt kan variera från att vara enkla och "beräkningsbilliga" till sofistikerade och "beräkningsdyra". Olika prediktionsmetoder hanterar dessutom olika mönster eller strukturförändringar i data på olika sätt: för vissa typer av data eller vissa dataintervall är vissa prediktionsmetoder bättre än andra, vilket inte brukar vara känt i förväg. Detta väcker en fråga: "Kan man skapa en prediktionsprocedur, som effektivt och på ett optimalt sätt skulle byta mellan olika prediktionsmetoder för att adaptera dess användning till ändringar i inkommande dataflöde?"
Avhandlingen svarar på frågan genom att introducera optimalitetskoncept eller optimalitet, något som tillåter ett optimalt byte mellan parallellt utförda prediktionsmetoder, för att på så sätt skräddarsy prediktionsmetoder till förändringar i data. Det visas också hur ett annat prediktionstillvägagångssätt, kombinationsprediktering, där olika prediktionsmetoder kombineras med hjälp av viktat medelvärde, kan utnyttjas av optimalitetsprincipen och därmed få nytta av den. Alltså, fyra klasser av prediktionsresultat kan betraktas och jämföras: basprediktionsmetoder, basoptimalitet, kombinationsprediktering och kombinationsoptimalitet. Denna avhandling visar att användning av optimalitet ger resultat där optimaliteten för det mesta inte är sämre eller bättre än den bästa av enskilda prediktionsmetoder, som själva optimaliteten är baserad på. Optimalitet reducerar också spridningen från mängden av olika prediktionsförslag till ett tal eller bara några enstaka tal (på ett kontrollerat sätt). Optimalitet producerar ytterligare en nedre gräns för optimalprediktion: det hypotetiskt bästa uppnåeliga prediktionsresultatet. Huvudslutsatsen är följande: optimalitetstillvägagångssättet gör att andra traditionella sätt att ta hand om övervakningsprocesser blir mer eller mindre föråldrade: att leta bara efter den enda bästa enskilda prediktionsmetoden för data med strukturskift. Sådan sökning kan fortfarande göras, men det är bäst att göra den inom optimalitetstillvägagångssättet, där den ingår som en naturlig komponent. Allt detta gör det föreslagna optimalitetstillvägagångssättet för prediktionsändamål till en giltig "representant" för det mer allmänna ensembletillvägagångssättet (något som också motiverade utvecklingen av numera populär Ensembleinlärning som en giltig del av Maskininlärning).
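As a rough illustration of the switching and combining ideas described above, the sketch below runs three simple one-step-ahead forecasters in parallel and, at each step, either switches to the method with the smallest recent absolute error or combines all methods with inverse-error weights. The forecasters, window length and data are assumptions made for illustration and do not reproduce the thesis's actual optimality criterion.

```python
import numpy as np

# Run several simple one-step-ahead forecasters in parallel and, at every
# step, either switch to the one with the smallest recent error or combine
# all of them with inverse-error weights.  Purely illustrative.
rng = np.random.default_rng(4)
data = np.concatenate([10 + rng.normal(0, 1, 100),                           # stable level...
                       10 + 0.2 * np.arange(100) + rng.normal(0, 1, 100)])   # ...then a trend
n = len(data)

def naive(h):  return h[-1]                        # last observed value
def mean5(h):  return np.mean(h[-5:])              # short moving average
def drift(h):  return h[-1] + (h[-1] - h[-5]) / 4  # crude local trend

methods = [naive, mean5, drift]
window = 10
errors = np.zeros((len(methods), n))
switched, combined = np.full(n, np.nan), np.full(n, np.nan)

for t in range(5, n):
    history = data[:t]
    preds = np.array([m(history) for m in methods])
    errors[:, t] = preds - data[t]
    if t >= 5 + window:
        recent = np.abs(errors[:, t - window:t]).mean(axis=1)
        switched[t] = preds[np.argmin(recent)]           # switch to currently best method
        weights = 1.0 / (recent + 1e-9)
        combined[t] = np.sum(weights * preds) / weights.sum()

mask = ~np.isnan(switched)
print("MAE switching: %.3f" % np.mean(np.abs(switched[mask] - data[mask])))
print("MAE combining: %.3f" % np.mean(np.abs(combined[mask] - data[mask])))
```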