441 |
Proximal Splitting Methods in Nonsmooth Convex Optimization Hendrich, Christopher 25 July 2014 (has links) (PDF)
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we are also making use of monotone operator theory as some of the provided algorithms are originally designed to solve monotone inclusion problems.
After introducing basic notation and preliminary results from convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem to a given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach acts on the primal optimization problem directly by applying a single regularization to it and is capable of using variable smoothing parameters, which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they provide a full splitting philosophy: the resolvents that arise in the iterative process are evaluated separately for each maximally monotone operator occurring in the problem description. We first analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel-sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy, reformulating the corresponding inclusion problem in a suitable way so that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators.
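To illustrate the splitting idea, the following is a minimal Python sketch of the classical Douglas–Rachford iteration for minimizing f + g through the separate proximal operators of f and g; it is a textbook scheme rather than the primal-dual variants developed in the thesis, and the step size and example data are purely illustrative.

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, n_iter=200):
    """Generic Douglas-Rachford iteration for minimizing f(x) + g(x), where
    only the proximal operators of f and g are evaluated (never a joint one).
    prox_f and prox_g return prox_{gamma f}(v) and prox_{gamma g}(v) for some
    fixed step size gamma baked into them."""
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(n_iter):
        x = prox_g(z)              # backward step with respect to g
        y = prox_f(2.0 * x - z)    # reflected backward step with respect to f
        z = z + (y - x)            # correction; prox_g(z_k) converges to a minimizer
    return prox_g(z)

def prox_l1(v, thresh):
    """Soft-thresholding: the proximal operator of thresh * ||.||_1,
    a standard example of a nonsmooth term."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

# Example: minimize 0.5*||x - b||^2 + 0.1*||x||_1 using only the two proximal maps
b, gamma = np.array([3.0, -0.2, 0.05]), 1.0
x_star = douglas_rachford(lambda v: (v + gamma * b) / (1.0 + gamma),
                          lambda v: prox_l1(v, 0.1 * gamma), np.zeros(3))
```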
The last part of this thesis deals with various numerical experiments in which we compare our methods against algorithms from the literature. The problems arising in this part are manifold and reflect the importance of this field of research, as convex optimization problems appear in many applications of interest.
|
442 |
Cox模式有時間相依共變數下預測問題之研究 陳志豪, Chen, Chih-Hao Unknown Date (has links)
共變數的值會隨著時間而改變時,我們稱之為時間相依之共變數。時間相依之共變數往往具有重複測量的特性,也是長期資料裡最常見到的一種共變數形態;在對時間相依之共變數進行重複測量時,可以考慮每次測量的間隔時間相同或是間隔時間不同兩種情形。在間隔時間相同的情形下,我們可以忽略間隔時間所產生的效應,利用分組的Cox模式或是合併的羅吉斯迴歸模式來分析,而合併的羅吉斯迴歸是一種把資料視為“對象 時間單位”形態的分析方法;此外,分組的Cox模式和合併的羅吉斯迴歸模式也都可以用來預測存活機率。在某些條件滿足下,D’Agostino等六人在1990年已經證明出這兩個模式所得到的結果會很接近。
當間隔時間為不同時，我們可以用計數過程下的Cox模式來分析，在計數過程下的Cox模式中，資料是以“對象 區間”的形態來分析。2001年Bruijne等人則是建議把間隔時間也視為一個時間相依之共變數，並將其以B-spline函數加至模式中分析；在我們論文的實證分析裡也顯示間隔時間在延伸的Cox模式中的確是個很顯著的時間相依之共變數。延伸的Cox模式為間隔時間不同下的時間相依之共變數提供了另一個分析方法。至於在時間相依之共變數的預測方面，我們是以指數趨勢平滑法來預測其未來時間點的數值；利用預測出來的時間相依之共變數值再搭配延伸的Cox模式即可預測未來的存活機率。 / Covariates whose values change over time are called "time-dependent covariates." Time-dependent covariates are measured repeatedly and often appear in longitudinal data. They can be measured regularly or irregularly. In the regular case, we can ignore the TEL (time elapsed since last observation) effect and employ either the grouped Cox model or the pooled logistic regression model. The pooled logistic regression is an analytic method using the "person-period" approach. The grouped Cox model and the pooled logistic regression model can also be used to predict survival probability. D'Agostino et al. (1990) proved that the pooled logistic regression model is asymptotically equivalent to the grouped Cox model.
If time-dependent covariates are observed irregularly, the Cox model under the counting process formulation may be used. Before making predictions we must turn the original data into the "person-interval" form; this data form is also suitable for prediction with the grouped Cox model under regular measurements. de Bruijne et al. (2001) first considered TEL as a time-dependent covariate and used a B-spline function to model it in their proposed extended Cox model. Our empirical analysis likewise shows that TEL is a highly significant time-dependent covariate. The extended Cox model thus provides an alternative for irregularly measured time-dependent covariates. As for predicting the time-dependent covariates themselves, we use exponential smoothing with trend to forecast their values at future time points; combining the predicted covariate values with the extended Cox model, we can then predict future survival probabilities.
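As a rough sketch of the covariate-forecasting step, the following implements Holt's exponential smoothing with a linear trend (one common form of "exponential smoothing with trend"); the smoothing constants and the example measurements are placeholders, not values from the thesis.

```python
import numpy as np

def holt_linear_forecast(y, alpha=0.3, beta=0.1, horizon=1):
    """Holt's exponential smoothing with a linear trend for a repeatedly
    measured covariate. y is ordered oldest-first; alpha and beta are
    illustrative smoothing constants (in practice chosen by minimizing
    one-step-ahead forecast error)."""
    y = np.asarray(y, dtype=float)
    level, trend = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        previous_level = level
        level = alpha * y[t] + (1.0 - alpha) * (level + trend)
        trend = beta * (level - previous_level) + (1.0 - beta) * trend
    return np.array([level + h * trend for h in range(1, horizon + 1)])

# e.g. forecast the next value of a covariate measured at five visits
next_value = holt_linear_forecast([4.1, 4.3, 4.2, 4.6, 4.8], horizon=1)
```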
|
443 |
Nouveaux regards sur la longévité : analyse de l'âge modal au décès et de la dispersion des durées de vie selon les principales causes de décès au Canada (1974-2011) Diaconu, Viorela 08 1900 (has links)
No description available.
|
444 |
Advances and Applications of Experimental Measures to Test Behavioral Saving Theories and a Method to Increase Efficiency in Binary and Multiple Treatment Assignment Schneider, Sebastian Olivier 24 November 2017 (has links)
No description available.
|
445 |
Quelques contributions à l'estimation des modèles définis par des équations estimantes conditionnelles / Some contributions to the statistical inference in models defined by conditional estimating equations Li, Weiyu 15 July 2015 (has links)
Dans cette thèse, nous étudions des modèles définis par des équations de moments conditionnels. Une grande partie de modèles statistiques (régressions, régressions quantiles, modèles de transformations, modèles à variables instrumentales, etc.) peuvent se définir sous cette forme. Nous nous intéressons au cas des modèles avec un paramètre à estimer de dimension finie, ainsi qu'au cas des modèles semi paramétriques nécessitant l'estimation d'un paramètre de dimension finie et d'un paramètre de dimension infinie. Dans la classe des modèles semi paramétriques étudiés, nous nous concentrons sur les modèles à direction révélatrice unique qui réalisent un compromis entre une modélisation paramétrique simple et précise, mais trop rigide et donc exposée à une erreur de modèle, et l'estimation non paramétrique, très flexible mais souffrant du fléau de la dimension. En particulier, nous étudions ces modèles semi paramétriques en présence de censure aléatoire. Le fil conducteur de notre étude est un contraste sous la forme d'une U-statistique, qui permet d'estimer les paramètres inconnus dans des modèles généraux. / In this dissertation we study statistical models defined by conditional estimating equations. Many statistical models can be stated in this form (mean regression, quantile regression, transformation models, instrumental variable models, etc.). We consider models with a finite-dimensional unknown parameter, as well as semiparametric models involving an additional infinite-dimensional parameter. In the latter case, we focus on single-index models, which strike an appealing compromise between parametric specifications, simple and leading to accurate estimates but too restrictive and likely misspecified, and nonparametric approaches, flexible but suffering from the curse of dimensionality. In particular, we study single-index models in the presence of random censoring. The guiding thread of our study is a contrast in the form of a U-statistic, which allows the unknown parameters to be estimated in a wide spectrum of models.
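For concreteness, the conditional estimating-equation framework and two of the familiar special cases mentioned in the abstract can be written as follows; this is standard notation, with m, q_tau and r denoting generic regression, quantile and link functions rather than the thesis's specific choices.

```latex
% Conditional estimating equation defining the parameter \theta_0:
\[
  \mathbb{E}\bigl[\, g(Y, X; \theta_0) \mid X \,\bigr] = 0 \quad \text{a.s.}
\]
% Two familiar special cases:
\[
  \text{mean regression: } g = Y - m(X;\theta), \qquad
  \text{quantile regression: } g = \mathbf{1}\{Y \le q_\tau(X;\theta)\} - \tau .
\]
% Single-index restriction with unknown link r and index direction \theta:
\[
  \mathbb{E}[\, Y \mid X \,] = r\bigl(\theta^{\top} X\bigr).
\]
```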
|
446 |
Silové a deformační chování duktilních mikropilot v soudržných zeminách / Load-displacement behavior of ductile micropiles in cohesive soils Stoklasová, Andrea January 2020 (has links)
This thesis focuses on the construction of mobilization curves based on data obtained from standard and detailed monitoring of a load test. The load test was performed on a 9-meter-long ductile micropile. The first part of the thesis explains the methods and principles that were used to construct the mobilization curves. This is followed by a description of ductile micropile technology and of the load test. The next part explains, in general terms, the procedure applied to the evaluated data. The evaluation was carried out with the Microsoft Excel spreadsheet and the Matlab programming language with the Kernel Smoothing extension. The last chapter of the thesis interprets the load transfer functions together with the skin friction and the micropile displacement.
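A minimal illustration of the kernel-smoothing step: the thesis used Matlab's Kernel Smoothing extension, so the Gaussian-kernel Python sketch below, with made-up depth/displacement data and an arbitrary bandwidth, only illustrates the same idea.

```python
import numpy as np

def gaussian_kernel_smoother(x, y, x_eval, bandwidth):
    """Nadaraya-Watson kernel smoother with a Gaussian kernel: a smooth
    curve through noisy (x, y) measurements, evaluated at x_eval."""
    x, y, x_eval = (np.asarray(a, dtype=float) for a in (x, y, x_eval))
    weights = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bandwidth) ** 2)
    return (weights @ y) / weights.sum(axis=1)

# e.g. smooth measured displacements over depth before differentiating them
depth = np.linspace(0.0, 9.0, 19)
displacement = 2.0 * np.exp(-depth / 4.0) + 0.05 * np.random.randn(depth.size)
smoothed = gaussian_kernel_smoother(depth, displacement, depth, bandwidth=0.8)
```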
|
447 |
Predikce časových řad pomocí statistických metod / Prediction of Time Series Using Statistical Methods Beluský, Ondrej January 2011 (has links)
Many companies consider it essential to obtain forecasts of time series of uncertain variables that influence their decisions and actions. Marketing includes a number of decisions that depend on a reliable forecast. Forecasts are based directly or indirectly on information derived from historical data. These data may contain different patterns, such as a trend, a horizontal pattern, and cyclical or seasonal patterns. Most methods are based on recognizing these patterns, projecting them into the future, and thus creating a forecast. Other approaches, such as neural networks, are black boxes that rely on learning.
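As a small illustration of the pattern-recognition idea (trend plus seasonality), here is a sketch of a classical additive decomposition; the moving-average trend and cyclic seasonal means are a simplified, assumed setup rather than any specific method from the thesis.

```python
import numpy as np

def additive_decomposition(y, period):
    """Classical additive decomposition of a series into trend, seasonal
    pattern and remainder (moving-average edge effects are ignored here)."""
    y = np.asarray(y, dtype=float)
    trend = np.convolve(y, np.ones(period) / period, mode="same")   # moving-average trend
    detrended = y - trend
    seasonal_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal_means -= seasonal_means.mean()                         # centre the seasonal pattern
    seasonal = np.resize(seasonal_means, y.size)
    remainder = detrended - seasonal
    return trend, seasonal, remainder

# A naive forecast then projects the last trend level plus the seasonal pattern.
```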
|
448 |
AUTOMATED OPTIMAL FORECASTING OF UNIVARIATE MONITORING PROCESSES : Employing a novel optimal forecast methodology to define four classes of forecast approaches and testing them on real-life monitoring processes Razroev, Stanislav January 2019 (has links)
This work aims to explore practical one-step-ahead forecasting of structurally changing data, an unstable behaviour that real-life data connected to human activity often exhibit. This setting can be characterized as a monitoring process. Forecast models, methods and approaches range from simple and computationally "cheap" to very sophisticated and computationally "expensive". Moreover, different forecast methods handle different data patterns and structural changes differently: for some particular data types or data intervals, some forecast methods are better than others, something that is usually not known beforehand. This raises a question: "Can one design a forecast procedure that effectively and optimally switches between various forecast methods, adapting the usage of the forecast methods to the changes in the incoming data flow?" The thesis answers this question by introducing an optimality concept that allows optimal switching between simultaneously executed forecast methods, thus "tailoring" the forecast methods to the changes in the data. It is also shown how another forecast approach, combinational forecasting, in which forecast methods are combined using a weighted average, can be utilized by the optimality principle and can therefore benefit from it. Thus, four classes of forecast results can be considered and compared: basic forecast methods, basic optimality, combinational forecasting, and combinational optimality. The thesis shows that the use of optimality gives results in which, most of the time, optimality is no worse than, or better than, the best of the forecast methods on which it is based. Optimality also reduces the scatter from a multitude of forecast suggestions to a single number or only a few numbers (in a controllable fashion). Optimality additionally gives a lower bound for optimal forecasting: the hypothetically best achievable forecast result. The main conclusion is that the optimality approach makes other traditional ways of treating monitoring processes, namely trying to find the single best forecast method for some structurally changing data, more or less obsolete. Such a search can still be pursued, of course, but it is best done within the optimality approach as one of its innate components. All this makes the proposed optimality approach for forecasting purposes a valid "representative" of a broader ensemble approach (which likewise motivated the development of the now popular Ensemble Learning concept as part of the Machine Learning framework). / Denna avhandling syftar till undersöka en praktisk ett-steg-i-taget prediktering av strukturmässigt skiftande data, ett icke-stabilt beteende som verkliga data kopplade till människoaktiviteter ofta demonstrerar. Denna uppsättning kan alltså karakteriseras som övervakningsprocess eller monitoringsprocess. Olika prediktionsmodeller, metoder och tillvägagångssätt kan variera från att vara enkla och "beräkningsbilliga" till sofistikerade och "beräkningsdyra". Olika prediktionsmetoder hanterar dessutom olika mönster eller strukturförändringar i data på olika sätt: för vissa typer av data eller vissa dataintervall är vissa prediktionsmetoder bättre än andra, vilket inte brukar vara känt i förväg. Detta väcker en fråga: "Kan man skapa en predictionsprocedur, som effektivt och på ett optimalt sätt skulle byta mellan olika prediktionsmetoder och för att adaptera dess användning till ändringar i inkommande dataflöde?"
Avhandlingen svarar på frågan genom att introducera optimalitetskoncept eller optimalitet, något som tillåter ett optimalbyte mellan parallellt utförda prediktionsmetoder, för att på så sätt skräddarsy prediktionsmetoder till förändringar i data. Det visas också, hur ett annat prediktionstillvägagångssätt: kombinationsprediktering, där olika prediktionsmetoder kombineras med hjälp av viktat medelvärde, kan utnyttjas av optimalitetsprincipen och därmed få nytta av den. Alltså, fyra klasser av prediktionsresultat kan betraktas och jämföras: basprediktionsmetoder, basoptimalitet, kombinationsprediktering och kombinationsoptimalitet. Denna avhandling visar, att användning av optimalitet ger resultat, där optimaliteten för det mesta inte är sämre eller bättre än den bästa av enskilda prediktionsmetoder, som själva optimaliteten är baserad på. Optimalitet reducerar också spridningen från mängden av olika prediktionsförslag till ett tal eller bara några enstaka tal (på ett kontrollerat sätt). Optimalitet producerar ytterligare en nedre gräns för optimalprediktion: det hypotetiskt bästa uppnåeliga prediktionsresultatet. Huvudslutsatsen är följande: optimalitetstillvägagångssätt gör att andra traditionella sätt att ta hand om övervakningsprocesser blir mer eller mindre föråldrade: att leta bara efter den enda bästa enskilda prediktionsmetoden för data med strukturskift. Sådan sökning kan fortfarande göras, men det är bäst att göra den inom optimalitetstillvägagångssättet, där den ingår som en naturlig komponent. Allt detta gör det föreslagna optimalitetstillvägagångssättetet för prediktionsändamål till en giltig "representant" för det mer allmäna ensembletillvägagångssättet (något som också motiverade utvecklingen av numera populär Ensembleinlärning som en giltig del av Maskininlärning).
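A minimal sketch of the switching and weighted-average combination ideas described in the abstract above; the error window and inverse-error weights are illustrative assumptions, not the thesis's optimality criterion.

```python
import numpy as np

def switch_and_combine(y, methods, window=10):
    """Run several one-step-ahead forecasters in parallel; at each step pick
    the one with the smallest recent absolute error ("switching") and also
    form an inverse-error weighted average ("combination")."""
    y = np.asarray(y, dtype=float)
    n, m = y.size, len(methods)
    preds = np.full((n, m), np.nan)
    switched, combined = np.full(n, np.nan), np.full(n, np.nan)
    for t in range(1, n):
        preds[t] = [f(y[:t]) for f in methods]       # forecasts of y[t] from the history
        if t >= 2:
            lo = max(1, t - window)
            errors = np.mean(np.abs(preds[lo:t] - y[lo:t, None]), axis=0)
            switched[t] = preds[t, np.argmin(errors)]
            weights = 1.0 / (errors + 1e-12)
            combined[t] = np.sum(weights * preds[t]) / np.sum(weights)
    return switched, combined

# e.g. methods = [lambda h: h[-1], lambda h: h.mean()]  # naive and mean forecasters
```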
|
449 |
Erhöhung der Qualität und Verfügbarkeit von satellitengestützter Referenzsensorik durch Smoothing im Postprocessing Bauer, Stefan 08 November 2012 (has links)
This work investigates postprocessing methods for increasing the accuracy and availability of satellite-based positioning techniques that operate without inertial sensors. The goal is to produce, even under difficult reception conditions such as those prevailing in urban areas, a trajectory whose accuracy qualifies it as a reference for other methods. Two approaches are pursued: the use of IGS data, and smoothing that incorporates sensors from the vehicle odometry. It is shown that using IGS data reduces the error by 50% to 70%. Furthermore, the smoothing methods demonstrated that they consistently achieve decimetre-level accuracy even under poor reception conditions.
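As an illustration of smoothing in postprocessing, the following is a textbook Rauch-Tung-Striebel fixed-interval smoother sketch; it is not the thesis's specific GNSS/odometry fusion, and the constant F and Q matrices are simplifying assumptions.

```python
import numpy as np

def rts_smoother(x_filt, P_filt, F, Q):
    """Rauch-Tung-Striebel fixed-interval smoother run backwards over stored
    forward Kalman-filter results (one state estimate x and covariance P per
    epoch). F and Q are the state-transition and process-noise matrices."""
    n = len(x_filt)
    x_s = [np.array(x) for x in x_filt]
    P_s = [np.array(P) for P in P_filt]
    for k in range(n - 2, -1, -1):
        P_pred = F @ P_filt[k] @ F.T + Q                 # predicted covariance at k+1
        G = P_filt[k] @ F.T @ np.linalg.inv(P_pred)      # smoother gain
        x_s[k] = x_filt[k] + G @ (x_s[k + 1] - F @ x_filt[k])
        P_s[k] = P_filt[k] + G @ (P_s[k + 1] - P_pred) @ G.T
    return x_s, P_s
```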
|
450 |
Méthodes de contrôle de la qualité de solutions éléments finis: applications à l'acoustique Bouillard, Philippe 05 December 1997 (has links)
This work is dedicated to the control of the accuracy of computational simulations of sound propagation and scattering. Assuming time-harmonic behaviour, the mathematical models are given as boundary value problems for the Helmholtz equation Δu + k²u = 0 in Ω. A distinction is made between interior, exterior and coupled problems, and this work focuses mainly on interior uncoupled problems, for which the Helmholtz equation becomes singular at eigenfrequencies.

As in other application fields, error control is an important issue in acoustic computations. It is clear that the numerical parameters (mesh size h and degree of approximation p) must be adapted to the physical parameter k. The well-known 'rule of thumb' for the h-version with linear elements is to resolve the wavelength λ = 2πk⁻¹ by six elements, a criterion characterising the approximability of the finite element mesh. If the numerical model is stable, the quality of the numerical solution is entirely controlled by the approximability of the finite element mesh. The situation is quite different in the presence of singularities. In that case, stability (or the lack thereof) is equally (sometimes more) important. In our application, the solutions are 'rough', i.e. highly oscillatory, if the wavenumber is large. This is a singularity inherent to the differential operator rather than to the domain or the boundary conditions. This effect is called the k-singularity. Similarly, the discrete operator ("stiffness" matrix) becomes singular at eigenvalues of the discretised interior problem (or nearly singular at damped eigenvalues in solid-fluid interaction). This type of singularity is called the lambda-singularity. Both singularities are of global character. Without adaptive correction, their destabilizing effect generally leads to large errors in the finite element results, even if the finite element mesh satisfies the 'rule of thumb'.

The k- and lambda-singularities are first extensively demonstrated by numerical examples. Then, two a posteriori error estimators are developed, and the numerical tests show that, due to these specific phenomena of dynamo-acoustic computations, error control cannot, in general, be accomplished by just 'transplanting' methods that worked well in static computations. However, for low wavenumbers, it is also necessary to control the influence of the geometric (reentrant corners) or physical (discontinuities of the boundary conditions) singularities. An h-adaptive version with refinements has been implemented. These tools have been applied to two industrial examples: the GLT, a bi-mode bus from Bombardier Eurorail, and the Vertigo, a sports car from Gillet Automobiles.

In conclusion, it is recommended to replace the rule of thumb by a criterion based on controlling the influence of the specific singularities of the Helmholtz operator. As this aim cannot be achieved by the a posteriori error estimators, it is suggested to minimize the influence of the singularities by modifying the formulation of the finite element method or by formulating a "meshless" method. / Doctorat en sciences appliquées
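A small illustration of the quoted rule of thumb; the function name and the example wavenumber are only for illustration.

```python
import numpy as np

def rule_of_thumb_mesh_size(k, elements_per_wavelength=6):
    """Mesh-size bound from the 'rule of thumb' quoted above: resolve the
    wavelength lambda = 2*pi/k with six linear elements. As the abstract
    argues, this criterion alone cannot control the k- and lambda-singularities."""
    wavelength = 2.0 * np.pi / k
    return wavelength / elements_per_wavelength

# e.g. for k = 50 (rad per unit length) the rule gives h <= ~0.021
h_max = rule_of_thumb_mesh_size(50.0)
```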
|