1. Estimation in partly parametric additive Cox models. Läuter, Henning (January 2003)
The dependence between survival times and covariates is described, for example, by proportional hazards models. We consider partly parametric Cox models and discuss the estimation of the parameters of interest. We present the maximum likelihood approach and extend the results of Huang (1999) from linear to nonlinear parameters. Then we investigate the least squares estimation and formulate conditions for the a.s. boundedness and consistency of these estimators.
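For orientation, a proportional hazards model ties the conditional hazard to the covariates through an unspecified baseline hazard and a relative-risk term; in the partly parametric additive setting considered here, that term combines a parametric component with an unknown additive function. A hedged sketch of this structure, with notation chosen for illustration rather than taken from the thesis:

```latex
\lambda(t \mid X, W) = \lambda_0(t)\, \exp\{\, g(X; \theta) + \varphi(W) \,\}
```

Here \lambda_0 is the unspecified baseline hazard, g is a known function of the covariates X indexed by a finite-dimensional parameter \theta (linear in Huang (1999), nonlinear in the extension discussed above), and \varphi is an unknown smooth additive component in the covariates W.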
2. Efficient Semiparametric Estimators for Nonlinear Regressions and Models under Sample Selection Bias. Kim, Mi Jeong (August 2012)
We study the consistency, robustness and efficiency of parameter estimation in different but related models via a semiparametric approach. First, we revisit the second-order least squares estimator proposed in Wang and Leblanc (2008) and show that the estimator attains semiparametric efficiency. We further extend the method to heteroscedastic error models and propose a semiparametric efficient estimator in this more general setting. Second, we study a class of semiparametric skewed distributions arising when the sample selection process causes sampling bias in the observations. We begin by assuming an anti-symmetry property for the skewing function. Taking into account the symmetric nature of the population distribution, we propose consistent estimators for the center of the symmetric population. These estimators are robust to model misspecification and attain the minimum possible estimation variance. Next, we extend the model to permit a more flexible skewing structure. Without assuming a particular form of the skewing function, we propose consistent and efficient estimators for the center of the symmetric population using a semiparametric method. We also analyze the asymptotic properties and derive the corresponding inference procedures. Numerical results are provided to support the theory and illustrate the finite sample performance of the proposed estimators.
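As a concrete illustration of the second-order least squares idea cited above (Wang and Leblanc, 2008), the sketch below fits a nonlinear mean function by matching both the first and second conditional moments of the response. It is a minimal example with identity weighting and an invented exponential mean function; the semiparametric efficient versions studied in the thesis use optimal weight matrices and heteroscedastic extensions that are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def sls_objective(params, x, y, mean_fn):
    """Second-order least squares criterion with identity weights:
    matches y_i to g(x_i; theta) and y_i^2 to g(x_i; theta)^2 + sigma2."""
    *theta, sigma2 = params
    g = mean_fn(x, np.asarray(theta))
    r1 = y - g                     # first-moment residual
    r2 = y**2 - g**2 - sigma2      # second-moment residual
    return np.sum(r1**2 + r2**2)

def mean_fn(x, theta):
    # hypothetical nonlinear mean: g(x; theta) = theta0 * exp(theta1 * x)
    return theta[0] * np.exp(theta[1] * x)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, size=200)
y = mean_fn(x, np.array([1.5, 0.8])) + rng.normal(scale=0.3, size=200)

fit = minimize(sls_objective, x0=[1.0, 0.5, 0.1], args=(x, y, mean_fn),
               method="Nelder-Mead")
print(fit.x)  # estimates of (theta0, theta1, sigma2)
```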
3. Optimal regression design under second-order least squares estimator: theory, algorithm and applications. Yeh, Chi-Kuang (23 July 2018)
In this thesis, we first review the current development of optimal regression designs under the second-order least squares estimator in the literature. The criteria include A- and D-optimality. We then introduce a new formulation of the A-optimality criterion so that the results can be extended to c-optimality, which has not been studied before. Following Kiefer's equivalence results, we derive the optimality conditions for A-, c- and D-optimal designs under the second-order least squares estimator. In addition, we study the number of support points for various regression models, including Peleg models, trigonometric models, and regular and fractional polynomial models. A generalized scale invariance property for D-optimal designs is also explored. Furthermore, we discuss a computing algorithm to find optimal designs numerically. Several interesting applications are presented, and related MATLAB code is provided in the thesis.
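To make the design computations concrete, the sketch below runs the classical multiplicative algorithm for an approximate D-optimal design on a candidate grid. Note the assumption: it uses the ordinary least squares information matrix, the sum of w_i f(x_i) f(x_i)^T; the second-order least squares criteria studied in the thesis modify this information matrix, and that modification is not reproduced here.

```python
import numpy as np

def d_optimal_weights(F, n_iter=2000, tol=1e-9):
    """Multiplicative algorithm for an approximate D-optimal design.
    F: (n, p) matrix whose rows are regression vectors f(x_i) on a candidate
    grid. Returns design weights w (summing to one) over the grid."""
    n, p = F.shape
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        M = F.T @ (w[:, None] * F)                             # sum_i w_i f_i f_i^T
        d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)   # f_i^T M^{-1} f_i
        w_new = w * d / p                                      # multiplicative update
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

# Hypothetical example: quadratic model 1 + x + x^2 on [-1, 1]
grid = np.linspace(-1.0, 1.0, 201)
F = np.column_stack([np.ones_like(grid), grid, grid**2])
w = d_optimal_weights(F)
print(grid[w > 1e-3], w[w > 1e-3])  # mass concentrates near -1, 0, 1 (weights ~ 1/3)
```

A- and c-optimal variants keep the same weight-update structure and replace the sensitivity function d(x) with the corresponding directional derivative from the equivalence theorem.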
4. Minimax D-optimal designs for regression models with heteroscedastic errors. Yzenbrandt, Kai (20 April 2021)
Minimax D-optimal designs for regression models with heteroscedastic errors are studied and constructed. These designs are robust against possible misspecification of the error variance in the model. We propose a flexible assumption for the error variance and use a minimax approach to define robust designs. As is often the case, it is hard to find robust designs analytically, since the associated design problem is not a convex optimization problem. However, the minimax D-optimal design problem has an objective function that is a difference of two convex functions. An effective algorithm is developed to compute minimax D-optimal designs under the least squares estimator and the generalized least squares estimator. The algorithm can be applied to construct minimax D-optimal designs for any linear or nonlinear regression model with heteroscedastic errors. In addition, several theoretical results are obtained for the minimax D-optimal designs.
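The following sketch illustrates only the evaluation side of such a minimax criterion: the worst-case log-determinant of a generalized least squares information matrix over a finite set of plausible error-variance functions. The optimization algorithm of the thesis (which exploits the difference-of-convex structure) is not reproduced, and the finite candidate set is an illustrative stand-in for the thesis's flexible variance assumption.

```python
import numpy as np

def gls_information(F, w, variances):
    """Information matrix of the GLS estimator: sum_i w_i f_i f_i^T / sigma^2(x_i)."""
    return F.T @ ((w / variances)[:, None] * F)

def worst_case_logdet(F, w, candidate_variances):
    """Minimax D-criterion value: the smallest log-determinant of the GLS
    information matrix over a finite set of plausible variance functions,
    each given as its vector of values on the design grid."""
    return min(np.linalg.slogdet(gls_information(F, w, v))[1]
               for v in candidate_variances)

# Hypothetical example: straight-line model on [-1, 1] with a uniform design;
# the error variance is either constant or increasing in x^2
grid = np.linspace(-1.0, 1.0, 21)
F = np.column_stack([np.ones_like(grid), grid])
w = np.full(grid.size, 1.0 / grid.size)
candidates = [np.ones_like(grid), 1.0 + grid**2]
print(worst_case_logdet(F, w, candidates))  # a minimax design maximizes this value
```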
5. An Improved C-Fuzzy Decision Tree and its Application to Vector Quantization. Chiu, Hsin-Wei (27 July 2006)
Over the last century, humankind has invented many tools in pursuit of a better and more comfortable living environment. The computer is among the most important of these inventions, and its computational power far exceeds our own. Because computers can process large amounts of data quickly and accurately, this advantage is used to imitate human thinking, and artificial intelligence has developed extensively; methods such as neural networks, data mining, and fuzzy logic are applied in many fields (e.g., fingerprint recognition, image compression, and antenna design). In this thesis we investigate prediction techniques based on decision trees and fuzzy clustering. The C-fuzzy decision tree builds a classifier by fuzzy clustering and then constructs a decision tree to make predictions. However, in its distance function the influence of the target space is inversely proportional, which can cause problems on some datasets. Moreover, representing the output model of each leaf node by a constant restricts how well the node can describe the data distribution within it. We propose a more reasonable distance function that weights both input and target differences, and we extend the output model of each leaf node to a local linear model whose parameters are estimated with a recursive SVD-based least squares estimator. Experimental results show that our improved version produces higher recognition rates for classification problems and smaller mean squared errors for regression problems.
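The SVD route to least squares mentioned above can be sketched as follows; this is a minimal batch version based on the truncated pseudoinverse, not the recursive estimator of the thesis, and the leaf-node data are invented for illustration.

```python
import numpy as np

def svd_least_squares(A, y, rcond=1e-10):
    """Solve min ||A beta - y||_2 via the SVD pseudoinverse; small singular
    values are truncated to keep the solve stable when a leaf node's design
    matrix is ill-conditioned."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ y))

# Hypothetical leaf node: fit y ~ b0 + b1*x1 + b2*x2 on the points in the node
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = 0.5 + 1.2 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.05, size=50)
A = np.column_stack([np.ones(len(X)), X])   # intercept plus inputs
print(svd_least_squares(A, y))              # approximately [0.5, 1.2, -0.7]
```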
6. General conditional linear models with time-dependent coefficients under censoring and truncation. Teodorescu, Bianca (19 December 2008)
In survival analysis, interest often lies in the relationship between the survival function and a number of covariates. It usually happens that for some individuals we cannot observe the event of interest, due to the presence of right censoring and/or left truncation. A typical example is given by a retrospective medical study, in which one is interested in the time interval between birth and death due to a certain disease. Patients who die of the disease at an early age will rarely have entered the study before death and are therefore left truncated. On the other hand, for patients who are alive at the end of the study, only a lower bound of the true survival time is known, and these patients are hence right censored.
In the case of censored and/or truncated responses, many models exist in the literature that describe the relationship between the survival function and the covariates (the proportional hazards or Cox model, the log-logistic model, the accelerated failure time model, the additive risks model, etc.). In these models, the regression coefficients are usually assumed to be constant over time. In practice, the structure of the data might however be more complex, and it might therefore be better to consider coefficients that can vary over time. In the previous example, certain covariates (e.g. age at diagnosis, type of surgery, extension of tumor) can have a relatively high impact on survival at an early age, but a lower influence at a higher age. This motivated a number of authors to extend the Cox model to allow for time-dependent coefficients, or to consider other types of time-dependent coefficient models, such as the additive hazards model.
In practice it is of great use to have at hand a method for checking the validity of the above-mentioned models.
First, we consider a very general model, which includes as special cases the above-mentioned models (the Cox model, the additive model, the log-logistic model, linear transformation models, etc.) with time-dependent coefficients, and we study parameter estimation by means of a least squares approach. The response is allowed to be subject to right censoring and/or left truncation.
Secondly, we propose an omnibus goodness-of-fit test that checks whether the general time-dependent model considered above fits the data. A bootstrap version to approximate the critical values of the test is also proposed.
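As a generic illustration of how bootstrap critical values are typically approximated, consider the sketch below; the test statistic, the resampling scheme, and the data are placeholders, and in particular the resampling for censored and truncated responses used in the thesis (usually performed under the fitted null model) is not reproduced.

```python
import numpy as np

def bootstrap_critical_value(data, statistic, resample, n_boot=999, level=0.05,
                             rng=None):
    """Approximate the (1 - level) critical value of `statistic` from its
    distribution over `n_boot` resampled data sets produced by `resample`."""
    rng = np.random.default_rng() if rng is None else rng
    boot = np.array([statistic(resample(data, rng)) for _ in range(n_boot)])
    return np.quantile(boot, 1.0 - level)

# Placeholder ingredients: a naive nonparametric resample and a toy statistic.
# For a goodness-of-fit test one would instead resample under the fitted model.
def resample(data, rng):
    return data[rng.integers(0, len(data), size=len(data))]

def statistic(data):
    return np.sqrt(len(data)) * abs(np.mean(data))

data = np.random.default_rng(7).normal(size=200)
crit = bootstrap_critical_value(data, statistic, resample)
print(statistic(data) > crit)   # reject-or-not decision at the 5% level
```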
In this dissertation, for each proposed method, the finite sample performance is evaluated in a simulation study, and the method is then applied to a real data set.
7. Type-2 Neuro-Fuzzy System Modeling with Hybrid Learning Algorithm. Yeh, Chi-Yuan (19 July 2011)
We propose a novel approach for building a type-2 neuro-fuzzy system from a given set of input-output training data. For an input pattern, a corresponding crisp output of the system is obtained by combining the inferred results of all the rules into a type-2 fuzzy set, which is then defuzzified by applying a type reduction algorithm. Karnik and Mendel proposed an algorithm, called the KM algorithm, to compute the centroid of an interval type-2 fuzzy set efficiently. Based on this algorithm, Liu developed a centroid type-reduction strategy for type-2 fuzzy sets: a type-2 fuzzy set is decomposed into a collection of interval type-2 fuzzy sets by α-cuts, and the KM algorithm is then applied to each interval type-2 fuzzy set iteratively. However, the initialization of the switch point in each application of the KM algorithm is not a good one. In this thesis, we present an improvement to Liu's algorithm. We employ the result obtained previously to construct the starting values in the current application of the KM algorithm. Convergence in each iteration except the first can then be sped up, and type reduction for type-2 fuzzy sets can be done faster. The efficiency of the improved algorithm is analyzed mathematically and demonstrated by experimental results.
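The switch-point idea behind the KM algorithm can be illustrated with a brute-force sketch: the centroid bounds of an interval type-2 fuzzy set are the extreme weighted averages obtained by switching, at some index, between the lower and upper membership grades. The KM algorithm locates that switch point iteratively, and the improvement described above warm-starts those iterations across successive α-cuts; the example data here are invented.

```python
import numpy as np

def it2_centroid_bounds(x, lower, upper):
    """Centroid interval [c_l, c_r] of an interval type-2 fuzzy set.
    x must be sorted ascending; lower/upper are the lower and upper membership
    grades at each x. Every switch point is enumerated; the KM algorithm finds
    the optimal one iteratively instead."""
    x, lo, up = (np.asarray(a, float) for a in (x, lower, upper))
    c_left, c_right = np.inf, -np.inf
    for k in range(len(x) + 1):   # k points take the "left" grade
        w_l = np.concatenate([up[:k], lo[k:]])   # upper grades on the left pull the centroid left
        w_r = np.concatenate([lo[:k], up[k:]])   # upper grades on the right pull the centroid right
        c_left = min(c_left, np.dot(x, w_l) / np.sum(w_l))
        c_right = max(c_right, np.dot(x, w_r) / np.sum(w_r))
    return c_left, c_right

# Hypothetical example: triangular upper membership and a scaled lower membership
x = np.linspace(0.0, 10.0, 101)
upper = np.maximum(0.0, 1.0 - np.abs(x - 5.0) / 5.0)
lower = 0.5 * upper
print(it2_centroid_bounds(x, lower, upper))   # an interval centred around 5.0
```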
Constructing a type-2 neuro-fuzzy system involves two major phases: structure identification and parameter identification. We propose a method which incorporates a self-constructing fuzzy clustering algorithm and an SVD-based least squares estimator for structure identification in type-2 neuro-fuzzy modeling. The self-constructing fuzzy clustering method is used to partition the training data set into clusters through input-similarity and output-similarity tests. The membership function associated with each cluster is defined by the mean and deviation of the data points included in the cluster. Then, applying the SVD-based least squares estimator, a type-2 fuzzy TSK IF-THEN rule is derived from each cluster to form a fuzzy rule base, after which a fuzzy neural network is constructed. In the parameter identification phase, the parameters associated with the rules are refined through learning. We propose a hybrid learning algorithm which incorporates particle swarm optimization and an SVD-based least squares estimator to refine the antecedent parameters and the consequent parameters, respectively. We demonstrate the effectiveness of our proposed approach in constructing type-2 neuro-fuzzy systems by showing the results for two nonlinear functions and two real-world benchmark datasets. In addition, we use the proposed approach to construct a type-2 neuro-fuzzy system to forecast the daily Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). Experimental results show that our forecasting system performs better than other methods.
8. Neuro-Fuzzy System Modeling with Self-Constructed Rules and Hybrid Learning. Ouyang, Chen-Sen (9 November 2004)
Neuro-fuzzy modeling is an efficient computing paradigm for system modeling problems. It integrates two well-known approaches, neural networks and fuzzy systems, and therefore possesses the advantages of both: learning capability, robustness, human-like reasoning, and high understandability. Many approaches have been proposed for neuro-fuzzy modeling, but many problems remain to be solved.
We propose in this thesis two self-constructing rule generation methods, similarity-based rule generation (SRG) and similarity-and-merge-based rule generation (SMRG), and one hybrid learning algorithm (HLA) for the structure identification and parameter identification, respectively, of neuro-fuzzy modeling. SRG and SMRG group the input-output training data into a set of fuzzy clusters incrementally, based on similarity tests in the input and output spaces. Membership functions associated with each cluster are defined according to the statistical means and deviations of the data points included in the cluster. Additionally, SMRG employs a merging mechanism to merge similar clusters dynamically. Then a zero-order or first-order TSK-type fuzzy IF-THEN rule is extracted from each cluster to form an initial fuzzy rule base, which can be directly employed for fuzzy reasoning or further refined in the subsequent phase of parameter identification. Compared with other methods, both SRG and SMRG have the advantages of generating fuzzy rules quickly, matching membership functions closely to the real distribution of the training data points, and avoiding regenerating the whole set of clusters from scratch when new training data are considered. Besides, SMRG supports a more reasonable and quicker mechanism for cluster merging, which alleviates the problems of data-input-order bias and redundant clusters encountered in SRG and other incremental clustering approaches.
To refine the fuzzy rules obtained in the structure identification phase, a zero-order or first-order TSK-type fuzzy neural network is constructed accordingly in the parameter identification phase. We then develop an HLA composed of a recursive SVD-based least squares estimator and the gradient descent method to train the network. Our HLA has the advantage of alleviating the local minimum problem. Besides, it learns faster, consumes less memory, and produces lower approximation errors than other methods.
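A minimal sketch of the hybrid idea, for a first-order TSK network with Gaussian memberships: consequent parameters are obtained by a least squares solve, while the premise centers and widths are nudged by gradient descent. The finite-difference gradient and all names below are illustrative assumptions; the thesis's HLA uses a recursive SVD-based estimator and analytic updates that are not reproduced here.

```python
import numpy as np

def tsk_design(X, centers, widths):
    """Normalized Gaussian firing strengths and the first-order consequent
    design matrix (one [1, x] block per rule, scaled by its firing strength)."""
    d2 = ((X[:, None, :] - centers[None]) ** 2 / (2.0 * widths[None] ** 2)).sum(-1)
    W = np.exp(-d2)
    W /= W.sum(axis=1, keepdims=True)                 # (n_samples, n_rules)
    Xb = np.column_stack([np.ones(len(X)), X])        # [1, x] per sample
    return (W[:, :, None] * Xb[:, None, :]).reshape(len(X), -1)

def hybrid_fit(X, y, centers, widths, epochs=50, lr=0.02, eps=1e-4):
    """Alternate a least squares solve for the consequents with a crude
    finite-difference gradient step on the premise centers and widths."""
    def mse(c, s):
        Phi = tsk_design(X, c, s)
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # LS step: consequents
        return np.mean((Phi @ theta - y) ** 2), theta

    theta = None
    for _ in range(epochs):
        for arr in (centers, widths):                     # gradient step: premises
            base, theta = mse(centers, widths)
            grad = np.zeros_like(arr)
            for idx in np.ndindex(arr.shape):
                arr[idx] += eps
                grad[idx] = (mse(centers, widths)[0] - base) / eps
                arr[idx] -= eps
            arr -= lr * grad
        np.clip(widths, 1e-3, None, out=widths)          # keep widths positive
    return centers, widths, theta

# Hypothetical usage: two rules in one input dimension, fitting y = sin(x)
rng = np.random.default_rng(3)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0])
centers, widths = np.array([[-1.5], [1.5]]), np.ones((2, 1))
print(hybrid_fit(X, y, centers, widths)[2])   # consequent parameters after training
```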
To verify the practicability of our approaches, we apply them to function approximation and classification tasks. For function approximation, we apply our approaches to model several nonlinear functions and real cases from measured input-output datasets. For classification, our approaches are applied to a problem of human object segmentation. A fuzzy self-clustering algorithm is used to divide the base frame of a video stream into a set of segments, which are then categorized as foreground or background based on a combination of multiple criteria. Then, human objects in the base frame and in the remaining frames of the video stream are precisely located by a fuzzy neural network which is constructed with the fuzzy rules previously obtained and trained by our proposed HLA. Experimental results show that our approaches can improve the accuracy of human object identification in video streams and work well even when the human object presents no significant motion in an image sequence.
9. Some Extensions of Fractional Ornstein-Uhlenbeck Model: Arbitrage and Other Applications. Morlanes, José Igor (January 2017)
This doctoral thesis endeavors to extend probability and statistical models using stochastic differential equations. The described models capture essential features of data that are not explained by classical diffusion models driven by Brownian motion. New results obtained by the author are presented in five articles, divided into two parts. The first part comprises three articles on statistical inference and simulation for a family of processes related to fractional Brownian motion and the Ornstein-Uhlenbeck process, the so-called fractional Ornstein-Uhlenbeck process of the second kind (fOU2). In two of the articles, we show how to simulate fOU2 by means of the circulant embedding method and memoryless transformations. In the third, we construct a consistent least squares estimator of the drift parameter and prove a central limit theorem using techniques from stochastic calculus for Gaussian processes and Malliavin calculus. The second part consists of two articles about jump market models and arbitrage portfolio strategies for an insider trader. One of the articles describes two arbitrage-free markets according to their risk-neutral valuation formulas and an arbitrage strategy obtained by switching between the markets; the key aspect is the difference in volatility between the markets, and statistical evidence of this situation is shown from a sequential data set. In the other, we analyze the arbitrage strategies of a strong insider in a pure-jump Markov chain financial market by means of a likelihood process, constructed in an enlarged filtration using Itô calculus and the general theory of stochastic processes.
At the time of the doctoral defense, Papers 4 and 5 were unpublished manuscripts.
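As a concrete illustration of the least squares drift estimation mentioned in this abstract, the sketch below simulates a standard Ornstein-Uhlenbeck process and applies the discretized least squares estimator of the drift parameter. This is a sketch under classical Brownian assumptions; the fOU2-specific estimator and its Malliavin-calculus-based limit theory are not reproduced.

```python
import numpy as np

def simulate_ou(theta, n, dt, x0=0.0, rng=None):
    """Euler-Maruyama simulation of dX_t = -theta * X_t dt + dW_t with a
    standard Brownian driver (not the fractional driver studied in the thesis)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n + 1)
    x[0] = x0
    noise = rng.normal(scale=np.sqrt(dt), size=n)
    for i in range(n):
        x[i + 1] = x[i] - theta * x[i] * dt + noise[i]
    return x

def ls_drift_estimate(x, dt):
    """Discretized least squares estimator of theta: the minimizer of
    sum_i (X_{t_{i+1}} - X_{t_i} + theta * X_{t_i} * dt)^2."""
    dx = np.diff(x)
    return -np.sum(x[:-1] * dx) / (dt * np.sum(x[:-1] ** 2))

path = simulate_ou(theta=0.7, n=100_000, dt=0.01, rng=np.random.default_rng(42))
print(ls_drift_estimate(path, dt=0.01))   # close to 0.7 on a long sample path
```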
10. Gestion des données : contrôle de qualité des modèles numériques des bases de données géographiques / Data management: quality control of the digital models of geographical databases. Zelasco, José Francisco (13 December 2010)
Digital terrain models, a particular case of digital surface models, do not have the same mean squared error in planimetry as in altimetry. Different solutions have been considered for determining the altimetric and planimetric errors separately, given, of course, a more precise digital model as a reference. The approach adopted consists in determining the parameters of error ellipsoids centred on the reference surface. At first, the study was limited to reference profiles with the corresponding error ellipse. The parameters of this ellipse are determined from the distances separating the tangents to the ellipse from its centre. Note that this distance is the quadratic mean of the distances separating the reference profile from the points of the digital model under evaluation, that is, the square root of the marginal variance in the direction normal to the tangent. We then generalize to the ellipsoid of revolution, the case where the planimetric error is the same in all directions of the horizontal plane (which is not the case for DEMs obtained, for example, by radar interferometry). In this case we show that the simulation problem reduces to the generating ellipse and to the slope of the profile corresponding to the line of maximum slope of the plane belonging to the reference surface. Finally, to evaluate the three parameters of an ellipsoid, the case where the errors in the directions of the three axes are different (DEMs obtained by SAR interferometry), the number of points required for the simulation must be large and the surface very rugged; otherwise, it is difficult to estimate the errors in x and y. Nevertheless, we observed that, whether the ellipsoid is one of revolution or not, the estimation of the error in z (altimetry) gives entirely satisfactory results in all cases.
A Digital Surface Model (DSM) is a numerical surface model formed by a set of points, arranged as a grid, used to study some physical surface: Digital Elevation Models (DEMs), or other possible applications such as a face or an anatomical organ. The study of the precision of these models, which is of particular interest for DEMs, has been the object of several studies in recent decades. Measuring the precision of a DSM, in relation to another model of the same physical surface, consists in estimating the expectation of the squared differences between pairs of homologous points, one in each model, that correspond to the same feature of the physical surface. But these pairs are not easily discernible: the grids may not be coincident, and the differences between homologous points corresponding to benchmarks on the physical surface might be subject to special conditions, such as more careful measurements than at ordinary points, which imply a different precision. The procedure generally used to avoid these inconveniences has been to use the squares of the vertical distances between the models, which only address the vertical component of the error and thus give a biased estimate when the surface is not horizontal. The Perpendicular Distance Evaluation Method (PDEM), which avoids this bias, provides estimates for the vertical and horizontal components of the error and is thus a useful tool for detecting discrepancies in digital surface models such as DEMs.
The solution includes a special reference to the simplification that arises when the error is the same in all horizontal directions. The PDEM is also assessed with DEMs obtained by means of the SAR interferometry technique.
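A minimal sketch of the perpendicular-distance idea follows: for each point of the model under evaluation, the distance to the nearest point of a densely sampled reference surface approximates the perpendicular distance, and the root mean square of those distances summarizes the discrepancy. The nearest-neighbour shortcut and the example surfaces are illustrative assumptions; the thesis's estimation of separate planimetric and altimetric error components via error ellipsoids is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def perpendicular_rms(eval_points, ref_points):
    """Approximate perpendicular distances from evaluated DSM points to a
    densely sampled reference surface via nearest neighbours, and their RMS."""
    tree = cKDTree(ref_points)              # reference points as (x, y, z) rows
    dists, _ = tree.query(eval_points)      # nearest-neighbour distances
    return np.sqrt(np.mean(dists ** 2)), dists

# Hypothetical example: a gently sloping reference plane sampled on a fine grid,
# and an evaluated model carrying vertical noise on a coarser grid
xx, yy = np.meshgrid(np.linspace(0, 100, 401), np.linspace(0, 100, 401))
ref = np.column_stack([xx.ravel(), yy.ravel(), 0.1 * xx.ravel()])
xe, ye = np.meshgrid(np.linspace(0, 100, 51), np.linspace(0, 100, 51))
ze = 0.1 * xe.ravel() + np.random.default_rng(0).normal(scale=0.5, size=xe.size)
rms, _ = perpendicular_rms(np.column_stack([xe.ravel(), ye.ravel(), ze]), ref)
print(rms)   # roughly the vertical noise times the cosine of the slope angle
```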