11

Mean preservation in censored regression using preliminary nonparametric smoothing

Heuchenne, Cédric 18 August 2005 (has links)
In this thesis, we consider the problem of estimating the regression function in location-scale regression models. This model assumes that the random vector (X,Y) satisfies Y = m(X) + s(X)e, where m(.) is an unknown location function (e.g. conditional mean, median, truncated mean, ...), s(.) is an unknown scale function, and e is independent of X. The response Y is subject to random right censoring, and the covariate X is completely observed. In the first part of the thesis, we assume that m(x) = E(Y|X=x) follows a polynomial model. A new estimation procedure for the unknown regression parameters is proposed, which extends the classical least squares procedure to censored data. The proposed method is inspired by the method of Buckley and James (1979) but, unlike the latter, is non-iterative thanks to a preliminary nonparametric estimation step. The asymptotic normality of the estimators is established. Simulations carried out for both methods show that the proposed estimators usually have smaller variance and smaller mean squared error than the Buckley-James estimators. In the second part, we suppose that m(.) = E(Y|.) belongs to some parametric class of regression functions. A new estimation procedure for the true, unknown vector of parameters is proposed, which extends the classical least squares procedure for nonlinear regression to the case where the response is subject to censoring. The proposed technique uses new 'synthetic' data points that are constructed by using a nonparametric relation between Y and X. The consistency and asymptotic normality of the proposed estimator are established, and the estimator is compared via simulations with the estimator proposed by Stute (1999). In the third part, we study the nonparametric estimation of the regression function m(.). It is well known that the completely nonparametric estimator of the conditional distribution F(.|x) of Y given X=x suffers from inconsistency problems in the right tail (Beran, 1981); hence the location function m(x) cannot be estimated consistently in a completely nonparametric way whenever m(x) involves the right tail of F(.|x) (as is the case, e.g., for the conditional mean). We propose two alternative estimators of m(x) that do not suffer from these inconsistency problems. The idea is to make use of the assumed location-scale model in order to improve the estimation of F(.|x), especially in the right tail. We obtain the asymptotic properties of the two proposed estimators of m(x). Simulations show that the proposed estimators outperform the completely nonparametric estimator in many cases.
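The flavor of 'synthetic data' least squares under censoring can be illustrated with a classical transformation in the same spirit: the Koul-Susarla-Van Ryzin estimator, which inflates uncensored responses by the inverse Kaplan-Meier estimate of the censoring survival function. A minimal sketch, assuming a linear m(x) and censoring independent of (X,Y); this is a baseline in the same family, not the thesis's procedure, and all data and constants are illustrative:

```python
import numpy as np

def km_censoring_survival(t_obs, delta):
    """Kaplan-Meier estimate of the censoring survival function P(C > t),
    treating censoring indicators (delta == 0) as the 'events'."""
    order = np.argsort(t_obs)
    t, d = t_obs[order], delta[order]
    n = len(t)
    at_risk = n - np.arange(n)                      # risk-set size at each ordered time
    surv = np.cumprod(1.0 - (d == 0) / at_risk)     # factor < 1 only at censoring times
    return t, surv

def step_lookup(t_sorted, surv, x):
    """Evaluate the right-continuous KM step function at the points x."""
    idx = np.searchsorted(t_sorted, x, side="right") - 1
    return np.where(idx < 0, 1.0, surv[np.clip(idx, 0, len(surv) - 1)])

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(0, 2, n)
Y = 1.0 + 2.0 * X + 0.5 * rng.standard_normal(n)    # m(x) = 1 + 2x, s(x) = 0.5
C = rng.exponential(8.0, n)                         # censoring, independent of (X, Y)
T, delta = np.minimum(Y, C), (Y <= C).astype(float)

# Synthetic responses Y*_i = delta_i * T_i / P(C > T_i-); then E[Y* | X] = m(X).
ts, surv = km_censoring_survival(T, delta)
G_bar = step_lookup(ts, surv, T - 1e-9)             # left limit at the observed times
Y_star = delta * T / np.maximum(G_bar, 1e-3)

A = np.column_stack([np.ones(n), X])
beta_synth = np.linalg.lstsq(A, Y_star, rcond=None)[0]   # consistent for (1, 2)
beta_naive = np.linalg.lstsq(A, T, rcond=None)[0]        # biased downward
print("synthetic-data LS:", beta_synth.round(2), " naive LS on T:", beta_naive.round(2))
```

Under these assumptions E[Y*|X] = m(X), so least squares on the synthetic responses is consistent, whereas least squares on the raw observed times is pulled toward zero by censoring.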
12

Bootstrap bandwidth selection in kernel hazard rate estimation / S. Jansen van Vuuren

Van Vuuren, Stefan Jansen January 2011 (has links)
The purpose of this study is to give a thorough discussion of kernel hazard function estimation, both in the complete-sample case and in the presence of random right censoring. Most of the focus is on the important task of automatic bandwidth selection. Two existing selectors are discussed: least-squares cross-validation as described by Patil (1993a) and Patil (1993b), and the bootstrap bandwidth selector of Gonzalez-Manteiga, Cao and Marron (1996). The bandwidth selector of Hall and Robinson (2009), which uses bootstrap aggregation (or 'bagging'), is extended to and evaluated in the setting of kernel hazard rate estimation. We also make a simple proposal for a bootstrap bandwidth selector. The performance of these bandwidth selectors is compared empirically in a simulation study, and the findings and conclusions are reported. / Thesis (M.Sc. (Statistics))--North-West University, Potchefstroom Campus, 2011.
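To fix ideas, here is a minimal sketch of the two ingredients, assuming a Gaussian kernel: a kernel hazard estimator obtained by smoothing the Nelson-Aalen increments, and a naive bootstrap bandwidth chooser that scores candidate bandwidths against a pilot estimate over bootstrap resamples. The tuning constants are illustrative, and this is not the exact selector of Gonzalez-Manteiga, Cao and Marron (1996):

```python
import numpy as np

def kernel_hazard(t_grid, times, delta, b):
    """Kernel hazard rate estimate: smooth the Nelson-Aalen increments
    delta_(i) / (#at risk at T_(i)) with a Gaussian kernel of bandwidth b."""
    order = np.argsort(times)
    t, d = times[order], delta[order]
    n = len(t)
    increments = d / (n - np.arange(n))             # delta_(i) / #at-risk
    u = (t_grid[:, None] - t[None, :]) / b
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return (K * increments).sum(axis=1) / b

def bootstrap_bandwidth(times, delta, t_grid, candidates, pilot, B=100, seed=1):
    """Choose the bandwidth minimizing a bootstrap MISE estimate, with the
    'truth' replaced by a pilot estimate at an oversmoothing bandwidth."""
    rng = np.random.default_rng(seed)
    target = kernel_hazard(t_grid, times, delta, pilot)
    n, dt = len(times), t_grid[1] - t_grid[0]
    scores = np.zeros(len(candidates))
    for _ in range(B):
        idx = rng.integers(0, n, n)                 # resample (T_i, delta_i) pairs
        for j, b in enumerate(candidates):
            est = kernel_hazard(t_grid, times[idx], delta[idx], b)
            scores[j] += ((est - target) ** 2).sum() * dt
    return candidates[int(np.argmin(scores))]

rng = np.random.default_rng(0)
n = 400
Y = rng.weibull(2.0, n)                             # lifetimes; true hazard is 2t
C = rng.exponential(1.5, n)                         # censoring times
T, delta = np.minimum(Y, C), (Y <= C).astype(float)

t_grid = np.linspace(0.05, 1.2, 100)
candidates = np.linspace(0.05, 0.5, 10)
print("chosen bandwidth:", bootstrap_bandwidth(T, delta, t_grid, candidates, pilot=0.4))
```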
14

An indoor-location sensing system using WLAN and ultrasonic/radio technologies

Hyun, Eugene Jaiho 20 August 2008 (has links)
Ubiquitous location-aware systems and services are becoming a reality, as made evident by the widely known Global Positioning System (GPS). However, indoor location-sensing systems are not yet commercially viable since: (i) for a GPS-based system, indoor signals are attenuated and subject to multipath, causing weak reception and poor location estimates; and (ii) for a radiolocation-based system, the propagation of radio signals is complex and difficult to model. In this paper, we present RadLoco, a location-sensing system that uses IEEE 802.11 Wireless LAN survey techniques to create a radio Received Signal Strength (RSS) map of the propagation environment. To provide accurate location estimation, we use a kernel-window estimation algorithm to approximate the probability density function of RSS measurements and location. Unlike parametric estimators, this non-parametric kernel approach requires less knowledge of the distributions of location and measurements, and also makes use of prior knowledge of the mobile terminal's location to reduce estimation error. The novelty of the system is an innovative radio/ultrasonic sensor network that allows for rapid data collection, whereas the standard technique of defining a grid of survey points with measuring rulers, chalk, and tape would require a great amount of manpower. Using this sensor network, a 2000 m² office building was surveyed in four hours by a single technician. Our experimental results indicate the mobile terminal is located on the correct floor with over 98% accuracy and with a mean error of less than 2.5 m from the true location.
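As an illustration of the kernel-window idea (with made-up survey data and parameters; the system's actual algorithm is not reproduced here), each survey point can be weighted by a Gaussian kernel on the distance between its recorded RSS vector and the query RSS vector, and the location estimated as the weighted mean of the survey positions:

```python
import numpy as np

def locate(rss_query, rss_map, positions, h=4.0):
    """Kernel-window location estimate: weight each survey point by a Gaussian
    kernel on the distance (in dB) between its stored RSS vector and the query
    RSS vector, then return the weighted mean of the survey positions."""
    d2 = ((rss_map - rss_query) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / h ** 2)
    return (w / w.sum()) @ positions

# Made-up survey: a 10 m x 10 m floor on a 1 m grid, three access points,
# log-distance path-loss RSS model with 2 dB shadowing noise.
rng = np.random.default_rng(0)
gx, gy = np.meshgrid(np.arange(0.0, 10.5, 1.0), np.arange(0.0, 10.5, 1.0))
positions = np.column_stack([gx.ravel(), gy.ravel()])
aps = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])

def ideal_rss(p):
    d = np.linalg.norm(p[:, None, :] - aps[None, :, :], axis=2)
    return -40.0 - 20.0 * np.log10(np.maximum(d, 0.5))  # dBm

rss_map = ideal_rss(positions) + rng.normal(0.0, 2.0, (len(positions), 3))

true_pos = np.array([[3.2, 7.1]])
query = ideal_rss(true_pos)[0] + rng.normal(0.0, 2.0, 3)
print("estimated position:", locate(query, rss_map, positions).round(2))
```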
15

Risk contribution and its application in asset and risk management for life insurance / Riskbidrag och dess användning i kapital- och riskförvaltning för livförsäkringsbolag

Sundin, Jesper January 2016 (has links)
In risk management, one important aspect is the allocation of total portfolio risk to its components. This can be done by measuring each component's risk contribution relative to the total risk, taking into account the covariance between components. The measurement procedure is straightforward under the assumption of elliptical distributions, but not under the commonly used multivariate log-normal distribution, for which analytical formulas are lacking. Two portfolio strategies are considered: the "buy and hold" strategy and the "constant mix" (constant rebalancing) strategy. The profits and losses of the components of a generic portfolio strategy are defined in order to enable a proper definition of risk contribution for the constant mix strategy. Kernel estimation of risk contribution is then performed for both portfolio strategies using Monte Carlo simulation. Further, applications of risk contributions to asset and risk management are discussed in the context of life insurance.
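A minimal sketch of kernel-estimated risk contributions, assuming a multivariate log-normal "buy and hold" portfolio (the constant mix case requires the generic profit-and-loss decomposition developed in the thesis): Euler contributions to Value-at-Risk, RC_i = E[L_i | L = VaR_alpha], estimated by Nadaraya-Watson smoothing around the VaR level. All model parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
J, alpha = 200_000, 0.99

# Assumed model: two log-normally distributed assets, "buy and hold" over one period.
mu = np.array([0.02, 0.04])                         # log-return means
cov = np.array([[0.04, 0.018], [0.018, 0.09]])      # log-return covariance
w0 = np.array([60.0, 40.0])                         # initial capital per asset
R = rng.multivariate_normal(mu, cov, size=J)
losses = w0 * (1.0 - np.exp(R))                     # component losses
L = losses.sum(axis=1)                              # total portfolio loss

var_alpha = np.quantile(L, alpha)

# Nadaraya-Watson estimate of RC_i = E[losses_i | L = VaR_alpha]; by Euler's
# theorem the RC_i sum to VaR_alpha (up to smoothing and Monte Carlo error).
h = 1.06 * L.std() * J ** (-0.2)                    # rule-of-thumb bandwidth
K = np.exp(-0.5 * ((L - var_alpha) / h) ** 2)       # Gaussian kernel weights
rc = (losses * K[:, None]).sum(axis=0) / K.sum()

print("VaR:", var_alpha.round(2), "contributions:", rc.round(2), "sum:", rc.sum().round(2))
```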
16

Méthodes statistiques pour l’estimation du rendement paramétrique des circuits intégrés analogiques et RF / Statistical methods for the parametric yield estimation of analog/RF integratedcircuits

Desrumaux, Pierre-François 08 November 2013 (has links)
Semiconductor device fabrication is a complex process which is subject to various sources of variability. These variations can impact the functionality and performance of analog integrated circuits, leading to yield loss, potential chip modifications, delayed time to market and reduced profit. Statistical circuit simulation methods enable estimation of the parametric yield of the circuit early in the design stage, so that corrections can be made before manufacturing. However, traditional methods such as the Monte Carlo method and corner simulation have limitations; an accurate analog yield estimate based on a small number of circuit simulations is therefore needed. In this thesis, existing statistical methods from electronics and non-electronics publications are first described. These methods, however, suffer from severe drawbacks, such as the need for time-consuming initial circuit simulations or poor scaling with the number of random variables. Second, three novel statistical methods are proposed to accurately estimate the parametric yield of analog/RF integrated circuits based on a moderate number of circuit simulations: an automatically sorted quasi-Monte Carlo method, a kernel-based control variates method and an importance sampling method. The three methods rely on a mathematical model of the circuit performance metric, constructed from a truncated first-order Taylor expansion. This modeling technique is selected as it requires a minimal number of SPICE-like circuit simulations. Both theoretical and simulation results show that the proposed methods lead to significant speedups or improvements in accuracy compared to existing methods.
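Of the three proposals, the importance sampling idea is the easiest to sketch. Below is a minimal illustration on an assumed linear (truncated first-order Taylor) performance model with standard-normal process parameters; the shift to the most likely failure point and all constants are illustrative, not the thesis's construction:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
d, N = 10, 2000                                     # process parameters, IS samples

# Assumed first-order Taylor model of a performance metric around nominal:
# g(x) = g0 + a @ x, with x ~ N(0, I). The spec is g(x) >= g_min.
a = rng.normal(0, 1, d)
a /= np.linalg.norm(a)                              # unit gradient for a clean example
g0, g_min = 1.0, 1.0 - 3.5                          # failure is a ~3.5-sigma event

def fails(x):
    return g0 + x @ a < g_min

# Shift the sampling mean to the most likely failure point, which for a linear
# model lies along -a at distance (g0 - g_min) from the nominal point.
shift = -(g0 - g_min) * a
x = rng.normal(0, 1, (N, d)) + shift
# Likelihood ratio N(0, I) / N(shift, I) evaluated at the samples:
log_w = -x @ shift + 0.5 * (shift @ shift)
fail_prob = np.mean(fails(x) * np.exp(log_w))

exact = 0.5 * erfc(3.5 / sqrt(2))                   # = Phi(-3.5)
print("IS estimate: %.3e  exact: %.3e  yield: %.6f" % (fail_prob, exact, 1 - fail_prob))
```

At this failure level a plain Monte Carlo run of the same size would typically see no failures at all, which is why the weighted, shifted sampler pays off.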
17

探討半參數隨機邊界模型的技術與配置效率之一致性估計方法 / Consistent estimation of technical and allocative efficiencies for a semiparametric Stochastic Cost Frontier with Shadow Input Prices

陳冠臻, Chen, Kuan Chen Unknown Date (has links)
Conventional parametric stochastic cost frontier models are likely to suffer from biased inferences due to misspecification and the neglect of allocative efficiency (AE). To fill this gap in the literature, this article proposes a semiparametric stochastic cost frontier with shadow input prices that combines a parametric portion with a nonparametric portion and allows for the presence of both technical efficiency (TE) and AE. The introduction of AE and the nonparametric function into the cost function substantially complicates the estimation procedure and makes convergence difficult. We therefore develop a new five-stage estimation procedure that simplifies estimation and leads to consistent estimators and valid TE and AE measures, as verified by Monte Carlo simulations. An empirical study using unbalanced panel data on 340 commercial banks from 14 East European countries over the transition period 1993-2004 is performed to shed some light on the usefulness of our procedure.
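As a point of reference for the TE part only, here is a minimal fully parametric normal/half-normal stochastic cost frontier fitted by maximum likelihood on simulated data; the thesis's five-stage semiparametric procedure with shadow input prices and AE is considerably more involved, and all parameter values below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1000
ln_y = rng.uniform(0, 2, n)                         # log output
v = rng.normal(0, 0.15, n)                          # two-sided noise
u = np.abs(rng.normal(0, 0.3, n))                   # one-sided inefficiency (raises cost)
ln_C = 0.5 + 0.8 * ln_y + v + u                     # log cost = frontier + v + u

def negloglik(theta):
    b0, b1, ln_sv, ln_su = theta
    sv, su = np.exp(ln_sv), np.exp(ln_su)
    sigma, lam = np.hypot(sv, su), su / sv
    eps = ln_C - b0 - b1 * ln_y                     # composed error v + u
    # Normal/half-normal cost frontier density: (2/sigma) phi(eps/sigma) Phi(lam*eps/sigma)
    ll = (np.log(2.0) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(lam * eps / sigma))
    return -ll.sum()

res = minimize(negloglik, x0=np.array([0.0, 1.0, -1.0, -1.0]), method="Nelder-Mead",
               options={"maxiter": 5000})
b0, b1, ln_sv, ln_su = res.x
print("beta0=%.2f beta1=%.2f sigma_v=%.2f sigma_u=%.2f"
      % (b0, b1, np.exp(ln_sv), np.exp(ln_su)))
```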
18

[en] FORECASTING PROBABILISTIC DENSITY DISTRIBUTION OF WIND POWER GENERATION USING NON-PARAMETRIC TECHNIQUES / [pt] PREVISÃO DA DISTRIBUIÇÃO DA DENSIDADE DE PROBABILIDADE DA GERAÇÃO DE ENERGIA EÓLICA USANDO TÉCNICAS NÃO PARAMÉTRICAS

SORAIDA AGUILAR VARGAS 11 July 2016 (has links)
As a result of the new contracting process in wind power auctions and the entrance into operation of new wind farms in the Brazilian electrical system, the planning of short-term operating activities, such as regulation, load serving, balancing and scheduling the dispatch of generating units, must be carried out so that technical and financial risks are minimized. This is not a simple task, since providing accurate forecasts for this process presents several challenges, such as the incorporation of uncertainty into the calculation of the forecasts. Hence the technical literature reports several techniques that provide estimates of the probability density of wind power generation, as such estimates yield forecasts of the wind power probability density function. In this context, wind speed forecasting at wind farms becomes essential information for the decision-support models underlying the economic and safe operation of electrical systems, since most models require wind speed predictions in order to forecast wind energy. This thesis proposes a non-parametric specification strategy for forecasting wind power generation, employing conditional kernel density estimation, which allows the estimation of the probability density function of wind power generation for any time horizon, conditioned on a wind speed forecast obtained by applying the Singular Spectrum Analysis (SSA) methodology. The methodology was successfully validated using time series of hourly averages of wind speed and wind production from a Brazilian wind farm. The results were compared against other methodologies for wind speed prediction, and the proposed non-parametric approach produced very prominent results.
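A minimal sketch of the conditional kernel density step, with an assumed toy power curve in place of real wind farm data (the SSA wind speed forecast enters only through the conditioning value x0; all bandwidths and data are illustrative):

```python
import numpy as np

def conditional_density(y_grid, x0, X, Y, hx=0.8, hy=0.05):
    """Nadaraya-Watson conditional kernel density estimate of f(y | x = x0):
    f_hat(y|x0) = sum_i K_hx(x0 - X_i) K_hy(y - Y_i) / sum_i K_hx(x0 - X_i)."""
    def K(u, h):
        return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))
    wx = K(x0 - X, hx)                              # weights from the wind-speed kernel
    dens = (wx[None, :] * K(y_grid[:, None] - Y[None, :], hy)).sum(axis=1)
    return dens / wx.sum()

# Toy data: hourly wind speed (m/s) and normalized power via a noisy power curve.
rng = np.random.default_rng(0)
speed = rng.weibull(2.0, 5000) * 7.0
power = np.clip((speed / 12.0) ** 3, 0, 1) + rng.normal(0, 0.05, speed.size)
power = np.clip(power, 0, 1)

y = np.linspace(0, 1, 200)
f = conditional_density(y, x0=9.0, X=speed, Y=power)   # density given a 9 m/s forecast
print("mode of f(power | speed = 9):", y[np.argmax(f)].round(3))
```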
19

Estimation non paramétrique pour les processus markoviens déterministes par morceaux / Nonparametric estimation for piecewise-deterministic Markov processes

Azaïs, Romain 01 July 2013 (has links)
Piecewise-deterministic Markov processes (PDMPs) were introduced by M.H.A. Davis as a general family of non-diffusion stochastic models, involving deterministic motion punctuated by random jumps at random times. In this thesis, we propose and analyze nonparametric estimation methods for both features governing the randomness of such a process. More precisely, we present estimators of the conditional density of the inter-jump times and of the transition kernel governing the jump distribution, for a PDMP observed over a long time interval. We establish convergence results for both proposed estimators, and numerical simulations for several applications illustrate our theoretical results. Furthermore, we propose an estimator of the jump rate of a nonhomogeneous renewal process, as well as a numerical approximation method, based on optimal quantization, for a semiparametric regression model.
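To make the objects concrete, the sketch below simulates a toy growth-collapse PDMP (unit-speed deterministic flow, jump rate lam(x) = x, multiplicative random collapses) and estimates the jump rate nonparametrically as a kernel-smoothed ratio of jump counts to occupation time. This is illustrative of the quantities studied, not the thesis's estimators; the model and bandwidth are assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_pdmp(x0=1.0, n_jumps=5000):
    """Toy growth-collapse PDMP: flow dx/dt = 1, jump rate lam(x) = x, and at each
    jump the state is multiplied by an independent U(0,1) factor. Between jumps
    the cumulative hazard along the flow is Lambda(t) = x*t + t^2/2, which is
    inverted in closed form against an Exp(1) draw."""
    starts, waits, pre_jump = [], [], []
    x = x0
    for _ in range(n_jumps):
        t = -x + np.sqrt(x * x + 2.0 * rng.exponential())  # solve Lambda(t) = E
        starts.append(x)
        waits.append(t)
        pre_jump.append(x + t)
        x = (x + t) * rng.uniform()                        # random collapse
    return np.array(starts), np.array(waits), np.array(pre_jump)

def jump_rate(x_grid, starts, waits, pre_jump, h=0.1):
    """Kernel estimate of lam(x): smoothed count of jumps observed near x divided
    by smoothed occupation time near x. With unit-speed flow, the occupation
    integral of a Gaussian kernel over each linear segment is a cdf difference."""
    num = np.exp(-0.5 * ((x_grid[:, None] - pre_jump[None, :]) / h) ** 2).sum(axis=1)
    num /= h * np.sqrt(2 * np.pi)
    ends = starts + waits
    occ = (norm.cdf((ends[None, :] - x_grid[:, None]) / h)
           - norm.cdf((starts[None, :] - x_grid[:, None]) / h)).sum(axis=1)
    return num / np.maximum(occ, 1e-12)

s, w, p = simulate_pdmp()
xg = np.linspace(0.3, 2.5, 12)
print(np.round(jump_rate(xg, s, w, p), 2))          # should roughly track lam(x) = x
```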
20

Analysis of Copula Opinion Pooling with Applications to Quantitative Portfolio Management

Bredeby, Rickard January 2015 (has links)
In 2005, Attilio Meucci presented his article Beyond Black-Litterman: Views on Non-Normal Markets, which introduces the copula opinion pooling approach under generic non-normal market assumptions. Copulas and opinion pooling are used to express views on the market, producing a posterior market distribution that smoothly blends an arbitrarily distributed prior market distribution with arbitrarily chosen views. This thesis explains how to use the method in practice and investigates its performance in different investment situations. The method is tested on three portfolios, each highlighting a different feature. The conclusions that can be drawn are, for example, that the method can be used in many different investment situations in many different ways, that implementation and calculation take only a few seconds even for a large data set, and that the method could be useful for portfolio managers who use mathematical methods. The presented examples together with the method generate reasonable results.
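A minimal sketch of the core mechanics for a single view: compute the prior copula grades of the view variable, pool the prior and view cdfs with a confidence weight, and push the grades through the pooled inverse cdf so the margin changes while the scenario ranks are preserved. The final recombination step here (spreading the view shift along the view weights) is a crude stand-in for Meucci's copula-based mapping back to market coordinates, and all numbers are illustrative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
J = 100_000

# Prior market scenarios: two assets with fat-tailed (Student-t) returns.
prior = rng.standard_t(df=4, size=(J, 2)) * 0.02

# View portfolio: an equally weighted combination of the two assets.
w = np.array([0.5, 0.5])
V = prior @ w

# Copula grades of the view variable under the prior (empirical ranks in (0, 1)).
u = (np.argsort(np.argsort(V)) + 0.5) / J

# Hypothetical view: V ~ N(1%, 1.5%^2), held with confidence c. Opinion pooling
# blends the cdfs: F_post = c * F_view + (1 - c) * F_prior.
c = 0.7
grid = np.linspace(V.min() - 0.05, V.max() + 0.05, 4001)
F_prior = np.searchsorted(np.sort(V), grid) / J
F_post = c * norm.cdf(grid, 0.01, 0.015) + (1 - c) * F_prior

# Reshape the margin: push each scenario's grade through the pooled inverse cdf,
# so the view distribution changes while the scenario ranks (the copula) are kept.
V_post = np.interp(u, F_post, grid)

# Crude recombination: spread the per-scenario view shift along w. (Meucci maps
# back to market coordinates through the prior copula instead.)
posterior = prior + np.outer(V_post - V, w / (w @ w))

print("view mean: prior %.4f -> posterior %.4f" % (V.mean(), (posterior @ w).mean()))
```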
