  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Análise da série do índice de Depósito Interfinanceiro: modelagem da volatilidade e apreçamento de suas opções. / Analysis of Brazilian Interbank Deposit Index series: volatility modeling and option pricing

Roberto Baltieri Mauad, 05 December 2013
Models widely used today for pricing interest-rate derivatives often make overly restrictive assumptions about the volatility of the underlying series. The Black-Scholes and Vasicek models, for instance, treat the variance of the series as constant through time and across maturities, an assumption that may not be appropriate in all cases. Among the alternative volatility-modeling techniques under study, kernel regressions stand out. In this work we discuss nonparametric modeling by means of kernel regression and the subsequent pricing of options in a Gaussian HJM model. We analyze different specifications for the nonparametric estimation of the volatility function through Monte Carlo simulations for the pricing of options on zero-coupon bonds, and we conduct an empirical study applying the proposed methodology to the pricing of IDI options in the Brazilian market. One of the main findings is the good fit of the proposed methodology in pricing options on zero-coupon bonds.
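The kernel-regression step can be illustrated with a minimal Nadaraya-Watson sketch. This is not the thesis's implementation: the volatility shape, bandwidth, and sample sizes below are invented for illustration.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson estimate of E[y | x] on x_grid, Gaussian kernel, bandwidth h."""
    d = (x_grid[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * d ** 2)          # kernel weights K((x0 - x_i)/h)
    return (w @ y) / w.sum(axis=1)     # weighted average per grid point

# Toy data: noisy observations of a hypothetical volatility curve sigma(tau)
rng = np.random.default_rng(0)
tau = rng.uniform(0.1, 5.0, 500)                    # times to maturity (years)
sigma_true = 0.02 + 0.01 * np.exp(-tau)             # assumed shape, illustrative only
obs = sigma_true + rng.normal(0.0, 0.001, tau.size)
grid = np.linspace(0.5, 4.5, 9)
sigma_hat = nadaraya_watson(grid, tau, obs, h=0.3)
```

The estimated curve `sigma_hat` could then feed a Monte Carlo pricer in place of a constant-volatility assumption.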
232

Análise do aerossol atmosférico em Acra, capital de Gana / Analysis of atmospheric aerosol in Accra, capital of Ghana

Verissimo, Thiago Gomes, 10 June 2016
Cities in Sub-Saharan Africa (SSA) have undergone an intense process of urbanization, bringing growth in economic activity in general and industry in particular, as well as increased vehicle traffic and waste production, among other changes that directly affect the environment and the health of inhabitants. In this scenario, identifying air pollution sources is essential to support public policies that aim to secure the population's right to a good quality of life. This Master's research was part of an international project, Energy, air pollution, and health in developing countries, coordinated by Dr. Majid Ezzati, then professor at the Harvard School of Public Health, and also involving researchers from the University of Ghana. The project aimed to assess air pollution levels in cities of developing countries, in this case Accra (the capital of Ghana and one of the largest cities in SSA) and two cities in The Gambia, where no substantive studies existed until then, relating pollution levels to the specific socioeconomic conditions of the areas studied. We contributed the X-Ray Fluorescence (XRF) and Black Carbon (BC) analyses, the discussion and interpretation of the meteorological data, and the application of receptor models. For the in-depth study of air quality and source impacts, this work focused on Nima, a neighborhood of Accra. Starting from the characterization of the local atmospheric aerosol, receptor models were used to identify the profiles and contributions of the major sources of fine (PM2.5) and coarse (PM2.5-10) particulate matter. A total of 791 48-hour samples were collected between November 2006 and August 2008 at two sites 250 m apart: the neighborhood's main avenue, Nima Road, and a residential street, Sam Road.
The 2007 annual mean PM2.5 concentration was 61.6 (1.0) µg/m³ at the avenue and 44.9 (1.1) µg/m³ at the residential site, far exceeding the annual guideline of 10 µg/m³ recommended by the World Health Organization (WHO). Over the whole experiment, the WHO daily guideline of 25 µg/m³ was exceeded in 66.5% of residential-site samples and 92% of avenue samples. Elemental concentrations were obtained by XRF, and BC by reflectance inter-calibrated against Thermal Optical Transmittance (TOT). We developed a calibration methodology for the XRF and for the reflectance-TOT inter-calibration based on matrix least squares, which provided uncertainties for the fitted data and good precision in the absolute measured concentrations. Factor Analysis (FA) and Positive Matrix Factorization (PMF) were used to associate factors with sources and to estimate source profiles. Local meteorological parameters, such as wind speed and direction, together with the locations of major PM emission sources, aided the association between model factors and real sources. During the Ghanaian winter, the Harmattan, a wind blowing from the Sahara desert to the northeast of the country, passes over Accra and increases the concentration of soil-dust-related pollutants by a factor of 10. Samples from Harmattan days were therefore analyzed separately, since they hindered the identification of other sources by PMF and FA. The major sources indicated by the two methods agreed: sea salt (Na, Cl), soil (Fe, Ti, Mn, Si, Al, Ca, Mg), vehicular emissions (BC, Pb, Zn, K), biomass burning (K, P, S, BC), and open burning of solid waste and other materials (Br, Pb).
Reducing air pollution in SSA cities such as Accra requires public policies on energy use, health, transportation, and urban planning, with due attention to their impact on poor communities. Measures such as paving roads, covering soil with vegetation, and encouraging the use of cooking gas and public transportation would help lower the high levels of ambient air pollution in these cities.
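The factor-analytic decomposition at the heart of PMF can be sketched with plain nonnegative matrix factorization via Lee-Seung multiplicative updates. This is a simplified stand-in (it omits PMF's per-measurement uncertainty weighting), and the two-source "soil-like" and "traffic-like" profiles below are invented for illustration.

```python
import numpy as np

def nmf(X, k, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates for X ≈ G @ F with G, F >= 0.
    G holds sample-by-factor contributions, F factor-by-species profiles."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, k)) + 0.1
    F = rng.random((k, m)) + 0.1
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F

# Invented two-source toy: 200 samples of 6 chemical species
rng = np.random.default_rng(1)
profiles = np.array([[5.0, 4.0, 3.0, 0.1, 0.1, 0.1],    # "soil-like"
                     [0.1, 0.1, 0.1, 3.0, 2.0, 1.0]])   # "traffic-like"
contrib = rng.random((200, 2)) * 10.0
X = np.clip(contrib @ profiles + rng.normal(0.0, 0.01, (200, 6)), 0.0, None)
G, F = nmf(X, 2)
```

The multiplicative updates preserve nonnegativity, which is what lets the recovered rows of `F` be read as chemical source profiles.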
233

New Developments on Bayesian Bootstrap for Unrestricted and Restricted Distributions

Hosseini, Reyhaneh, 29 April 2019
The recent popularity of Bayesian inference is due to the practical advantages of the Bayesian approach: a Bayesian analysis makes it possible to reflect one's prior beliefs in the analysis. In this thesis, we explore asymptotic results in Bayesian nonparametric inference for restricted and unrestricted spaces of distributions. The thesis is divided into two parts. In the first part, we employ the Dirichlet process in a hypothesis testing framework to propose a Bayesian nonparametric chi-squared goodness-of-fit test. Our suggested method corresponds to Lo's Bayesian bootstrap procedure for the chi-squared goodness-of-fit test. Indeed, our bootstrap rectifies some shortcomings of the regular bootstrap, which only counts the number of observations falling in each bin of a contingency table. We take the Dirichlet process as the prior for the distribution of the data and carry out the test based on the Kullback-Leibler distance between the Dirichlet process posterior and the hypothesized distribution. We prove that this distance asymptotically converges to the same chi-squared distribution as the classical frequentist chi-squared test. Moreover, the results are generalized to the chi-squared test of independence for contingency tables. In the second part, our main focus is Bayesian nonparametric inference for a restricted family of distributions, the spherically symmetric distributions. We describe a Bayesian nonparametric approach to inference for a bivariate spherically symmetric distribution: we place a Dirichlet invariant process prior on the set of all bivariate spherically symmetric distributions and derive the Dirichlet invariant process posterior. Our approach extends the Dirichlet invariant process for symmetric distributions on the real line to bivariate spherically symmetric distributions, where the underlying distribution is invariant under a finite group of rotations.
Further, we obtain the Dirichlet invariant process posterior for the infinite transformation group and prove that it approaches a certain Dirichlet process. Finally, we develop our approach to obtain the Bayesian nonparametric posterior distribution for functionals of the distribution's support when the support satisfies certain symmetry conditions. When symmetry holds with respect to lines parallel to the axes (for example, x = a and y = b in two-dimensional space), we employ our approach to approximate the distribution of functionals such as the area and perimeter of the support. This suggests a Bayesian nonparametric bootstrapping scheme, with estimates derived by posterior averaging. Our simulation results demonstrate that the suggested bootstrapping technique improves the accuracy of the estimates.
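The Bayesian bootstrap that underlies Lo's procedure replaces the multinomial resampling weights of the regular bootstrap with Dirichlet(1, …, 1) weights, the flat limit of the Dirichlet process posterior. A minimal sketch for posterior inference on a mean (illustrative only; the thesis works with a goodness-of-fit statistic, not a mean):

```python
import numpy as np

def bayesian_bootstrap(data, stat, n_rep=2000, seed=0):
    """Rubin-style Bayesian bootstrap: posterior draws of a functional
    obtained by reweighting the data with Dirichlet(1, ..., 1) weights."""
    rng = np.random.default_rng(seed)
    n = len(data)
    draws = np.empty(n_rep)
    for i in range(n_rep):
        w = rng.dirichlet(np.ones(n))        # one posterior weight vector
        draws[i] = stat(data, w)
    return draws

x = np.random.default_rng(42).normal(10.0, 2.0, 300)
post_mean = bayesian_bootstrap(x, lambda d, w: np.sum(w * d))
lo, hi = np.quantile(post_mean, [0.025, 0.975])  # 95% credible interval
```

Each weight vector plays the role of one posterior draw of the unknown distribution, so functionals of the draws approximate the posterior of the corresponding functional.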
234

Uncertainty Modeling for Nonlinear and Linear Heated Structures

January 2019
abstract: This investigation focuses on the development of uncertainty modeling methods applicable to both the structural and thermal models of heated structures, as part of an effort to enable the design under uncertainty of hypersonic vehicles. The maximum-entropy-based nonparametric stochastic modeling approach is used within the context of coupled structural-thermal Reduced Order Models (ROMs). Not only does this strategy allow a computationally efficient generation of samples of the structural and thermal responses, but the maximum entropy approach also permits the introduction of both aleatoric and some epistemic uncertainty into the system. While the nonparametric approach has a long history of application to structural models, the present investigation is the first to consider it for the heat conduction problem. In the process, it was recognized that the nonparametric approach had to be modified to maintain the localization of the temperature near the heat source, which was successfully achieved. The introduction of uncertainty in coupled structural-thermal ROMs of heated structures was addressed next. It was first recognized that the structural stiffness coefficients (linear, quadratic, and cubic) and the parameters quantifying the effects of the temperature distribution on the structural response can be regrouped into a matrix that is symmetric and positive definite. The nonparametric approach was then applied to this matrix, allowing the assessment of the effects of uncertainty on the resulting temperature distributions and structural response. The third part of this document focuses on introducing uncertainty with the Maximum Entropy Method at the finite element level, by randomizing elemental matrices such as the elemental stiffness, mass, and conductance matrices.
This approach introduces some epistemic uncertainty not present in the parametric approach (e.g., randomizing the elasticity tensor) while retaining a more local character than operating at the ROM level. The last part of this document focuses on the development of “reduced ROMs” (RROMs): reduced order models with small bases constructed in a data-driven process from a “full” ROM with a much larger basis. The RROM methodology is motivated by the desire to optimally reduce computational cost, especially in multi-physics situations where a lack of prior understanding of the solution typically leads to the selection of ROM bases that are excessively broad for the required accuracy in representing the response. The ROM reduction process can additionally be carried out adaptively, i.e., differently over different ranges of loading conditions. / Dissertation/Thesis / Doctoral Dissertation, Mechanical Engineering, 2019
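The key structural fact exploited above is that the regrouped matrix is symmetric positive definite (SPD), so uncertainty can be introduced by randomizing it while preserving both positive definiteness and the nominal mean. A Wishart-based sketch of that idea (a simplified stand-in for the maximum-entropy nonparametric construction; the nominal matrix `K` and the dispersion parameter `nu` are illustrative):

```python
import numpy as np

def random_spd_samples(K, nu, n_samp, seed=0):
    """Draw random SPD matrices with mean equal to the nominal matrix K.
    The normalized Wishart germ G has E[G] = I; dispersion shrinks as nu grows."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    L = np.linalg.cholesky(K)             # K = L @ L.T
    out = []
    for _ in range(n_samp):
        X = rng.normal(size=(n, nu))      # X @ X.T ~ Wishart(nu, I)
        G = (X @ X.T) / nu                # normalized germ, E[G] = I
        out.append(L @ G @ L.T)           # SPD sample with mean K
    return np.array(out)

K = np.array([[4.0, 1.0], [1.0, 3.0]])    # illustrative nominal SPD matrix
samples = random_spd_samples(K, nu=200, n_samp=500)
```

Because each sample is congruent to a full-rank Wishart draw, every realization stays SPD, which is exactly the property the stiffness/conductance matrices must keep.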
235

Spatial and Temporal Distribution of Total Phosphorus Concentration in Soil and Surface Water in the Everglades Protection Area

Sarker, Shishir Kumar, 26 June 2018
Draining of the Everglades allowed for the expansion of urban and agricultural development, reducing the historic Everglades to half its size. The detrimental cascading effect on Everglades ecosystem function is related to the total phosphorus (TP) concentration of inflowing water, the inflow rate, and the distance from the discharge point. With Everglades restoration now approaching 15 years since the inception of the Comprehensive Everglades Restoration Plan (CERP), there is a need to assess its progress across the ecosystem. Available data from 2004 to 2014 for soils and from 2004 to 2016 for water were collected to examine a decade of trends. Both Geographic Information System (GIS) and statistical data analyses were applied to determine changes in water quality and soil chemistry. Key findings indicate a declining trend in water TP, with mixed results for soil. Higher TP concentrations (>10 µg/L) were prevalent in areas less than 1 km from a canal or water discharge point, for both soil and water. TP in surface water was higher in the wet season than in the dry season across the Everglades Protection Area, possibly in association with hydrologic, climatic, or other factors.
236

Sequential Procedures for Nonparametric Kernel Regression

Dharmasena, Tibbotuwa Deniye Kankanamge Lasitha Sandamali, Sandamali.dharmasena@rmit.edu.au, January 2008
In a nonparametric setting, the functional form of the relationship between the response variable and the associated predictor variables is unspecified; however, it is assumed to be a smooth function. The main aim of nonparametric regression is to reveal important structure in the data without any assumptions about the shape of the underlying regression function. In regression, random and fixed design models must be distinguished. Among the variety of nonparametric regression estimators currently in use, kernel-type estimators are the most popular. Kernel-type estimators provide a flexible class of nonparametric procedures, estimating the unknown function as a weighted average using a kernel function. The bandwidth, which determines the influence of the kernel, has to be adapted for any kernel-type estimator. Our focus is on the Nadaraya-Watson estimator and the local linear estimator, which belong to the class of local polynomial kernel estimators. A closely related problem is the determination of an appropriate sample size required to achieve a desired level of accuracy for the nonparametric regression estimators. Since sequential procedures allow an experimenter to make decisions based on the smallest number of observations without compromising accuracy, we consider the application of sequential procedures to a nonparametric regression model at a given point or series of points. The motivation for using such procedures is that in many applications the quality of estimating an underlying regression function in a controlled experiment is paramount; thus, it is reasonable to invoke a sequential procedure of estimation that chooses a sample size, based on recorded observations, that guarantees a preassigned accuracy. We have employed sequential techniques to develop a procedure for constructing a fixed-width confidence interval for the predicted value at a specific point of the independent variable.
These fixed-width confidence intervals are developed using asymptotic properties of both the Nadaraya-Watson and local linear kernel estimators with data-driven bandwidths, and are studied in both fixed and random design contexts. The sample sizes for a preset confidence coefficient are optimized using sequential procedures, namely the two-stage procedure, a modified two-stage procedure, and a purely sequential procedure. The proposed methodology is first tested in a large-scale simulation study. The performance of each kernel estimation method is assessed by comparing coverage accuracy with the corresponding preset confidence coefficients, the proximity of the computed sample sizes to the optimal sample sizes, and the estimated values obtained from the two nonparametric methods against the actual values at a given series of design points of interest. We also employ the symmetric bootstrap method, an alternative way of estimating properties of unknown distributions. Resampling is done from a suitably estimated residual distribution, and the percentiles of the approximate distribution are used to construct confidence intervals for the curve at a set of given design points. A methodology is developed for determining whether it is advantageous to use the symmetric bootstrap method to reduce the extent of oversampling that is known to plague Stein's two-stage sequential procedure. The procedure is validated in an extensive simulation study, and we also explore the asymptotic properties of the relevant estimators. Finally, the proposed sequential nonparametric kernel regression methods are applied to problems in software reliability and finance.
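The two-stage idea (a pilot sample estimates variability, which then sets a data-driven final sample size guaranteeing a preset interval width) can be sketched in its simplest mean-estimation form; the thesis applies the analogous logic to kernel regression estimates at design points. All numbers below are illustrative assumptions.

```python
import numpy as np

def two_stage_fixed_width(sampler, d, z=1.96, n0=30, seed=0):
    """Stein-style two-stage procedure: a pilot sample of size n0 estimates the
    variance, fixing the total sample size for a CI of half-width d."""
    rng = np.random.default_rng(seed)
    pilot = sampler(n0, rng)
    s2 = pilot.var(ddof=1)                             # pilot variance estimate
    n = max(n0, int(np.ceil(z ** 2 * s2 / d ** 2)))    # required total sample size
    rest = sampler(n - n0, rng)                        # second-stage observations
    m = np.concatenate([pilot, rest]).mean()
    return m - d, m + d, n

sampler = lambda k, rng: rng.normal(5.0, 2.0, k)       # population with unknown variance
lo, hi, n = two_stage_fixed_width(sampler, d=0.2)
```

The interval width is fixed by construction; what is random is the sample size the procedure demands.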
237

Nonparametric analysis for risk management and market microstructure

Cosma, Antonio, 20 December 2004
This research develops and applies nonparametric estimation tools in two areas of financial econometrics: risk management and market microstructure. In the first part we address the problem of estimating conditional quantiles in financial and economic time series. Research in this field has received great impetus since quantile-based risk measures such as Value at Risk (VaR) became essential tools for assessing the riskiness of trading activities. The large amounts of data available in financial time series allow the construction of nonparametric estimators that are not subject to the specification-error risk of parametric models. A wavelet-based estimator is developed; with this approach, only minimal regularity conditions on the underlying process are required. Moreover, the specific choice of wavelets in this work leads to the construction of shape-preserving estimators of probability functions: estimates of probability functions, both densities and cumulative distribution functions, are probability functions themselves. The method is compared with competing methods through simulations and applications to real data. In the second part we carry out a nonparametric analysis of financial durations, that is, the waiting times between particular financial events, such as trades, quote updates, and volume accumulation, in financial markets. These data display very peculiar stylized facts that must be taken into account when modeling them. We use an existing algorithm to describe nonparametrically the dynamics of the process in terms of its lagged realizations and a latent variable, its conditional mean. The estimation devices needed to apply the algorithm effectively to our dataset are presented in this part of the work.
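The simplest nonparametric VaR estimator is the empirical quantile of the loss distribution, with no distributional model for returns; the thesis's wavelet-based conditional-quantile estimator is more elaborate, but the sketch below (with invented heavy-tailed toy returns) shows the basic idea.

```python
import numpy as np

def empirical_var(returns, level=0.99):
    """Nonparametric Value at Risk: the empirical quantile of losses.
    No parametric assumption is made about the return distribution."""
    losses = -np.asarray(returns)
    return np.quantile(losses, level)

rng = np.random.default_rng(7)
rets = rng.standard_t(df=4, size=10_000) * 0.01   # heavy-tailed toy returns
var99 = empirical_var(rets, 0.99)                 # 99% one-period VaR
```

Because the estimator is just an order statistic, it inherits the heavy tails of the data instead of imposing, say, Gaussian tails.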
238

A New Generation of Mixture-Model Cluster Analysis with Information Complexity and the Genetic EM Algorithm

Howe, John Andrew, 01 May 2009
In this dissertation, we extend several relatively new developments in statistical model selection and data mining in order to improve one of the workhorse statistical tools - mixture modeling (Pearson, 1894). The traditional mixture model assumes data comes from several populations of Gaussian distributions. Thus, what remains is to determine how many distributions, their population parameters, and the mixing proportions. However, real data often do not fit the restrictions of normality very well. It is likely that data from a single population exhibiting either asymmetrical or nonnormal tail behavior could be erroneously modeled as two populations, resulting in suboptimal decisions. To avoid these pitfalls, we develop the mixture model under a broader distributional assumption by fitting a group of multivariate elliptically-contoured distributions (Anderson and Fang, 1990; Fang et al., 1990). Special cases include the multivariate Gaussian and power exponential distributions, as well as the multivariate generalization of the Student’s T. This gives us the flexibility to model nonnormal tail and peak behavior, though the symmetry restriction still exists. The literature has many examples of research generalizing the Gaussian mixture model to other distributions (Farrell and Mersereau, 2004; Hasselblad, 1966; John, 1970a), but our effort is more general. Further, we generalize the mixture model to be non-parametric, by developing two types of kernel mixture model. First, we generalize the mixture model to use the truly multivariate kernel density estimators (Wand and Jones, 1995). Additionally, we develop the power exponential product kernel mixture model, which allows the density to adjust to the shape of each dimension independently. Because kernel density estimators enforce no functional form, both of these methods can adapt to nonnormal asymmetric, kurtotic, and tail characteristics. 
Over the past two decades or so, evolutionary algorithms have grown in popularity, as they have provided encouraging results in a variety of optimization problems. Several authors have applied the genetic algorithm - a subset of evolutionary algorithms - to mixture modeling, including Bhuyan et al. (1991), Krishna and Murty (1999), and Wicker (2006). These procedures have the benefit of bypassing computational issues that plague the traditional methods. We extend these initialization and optimization methods by combining them with our updated mixture models. Additionally, we “borrow” results from robust estimation theory (Ledoit and Wolf, 2003; Shurygin, 1983; Thomaz, 2004) in order to data-adaptively regularize population covariance matrices. Numerical instability of the covariance matrix can be a significant problem for mixture modeling, since estimation is typically done on a relatively small subset of the observations. We likewise extend various information criteria (Akaike, 1973; Bozdogan, 1994b; Schwarz, 1978) to the elliptically-contoured and kernel mixture models. Information criteria guide model selection and estimation based on various approximations to the Kullback-Leibler divergence. Following Bozdogan (1994a), we use these tools to sequentially select the best mixture model, select the best subset of variables, and detect influential observations - all without making any subjective decisions. Over the course of this research, we developed a full-featured Matlab toolbox (M3) which implements all the new developments in mixture modeling presented in this dissertation. We show results on both simulated and real-world datasets. Keywords: mixture modeling, nonparametric estimation, subset selection, influence detection, evidence-based medical diagnostics, unsupervised classification, robust estimation.
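The classical EM fitting of a Gaussian mixture, which the genetic-algorithm variants above initialize and extend, can be sketched for the simplest univariate two-component case. This is a textbook sketch, not the dissertation's M3 toolbox; the data-generating mixture below is invented for illustration.

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """EM for a two-component univariate Gaussian mixture.
    Returns mixing weights w, means mu, standard deviations sd."""
    mu = np.quantile(x, [0.25, 0.75])    # deterministic, well-separated start
    sd = np.full(2, x.std())
    w = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibilities under the current parameters
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates of w, mu, sd
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2.0, 0.5, 400), rng.normal(3.0, 1.0, 600)])
w, mu, sd = em_gmm_1d(x)
```

The genetic-EM hybrids discussed above essentially replace the fixed initialization here with an evolved population of candidate starts.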
239

Das Arbeitsangebot verheirateter Frauen in den neuen und alten Bundesländern

Kempe, Wolfram, January 1996
This paper presents a regression analysis of the factors influencing married German women's decision to take up employment. To identify differences in behavior between East and West German women, the analysis was carried out separately on two datasets. To avoid assumptions about the functional form of the relationship, the Generalized Additive Model (GAM), a semiparametric regression model, was chosen. This model class, which combines nonparametric and parametric regression methods, has so far found little use in practice, mainly because of its estimation procedure, backfitting. New approaches to estimation in this model class have emerged in roughly the last year, and the analytical properties of the new estimator are easier to determine. Using this estimator, differences between East and West could be worked out precisely, and the functional relationships between the explanatory variables and the response variable investigated. The analysis revealed clear differences in labor supply behavior between women in the two parts of the country.
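Backfitting, the GAM estimation procedure discussed above, cycles over the additive components, smoothing the partial residuals of each in turn. A minimal numpy sketch with Gaussian-kernel smoothers (the additive data-generating functions are invented for illustration; real GAM software adds link functions and smarter smoothers):

```python
import numpy as np

def smooth(x, y, h=0.3):
    """Gaussian-kernel (Nadaraya-Watson) smoother evaluated at the sample points."""
    d = (x[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * d ** 2)
    return (w @ y) / w.sum(axis=1)

def backfit(x1, x2, y, n_iter=20):
    """Backfitting for the additive model y = a + f1(x1) + f2(x2) + noise."""
    a = y.mean()
    f1 = np.zeros_like(y)
    f2 = np.zeros_like(y)
    for _ in range(n_iter):
        f1 = smooth(x1, y - a - f2)
        f1 -= f1.mean()                 # center for identifiability
        f2 = smooth(x2, y - a - f1)
        f2 -= f2.mean()
    return a, f1, f2

rng = np.random.default_rng(5)
x1 = rng.uniform(-2.0, 2.0, 400)
x2 = rng.uniform(-2.0, 2.0, 400)
y = 1.0 + np.sin(x1) + 0.5 * x2 + rng.normal(0.0, 0.1, 400)
a, f1, f2 = backfit(x1, x2, y)
```

Each component is estimated without a parametric form, which is exactly what lets a GAM reveal the shape of each covariate's effect.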
240

Estimation and testing the effect of covariates in accelerated life time models under censoring

Liero, Hannelore, January 2010
The accelerated lifetime model is considered. To test the influence of the covariate, we transform the model into a regression model. Since censoring is allowed, this approach leads to a goodness-of-fit problem for regression functions under censoring. Nonparametric estimation of regression functions under censoring is therefore investigated, a limit theorem for an L2-distance is stated, and a test procedure is formulated. Finally, a Monte Carlo procedure is proposed.
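Nonparametric estimation under right censoring typically builds on the Kaplan-Meier estimator, standard background for the censored-regression setting above (not the paper's test procedure itself). A minimal sketch, with illustrative exponential lifetimes and censoring times and assuming continuous data (no ties):

```python
import numpy as np

def kaplan_meier(t, event):
    """Kaplan-Meier survival estimate; assumes continuous times (no ties).
    event == 1 marks an observed failure, 0 a right-censored time."""
    order = np.argsort(t)
    t, event = t[order], event[order]
    n = len(t)
    at_risk = n - np.arange(n)                        # risk-set size at each time
    factors = np.where(event == 1, 1.0 - 1.0 / at_risk, 1.0)
    return t, np.cumprod(factors)                     # product-limit estimate

# Illustrative data: exponential lifetimes, independent exponential censoring
rng = np.random.default_rng(9)
life = rng.exponential(2.0, 1000)
cens = rng.exponential(4.0, 1000)
t = np.minimum(life, cens)                            # observed times
event = (life <= cens).astype(int)                    # failure indicator
times, surv = kaplan_meier(t, event)
```

Censored observations contribute a factor of 1 (they only shrink the risk set), which is how the estimator remains valid without modeling the censoring distribution.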
