101 |
Modelo de regressão gama-G em análise de sobrevivência / Gama-G regression model in survival analysis Hashimoto, Elizabeth Mie 15 March 2013 (has links)
Dados de tempo de falha são caracterizados pela presença de censuras, que são observações que não foram acompanhadas até a ocorrência de um evento de interesse. Para estudar o comportamento de dados com essa natureza, distribuições de probabilidade são utilizadas. Além disso, é comum se ter uma ou mais variáveis explicativas associadas aos tempos de falha. Dessa forma, o objetivo geral do presente trabalho é propor duas novas distribuições utilizando a função geradora de distribuições gama, no contexto de modelos de regressão em análise de sobrevivência. Essa função possui um parâmetro de forma que permite criar famílias paramétricas de distribuições que sejam flexíveis para capturar uma ampla variedade de comportamentos simétricos e assimétricos. Assim, a distribuição Weibull e a distribuição log-logística foram modificadas, dando origem a duas novas distribuições de probabilidade, denominadas de gama-Weibull e gama-log-logística, respectivamente. Consequentemente, os modelos de regressão locação-escala, de longa-duração e com efeito aleatório foram estudados, considerando as novas distribuições de probabilidade. Para cada um dos modelos propostos, foi utilizado o método da máxima verossimilhança para estimar os parâmetros e algumas medidas de diagnóstico de influência global e local foram calculadas para encontrar possíveis pontos influentes. No entanto, os resíduos foram propostos apenas para os modelos locação-escala para dados com censura à direita e para dados com censura intervalar, bem como um estudo de simulação para verificar a distribuição empírica dos resíduos. Outra questão explorada é a introdução dos modelos: gama-Weibull inflacionado de zeros e gama-log-logística inflacionado de zeros, para analisar dados de produção de óleo de copaíba. Por fim, diferentes conjuntos de dados foram utilizados para ilustrar a aplicação de cada um dos modelos propostos. / Failure time data are characterized by the presence of censoring, that is, observations that were not followed up until the occurrence of the event of interest. Probability distributions are used to study the behavior of data of this nature. Furthermore, it is common to have one or more explanatory variables associated with the failure times. Thus, the general objective of this work is to propose two new distributions, built with the gamma generator of distributions, in the context of regression models in survival analysis. This generator has a shape parameter that allows the creation of parametric families of distributions flexible enough to capture a wide variety of symmetric and asymmetric behaviors. Through the gamma generator, the Weibull and the log-logistic distributions were modified, giving rise to two new probability distributions: the gamma-Weibull and the gamma-log-logistic. Additionally, location-scale regression models, long-term models and models with random effects were studied under the new distributions. For each of the proposed models, the maximum likelihood method was used to estimate the parameters, and some diagnostic measures of global and local influence were calculated to identify possible influential points. Residuals, however, were proposed only for the location-scale models with right-censored and with interval-censored data, together with a simulation study to verify the empirical distribution of the residuals. Another issue explored is the introduction of the zero-inflated gamma-Weibull and zero-inflated gamma-log-logistic models to analyze copaiba oil production data. Finally, different data sets are used to illustrate the application of each of the proposed models.
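For reference, the construction behind the new families can be sketched with the standard gamma generator (this is the usual Zografos-Balakrishnan form; the thesis may adopt a different but closely related parameterization, so the symbols a, \lambda, k, \alpha and \beta below are illustrative). Given a baseline cumulative distribution function G, the gamma-G family is

F(x) = \frac{\gamma\big(a, -\log[1 - G(x)]\big)}{\Gamma(a)}, \qquad a > 0,

where \gamma(\cdot,\cdot) is the lower incomplete gamma function; the extra shape parameter a controls skewness and tail weight, and a = 1 recovers the baseline G. Taking the Weibull baseline G(x) = 1 - \exp[-(x/\lambda)^{k}] gives the gamma-Weibull distribution, and the log-logistic baseline G(x) = 1 - [1 + (x/\alpha)^{\beta}]^{-1} gives the gamma-log-logistic.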
|
102 |
Uma sistemática para utilização de dados censurados de garantia para obtenção da confiabilidade automotiva / Zappa, Eugênio January 2019 (has links)
Orientador: Messias Borges Silva / Resumo: Com um mercado cada vez mais veloz, competitivo e com consumidores mais exigentes que não toleram falhas de produtos, que são amparados por legislações de proteção e defesa do consumidor, as empresas necessitam se esforçar no aprimoramento da qualidade de seus produtos. Entretanto, mesmo com a aplicação de tecnologias no desenvolvimento e fabricação de produtos, as falhas ainda acontecem. Para que um produto possa desempenhar sua função sem falhas num determinado tempo desejável, nas mais diversas condições reais as quais são submetidos, deve-se conhecer e aumentar a sua confiabilidade. Embora os dados de garantia que as empresas possuam dos seus produtos sejam fontes de informações valiosas para a obtenção da confiabilidade de um produto, estes dados ainda são insuficientes, imprecisos ou incompletos para uso direto, sendo necessário o uso de métodos apropriados ainda não muito disseminados. Este trabalho visa aplicar o método de censura por taxa de uso que viabiliza o uso de dados de garantia em análises mais precisas de confiabilidade para que as empresas possam aprimorar os seus produtos. Por meio de uma revisão da literatura e com o uso de dados de garantia, verificou-se a viabilidade da aplicação do método proposto. Com comprovação estatística, o método proposto de modelagem dos dados de garantia atingiu os resultados do estudo de referência adotado. Conclui-se que o método proposto com o objetivo de conhecer com precisão a confiabilidade do produto é aplicável e não ex... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: With an ever faster and more competitive market, and with more demanding consumers who do not tolerate product failures and who are backed by consumer protection and defense legislation, companies need to strive to improve the quality of their products. However, even with the application of technology in product development and manufacturing, failures still occur. For a product to perform its function without failure over a desired period of time, under the most diverse real conditions to which it is subjected, its reliability must be known and increased. Although the warranty data that companies hold on their products are a valuable source of information for obtaining product reliability, these data are still insufficient, inaccurate or incomplete for direct use, so appropriate methods that are not yet widely disseminated are required. This work aims to apply the usage-rate censoring method, which enables the use of warranty data in more accurate reliability analyses so that companies can improve their products. Through a literature review and the use of warranty data, the feasibility of applying the proposed method was verified. With statistical confirmation, the proposed method for modeling warranty data reproduced the results of the adopted reference study. It is concluded that the proposed method, whose objective is to know the product reliability precisely, is applicable and does not require specialized reliability software for its execution. Therefore, its application can contribute to the developm... (Complete abstract click electronic access below) / Mestre
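As a highly simplified illustration of the censored-data fitting step such an approach relies on (the numbers, variable names and the Weibull choice are assumptions for this sketch; the thesis's actual usage-rate modelling of the warranty records is not reproduced), units without a warranty claim are treated as right-censored at their accumulated usage and a Weibull life distribution is fitted by maximum likelihood:

import numpy as np
from scipy.optimize import minimize

# accumulated usage (e.g. months in service or 1000 km) and a failure indicator:
# 1 = warranty failure observed, 0 = no failure so far (right-censored)
t = np.array([12., 35., 48., 60., 22., 55., 60., 41.])
d = np.array([1, 1, 0, 0, 1, 0, 0, 1])

def neg_loglik(params):
    k, lam = np.exp(params)  # log-parameterization keeps k, lam > 0
    logf = np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k  # Weibull log-density
    logS = -(t / lam) ** k   # Weibull log-survival for censored units
    return -np.sum(d * logf + (1 - d) * logS)

fit = minimize(neg_loglik, x0=np.log([1.5, 50.0]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(fit.x)
print(f"shape = {k_hat:.2f}, scale = {lam_hat:.2f}")
print("reliability at 36:", np.exp(-(36 / lam_hat) ** k_hat))

The same likelihood applies whether the censoring threshold comes from the end of the warranty period or from an estimated usage rate; only the construction of t and d changes.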
|
103 |
Modelling dependence in actuarial science, with emphasis on credibility theory and copulas Purcaru, Oana 19 August 2005 (has links)
One basic problem in the statistical sciences is to understand the relationships among multivariate outcomes. Although it remains an important and widely applicable tool, regression analysis is limited by a basic setup that requires identifying one dimension of the outcomes as the primary measure of interest (the "dependent" variable) and the other dimensions as supporting this variable (the "explanatory" variables). There are situations where this relationship is not of primary interest. For example, in actuarial science, one might be interested in the dependence between the annual claim numbers of a policyholder and its impact on the premium, or in the dependence between claim amounts and the expenses related to them. In such cases the normality hypothesis fails, so Pearson's correlation and other concepts based on linearity are no longer the most appropriate. Therefore, in order to quantify the dependence between non-normal outcomes, one needs different statistical tools, such as dependence concepts and copulas.
This thesis is devoted to modelling dependence, with applications in actuarial science, and is divided into two parts: the first concerns dependence in frequency credibility models and the second dependence between continuous outcomes. In each part of the thesis we resort to different tools: stochastic orderings (which arise from the dependence concepts) and copulas, respectively.
During the last decade of the 20th century, the world of insurance was confronted with important developments in a posteriori tarification, especially in the field of credibility. This was due to the easing of insurance markets in the European Union, which gave rise to advanced segmentation. The first important contribution is due to Dionne & Vanasse (1989), who proposed a credibility model that integrates a priori and a posteriori information on an individual basis. These authors introduced a regression component into the Poisson counting model in order to use all available information in the estimation of accident frequency. The unexplained heterogeneity was then modeled by the introduction of a latent variable representing the influence of hidden policy characteristics. The vast majority of the papers that appeared in the actuarial literature considered time-independent (or static) heterogeneity models. Noticeable exceptions include the pioneering papers by Gerber & Jones (1975), Sundt (1988) and Pinquet, Guillén & Bolancé (2001, 2003). Allowing for an unknown underlying random parameter that develops over time is justified, since the unobservable factors influencing driving ability are not constant. One might consider either shocks (induced by events such as a divorce or a nervous breakdown) or continuous modifications (e.g. due to a learning effect).
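For concreteness, a textbook instance of this setup (a sketch only, not necessarily the exact specification analysed in the thesis): conditionally on a static random effect \Theta_i, the annual claim numbers N_{i1}, N_{i2}, \ldots of policyholder i are independent Poisson variables with means \lambda_{it}\Theta_i, where \lambda_{it} = \exp(x_{it}'\beta) carries the a priori rating variables. If \Theta_i is Gamma distributed with shape a and rate a (so that E\Theta_i = 1), the a posteriori expectation of the random effect after T observed periods is

E[\Theta_i \mid N_{i1}, \ldots, N_{iT}] = \frac{a + \sum_{t=1}^{T} N_{it}}{a + \sum_{t=1}^{T} \lambda_{it}},

the multiplicative correction applied to the a priori premium: more claims than the a priori variables predict push the premium up, while claim-free periods push it down.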
In the first part we study recently introduced models in frequency credibility theory, which can be seen as time-series models for count data adapted to actuarial problems. More precisely, we examine the kind of dependence induced among annual claim numbers by the introduction of random effects accounting for unexplained heterogeneity, both when these random effects are static and when they are time-dependent. We also make precise the effect of reported claims on the a posteriori distribution of the random effect. This is done by establishing a stochastic monotonicity property of the a posteriori distribution with respect to the claims history. We end this part by considering different models for the random effects and computing the a posteriori corrections of the premiums on the basis of a real data set from a Spanish insurance company.
Whereas dependence concepts are very useful for describing the relationship between multivariate outcomes, in practice (think, for instance, of the computation of reinsurance premiums) one needs a statistical tool that is easy to implement and that incorporates the structure of the data. Such a tool is the copula, which allows the construction of multivariate distributions for given marginals. Because copulas characterize the dependence structure of random vectors once the effect of the marginals has been factored out, identifying and fitting a copula to data is not an easy task. In practice, it is often preferable to restrict the search for an appropriate copula to some reasonable family, such as the Archimedean one. It is then extremely useful to have simple graphical procedures to select the best-fitting model among competing alternatives for the data at hand.
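For reference, the standard definitions involved (classical facts, not results of the thesis): a bivariate Archimedean copula is built from a generator \varphi, a continuous, convex, strictly decreasing function on (0,1] with \varphi(1) = 0, through

C(u, v) = \varphi^{-1}\big(\varphi(u) + \varphi(v)\big), \qquad (u, v) \in [0,1]^2 .

For example, the Clayton family corresponds to \varphi_\theta(t) = (t^{-\theta} - 1)/\theta and gives C_\theta(u, v) = (u^{-\theta} + v^{-\theta} - 1)^{-1/\theta} for \theta > 0. Estimating \varphi nonparametrically and comparing it with such parametric candidates is the idea behind the selection procedure described next.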
In the second part of the thesis we propose a new nonparametric estimator of the generator that takes into account the particularities of the data, namely censoring and truncation. This nonparametric estimate then serves as a benchmark for selecting an appropriate parametric Archimedean copula. The selection procedure is illustrated on a real data set.
|
104 |
Mean preservation in censored regression using preliminary nonparametric smoothing Heuchenne, Cédric 18 August 2005 (has links)
In this thesis, we consider the problem of estimating the regression function in location-scale regression models.
This model assumes that the random vector (X,Y) satisfies Y = m(X) + s(X)e, where m(.) is an
unknown location function (e.g. conditional mean, median, truncated mean,...), s(.) is an unknown scale function,
and e is independent of X. The response Y is subject to random right censoring, and the covariate X is completely
observed.
In the first part of the thesis, we assume that
m(x) = E(Y|X=x) follows a polynomial model.
A new estimation
procedure for the unknown regression parameters is proposed, which extends the classical least squares procedure to
censored data. The proposed method is inspired by that of Buckley and James (1979) but, unlike the latter, is non-iterative thanks to a preliminary nonparametric estimation step. The asymptotic normality of the estimators is established. Simulations are carried out for both methods and show that the proposed estimators usually have smaller variance and smaller mean squared error than the Buckley-James estimators.
For the second part, suppose that m(.)=E(Y|.) belongs to some parametric class of
regression functions. A new estimation procedure for the true, unknown vector of parameters is proposed, which extends the
classical least squares procedure for nonlinear regression to the case where the response is subject to censoring. The proposed
technique uses new `synthetic' data points that are constructed by using a nonparametric relation between Y and X.
The consistency and asymptotic normality of the proposed estimator are established, and the estimator is compared via simulations
with an estimator proposed by Stute in 1999.
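The 'synthetic data' device can be sketched as follows (a generic formulation under the location-scale assumption; the exact construction studied in the thesis may differ in the estimation details). Writing \Delta_i for the censoring indicator and C_i for the censoring time, each response is replaced by

Y_i^{*} = \Delta_i Y_i + (1 - \Delta_i)\, \hat{E}\big(Y \mid Y > C_i, X = X_i\big),

where the conditional expectation is estimated nonparametrically from the model Y = m(X) + s(X)e. If censoring is independent of Y given X and the preliminary estimate is consistent, then E(Y_i^{*} \mid X_i) = E(Y_i \mid X_i), so ordinary (nonlinear) least squares applied to the pairs (X_i, Y_i^{*}) targets the same regression function as with uncensored data (the 'mean preservation' of the title).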
In the third part, we study the nonparametric estimation of the regression function m(.). It is well known that
the completely nonparametric estimator of the conditional distribution F(.|x) of Y given X=x suffers from inconsistency
problems in the right tail (Beran, 1981), and hence the location function m(x) cannot be estimated consistently in a completely
nonparametric way, whenever m(x) involves the right tail of F(.|x) (like e.g. for the conditional mean).
We propose two alternative estimators of m(x) that do not share the above inconsistency problems. The idea is to make use of the
assumed location-scale model, in order to improve the estimation of F(.|x), especially in the right tail.
We obtain the asymptotic properties of the two proposed estimators of m(x). Simulations show that the proposed estimators outperform
the completely nonparametric estimator in many cases.
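For reference, the completely nonparametric estimator referred to here is Beran's (1981) conditional Kaplan-Meier estimator, written in a standard form (the notation is ours): with Z_i = \min(Y_i, C_i), \Delta_i = 1\{Y_i \le C_i\} and Nadaraya-Watson weights W_i(x; h_n) = K((x - X_i)/h_n) / \sum_{k=1}^{n} K((x - X_k)/h_n),

1 - \hat{F}(t \mid x) = \prod_{i:\ Z_i \le t,\ \Delta_i = 1} \left( 1 - \frac{W_i(x; h_n)}{\sum_{j=1}^{n} 1\{Z_j \ge Z_i\}\, W_j(x; h_n)} \right).

Beyond the largest uncensored observation in the window the product stops updating, so tail functionals such as the conditional mean m(x) = \int_0^{\infty} [1 - F(t \mid x)]\, dt cannot be estimated consistently by plugging \hat{F}(\cdot \mid x) in; the estimators proposed in this part use the location-scale structure to extrapolate that tail.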
|
105 |
Essays on banking, credit and interest rates Roszbach, Kasper January 1998 (has links)
This dissertation consists of four papers, each applying a discrete dependent variable model, censored regression or duration model to a credit market phenomenon or a monetary policy question. The first three essays deal with bank lending policy, while the last one studies interest rate policy by Central Banks. In the first essay, a bivariate probit model is estimated to contrast the factors that influence banks' loan-granting decisions and individuals' risk of default. This model is used as a tool to construct a Value at Risk measure of the credit risk involved in a portfolio of consumer loans and to investigate the efficiency of bank lending policy. The second essay takes the conclusions of the first paper as a starting point. It investigates whether the fact that banks do not minimize default risk can be explained by a return-maximization policy. For this purpose, a Tobit model with sample selection effects and variable censoring limits is developed and estimated on the survival times of consumer loans. The third paper focuses on dormancy, rather than default risk or survival time, as the most important factor affecting risk and return in bank lending. By means of a duration model, the factors determining the transition from an active status to dormancy are studied. The estimated model is used to predict the expected durations to dormancy and to analyze the expected profitability for a sample of loan applicants. In the fourth paper, the discrete nature of Central Bank interest rate policy is studied. A grouped-data model, which can take into account the long periods during which the Central Bank leaves the repo rate unchanged, is estimated on weekly Swedish data. The model is found to be reasonably good at predicting interest rate changes. / Diss. (sammanfattning) Stockholm : Handelshögsk.
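To fix ideas on the censored-regression building block used in the second essay, the standard Tobit log-likelihood with observation-specific right-censoring limits can be written as follows (a generic sketch; the sample-selection component and the exact censoring scheme of the thesis are not reproduced). For latent durations y_i^{*} = x_i'\beta + \varepsilon_i with \varepsilon_i \sim N(0, \sigma^2) and observed y_i = \min(y_i^{*}, c_i),

\ell(\beta, \sigma) = \sum_{i:\, y_i < c_i} \log\!\left[ \frac{1}{\sigma}\, \phi\!\left( \frac{y_i - x_i'\beta}{\sigma} \right) \right] + \sum_{i:\, y_i = c_i} \log\!\left[ 1 - \Phi\!\left( \frac{c_i - x_i'\beta}{\sigma} \right) \right],

where \phi and \Phi are the standard normal density and distribution function; loans still active at the end of the observation window contribute only the survival term.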
|
106 |
Second-order least squares estimation in regression models with application to measurement error problems Abarin, Taraneh 21 January 2009 (has links)
This thesis studies the Second-order Least Squares (SLS) estimation method in regression models with and without measurement error. Applications of the methodology in general quasi-likelihood and variance function models, censored models, and linear and generalized linear models are examined and strong consistency and asymptotic normality are established. To overcome the numerical difficulties of minimizing an objective function that involves multiple integrals, a simulation-based SLS estimator is used and its asymptotic properties are studied. Finite sample performances of the estimators in all of the studied models are investigated through simulation studies. / February 2009
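As background, one common formulation of the SLS criterion (a sketch of the standard definition; the weighting actually used in the thesis is not reproduced here): for a model with E(Y_i \mid X_i) = f(X_i; \theta) and Var(Y_i \mid X_i) = \sigma^2, the estimator minimizes

Q_n(\theta, \sigma^2) = \sum_{i=1}^{n} \rho_i' W_i\, \rho_i, \qquad \rho_i = \big( Y_i - E(Y_i \mid X_i),\; Y_i^2 - E(Y_i^2 \mid X_i) \big)',

with E(Y_i^2 \mid X_i) = f(X_i; \theta)^2 + \sigma^2 and W_i a nonnegative-definite weight matrix that may depend on X_i. Matching the first two conditional moments, rather than only the first, is what makes the method 'second order'; when these moments involve intractable integrals (as under measurement error), replacing them by Monte Carlo averages yields the simulation-based SLS estimator mentioned above.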
|
107 |
Regression models with an interval-censored covariate Langohr, Klaus 16 June 2004 (has links)
El análisis de supervivencia trata de la evaluación estadística de variables que miden el tiempo transcurrido hasta un evento de interés. Una particularidad que ha de considerar el análisis de supervivencia son datos censurados. Éstos aparecen cuando el tiempo de interés no puede ser observado exactamente y la información al respecto es parcial. Se distinguen diferentes tipos de censura: un tiempo censurado por la derecha está presente si el tiempo de supervivencia es sabido mayor a un tiempo observado; censura por izquierda está dada si la supervivencia es menor que un tiempo observado. En el caso de censura en un intervalo, el tiempo está en un intervalo de tiempo observado, y el caso de doble censura aparece cuando, también, el origen del tiempo de supervivencia está censurado. La primera parte del Capítulo 1 contiene un resumen de la metodología estadística para datos censurados en un intervalo, incluyendo tanto métodos paramétricos como no-paramétricos. En la Sección 1.2 abordamos el tema de censura no informativa que se supone cumplida para todos los métodos presentados. Dada la importancia de métodos de optimización en los demás capítulos, la Sección 1.3 trata de la teoría de optimización. Esto incluye varios algoritmos de optimización y la presentación de herramientas de optimización. Se ha utilizado el lenguaje de programación matemática AMPL para resolver los problemas de maximización que han surgido. Una de las características más importantes de AMPL es la posibilidad de enviar problemas de optimización al servidor 'NEOS: Server for Optimization' en Internet para que sean solucionados por ese servidor. En el Capítulo 2, se presentan los conjuntos de datos que han sido analizados. El primer estudio es sobre la supervivencia de pacientes de tuberculosis co-infectados por el VIH en Barcelona, mientras el siguiente, también del área de VIH/SIDA, trata de usuarios de drogas intra-venosas de Badalona y alrededores que fueron admitidos a la unidad de desintoxicación del Hospital Trias i Pujol. Un área completamente diferente son los estudios sobre la vida útil de alimentos. Se presenta la aplicación de la metodología para datos censurados en un intervalo en esta área. El Capítulo 3 trata del marco teórico de un modelo de vida acelerada con una covariante censurada en un intervalo. Puntos importantes a tratar son el desarrollo de la función de verosimilitud y el procedimiento de estimación de parámetros con métodos del área de optimización. Su uso puede ser una herramienta importante en la estadística. Estos métodos se aplican también a otros modelos con una covariante censurada en un intervalo como se demuestra en el Capítulo 4. Otros métodos que se podrían aplicar son descritos en el Capítulo 5. Se trata sobre todo de métodos basados en técnicas de imputación para datos censurados en un intervalo. Consisten en dos pasos: primero, se imputa el valor desconocido de la covariante, después, se pueden estimar los parámetros con procedimientos estadísticos estándares disponibles en cualquier paquete de software estadístico. El método de maximización simultánea ha sido implementado por el autor con el código de AMPL y ha sido aplicado al conjunto de datos de Badalona. Presentamos los resultados de diferentes modelos y sus respectivas interpretaciones en el Capítulo 6. Se ha llevado a cabo un estudio de simulación cuyos resultados se dan en el Capítulo 7. Ha sido el objetivo comparar la maximización simultánea con dos procedimientos basados en la imputación para el modelo de vida acelerada.
Finalmente, en el último capítulo se resumen los resultados y se abordan diferentes aspectos que aún permanecen sin ser resueltos o podrían ser aproximados de manera diferente. / Survival analysis deals with the evaluation of variables which measure the elapsed time until an event of interest. One particularity survival analysis has to account for is censored data, which arise whenever the time of interest cannot be measured exactly but partial information is available. Four types of censoring are distinguished: right-censoring occurs when the unobserved survival time is larger than an observed time, left-censoring when it is smaller than an observed time, and in the case of interval-censoring the survival time is only known to lie within an observed time interval. We speak of doubly-censored data if the time origin is censored as well. In Chapter 1 of the thesis, we first give a survey of statistical methods for interval-censored data, including both parametric and nonparametric approaches. In the second part of Chapter 1, we address the important issue of noninformative censoring, which is assumed in all the methods presented. Given the importance of optimization procedures in the further chapters of the thesis, the final section of Chapter 1 is about optimization theory. This includes some optimization algorithms, as well as the presentation of optimization tools, which have played an important role in the elaboration of this work. We have used the mathematical programming language AMPL to solve the maximization problems that arose. One of its main features is that optimization problems written in the AMPL code can be sent to the internet facility 'NEOS: Server for Optimization' and be solved by its available solvers. In Chapter 2, we present the three data sets analyzed for the elaboration of this dissertation. Two correspond to studies on HIV/AIDS: one is on the survival of tuberculosis patients co-infected with HIV in Barcelona, the other on injecting drug users from Badalona and surroundings, most of whom became infected with HIV as a result of their drug addiction. The complex censoring patterns in the variables of interest of the latter study have motivated the development of estimation procedures for regression models with interval-censored covariates. The third data set comes from a study on the shelf life of yogurt. We present a new approach to estimate the shelf lives of food products, taking advantage of the existing methodology for interval-censored data. Chapter 3 deals with the theoretical background of an accelerated failure time model with an interval-censored covariate, putting emphasis on the development of the likelihood functions and on the estimation procedure by means of optimization techniques and tools. Their use in statistics can be an attractive alternative to established methods such as the EM algorithm. In Chapter 4 we present further regression models, such as linear and logistic regression with the same type of covariate, whose parameters are estimated with the same techniques as in Chapter 3. Other possible estimation procedures are described in Chapter 5. These comprise mainly imputation methods, which consist of two steps: first, the observed intervals of the covariate are replaced by an imputed value, for example, the interval midpoint; then, standard procedures are applied to estimate the parameters. The application of the proposed estimation procedure for the accelerated failure time model with an interval-censored covariate to the data set on injecting drug users is addressed in Chapter 6. Different distributions and covariates are considered and the corresponding results are presented and discussed. To compare the estimation procedure with the imputation-based methods of Chapter 5, a simulation study is carried out, whose design and results are the contents of Chapter 7. Finally, in the closing Chapter 8, the main results are summarized and several aspects which remain unsolved or might be approached differently are addressed.
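A sketch of the central model (the accelerated failure time form is standard; the way the covariate interval enters the likelihood is written here under simple assumptions and need not coincide with the thesis's exact formulation): with survival time T, an interval-censored covariate X observed only as X \in [x_L, x_R], and fully observed covariates Z,

\log T = \beta_0 + \beta_1 X + \gamma' Z + \sigma \varepsilon,

so that an exactly observed survival time t contributes

L_i(\beta, \gamma, \sigma) = \int_{x_L}^{x_R} \frac{1}{\sigma t}\, f_\varepsilon\!\left( \frac{\log t - \beta_0 - \beta_1 x - \gamma' z}{\sigma} \right) dG(x),

where f_\varepsilon is the error density (normal, logistic, ...) and G is the distribution of the covariate over its observed interval, itself estimated as part of the likelihood; censored responses replace the density by the corresponding survival or interval probability. Maximizing likelihoods of this form is what the AMPL/NEOS optimization machinery described above is used for.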
|
108 |
Essays on Innovation, Patents, and Econometrics Entezarkheir, Mahdiyeh January 2010 (has links)
This thesis investigates the impact of fragmentation in the ownership of complementary patents, or patent thickets, on firms' market value. This question is motivated by the increase in patent ownership fragmentation following the pro-patent shifts in the US since 1982. The first chapter uses panel data on patenting US manufacturing firms from 1979 to 1996 and estimates the impact of patent thickets on firms' market value. I find that patent thickets lower firms' market value, and that firms with a large patent portfolio experience a smaller negative effect from their thickets. Moreover, no systematic difference exists in the impact of patent thickets on firms' market value over time. The second chapter extends this analysis to account for the indirect impacts of patent thickets on firms' market value. These indirect effects arise through the effects of patent thickets on firms' R&D and patenting activities. Using panel data on US manufacturing firms from 1979 to 1996, I estimate the impact of patent thickets on market value, R&D, and patenting, as well as the impacts of R&D and patenting on market value. Employing these estimates, I determine the direct, indirect, and total impacts of patent thickets on market value. I find that patent thickets decrease firms' market value while holding the firms' R&D and patenting activities constant. I find no evidence of a change in R&D due to patent thickets. However, there is evidence of defensive patenting (an increase in patenting attributed to thickets), which helps to reduce the direct negative impact of patent thickets on market value.
The data sets used in Chapters 1 and 2 have a number of missing observations on regressors. The commonly used methods to manage missing observations are the listwise deletion (complete case) and the indicator methods. Studies on the statistical properties of these methods suggest a smaller bias using the listwise deletion method. Employing Monte Carlo simulations, Chapter 3 examines the properties of these methods, and finds that in some cases the listwise deletion estimates have larger biases than indicator estimates. This finding suggests that interpreting estimates arrived at with either approach requires caution.
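A minimal Monte Carlo sketch of the kind of comparison examined in Chapter 3 (illustrative only: the data-generating process, the missingness mechanism and the sample sizes are assumptions, not those of the thesis):

import numpy as np

rng = np.random.default_rng(0)
n, reps, beta = 500, 200, 1.0
bias_listwise, bias_indicator = [], []

for _ in range(reps):
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)
    # regressor missing with probability depending on the outcome,
    # so neither method is guaranteed to be unbiased
    miss = rng.random(n) < 1 / (1 + np.exp(-y))

    # listwise deletion: drop rows with a missing regressor
    xs, ys = x[~miss], y[~miss]
    b_lw = np.linalg.lstsq(np.column_stack([np.ones(xs.size), xs]), ys, rcond=None)[0][1]

    # indicator method: set missing x to 0 and add a missingness dummy
    x0 = np.where(miss, 0.0, x)
    X = np.column_stack([np.ones(n), x0, miss.astype(float)])
    b_ind = np.linalg.lstsq(X, y, rcond=None)[0][1]

    bias_listwise.append(b_lw - beta)
    bias_indicator.append(b_ind - beta)

print("mean bias, listwise deletion:", np.mean(bias_listwise))
print("mean bias, indicator method :", np.mean(bias_indicator))

Varying the missingness mechanism (for example, making it depend only on x, or on an unobserved factor) changes which of the two biases is larger, which is the point of the chapter's caution about both approaches.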
|
109 |
Essays in direct marketing : understanding response behavior and implementation of targeting strategies Sinha, Shameek 06 July 2011 (has links)
In direct marketing, understanding the response behavior of consumers to marketing initiatives is a prerequisite for marketers before implementing targeting strategies to reach potential as well as existing consumers in the future. Consumer response can be expressed in terms of the incidence or timing of purchases, the category or brand choice of the purchases made, as well as the volume or purchase amounts in each category. Direct marketers seek to explore how past consumer response behavior as well as their own targeting actions affect current response patterns. However, considerable heterogeneity is also prevalent in consumer responses, and the possible sources of this heterogeneity need to be investigated. With knowledge of consumer response and the corresponding heterogeneity, direct marketers can devise targeting strategies to attract potential new consumers as well as retain existing ones.
In the first essay of my dissertation (Chapter 2), I model the response behavior of donors in non-profit charity fund-raising in terms of the timing and volume of their donations. I show that past donations (both their incidence and volume) and solicitation for alternative causes by non-profits matter in donor responses, and that the heterogeneity in donation behavior can be explained in terms of individual- and community-level donor characteristics. I also provide a heuristic approach to targeting new donors, using a classification scheme for donors in terms of the frequency and amount of donations and then characterizing each donor portfolio with the corresponding donor characteristics.
In the second essay (Chapter 3), I propose a more structural approach to the targeting of customers by direct marketers in the context of customized retail couponing. First, I model customer purchases in a retail setting where brand-choice decisions in a product category depend on pricing, in-store promotions and coupon targeting, as well as on the face values of those coupons. Then, using a utility function specification for the retailer that implements a trade-off between net revenue (revenue minus coupon face value) and information gain, I propose a Bayesian decision-theoretic approach to determine optimal customized coupon face values. The optimization algorithm is sequential: past as well as future customer responses affect the targeted coupon face values, and the direct marketer tries to determine the trade-off through natural experimentation. / text
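One hedged way to write the trade-off described above (an illustrative form only; the notation, the weight \lambda and the use of a Kullback-Leibler information measure are assumptions, not the dissertation's specification) is a per-period retailer objective over the coupon face value f,

U_t(f) = E\big[ (\text{margin} - f)\, 1\{\text{redeem}\} \mid f, \text{history} \big] + \lambda\, E\big[ \mathrm{KL}\big( p(\theta \mid \text{history}, y_t) \,\|\, p(\theta \mid \text{history}) \big) \big],

so that the marketer balances expected net coupon revenue against the expected information gained about the customer's response parameters \theta, in the sequential, experiment-as-you-go spirit described in the abstract.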
|
110 |
Essays on Innovation, Patents, and EconometricsEntezarkheir, Mahdiyeh January 2010 (has links)
This thesis investigates the impact of fragmentation in the ownership of complementary patents or patent thickets on firms' market value. This question is motivated by the increase in the patent ownership fragmentation following the pro-patent shifts in the US since 1982. The first chapter uses panel data on patenting US manufacturing firms from 1979 to 1996, and estimates the impact of patent thickets on firms' market value. I find that patent thickets lower firms' market value, and firms with a large patent portfolio size experience a smaller negative effect from their thickets. Moreover, no systematic difference exists in the impact of patent thickets on firms' market value over time. The second chapter extends this analysis to account for the indirect impacts of patent thickets on firms' market value. These indirect effects arise through the effects of patent thickets on firms' R\&D and patenting activities. Using panel data on US manufacturing firms from 1979 to 1996, I estimate the impact of patent thickets on market value, R\&D, and patenting as well as the impacts of R\&D and patenting on market value. Employing these estimates, I determine the direct, indirect, and total impacts of patent thickets on market value. I find that patent thickets decrease firms' market value, while I hold the firms’ R\&D and patenting activities constant. I find no evidence of a change in R\&D due to patent thickets. However, there is evidence of defensive patenting (an increase in patenting attributed to thickets), which helps to reduce the direct negative impact of patent thickets on market value.
The data sets used in Chapters 1 and 2 have a number of missing observations on regressors. The commonly used methods to manage missing observations are the listwise deletion (complete case) and the indicator methods. Studies on the statistical properties of these methods suggest a smaller bias using the listwise deletion method. Employing Monte Carlo simulations, Chapter 3 examines the properties of these methods, and finds that in some cases the listwise deletion estimates have larger biases than indicator estimates. This finding suggests that interpreting estimates arrived at with either approach requires caution.
|