131. Optimal Interest Rate for a Borrower with Estimated Default and Prepayment Risk. Howard, Scott T. 27 May 2008.
Today's mortgage industry is constantly changing, with adjustable rate mortgages (ARMs), loans originated to the so-called "subprime" market, and volatile interest rates. Amid the changes and controversy, lenders continue to originate loans because the interest paid over the loan lifetime is profitable. The profitability of these loans, along with the return on investment to the lender, is assessed using Actuarial Present Value (APV), which incorporates the uncertainty that exists in the mortgage industry today, where many loans default or prepay. The hazard function, or instantaneous failure rate, is used as a measure of the probability of failure to make a payment. Using a logit model, the default and prepayment risks are estimated as functions of the interest rate. The "optimal" interest rate is the one at which profitability to the lender is maximized.
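As a rough illustration of the idea (not the thesis's fitted model), the sketch below weights each month's payment by an estimated probability that the loan is still active, discounts it back to origination, and searches for the rate that maximizes the resulting APV. The logit hazard coefficients, loan terms, and discount rate are all hypothetical.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def hazards(rate, t):
        # Hypothetical logit models: both default risk and the incentive
        # to prepay (refinance) rise with the contract rate.
        h_default = 1 / (1 + np.exp(-(-7.0 + 25.0 * rate)))
        h_prepay = 1 / (1 + np.exp(-(-5.0 + 15.0 * rate + 0.002 * t)))
        return h_default, h_prepay

    def apv(rate, principal=200_000, months=360, disc=0.04 / 12):
        # Actuarial present value of the payment stream: each payment is
        # weighted by the survival probability (no default or prepayment
        # yet) and discounted to time zero.
        payment = principal * (rate / 12) / (1 - (1 + rate / 12) ** -months)
        surv, value = 1.0, 0.0
        for t in range(1, months + 1):
            h_d, h_p = hazards(rate, t)
            surv *= (1 - h_d) * (1 - h_p)
            value += surv * payment / (1 + disc) ** t
        return value

    # Low rates earn little interest; high rates drive borrowers to default
    # or refinance; the APV therefore peaks at an interior "optimal" rate.
    res = minimize_scalar(lambda r: -apv(r), bounds=(0.01, 0.20), method="bounded")
    print(f"rate maximizing APV: {res.x:.4f}")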
132. Clustering financial time series for volatility modeling. Jarjour, Riad. 01 August 2018.
The dynamic conditional correlation (DCC) model and its variants have been widely used in modeling the volatility of multivariate time series, with applications in portfolio construction and risk management. While popular for its simplicity, the DCC model uses only two parameters to model the correlation dynamics, regardless of the number of assets. The flexible dynamic conditional correlation (FDCC) model attempts to remedy this by grouping the stocks into clusters, each with its own set of parameters. However, it assumes the grouping is known a priori.
In this thesis we develop a systematic method to determine the number of groups to use as well as how to allocate the assets to groups. We show through simulation that the method does well in identifying the groups, and then apply it to real data to demonstrate its performance. We also develop and apply a Bayesian approach to the same problem.
Furthermore, we propose an instantaneous measure of correlation that can be used in many volatility models; in fact, we show that it outperforms the sample Pearson correlation coefficient for small sample sizes, opening the door to applications in fields other than finance.
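A minimal sketch of the grouping step (a generic stand-in, not the method developed in the thesis): cluster assets on the rows of their sample correlation matrix and choose the number of groups by silhouette score.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    returns = rng.standard_normal((500, 12))                 # 500 days, 12 assets
    returns[:, :6] += 0.8 * rng.standard_normal((500, 1))    # one correlated block

    # Each asset is represented by its row of correlations with all assets;
    # assets with similar correlation profiles land in the same group.
    corr = np.corrcoef(returns, rowvar=False)
    best = (None, -1.0, None)
    for k in range(2, 6):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(corr)
        score = silhouette_score(corr, labels)
        if score > best[1]:
            best = (k, score, labels)
    print("groups:", best[0], "labels:", best[2])

Each resulting group could then receive its own set of FDCC correlation parameters.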
133. Bayesian hierarchical normal intrinsic conditional autoregressive model for stream networks. Liu, Yingying. 01 December 2018.
Water quality and river/stream ecosystems are important for all living creatures. To protect human health, aquatic life, and the surrounding ecosystem, a considerable amount of time and money has been spent on sampling and monitoring streams and rivers. Water quality monitoring and analysis can help researchers predict and learn from natural processes in the environment and determine human impacts on an ecosystem. Measurements such as temperature, pH, nitrogen concentration, algae, and fish counts collected along the network are all important factors in water quality analysis. The main purposes of the statistical analysis in this thesis are (1) to assess the relationship between the variable measured in the water (the response variable) and other variables that describe either the locations on/along the stream network or certain characteristics at each location (the explanatory variables), and (2) to assess the degree of similarity between the response variable values measured at different locations of the stream, i.e., the spatial dependence structure. It is commonly accepted that measurements taken at two locations close to each other should be more similar than measurements taken at locations far apart. However, this is not always true for observations from stream networks. Observations from two sites that do not share water flow can be independent of each other even if they are very close in terms of stream distance, especially observations taken on objects that move passively with the water flow. To model stream network data correctly, it is important to quantify the strength of association between observations from sites that do not share water.
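To make the flow-connectedness point concrete, here is a sketch of a standard "tail-up" exponential covariance for stream networks (used purely for illustration; the thesis develops an intrinsic CAR model instead): covariance decays with stream distance only for flow-connected pairs, and is zero for flow-unconnected pairs however close they are.

    import numpy as np

    def tail_up_cov(d, connected, weight, sigma2=1.0, alpha=0.1):
        # d: stream distances; connected: True where two sites share flow;
        # weight: spatial weights derived from flow volume. Pairs that do
        # not share flow get covariance zero regardless of distance.
        return np.where(connected, weight * sigma2 * np.exp(-alpha * d), 0.0)

    # Hypothetical three-site example: sites 0 and 1 share flow; site 2 is
    # nearby in stream distance but on a branch that shares no water.
    d = np.array([[0., 2., 1.], [2., 0., 3.], [1., 3., 0.]])
    connected = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]], dtype=bool)
    weight = np.array([[1.0, 0.7, 0.0], [0.7, 1.0, 0.0], [0.0, 0.0, 1.0]])
    print(tail_up_cov(d, connected, weight))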
134. Spectral Analysis of Time-Series Associated with Control Systems. Smith, Karl Leland. 01 May 1965.
The progress of science is based to a large degree on experimentation. The scientist, engineer, or researcher is usually interested in the results of a single experiment only to the extent that he hopes to generalize the results to a class of similar experiments associated with an underlying phenomenon. The process by which this is done is called inductive inference and is always subject to uncertainty. The science of statistical inference can be used to make inductive inferences for which the degree of uncertainty can be measured in terms of probability. A second type of inference, called deductive inference, is conclusive: if the premises are true, deductive inference leads to true conclusions. Proving the theorems of mathematics is an example of deductive inference, while in the empirical sciences inductive inference is used to find new knowledge.
In engineering and physical science, analytical, i.e., deterministic, techniques have been developed to provide deductive descriptions of the real world. Sometimes the assumptions required to make deterministic techniques appropriate are too restrictive, since no provision is made for the stochastic elements, or uncertainty, involved in real-world situations. In these situations, the science of statistics provides a basis for generalizing the results of experiments associated with the phenomena of interest.
To make statistical inference sound, the experimenter must decide in advance which factors are to be controlled in the experiment. The factors which are unknown or which cannot be controlled directly must be controlled by the device of randomization. Uncontrolled factors express themselves as experimental error. Randomization is used to ensure that the experimental error satisfies the probability requirements specified in the statistical model for the experiment, thereby making it possible for the experimenter to generalize the results of his experiment using significance and confidence probability statements.
Much of statistics is devoted to situations for which experiments are conducted according to schemes of restricted randomization. Therefore, the experimental errors are independent and are assumed to have a common, yet unknown, probability distribution that can be characterized by estimating the mean and the variance.
However, there are certain other types of experimental situations for which it is desirable to observe a physical phenomenon with the observations ordered in time or space. The resulting observations can be called a time series. The experimental errors of a time series are likely to be correlated. Consequently, if an unknown probability distribution is to be characterized, covariances as well as the respective means and variances must be estimated.
A time series resulting from observation of a given physical phenomenon may exhibit dominant deterministic properties if the experiment can be well controlled, or it may exhibit dominant statistical properties if it is impossible or impractical to isolate and control various influencing factors. Generally, in a real-world situation an experiment will consist of both deterministic and statistical elements in some degree.
The procedures of analysis presented in Chapter III consider the statistical analysis of periodic and aperiodic digital (discrete) time series, in both the time and frequency domains, using Fourier analysis, covariance and correlation analysis, and the estimation of power and cross power spectral density functions.
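As a small illustration of these Chapter III quantities (on synthetic data, not the thesis's), the sketch below computes the sample autocovariance of a noisy sinusoid and a periodogram estimate of its power spectral density via the FFT.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 512
    t = np.arange(n)
    x = np.sin(2 * np.pi * 0.05 * t) + rng.standard_normal(n)  # signal + noise
    x = x - x.mean()

    # Sample autocovariance at lags 0..n-1.
    acov = np.correlate(x, x, mode="full")[n - 1:] / n

    # Periodogram: squared FFT magnitudes, a raw estimate of the power
    # spectral density at the Fourier frequencies.
    periodogram = np.abs(np.fft.rfft(x)) ** 2 / n
    freqs = np.fft.rfftfreq(n)
    print("peak frequency:", freqs[np.argmax(periodogram[1:]) + 1])  # near 0.05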
Time-ordered observations are important in the analysis of engineering systems. Certain characteristics of engineering systems are discussed in Chapter IV, and the input-output concept of control system engineering is introduced. The input-output technique is not limited to control system engineering problems, but may also be applicable in other areas of science.
A deterministic method of ascertaining the output performance of an engineering system consists of subjecting the system to a sinusoidal input function of time and then measuring the output as a function of time. If the engineering system is linear, well-developed techniques are available for analysis; but if the system is nonlinear, then more specialized analysis procedures must be developed for specific problems.
In a broad sense, the frequency-response approach consists of investigating the output of a linear system subjected to sinusoidal oscillations of the input. If the system is nonlinear, then the frequency-response approach must be modified; one such modification is the describing function technique. These techniques are also discussed in Chapter IV.
Under actual experimental conditions, the deterministic approach of subjecting a system to a sinusoidal input function for purposes of analysis is likely to be complicated by nonlinearities of the system and statistical characteristics of the data. The physical characteristics of the data will undoubtedly be obscured by random measuring errors introduced by transducers and recording devices, and by uncontrollable environmental and manufacturing influences. Consequently, generalized procedures for analyzing nonlinear systems in the presence of statistical variation are likely to be required to estimate the input-output characteristics if one is to work with inferential models applied to recorded data. Such procedures are presented in Chapter III and Chapter V.
In Chapter V, the empirical determination, from input-output rocket test data, of a deterministic and statistical model for predicting rocket nozzle control system requirements is complicated by the fact that the control system is nonlinear and the nozzle data are nonstationary, consisting of both systematic and random variation. The analysis techniques developed are general enough for the analysis of other types of nonlinear systems.
If the nonlinear effect of coulomb friction can be estimated and the responses adjusted accordingly, the nozzle system bears a close relationship to a linear second-order differential equation consisting of an acceleration times moment of inertia component, a gas-dynamic spring component, and a viscous friction component. In addition, vibration loading is present in the data; consequently, estimation of autocorrelation and power spectral density functions is used to isolate these vibrations.
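A sketch of such a second-order model with illustrative parameter values (the actual rocket data were classified, so nothing here reflects them): inertia J, viscous friction c, a gas-dynamic spring k, plus a coulomb friction torque opposing the direction of motion.

    import numpy as np
    from scipy.integrate import solve_ivp

    J, c, k, T_coulomb = 1.0, 0.5, 20.0, 0.3   # hypothetical values

    def nozzle(t, y):
        theta, omega = y                        # angle and angular velocity
        drive = np.sin(3.0 * t)                 # sinusoidal input torque
        friction = T_coulomb * np.sign(omega)   # coulomb (dry) friction
        domega = (drive - c * omega - k * theta - friction) / J
        return [omega, domega]

    sol = solve_ivp(nozzle, (0, 20), [0.0, 0.0], max_step=0.01)
    print(sol.y[0][-5:])                        # nozzle angle near the end

With the coulomb term estimated and removed from the responses, the remaining system is linear and amenable to the frequency-response analysis described above.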
Analysis of the control system data is also considered in terms of autocorrelation functions and power spectral density functions. Random input functions, rather than sinusoidal input functions, may be required under more general experimental conditions.
Chapter VI numerically illustrates the analysis procedures. The actual rocket test data used in developing the analysis were classified; consequently, only fictitious data are used in this paper to illustrate the procedures.
Chapter VII is concerned with illustrating the procedures of Chapter III utilizing various time series data. The last part of Chapter VII is concerned with estimation of the power spectral function using techniques of multiple regression, i.e., the model of the General Linear Hypothesis. A definite limitation is the model assumption concerning the residual error. This assumption can probably be made more tenable by a suitable transformation of either the original time series data or the autocovariances. In any event, the spectral function developed by assuming the model for the General Linear Hypothesis gives the same spectral function as defined in Chapter III. However, such quantities as the variance of the spectral function and tests of hypotheses can now be estimated, provided the assumptions concerning residual error are valid.
Chapter VIII summarizes the results of previous chapters.
135. A Test for Determining an Appropriate Model for Accelerated Life Data. Chen, Yuan-Who. 01 May 1987.
The purpose of this thesis was to evaluate a method for testing the appropriateness of an accelerated life model. The method is based upon a polynomial approximation: the polynomial's parameters are estimated and then used to test the appropriateness of the model.
An example illustrates the polynomial method, and the method is applied to real data. A comparison with another method demonstrates that the polynomial method is much simpler and has comparable accuracy.
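A minimal sketch of the general idea (not the thesis's exact test): fit log lifetime as a polynomial in stress, and use the significance of the higher-order coefficient to judge whether a simple log-linear accelerated life model is adequate. The stress levels and lifetimes below are hypothetical.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    stress = np.repeat([40.0, 60.0, 80.0, 100.0], 10)   # hypothetical levels
    log_life = 10.0 - 0.05 * stress + rng.normal(0, 0.3, stress.size)

    # Quadratic polynomial approximation; if the quadratic coefficient is
    # not significantly different from zero, the log-linear accelerated
    # life model is not contradicted by the data.
    X = sm.add_constant(np.column_stack([stress, stress ** 2]))
    fit = sm.OLS(log_life, X).fit()
    print("p-value for quadratic term:", fit.pvalues[2])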
136. The Power Law Distribution of Agricultural Land Size. Chamberlain, Lauren. 01 December 2018.
This paper demonstrates that the distribution of county-level agricultural land size in the United States is best described by a power-law distribution, a distribution that displays extremely heavy tails. This indicates that the majority of farmland is concentrated in the upper tail: our analysis shows that the top 5% of agricultural counties accounted for about 25% of agricultural land from 1997 to 2012. The power-law distribution of farm size has important implications for the design of more efficient regional and national agricultural policies, as counties close to the mean account for little of the cumulative distribution of total agricultural land. This has consequences for more efficient management and government oversight, since a disruption in one of the counties containing a large amount of farmland (due to a natural disaster, for instance) could have nationwide consequences for agricultural production and prices. In particular, policy makers and government agencies can monitor about 25% of total agricultural land by overseeing just 5% of counties.
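For illustration, here is a sketch of a power-law tail fit on synthetic data standing in for county farmland acreage (the exponent, sample size, and threshold are all hypothetical): the Hill/maximum-likelihood estimate of the power-law exponent, plus the share of land held by the top 5% of counties.

    import numpy as np

    rng = np.random.default_rng(3)
    land = (rng.pareto(1.5, 3000) + 1) * 10_000        # hypothetical acres

    # Hill/MLE estimate of the power-law exponent above a threshold x_min.
    x_min = land.min()
    alpha = 1 + land.size / np.log(land / x_min).sum()

    # Concentration in the upper tail: share held by the top 5% of counties.
    land_sorted = np.sort(land)[::-1]
    top5_share = land_sorted[: int(0.05 * land.size)].sum() / land.sum()
    print(f"alpha = {alpha:.2f}, top-5% share = {top5_share:.0%}")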
137. The Use of Contingency Table Analysis as a Robust Technique for Analysis of Variance. Chiu, Mei-Eing. 01 May 1982.
The purpose of this paper is to compare Analysis of Variance with Contingency Table Analysis when the data being analyzed do not satisfy the Analysis of Variance assumptions. The criteria for comparison are the powers of the standard variance-ratio (F) test and the chi-square test.
The test statistics and powers were obtained by Monte Carlo simulation:
1. Test statistics were calculated for each of 100 trials; this process was repeated 12 times, each time with a different combination of means and variances.
2. Powers were obtained for each of the 12 combinations of means and variances.
Whether Analysis of Variance or Contingency Table Analysis is the better alternative depends on whether the interest lies in the equality of population means or in differences among population variances.
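A minimal sketch of this kind of Monte Carlo power comparison (illustrative settings, not the twelve configurations used in the paper): the F test is applied to raw data with unequal group variances, and the chi-square test to the same data binned into a contingency table.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    means, sds = [0.0, 0.5, 1.0], [1.0, 2.0, 4.0]   # hypothetical settings
    n, trials, rej_f, rej_chi = 30, 1000, 0, 0
    for _ in range(trials):
        groups = [rng.normal(m, s, n) for m, s in zip(means, sds)]
        # Standard variance-ratio (F) test on the raw observations.
        if stats.f_oneway(*groups).pvalue < 0.05:
            rej_f += 1
        # Chi-square test on the same data cut into three bins.
        edges = np.quantile(np.concatenate(groups), [0, 1/3, 2/3, 1])
        table = [np.histogram(g, bins=edges)[0] for g in groups]
        if stats.chi2_contingency(table)[1] < 0.05:
            rej_chi += 1
    print(f"F power: {rej_f / trials:.2f}, chi-square power: {rej_chi / trials:.2f}")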
138. Structural change detection via penalized regression. Wang, Bo. 01 August 2018.
This dissertation research addresses how to detect structural changes in stochastic linear models. By introducing a special structure to the design matrix, we convert the structural change detection problem into a variable selection problem. Many variable selection strategies exist; however, they do not fully cope with structural change detection. We design two penalized regression algorithms specifically for the structural change detection purpose. We also propose two methods involving these two algorithms to accomplish bi-level structural change detection: they locate the change points and also recognize which predictors contribute to the variation of the model structure. Extensive simulation studies demonstrate the effectiveness of the proposed methods in a variety of settings. Furthermore, we establish asymptotic theoretical properties to justify the bi-level detection consistency for one of the proposed methods. In addition, we provide an R package with computationally efficient algorithms for detecting structural changes. Compared to traditional methods, the proposed algorithms showcase enhanced detection power and greater estimation precision, with the added capacity of specifying the model structures at all regimes.
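A generic sketch of the design-matrix idea (not the dissertation's algorithms, and in Python rather than the accompanying R package): write the coefficient at time t as a sum of per-period increments, so that an expanded lower-triangular design turns change-point detection into variable selection; a lasso then picks the times with nonzero increments.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(5)
    n = 200
    x = rng.standard_normal(n)
    beta = np.where(np.arange(n) < 120, 1.0, 3.0)     # one change at t = 120
    y = beta * x + 0.3 * rng.standard_normal(n)

    # Column j carries x_t for all t >= j, so its coefficient is the jump
    # in beta at time j; sparsity in the jumps means few change points.
    X = np.tril(np.ones((n, n))) * x[:, None]
    fit = Lasso(alpha=0.05, fit_intercept=False, max_iter=50_000).fit(X, y)
    print("estimated change points:", np.nonzero(np.abs(fit.coef_[1:]) > 0.1)[0] + 1)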
139. Exact Analysis of Variance with Unequal Variances. Yanagi, Noriaki. 01 May 1980.
The purpose of this paper was to present an exact analysis of variance with unequal variances. Bishop presented a new procedure for the r-way layout ANOVA. In this paper, one- and two-way layout ANOVA are explained, and Bishop's method and the standard method are compared using a Monte Carlo method.
140. Parameter Estimation in Nonstationary M/M/S Queueing Models. Vajanaphanich, Pensri. 01 May 1982.
If either the arrival rate or the service rate in an M/M/S queue exhibits variability over time, then no steady-state solution is available for examining the system behavior. The arrival and service rates can instead be represented through Fourier series approximations, which permits numerical approximation of the system characteristics over time.
An example is presented in which an M/M/S model represents the operations of emergency treatment at Logan Regional Hospital. It requires numerical integration of the differential equation for L(t), the expected number of customers in the system at time t.
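A sketch of that numerical approach with hypothetical rates (nothing here reflects the Logan Regional data): a one-term Fourier approximation of the arrival rate, and integration of the Kolmogorov forward equations on a truncated state space to obtain L(t).

    import numpy as np
    from scipy.integrate import solve_ivp

    s, mu, N = 3, 2.0, 60                       # servers, service rate, state cap
    lam = lambda t: 4.0 + 2.0 * np.sin(2 * np.pi * t / 24)   # Fourier-type rate

    def forward(t, p):
        # Kolmogorov forward equations for the truncated M/M/s chain:
        # arrivals at rate lam(t), departures at rate mu * min(k, s).
        dp = np.zeros_like(p)
        for k in range(N + 1):
            out = lam(t) * (k < N) + mu * min(k, s)
            dp[k] -= out * p[k]
            if k > 0:
                dp[k] += lam(t) * p[k - 1]
            if k < N:
                dp[k] += mu * min(k + 1, s) * p[k + 1]
        return dp

    p0 = np.zeros(N + 1)
    p0[0] = 1.0                                 # start with an empty system
    sol = solve_ivp(forward, (0, 24), p0, dense_output=True)
    L = lambda t: sol.sol(t) @ np.arange(N + 1) # expected number in system
    print("L(12) =", L(12.0))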