281. An empirical study of a financial signalling model. Campbell, Alyce (January 1987).
Brennan and Kraus (1982, 1986) developed a costless signalling model that can explain why managers issue hybrid securities: convertibles (CBs) or bond-warrant packages (BWs). The model predicts that when the true standard deviation (σ) of the distribution of future firm value is unknown to the market, the firm's managers will issue a hybrid with specific characteristics such that the security's full-information value is at a minimum at the firm's true σ. In this fully revealing equilibrium, the market price equals this minimum value.
In this study, the mathematical properties of the hypothesized bond-valuation model were first examined to see whether specific functions could have a minimum somewhere other than σ = 0 or σ = ∞, as signalling requires. The Black-Scholes-Merton model was the valuation model chosen because of its ease of use, supporting empirical evidence, and compatibility with the Brennan-Kraus model. Three variations, developed from Ingersoll (1977a); Geske (1977, 1979) and Geske and Johnson (1984); and Brennan and Schwartz (1977, 1978), were examined. For all hybrids except senior CBs, pricing functions with an interior minimum can be found for plausible input parameters. However, functions with an interior maximum are also plausible, and a function with a maximum cannot be used for signalling.
Second, bond pricing functions for 105 hybrids were studied. The two main hypotheses were: (1) most hybrids have functions with an interior minimum; (2) market price equals minimum theoretical value. The results do not support the signalling model, although the evidence is ambiguous. Over the σ range 0.05-0.70, for CBs (BWs): 15 (8) Brennan-Schwartz functions were everywhere positively sloped, 11 (2) had an interior minimum, 22 (0) were everywhere negatively sloped, and 35 (12) had an interior maximum. Market prices did lie closer to minima than maxima of the Brennan-Schwartz solutions, but the results suggest that the solution as implemented overpriced the CBs; BWs were unambiguously overpriced. With consistent overpricing, market prices would naturally lie closer to minima. Average variation in theoretical values was, however, only about 5 percent for CBs and about 10 percent for BWs. This, coupled with the shape data, suggests that firms were choosing securities whose theoretical values are relatively insensitive to σ rather than choosing securities to signal σ unambiguously.
Sauder School of Business / Graduate
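The shape analysis lends itself to a toy illustration. In a Merton-style decomposition (a rough stand-in for the Brennan-Schwartz solutions actually used, with made-up inputs), the senior-bond component loses value as σ rises while the conversion option gains value, so the hybrid's theoretical value can have an interior extremum in σ. A minimal sketch, assuming illustrative firm value, face value, and conversion fraction:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call value."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def cb_value(sigma, V=100.0, F=60.0, gamma=0.35, T=5.0, r=0.05):
    """Toy convertible value: Merton-style senior bond plus a call on the
    conversion share of firm value, struck at the redemption amount.
    All inputs are illustrative assumptions, not estimates from the study."""
    straight_bond = V - bs_call(V, F, T, r, sigma)           # falls as sigma rises
    conversion_option = bs_call(gamma * V, F, T, r, sigma)   # rises with sigma
    return straight_bond + conversion_option

sigmas = np.linspace(0.05, 0.70, 66)
values = np.array([cb_value(s) for s in sigmas])
i, j = values.argmin(), values.argmax()
shape = ("interior minimum" if 0 < i < len(sigmas) - 1 else
         "interior maximum" if 0 < j < len(sigmas) - 1 else "monotone")
print(f"{shape}; min at sigma={sigmas[i]:.2f}, max at sigma={sigmas[j]:.2f}")
```

Whether the scan finds an interior minimum or an interior maximum depends on the inputs, mirroring the mixed shapes reported above.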
282. Statistical analysis of discrete time series with application to the analysis of workers' compensation claims data. Freeland, R. Keith.
This thesis examines the statistical properties of the Poisson AR(1) model of Al-Osh and Alzaid (1987) and McKenzie (1988). The analysis includes forecasting, estimation, testing for independence and specification, and the addition of regressors to the model.

The Poisson AR(1) model is an infinite-server queue, and as such is well suited to modeling short-term disability claimants who are waiting to recover from an injury or illness. One goal of the thesis is to develop statistical methods for analyzing series of monthly counts of claimants collecting short-term disability benefits from the Workers' Compensation Board (WCB) of British Columbia.

We consider four types of forecasts: the k-step-ahead conditional mean, median, mode, and distribution. For low-count series the k-step-ahead conditional distribution is practical and much more informative than the other forecasts.

We consider three estimation methods: conditional least squares (CLS), generalized least squares (GLS), and maximum likelihood (ML). In the CLS case we find an analytic expression for the information, and in the GLS case we find an approximation to it. We find neat expressions for the score function and the observed Fisher information matrix; the score expressions lead to new definitions of residuals.

Special care is taken in testing for independence, since the test is on the boundary of the parameter space. The score test is asymptotically equivalent to testing whether the CLS estimate of the correlation coefficient is zero. We further define Wald and likelihood ratio tests.

We then use the general specification test of McCabe and Leybourne (1996) to test whether the model is sufficient to explain the variation found in the data.

Next we add regressors to the model and update our earlier forecasting, estimation, and testing results. We also show the model is identifiable.

We conclude with a detailed application to monthly WCB claims counts. The preliminary analysis includes plots of the series, the autocorrelation function, and the partial autocorrelation function. Model selection is based on the preliminary analysis, t-tests for the parameters, the general specification test, and residuals. We also include forecasts for the first six months of 1995.
Sauder School of Business / Graduate
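The model these results concern can be sketched in a few lines (an illustration, not the thesis's code). The Poisson AR(1) is X_t = α ∘ X_{t-1} + ε_t, where α ∘ X denotes binomial thinning (each of the X_{t-1} claimants independently remains with probability α) and ε_t ~ Poisson(λ) counts new claimants. Iterating the recursion gives the k-step-ahead conditional distribution as a Binomial(X_t, α^k) survivor count plus an independent Poisson(λ(1 - α^k)/(1 - α)) arrival count, and CLS estimation reduces to ordinary least squares of X_t on X_{t-1}:

```python
import numpy as np
from scipy.stats import binom, poisson

rng = np.random.default_rng(42)

def simulate_inar1(n, alpha, lam):
    """Simulate the Poisson AR(1): binomial thinning + Poisson innovations."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))   # stationary marginal is Poisson
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def cls_estimates(x):
    """Conditional least squares: OLS of X_t on X_{t-1}."""
    y, z = x[1:], x[:-1]
    alpha_hat = np.cov(z, y)[0, 1] / np.var(z, ddof=1)
    lam_hat = y.mean() - alpha_hat * z.mean()
    return alpha_hat, lam_hat

def k_step_pmf(x_t, k, alpha, lam, support=50):
    """Exact k-step-ahead conditional distribution: convolution of a
    Binomial(x_t, alpha^k) pmf with a Poisson(lam*(1-alpha^k)/(1-alpha)) pmf."""
    a_k = alpha ** k
    survivors = binom.pmf(np.arange(x_t + 1), x_t, a_k)
    arrivals = poisson.pmf(np.arange(support), lam * (1 - a_k) / (1 - alpha))
    return np.convolve(survivors, arrivals)[:support]

x = simulate_inar1(200, alpha=0.5, lam=2.0)
a, l = cls_estimates(x)
pmf = k_step_pmf(x[-1], k=6, alpha=a, lam=l)
print(f"CLS: alpha={a:.2f}, lambda={l:.2f}; 6-step conditional mode={pmf.argmax()}")
```

For a low count such as X_t = 3, printing the full pmf shows why the conditional distribution is more informative than a single point forecast.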
283. Neural Mechanisms of Aversive Prediction Errors. Walker, Rachel Ann (January 2020).
Thesis advisor: Michael McDannald
Uncertainty is a pervasive facet of life, and responding appropriately and proportionally to uncertain threats is critical for adaptive behavior. Aversive prediction errors are signals that allow for appropriate fear responses, especially in the face of uncertainty, and provide a critical updating mechanism for adapting to change. Positive prediction errors (+PEs) are generated when the actual outcome of an event is worse than the predicted outcome, and they increase fear on future encounters with the related predictive cue. Negative prediction errors (-PEs) are generated when the predicted outcome is worse than the actual outcome, and they decrease fear on future encounters with the cue. While some regions have been proposed as the neural sources of positive and negative prediction errors, no causal evidence has identified where these signals are generated. The objective of this dissertation was to causally identify the neural basis of aversive prediction error signaling. Using precise neural manipulations paired with a robust behavioral fear discrimination task, I present causal evidence for ventrolateral periaqueductal grey (vlPAG) generation of +PEs and for a vlPAG to medial central amygdala (CeM) pathway that carries out +PE fear updating. Further, I demonstrate that while dorsal raphe serotonergic neurons are not the source of -PE generation, they appear to receive and use this signal. Understanding the neural network responsible for aversive prediction error signaling will not only inform the neurological basis of fear but may also provide insight into disorders, such as PTSD and anxiety disorders, that are characterized by excessive or inappropriate fear responses.
Thesis (PhD), Boston College, 2020. Graduate School of Arts and Sciences. Discipline: Psychology.
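The sign conventions above can be made concrete with a delta-rule (Rescorla-Wagner-style) update. This is purely an illustration of the formalism, not the dissertation's model; the learning rate and outcome coding are assumptions:

```python
def update_fear(v, outcome, lr=0.2):
    """One delta-rule step: pe = outcome - prediction.
    A +PE (outcome worse, i.e. more aversive, than predicted) raises the
    fear estimate for the cue; a -PE lowers it."""
    pe = outcome - v
    return v + lr * pe, pe

v = 0.0                          # initial fear prediction for the cue
for shock in [1, 1, 1, 0, 0]:    # cue paired with shock, then omission
    v, pe = update_fear(v, shock)
    print(f"PE = {pe:+.2f} -> fear = {v:.2f}")
```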
284. The correlation between scores in selected eighth grade tests and achievement in ninth grade algebra. TeSelle, Wilbur Arie (01 January 1960).
This study was concerned with algebra readiness. The policy of the Stockton Unified School District, where the study was made, was to schedule students to take elementary algebra in grade nine if their test scores indicated a reasonable expectation of success. The problem was to determine which of the group test scores available at the end of grade eight would best predict success in first-year algebra.
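The analysis the title implies, ranking candidate predictors by their correlation with later algebra achievement, amounts to computing Pearson product-moment coefficients. A minimal sketch with invented placeholder scores (the study's actual tests and data are not reproduced here):

```python
import numpy as np

# Hypothetical eighth-grade test scores and ninth-grade algebra marks.
eighth_grade_tests = {
    "arithmetic_reasoning": [72, 85, 60, 90, 55, 78, 66, 81],
    "reading_comprehension": [80, 75, 65, 88, 70, 72, 69, 77],
}
algebra_marks = np.array([70, 88, 58, 92, 50, 75, 62, 79])

# Rank predictors by Pearson product-moment correlation with algebra marks.
for test, scores in eighth_grade_tests.items():
    r = np.corrcoef(scores, algebra_marks)[0, 1]
    print(f"{test}: r = {r:.2f}")
```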
285. Predicting fleet-vehicle energy consumption with trip segmentation. Umanetz, Autumn (26 April 2021).
This study proposes a data-driven model for predicting the energy consumption of fleet vehicles on various missions, characterizing each mission as a linear combination of a small set of exemplar travel segments.
The model was constructed with reference to a heterogeneous study group of 29 light municipal fleet vehicles, each performing a single mission and each equipped with a commercial OBD2/GPS logger. The logger data were cleaned and segmented into 3-minute periods, each with 10 derived kinetic features and a power feature. These segments were used to define three essential model components, as follows:
- The segments were clustered into six exemplar travel types (called "eigentrips" for brevity).
- Each vehicle was defined by a vector of its average power in each eigentrip.
- Each mission was defined by a vector of annual seconds spent in each eigentrip.
10% of the eigentrip-labelled segments were selected as a training corpus (representing historical observations), with the remainder held back for testing (representing future operations to be predicted). A Light Gradient Boosting Machine (LGBM) classifier was trained to predict the eigentrip labels with sole reference to the kinetic features, i.e., excluding the power observation. The classifier was applied to the held-back test data and the vehicle's characteristic power values were then applied, resulting in an energy-consumption prediction for each test segment.
The predictions were then summed for each whole-study mission profile and compared to the logger-derived estimate of actual energy consumption, exhibiting a mean absolute error of 9.4%. To show the technique's predictive value, this was compared to prediction with published L/100 km figures, which had an error of 22%. To show the level of avoidable error, it was compared with an LGBM direct-regression model (distinct from the LGBM classifier), which reduced prediction error to 3.7%.
Graduate
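A compact sketch of that pipeline follows. The data are synthetic stand-ins, so the printed numbers are meaningless; the sklearn/lightgbm usage and the pooling of power across vehicles are simplifying assumptions (the study kept per-vehicle power vectors):

```python
import numpy as np
from sklearn.cluster import KMeans
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for logger-derived segments: 10 kinetic features
# and one observed power value per 3-minute segment (assumed shapes).
n_seg = 5000
kinetic = rng.normal(size=(n_seg, 10))
power_kw = rng.gamma(2.0, 5.0, size=n_seg)
seconds = np.full(n_seg, 180.0)                 # 3-minute segments

# 1. Cluster segments into six exemplar travel types ("eigentrips").
eigentrip = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(kinetic)

# 2. Characteristic power per eigentrip (pooled here; per-vehicle in the study).
char_power = np.array([power_kw[eigentrip == c].mean() for c in range(6)])

# 3. Train a classifier to recover eigentrip labels from kinetics alone,
#    using a 10% training split as in the study.
train = rng.random(n_seg) < 0.10
clf = LGBMClassifier(n_estimators=200).fit(kinetic[train], eigentrip[train])

# 4. Predict energy for held-back segments: label each segment, then apply
#    the characteristic power for that label over the segment duration.
test = ~train
pred_kwh = (char_power[clf.predict(kinetic[test])] * seconds[test] / 3600).sum()
true_kwh = (power_kw[test] * seconds[test] / 3600).sum()
print(f"predicted {pred_kwh:.0f} kWh vs logger-derived {true_kwh:.0f} kWh")
```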
286. Optimal Control of a District Cooling System with Thermal Energy Storage using Neural Networks. Cox, Sam J (10 August 2018).
Thermal energy storage can offer significant cost savings under time-varying pricing. This study examines the effectiveness of using neural networks to model a district cooling system with ice storage for optimal control. Neural networks offer fast performance estimation of a district cooling system with external inputs. A physics-based model of the district cooling system is first developed to act as a virtual plant that communicates system states to the controller in real time. Next, the neural network modeling the plant is developed and trained. Control decisions over this model are optimized with a genetic algorithm because the controls are on/off. Finally, a thermal-load prediction algorithm is integrated so the controller can be tested under weather forecasts. A case study shows that the optimal control scheme can adapt to varying loads and varying prices, reducing operating costs of the district cooling network by 16% under time-of-use pricing and 13% under real-time pricing.
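The control loop described here, a neural-network surrogate of the plant searched by a genetic algorithm over discrete on/off decisions, can be sketched as below. The toy "virtual plant", the tariff, and the GA settings are all assumptions for illustration, not the paper's models:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
price = np.where(np.arange(24) < 7, 0.06, 0.18)    # assumed $/kWh TOU tariff

def plant_cost(schedule):
    """Toy stand-in for the physics-based virtual plant: charging ice at
    hour t draws energy at that hour's price but offsets expensive
    peak-hour chiller load (numbers are illustrative only)."""
    charge_kwh = schedule * 300.0                  # energy drawn while charging
    relief = np.clip(schedule.sum() * 150.0, 0, 1000.0)
    peak_load = 1000.0 - relief                    # residual peak-hour load
    return (charge_kwh * price).sum() + peak_load * price.max()

# Train the neural-network plant surrogate on random on/off schedules.
X = rng.integers(0, 2, size=(2000, 24)).astype(float)
y = np.array([plant_cost(s) for s in X])
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500,
                  random_state=0).fit(X, y)

# Genetic algorithm over binary schedules (needed because controls are on/off).
pop = rng.integers(0, 2, size=(60, 24))
for gen in range(100):
    cost = nn.predict(pop.astype(float))
    elite = pop[np.argsort(cost)[:20]]             # keep the cheapest schedules
    parents = elite[rng.integers(0, 20, size=(40, 2))]
    cut = rng.integers(1, 24, size=40)[:, None]    # single-point crossover
    kids = np.where(np.arange(24) < cut, parents[:, 0], parents[:, 1])
    kids[rng.random(kids.shape) < 0.05] ^= 1       # bitwise mutation
    pop = np.vstack([elite, kids])

best = pop[np.argmin(nn.predict(pop.astype(float)))]
print("charge during hours:", np.flatnonzero(best))
```

With this toy tariff the search should settle on charging during the cheap overnight hours, which is the qualitative behavior the paper's controller exploits.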
287. A study of the inter-relationships among pre-schooling, mental test scores, school marks and practical success. Lynch, Elizabeth Anne (01 January 1931).
No description available.
288. A study of the efficacy of the group Rorschach test in predicting scholastic achievement. Brownell, Marjorie H. (01 January 1947).
No description available.
289. Prediction of grades in intermediate algebra. Johnston, Leroy M. (01 January 1942).
No description available.
290. Towards Structured Prediction in Bioinformatics with Deep Learning. Li, Yu (01 November 2020).
Using machine learning, especially deep learning, to facilitate biological research is a fascinating research direction. However, in addition to the standard classification or regression problems, whose outputs are simple vectors or scalars, in bioinformatics we often need to predict more complex structured targets, such as 2D images and 3D molecular structures. These complex prediction tasks are referred to as structured prediction. Structured prediction is more complicated than traditional classification but has much broader applications, especially in bioinformatics, considering that most of the original bioinformatics problems have complex output objects.

Due to the properties of those structured prediction problems, such as having problem-specific constraints and dependency within the labeling space, the straightforward application of existing deep learning models to these problems can lead to unsatisfactory results. In this dissertation, we argue that the following two ideas can help resolve a wide range of structured prediction problems in bioinformatics. Firstly, we can combine deep learning with other classic algorithms, such as probabilistic graphical models, which model the problem structure explicitly. Secondly, we can design and train problem-specific deep learning architectures or methods by considering the structured labeling space and problem constraints, either explicitly or implicitly. We demonstrate our ideas with six projects from four bioinformatics subfields: sequencing analysis, structure prediction, function annotation, and network analysis. The structured outputs cover 1D electrical signals, 2D images, 3D structures, hierarchical labeling, and heterogeneous networks. With the help of the above ideas, all of our methods achieve state-of-the-art performance on the corresponding problems.

The success of these projects motivates us to extend our work towards other more challenging but important problems, such as health-care problems, which can directly benefit people's health and wellness. We thus conclude this thesis by discussing such future works, and the potential challenges and opportunities.
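The first idea, combining deep networks with classic structured algorithms, can be illustrated with linear-chain Viterbi decoding over a network's per-position scores, so that hard label dependencies are enforced exactly rather than left to the network. A minimal sketch (illustrative only; none of the six projects is reproduced here):

```python
import numpy as np

def viterbi(emissions, transition):
    """Decode the best label path given per-position log-scores from a
    neural net (emissions: T x L) and an L x L transition score matrix
    that encodes dependencies in the labeling space."""
    T, L = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transition + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)        # best predecessor for each label
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack from the last position
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: 3 labels where label 0 may not follow label 2, a hard
# problem-specific constraint encoded as a -inf transition score.
trans = np.zeros((3, 3))
trans[2, 0] = -np.inf
emis = np.log(np.array([[0.6, 0.3, 0.1],
                        [0.1, 0.2, 0.7],
                        [0.5, 0.4, 0.1]]))   # e.g. per-position softmax outputs
print(viterbi(emis, trans))                  # [0, 2, 1]: greedy [0, 2, 0] is illegal
```

Greedy per-position decoding would pick label 0 at the last step and violate the constraint; the structured decoder trades it for the best legal path.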