491. Control of spray cooling in the continuous casting of steel. Baptista, Luis Antonio de Souza, January 1979.
The basic principles of spray control during casting speed changes in the continuous casting of steel have been studied. The normal spray control practice, in which water flow in the sprays is made proportional to the casting speed, has been found to be completely inadequate from the standpoint of controlling the surface temperature of the strand. A computer model based upon the principle of controlling the sprays as a function of the age of the metal passing through the machine has been developed. The model predicts both the average local residence time and the water flux in a spray zone for any casting speed change. An accompanying heat flow model has also been developed for characterizing cooling rates and simulating the effects of spray control practices on the surface temperature of the strand.
The control model has been used for the development of a new spray control practice in an industrial slab caster.
Tests have been performed using both the new practice and the normal spray control practice. Although complete verification has not been possible at this time, the model appears to predict realistically the thermal requirements of the strand during casting speed changes.
The necessity for automatic control of the sprays became evident during the development of the present work. The principles of automatic control of the sprays have been studied and the control model adapted for this purpose. It has been shown that true automatic control of spray cooling can be attained using the mathematical model for spray control. / Faculty of Applied Science / Department of Materials Engineering
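The age-based principle can be caricatured in a few lines: compute the average residence time of the metal now inside a spray zone from the casting speed history, and set the water flux from the effective speed that age implies rather than from the instantaneous speed. The zone bounds, the linear flux law, and all constants below are invented for illustration and are not the thesis's values.

```python
REF_SPEED = 1.0    # m/min, speed at which BASE_FLUX is calibrated (illustrative)
BASE_FLUX = 100.0  # L/(m2 min), nominal spray water flux at REF_SPEED (illustrative)

def zone_residence_time(speed_history, dt, z_start, z_end):
    """Average time the metal currently inside [z_start, z_end] (metres below
    the meniscus) has spent in the machine, given the casting speed history."""
    n = len(speed_history)
    ages_in_zone = []
    for j in range(n):                       # slice of steel cast at step j
        z = sum(speed_history[j:]) * dt      # distance travelled since casting
        if z_start <= z < z_end:
            ages_in_zone.append((n - j) * dt)
    if not ages_in_zone:
        return None
    return sum(ages_in_zone) / len(ages_in_zone)

def age_based_flux(residence_time, z_mid):
    """Water flux set from the effective speed implied by the metal's age,
    not from the instantaneous casting speed."""
    v_eff = z_mid / residence_time
    return BASE_FLUX * v_eff / REF_SPEED
```

After a sudden speed drop, the residence time of the metal in a zone rises only gradually, so the age-based flux decays gradually instead of stepping down with the speed.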
492. Improved forest harvest planning: integration of transportation analysis with a management unit cut scheduling model. Yamada, Michael M., January 1980.
Forest harvest planning involves determining, in time and place, the flow of timber to be generated from the forest resource. Existing planning models have addressed the temporal aspects of timber supply; however, the spatial aspects, particularly at the management unit level, have largely been ignored.
This study presents an analytical framework for examining the transportation system of a management unit, its interrelationship with the timber base, and the impacts on strategic harvest planning. The transportation system is evaluated through network analysis techniques. Routing strategies from the stand to the mill are examined. The costs of primary access development and log transport are integrated with the forest inventory, providing a more complete assessment of timber value. Homogeneous stand aggregations and associated yield projections, pertinent to management unit planning, are formed using factor and cluster analysis. Dynamic programming allows optimal allocations of the stand groupings across stratifications which recognize transport and accessibility costs. The resulting timber classes are coupled with management prescriptions and evaluated through a cut scheduling model. Report generation capabilities then allow interpretation of the harvest scheduling results in terms of not only the timber classes, but in the spatial context of the individual stands. The methodology is applied to a British Columbia Public Sustained Yield Unit. The usefulness of the system is demonstrated through analyses which:
1) identify road development and transport costs,
2) evaluate alternative wood flow patterns,
3) identify the volume flow potential of the unit,
4) identify the dollar flow potential of the unit, and
5) illustrate the contribution of integrating the transportation system in the scheduling of harvests. / Faculty of Forestry / Graduate
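The stand-to-mill routing component can be sketched with a standard shortest-path computation over a hypothetical road network (node names, haul costs, and the stumpage figures below are invented; the thesis's network analysis is richer than this):

```python
import heapq

def cheapest_haul(graph, source, mill):
    """Dijkstra's algorithm over a road network; edge weights are $/m3 haul costs."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == mill:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return float("inf")

# hypothetical network: stand -> spur road -> mainline -> mill
roads = {
    "stand_A": [("spur_1", 4.0)],
    "spur_1": [("mainline", 2.5), ("mill", 9.0)],
    "mainline": [("mill", 1.5)],
}
haul = cheapest_haul(roads, "stand_A", "mill")   # routes via the mainline
net_value = 55.0 - haul - 3.0  # stumpage less haul and amortized access development
```

Integrating the haul and access costs into stand value, as here, is what lets a cut scheduling model see the transportation system rather than gross timber value alone.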
493. A model for optimal infrastructure investment in boom towns. Poklitar, Joanne Carol, January 1980.
A linear model to determine the optimal policy for investment in social infrastructure is formulated and its solution is obtained using the Maximum Principle. The unique solution is characterized by a bang-bang control with only one interval of investment in social capital, and the endpoints of this interval can be determined numerically, given values for the parameters of the model. A generalization of the model which allows instantaneous jumps in the level of social capital is also analyzed, and the solution to the modified problem is shown to be a uniquely determined impulse control. The final extension of the model allows us to determine an upper bound for the optimal time horizon. / Faculty of Science / Department of Mathematics / Graduate
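The single-interval bang-bang structure can be illustrated on a deliberately simple stand-in problem (not the thesis's model): maximize the integral of a*x - c*u subject to x' = u - delta*x with u in {0, u_max}. The Hamiltonian is linear in u, so the optimal control switches on the sign of lambda(t) - c, where the adjoint lambda(t) = (a/delta)(1 - exp(-delta*(T - t))) declines monotonically to zero; investment therefore fills a single initial interval [0, t*]. All parameter values are illustrative.

```python
import math

a, c, delta, T, u_max = 1.0, 2.0, 0.1, 30.0, 1.0   # illustrative parameters

def adjoint(t):
    # costate: marginal value of one more unit of social capital at time t,
    # solving lambda' = delta*lambda - a with lambda(T) = 0
    return (a / delta) * (1.0 - math.exp(-delta * (T - t)))

def control(t):
    # Pontryagin: the Hamiltonian is linear in u, so invest at the full rate
    # exactly when the adjoint exceeds the unit investment cost
    return u_max if adjoint(t) > c else 0.0

# analytic switch time t*, where adjoint(t*) = c
t_star = T + math.log(1.0 - c * delta / a) / delta
```

With these values the adjoint starts near 9.5, crosses c = 2 at t* of roughly 27.77, and reaches zero at T, so investment occupies the single interval [0, t*], matching the one-interval structure reported in the abstract.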
494. Some mathematical programming models in the design and manufacture of plywood. Raghavendra, Bangalore Gururajachar, January 1982.
One source of wood loss in the manufacture of plywood is the excess thickness built into panels by the choice of veneer thicknesses and plywood designs used in assembly. The thicknesses and designs currently in use appear to have come largely from tradition, and there is no evidence in the literature of what constitutes the most economical veneer thicknesses and plywood designs for a mill. The problem of determining them is complex, since each mill assembles many types of plywood as integral multiple combinations of a few veneers satisfying the 'balanced design' and other structural specifications. The consumption of logs depends on the excess thickness in plywood, and the economics of the mill further depend on how efficiently a given set of veneers and designs is used to satisfy the orderfile requirements. In this dissertation, these aspects of the Plywood Design and Manufacturing (PDM) problem are addressed using a mathematical programming approach.
The problem of finding the optimal veneer thicknesses, associated plywood designs and product mix is formulated as a non-linear mixed integer mathematical programming model. By utilizing the structure of the constraints and selecting appropriate variables to branch on, it is demonstrated that the PDM problem can be solved efficiently through an implicit enumeration algorithm involving a tree search procedure. The subproblem to be solved at each feasible node of the tree is a Linear Multiple Choice Knapsack (LMCK) problem whose solution can be obtained explicitly from its coefficient structure. A computer code is written in FORTRAN for the implicit enumeration algorithm.
Data obtained from a plywood mill in B.C. is analysed using the PDM model and this code. It is demonstrated that the annual net revenue of the mill can be substantially increased through the use of the PDM model.
The PDM model is further extended to mill situations involving more than one species and varying orderfile requirements. The model is reformulated in each case and it is demonstrated that essentially the same tree search procedure can be used to solve all these models. When the orderfile is independent of species, the subproblem to be solved at each node of the tree is a Generalized Network problem. It is shown that this Generalized Network problem can be reduced to a Generalized Transportation problem utilizing the structure of the coefficients and solved as an ordinary Transportation problem. When the orderfile is dependent on species, the subproblem decomposes into several Linear Multiple Choice Knapsack problems. If more than one species of veneer can be mixed within a plywood panel, the subproblem is a linear programming problem.
The PDM model is further shown to be a special case of a disjunctive programming problem. Following the development of the PDM model, methods to determine the efficiency of plywood designs and the optimum number of veneer thicknesses for a plywood mill are developed. / Sauder School of Business / Graduate
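The design question can be illustrated with a deliberately tiny brute-force version (the thesis solves the full problem by implicit enumeration with LMCK subproblems; nothing below reproduces that algorithm). Each panel is a balanced, symmetric, odd-ply lay-up that must meet a minimum thickness, and we search for the pair of veneer thicknesses minimizing total excess; the thicknesses and order requirements are invented:

```python
from itertools import combinations, product

def min_panel_thickness(target, veneers, max_plies=7):
    """Thinnest balanced (symmetric, odd-ply) lay-up reaching at least `target`."""
    best = None
    for n in range(3, max_plies + 1, 2):
        half = (n - 1) // 2                      # plies on each side of the centre
        for centre in veneers:
            for stack in product(veneers, repeat=half):
                t = centre + 2 * sum(stack)      # symmetry gives the balanced design
                if t >= target and (best is None or t < best):
                    best = t
    return best

def best_veneer_set(targets, candidates, k=2):
    """Brute-force the k veneer thicknesses minimising total excess over all panels."""
    best_set, best_excess = None, None
    for vset in combinations(candidates, k):
        thicknesses = [min_panel_thickness(tg, vset) for tg in targets]
        if any(t is None for t in thicknesses):
            continue                             # this set cannot build every panel
        excess = sum(t - tg for t, tg in zip(thicknesses, targets))
        if best_excess is None or excess < best_excess:
            best_set, best_excess = vset, excess
    return best_set, best_excess

# three panel types ordered, four candidate veneer thicknesses (mm, illustrative)
best_set, excess = best_veneer_set([9.5, 12.5, 15.5], [2.5, 3.0, 3.5, 4.0])
```

Even this toy instance shows the economics: one pair of veneers builds all three panels with zero excess, while other pairs waste wood on every panel.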
495. Modelling geomorphology in landscape evolution. Martin, Yvonne, 05 1900.
Many landscape evolution models have considered the interaction of exogenic and endogenic processes. However, geomorphological processes have not been successfully incorporated in landscape evolution models. The thesis begins with a critical analysis of methodologies for the study of large-scale geomorphological processes. A framework based on a generalization of the relevant processes is recommended.
Hillslope and channel submodels, which are based on typical processes operating in coastal regions of British Columbia, are introduced. The following hillslope processes are considered: (i) slow, quasi-continuous mass movements; (ii) fast, episodic mass movements; and (iii) weathering. The transport relation for fast, episodic mass movements was found to be nonlinear. Fluvial transport in both low- and high-gradient channels and debris flow transport are considered in the channel submodel. A bed load transport equation, which is a revised version of the Bagnold stream power formula, is derived. Suspended load is calculated using a suspended load/contributing area correlation. Connections between hillslope and channel processes are considered to ensure adequate representation in the model.
The hillslope and channel submodels are explored in one-dimensional and surface model runs for small drainage basins in the Queen Charlotte Islands, British Columbia. Tests of the fluvial submodel demonstrate the robustness of the bed load equation used in this study. A conceptualization of the landscape into unstable and stable regimes is introduced. Results of surface model runs emphasize the key role of low-order channels in transferring sediment from hillslopes to main channels. The exercise of constructing and running the model highlighted major gaps in our present understanding of geomorphological process operation and sediment routing. Suggestions for future research are extensive and are outlined in the concluding chapter of the thesis. / Faculty of Arts / Department of Geography / Graduate
496. Essays on discretionary inflation. Neiss, Katharine Stefanie, 05 1900.
The focus of the following three essays rests on the Kydland-Prescott (1977) and Barro-Gordon (1983) model of time-inconsistent discretionary monetary policy.
The first essay derives a model in which the costs and benefits of inflation are tied to the underlying features of the economy. The benefit of inflation arises from monopolistic competition among firms, and the cost from a staggered timing structure for nominal money. The benefit of this approach is that factors that increase the monetary authority's incentive to inflate can be shown also to increase the costs of inflation, and therefore do not necessarily result in a worsened inflation bias. In particular, the model shows that discretionary inflation in the economy is nonmonotonically related to the distortion. The model also indicates that changes in the real interest rate affect the monetary authority's incentives and hence the discretionary rate of inflation. An increase in the labor share raises the discretionary rate. Lastly, lack of commitment, costs of inflation, and the presence of a distortion are crucial for discretionary inflation to be biased above the Friedman (1969) rule.
The second essay builds on the first, extending the model to an open economy environment. The extended model indicates several channels through which openness affects the monetary authority's incentives. Most significantly, the model cannot replicate the Romer (1993) and Lane (1995) result that openness reduces the discretionary rate of inflation. Again, the model relates the discretionary rate to the underlying features of the economy, including its foreign asset position. Strategic incentives are also important in determining whether an open economy's rate of inflation is less than that of a comparable closed economy.
The last essay analyzes empirically the relationship between the overall degree of competition among firms, as measured by the markup, and the average rate of inflation for the OECD group of countries. In line with the time-consistency argument, results indicate a positive relationship between markups and inflation. This finding is robust to the inclusion of several explanatory variables, such as terms-of-trade effects and central bank independence. The evidence is weak, however, in the presence of per capita GDP. / Faculty of Arts / Vancouver School of Economics / Graduate
497. Consumption, leisure and the demand for money and money substitutes. Donovan, Donal John, January 1977.
The purpose of this research is to develop and test a model of the demand for money within a general optimising model of household behaviour. The framework adopted is the direct utility approach. The services of money and money substitutes, along with the services of consumption goods (durable and non-durable) and leisure, are assumed to enter as arguments in the representative household's utility function.
The theoretical part of the thesis consists of applying the tools of modern utility theory to the particular problem of the demand for money. The development and solution of the model provide a clear basis for interpreting the demand equations used in estimation, and also make explicit various assumptions implicit in previous empirical models in this area. In particular, derivation of the rental price of money and money substitutes serves to clarify the role of expectations and the relationship between the rental prices of money and goods within the direct utility model.
The major part of the thesis consists of applying the model to annual Canadian data for the period 1947-1974. A substantial portion of the empirical contribution is the construction of data series consistent with the theoretical framework of the model. We differ from other researchers in this area in using the ARIMA model to take expected capital gains into account when constructing rental price series for durable goods.
Three different groups of models are examined empirically. The first group contains only consumption goods and leisure. The second group includes aggregate 'money' and aggregate 'near money' along with consumption goods and leisure, while the third group contains only 'liquid assets', i.e., disaggregated components of 'money' and 'near money.'
The demand equations for each model are derived from a Gorman polar form representation of the indirect utility function, and are evaluated using a constrained estimation technique. The presence of autocorrelation is explored, and the model tested for parametric stability over time. Tests of the restrictions implied by the theory of utility maximising behaviour and of homotheticity are performed.
The estimated models were found generally to be consistent with the underlying theory, and also provided some useful information. Money has an expenditure elasticity less than one, while near money is a luxury good. There is no evidence of substitutability between aggregate money and aggregate near money; however, some substitutability is reported between chartered bank personal savings deposits and trust and loan company savings deposits. / Faculty of Arts / Vancouver School of Economics / Graduate
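A hedged sketch of the rental-price (user-cost) idea for a durable good: opportunity cost plus depreciation less the expected capital gain, with a simple AR(1) forecast in log prices standing in for the ARIMA expectations used in the thesis. The interest rate, depreciation rate, and AR coefficient below are invented for illustration.

```python
import math

def expected_next_price(log_prices, mu=None, phi=0.8):
    """One-step AR(1) forecast in logs: x_{t+1} = mu + phi*(x_t - mu)."""
    if mu is None:
        mu = sum(log_prices) / len(log_prices)
    x_t = log_prices[-1]
    return math.exp(mu + phi * (x_t - mu))

def user_cost(p_t, expected_p_next, r=0.05, delta=0.10):
    # foregone interest + depreciation - expected capital gain
    return p_t * (r + delta) - (expected_p_next - p_t)

# with a flat price history the expected capital gain is zero, so the rental
# price reduces to p*(r + delta)
flat_uc = user_cost(100.0, expected_next_price([math.log(100.0)] * 4))
```

The point of the construction is that expected capital gains, not just interest and depreciation, enter the rental price, which is exactly where the expectations model matters for the estimated demand system.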
498. Synchronous generator models for the simulation of electromagnetic transients. Brandwajn, Vladimir, January 1977.
Techniques for modelling synchronous generators in the simulation of electromagnetic transients are described. First, an adequate mathematical model of the generator is established. It uses only the conventional set of generator data, which are readily available, but it is flexible enough to accommodate additional data if and when such become available. The resulting differential equations of the generator are then transformed into linear algebraic equations, with a time-varying coefficient matrix, by using the numerically stable trapezoidal rule of integration. These equations can be interfaced with the equations of an electromagnetic transients program in one of two ways:
(a) Solve the equations of the generator simultaneously with the equations of a three-phase Thevenin equivalent circuit of the transmission network seen from the generator terminals.
(b) Replace the generator model with a modified Thevenin equivalent circuit and solve the network equations with the generator treated as known voltage sources e_ph^red(t-Δt) behind constant resistances [R_ph^red]. After the network solution at each time step, the stator quantities are known and are used to solve the equations for the rotor windings.
These two methods cover, in principle, all possible interfacing techniques. They are not tied to the trapezoidal rule of integration, but can be used with any other implicit integration technique. The results obtained with the two techniques are practically identical. Interfacing by method (b), however, is more general, since it does not require a Thevenin equivalent circuit of the network seen from the generator terminals. The numerical examples used in this thesis contain comparisons with field test results in order to verify the adequacy of the generator model as well as the correctness of the numerical procedures.
A short discussion of nonlinear saturation effects is also presented, and a method of including these effects in the model of the generator is proposed.
Typical applications of the developed numerical procedures include dynamic overvoltages, torsional vibrations of the turbine-generator shaft system, resynchronization of the generator after pole slipping, and detailed assessment of generator damping terms in transient stability simulations. / Faculty of Applied Science / Department of Electrical and Computer Engineering / Graduate
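The core trick, turning a branch differential equation into an algebraic update via the trapezoidal rule, can be sketched for a single R-L branch with illustrative values: v = R*i + L*di/dt becomes i_t = (v_t + hist)/R_eff with companion resistance R_eff = 2L/Δt + R and a history term carried from the previous step, which is the form an EMTP-type program solves at each time step.

```python
import math

R, L, dt = 1.0, 0.1, 1e-4      # ohms, henries, seconds (illustrative)
V = 10.0                       # DC source applied at t = 0

Reff = 2.0 * L / dt + R        # companion-model resistance from the trapezoidal rule

i, v_prev = 0.0, V
for _ in range(2000):          # simulate 0.2 s, i.e. two time constants L/R
    # history term: everything known from the previous step
    hist = v_prev + (2.0 * L / dt - R) * i
    i = (V + hist) / Reff      # purely algebraic solve at this time step
    v_prev = V

t = 2000 * dt
exact = (V / R) * (1.0 - math.exp(-R * t / L))   # analytic step response
```

Because the scheme is implicit (A-stable), the same update remains stable for stiff machine equations where explicit integration would not be, which is why the thesis builds on the trapezoidal rule.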
499. Essays in the economics of uncertainty. Epstein, Larry Gedaleah, January 1977.
There have been several recent advances in the theory of choice under uncertainty that have extended the restrictive mean-variance framework. Working within the context of a model of expected utility maximization, Rothschild and Stiglitz (1970) and Diamond and Stiglitz (1974) present intuitively appealing and theoretically sound definitions of "greater risk or uncertainty". Moreover, they show that their definitions are useful as well as consistent, in that they may be used to derive comparative statics results in which economists are interested.
In the first part of the thesis we argue that the above analyses, and most related ones, are restricted to models where both the decision variable and the exogenous random variable that defines the stochastic environment are scalars. We then extend many of the definitions and results to the context of a general multivariate decision problem. In particular, a generalized notion of risk independence is shown to be relevant to behaviour under uncertainty.
This general analysis is then applied to two specific decision problems: first, the standard two-period consumer choice problem, where current consumption must be decided upon subject to uncertainty about future income and prices; and second, the corresponding problem in the theory of the firm, where a competitive firm must make some production decisions subject to uncertainty about the prices that will prevail for some products and factors of production. We extend earlier studies of these problems by considering disaggregated models, by adopting theoretically consistent definitions of increased uncertainty, and by investigating the role of production flexibility in determining firm behaviour under uncertainty. In both the consumer and producer models the crucial properties of preferences and technology are pointed out, and flexible functional forms are hypothesized that are amenable to empirical estimation. The theory of duality plays an important part throughout the formulation and analysis of both models.
Finally, the basic theory of producer behaviour analysed above is applied to aggregate U.S. manufacturing data for the 1947-71 period. We assume that the capital stock decision must be made one period before the capital comes into operation, subject to expectations about uncertain future prices, while all other factors and outputs may be adjusted fully to current prices. An added important ingredient of the model is the distinction between the capital stock and utilization (depreciation) decisions, the latter being made in each period after that period's prices are known. The consistency of the model with the data is investigated and the empirical significance of our formulation of the capital utilization decision is tested. / Faculty of Arts / Vancouver School of Economics / Graduate
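The Rothschild-Stiglitz notion of "greater risk" cited above can be illustrated with a tiny discrete example (numbers invented): Y is a mean-preserving spread of X, so any expected-utility maximizer with concave utility, here the square root, attains lower expected utility under Y, while a risk-neutral agent is indifferent.

```python
import math

def expected(f, dist):
    """Expectation of f(x) over a discrete distribution [(outcome, prob), ...]."""
    return sum(p * f(x) for x, p in dist)

X = [(4.0, 0.5), (6.0, 0.5)]
# mean-preserving spread: the outcome 6 is replaced by 4 or 8 with equal odds,
# so the added noise has zero conditional mean and E[Y] = E[X]
Y = [(4.0, 0.75), (8.0, 0.25)]

u = math.sqrt                      # any concave (risk-averse) utility works
mean_X = expected(lambda x: x, X)
mean_Y = expected(lambda x: x, Y)
```

This is the ordering the thesis generalizes: "riskier" is defined by what every risk averter prefers, not by variance alone, and the multivariate extension is what the comparative statics results are built on.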
500. A stochastic analysis of steady-state groundwater flow in a bounded domain. Smith, James Leslie, January 1978.
A stochastic analysis of groundwater flow leads to probability distributions on the predicted hydraulic head values. This variability reflects our uncertainty in the system being modeled due to the spatial heterogeneity of hydraulic conductivity. Monte Carlo techniques can be used to estimate the head distributions. This approach relies on the repetitive generation of discrete-block conductivity realizations. In this study, steady-state flow through one- and two-dimensional flow domains is investigated. A space law based on a first-order, nearest-neighbour stochastic process model is used to generate the multilateral spatial dependence in the hydraulic conductivity values within the block structure. This allows consideration of both statistically isotropic and anisotropic autocorrelation functions.
It is shown that the probability distributions of hydraulic head, and of the head gradient or flux across the boundaries of the flow domain, must be interpreted in terms of:
1) The spatial variation of expected head gradients.
2) The standard deviation in the conductivity distribution.
3) The ratio of the integral scale of the autocorrelation function for conductivity to the distance between boundaries on the flow domain.
4) The arrangement of stationary units within the flow domain.
The standard deviations in hydraulic head increase with an increase in either the conductivity standard deviation or the strength of the correlation between neighbouring conductivity values. Provided the integral scales of the medium are preserved, the standard deviations in head show only a minor dependence on the discretization interval. The head standard deviations are approximately halved in a two-dimensional model from those in a one-dimensional model with an equivalent space law. Spatial trends in the mean conductivity can considerably alter the magnitude and spatial variation in the hydraulic head standard deviations.
The geometric mean has been suggested by others as a suitable effective conductivity in a heterogeneous two-dimensional flow domain. This study shows that only in the case of uniform flow through a single stationary unit is this concept valid. If the mean gradient field is nonuniform, or if the mean conductivity has a spatial trend, predictions based on the geometric mean do not satisfy the necessary equivalence criteria.
Direct comparisons cannot be made, but the Monte Carlo and spectral approaches to the solution of the stochastic flow equations predict a similar behavior.
A first-order, nearest-neighbour model is matched to a data set collected from a relatively uniform but stratified, unconsolidated sand deposit. The data show statistically anisotropic autocorrelation functions, both in the integral scale and in the functional form of the correlation. A broader class of spatial models may need to be considered to describe the cyclic behavior of sedimentary sequences. / Faculty of Science / Department of Earth, Ocean and Atmospheric Sciences / Graduate