  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

Landmark development : gaming simulation framework for planning

Ross, Gerald Howard Barney January 1970 (has links)
Planners have generally failed to prevent the urban strife (including civil disorders, housing shortages, visual blight and rising pollution) which characterizes so many North American cities. While they cannot necessarily be blamed for these occurrences, they cannot entirely be exonerated. Planning techniques for guiding and controlling urban development have not kept pace with the rapid growth of our modern cities. Certain techniques have been borrowed from other fields, notably simulation modelling, but their use has frequently been hampered by a lack of data and by the high cost of implementation. Furthermore, these techniques have generally failed to filter down to the profession at large, with the result that they have largely remained the preserve of the technical expert, who may not be in the mainstream of broader planning principles. The sophisticated nature of these techniques has promoted their isolation from the day-to-day planning process. One alternative to a rigorous computer simulation is to employ a gaming simulation. The latter may permit a considerable simplification of the model by allowing the players (in this case, planners) to become 'simulation actors' who emulate the behavior of various interest groups or institutions in response to carefully selected rewards. This format has the advantage of precipitating the direct involvement of planners in the model and of facilitating their understanding of problems through the process of abstracting from reality. Such an abstraction is often conducive to achieving an overview; this may permit planners to be less distracted by the routine, short-term problems of planning administration and to redirect their focus to longer-term considerations. The purpose of this Study is to develop a gaming simulation framework for the analysis of planning problems which are not readily amenable to many quantitative techniques and for the evaluation of alternative planning strategies.
This framework or tool is capable of incorporating a series of very simple interrelationships into a recursive process which will ultimately generate the implications of various decision alternatives and permit planners to identify optimum strategies. The framework combines the simulational features of a 'gaming simulation' with the strategy-evaluating features of 'game theory'. The former have generally constituted abstractions from reality which were merely assertions in mathematical form and were not particularly useful for either rigorous analysis or accurate forecasting; the lack of mathematical rigour in their structures has tended to inhibit their use for purposes other than education, notably prediction and research. The latter has been confined to the identification of optimum strategies in only the simplest exchanges, which cannot generally be related to the complexities of the real world. This Study represents a step towards combining these two approaches. The gaming simulation framework, when 'primed' with appropriate data, will generate optimum strategies which may be followed by the participants. Its mathematical structure constitutes an amalgam of Markov processes, network analysis and Bayesian decision analysis. The technique is primarily designed to be used in the day-to-day planning process in large cities rather than in the cloistered research context, although it may later prove to have even wider applications. A null hypothesis, stating that the framework is not capable of generating an optimal solution, is presented in the Study and then refuted using probability theory to demonstrate that an optimal solution is attainable. The use of the framework in the planning context is then illustrated by applying it to the specific public/private negotiations preceding major urban landmark developments in Canada. / Applied Science, Faculty of / Community and Regional Planning (SCARP), School of / Graduate
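The Markov-process component of such a framework might be sketched as follows: each strategy induces a transition structure over planning states, and the optimum strategy is the one with the highest long-run expected payoff. All state names, probabilities and payoffs below are hypothetical illustrations, not values from the thesis.

```python
# Hedged sketch: ranking strategies by long-run expected payoff over a
# two-state Markov process. States, probabilities and payoffs are invented.

def steady_state(p_stay_a, p_stay_b):
    """Stationary distribution of a 2-state Markov chain with
    P(A->A) = p_stay_a and P(B->B) = p_stay_b."""
    p_ab = 1.0 - p_stay_a   # P(A -> B)
    p_ba = 1.0 - p_stay_b   # P(B -> A)
    pi_a = p_ba / (p_ab + p_ba)
    return pi_a, 1.0 - pi_a

def expected_payoff(strategy):
    """Long-run payoff = state payoffs weighted by the stationary
    distribution the strategy induces."""
    pi_a, pi_b = steady_state(strategy["p_stay_a"], strategy["p_stay_b"])
    return pi_a * strategy["payoff_a"] + pi_b * strategy["payoff_b"]

strategies = {
    "negotiate": {"p_stay_a": 0.9, "p_stay_b": 0.5, "payoff_a": 10, "payoff_b": -2},
    "hold_out":  {"p_stay_a": 0.6, "p_stay_b": 0.8, "payoff_a": 15, "payoff_b": -8},
}
best = max(strategies, key=lambda s: expected_payoff(strategies[s]))
```

The thesis's framework also layers network analysis and Bayesian updating on top of the transition structure; this sketch shows only the Markov evaluation step.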
472

On assimilating sea surface temperature data into an ocean general circulation model

Weaver, Anthony T. January 1990 (has links)
The feasibility of sea surface temperature (SST) data improving the performance of an ocean general circulation model (OGCM) is investigated through a series of idealized numerical experiments. The GFDL Bryan-Cox-Semtner primitive equation model is set up as an eddy-resolving, unforced, flat-bottomed channel of uniform depth. 'Observed' SST data taken from a reference ocean established in a control run are continuously assimilated into an 'imperfect' model using a simple 'nudging' scheme based on a surface relaxation condition of the form Q = C(SST — T₁), where Q is the heat flux and T₁ is the temperature at the top level of the model. The rate of assimilation is controlled by adjusting the constant inverse relaxation time parameter C. Numerical experiments indicate that the greatest improvement in the model fields is achieved in the extreme case of infinite assimilation (i.e., C = ∞), in which the 'observed' SST is directly inserted into the model. This improvement is quantified by monitoring the reduction in the root mean square (RMS) errors relative to the simulated reference ocean. Assimilation with longer relaxation time-scales (i.e., smaller C's) proves quite ineffective in reducing the RMS errors. The improvement in the direct insertion numerical experiment stems from the model's ability to transfer assimilated SST into subsurface information through strong advective processes. The assimilation of cool surface data induces convective overturning which transfers the 'cool' information downward rapidly but adversely affects the vertical thermal structure by unrealistically deepening the mixed layer. By contrast, warm surface data do not penetrate downward readily. Thus, the systematically biased downward flux of coolness gradually produces unrealistically cool subsurface waters. / Science, Faculty of / Earth, Ocean and Atmospheric Sciences, Department of / Graduate
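The nudging relaxation described above can be sketched in a single update rule: the top-level temperature T₁ is relaxed toward the observed SST at a rate set by the inverse relaxation time C, with direct insertion as the infinite-C limit. The temperatures below are illustrative, not values from the thesis.

```python
# Hedged sketch of the surface relaxation Q = C*(SST - T1): one explicit
# assimilation step nudges the model's top-level temperature toward the
# observation. Direct insertion corresponds to the C -> infinity limit.
# All numerical values are invented for illustration.

def nudge(t1, sst_obs, c, dt):
    """One assimilation step; requires c * dt <= 1 for stability."""
    return t1 + dt * c * (sst_obs - t1)

def direct_insertion(t1, sst_obs):
    """Infinite-C limit: the observed SST simply replaces the model field."""
    return sst_obs

t1 = 10.0    # model top-level temperature (deg C)
obs = 12.0   # 'observed' SST from the reference ocean
weak = nudge(t1, obs, c=0.1, dt=1.0)    # slow relaxation: moves only slightly
strong = direct_insertion(t1, obs)       # full replacement
```

With a small C the field barely moves per step, which mirrors the abstract's finding that long relaxation time-scales are ineffective at reducing RMS error.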
473

Modeling of double heterojunction bipolar transistors

Ang, Oon Sim January 1990 (has links)
A one-dimensional analytical model in the Ebers-Moll formulation of a graded-base double heterojunction bipolar transistor (DHBT) is developed and used to examine the effects of base grading, the emitter-base barrier and the base-collector barrier on the d.c. current gain, offset voltage and high-frequency performance of an N — Al[formula omitted]Ga₁[formula omitted]As/p — Al[formula omitted]Ga₁[formula omitted]As/N — Al[formula omitted]Ga₁[formula omitted]As DHBT. Recombination processes considered in the space charge regions and the neutral regions are: Shockley-Read-Hall, radiative and Auger. The trade-off between base grading, which reduces the base current, and neutral-base recombination, which is brought about by varying the aluminium mole fraction at the junctions, results in an optimum aluminium mole fraction profile with regard to the d.c. current gain. For high-frequency performance, a similar trade-off to that of the d.c. situation exists. In this case, the important manifestation of the increased collector-base barrier height is an increase in the base transit time. The aluminium mole fraction profile which optimises the unity gain cut-off frequency, f[formula omitted], and the unity power gain cut-off frequency, f[formula omitted], is established. DHBTs which are symmetrical, both in aluminium mole fraction and doping concentration profiles, are shown to have low common-emitter offset voltages, V[formula omitted],[formula omitted]. Base grading reduces V[formula omitted],[formula omitted] in devices in which the difference between the emitter and collector aluminium mole fractions is < 0.1; otherwise, V[formula omitted],[formula omitted] increases as base grading increases. The model is also used to examine the performance of a N-Al[formula omitted]Ga₁[formula omitted]As/p-In[formula omitted]Ga₁[formula omitted]As/N-Al[formula omitted]Ga₁[formula omitted]As DHBT. It is shown that radiative and Auger recombination limit the d.c. current gain in this device.
/ Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
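The Ebers-Moll formulation named above expresses the transistor's terminal currents as coupled diode equations in the two junction voltages; a minimal homojunction sketch (without the thesis's heterojunction barriers, grading or recombination terms) looks like the following. Saturation currents, alphas and bias voltages are illustrative placeholders.

```python
import math

# Hedged sketch of the basic Ebers-Moll equations: collector and base
# currents from the base-emitter and base-collector voltages. Parameter
# values are invented; the thesis's model adds heterojunction barriers,
# base grading and SRH/radiative/Auger recombination on top of this.

VT = 0.02585  # thermal voltage kT/q at ~300 K (V)

def ebers_moll(v_be, v_bc, i_es=1e-14, i_cs=1e-14, alpha_f=0.99, alpha_r=0.5):
    i_f = i_es * (math.exp(v_be / VT) - 1.0)  # forward junction diode current
    i_r = i_cs * (math.exp(v_bc / VT) - 1.0)  # reverse junction diode current
    i_c = alpha_f * i_f - i_r                 # collector current
    i_e = i_f - alpha_r * i_r                 # emitter current
    return i_c, i_e - i_c                     # (I_C, I_B)

i_c, i_b = ebers_moll(v_be=0.7, v_bc=-2.0)   # forward-active bias point
beta = i_c / i_b                              # d.c. current gain
```

In forward-active bias the gain reduces to roughly alpha_f / (1 - alpha_f), which is the quantity the thesis optimises against the aluminium mole fraction profile.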
474

A decentralized model for loanable fund allocations in commercial banks

Leung, Kwok Ki January 1985 (has links)
This thesis proposes a decentralized procedure for the optimal allocation of assets among loan divisions within commercial banks; simulation runs of the procedure demonstrate the feasibility of implementing such a system in commercial banks. The decentralized procedure not only reduces the information inputs of the asset allocation system, but also provides a systematic procedure for estimating the cost of funds and the cost of risk used in profitability analysis. The simulation results also suggest that the appropriate price paid for future customer relationships should vary according to the level of loanable funds available to the bank. Moreover, the simulations reveal that the interaction of risks within a bank is very complicated, and they point to the need for procedures that take these interactions into account. / Business, Sauder School of / Graduate
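One way a decentralized step of this kind can work is via a transfer price: the centre quotes a cost of funds, and each division draws funds only for loans whose expected return clears that price. This is a hypothetical illustration of the general idea, not the thesis's actual procedure; all division names, amounts and rates are invented.

```python
# Hedged sketch of a price-mediated decentralized allocation: fund the
# highest-return loan requests whose return exceeds the quoted cost of
# funds, until loanable funds run out. Everything here is illustrative.

def allocate(loan_requests, total_funds, cost_of_funds):
    """Greedy allocation: highest-return viable requests are funded first."""
    viable = sorted((r for r in loan_requests if r["ret"] > cost_of_funds),
                    key=lambda r: r["ret"], reverse=True)
    allocation, remaining = {}, total_funds
    for r in viable:
        take = min(r["amount"], remaining)
        if take > 0:
            allocation[r["name"]] = take
            remaining -= take
    return allocation

requests = [
    {"name": "commercial", "amount": 60.0, "ret": 0.09},
    {"name": "mortgage",   "amount": 50.0, "ret": 0.07},
    {"name": "consumer",   "amount": 40.0, "ret": 0.05},
]
alloc = allocate(requests, total_funds=100.0, cost_of_funds=0.06)
```

Raising or lowering the quoted cost of funds as loanable funds tighten or loosen is the kind of level-dependent pricing the abstract's customer-relationship result points to.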
475

The dynamics of capital structure choice

Yu, Albert Chun-ming January 1985 (has links)
This thesis employs a two-period state-contingent model based upon the "tax shield plus bankruptcy costs" approach to examine the dynamic capital structure decision. By allowing recapitalization at the end of period one, we can analyse the dynamics of the firm's capital structure choice. The effect of a call provision on bonds can also be examined. Simulated results show that the firm will recapitalize at the end of period one only if the gain in firm value, with- or ex-dividend, resulting from recapitalization exceeds the after-tax flotation costs. There exists a tolerable recapitalization boundary within which the firm will not recapitalize. This implies that the empirically observed capital structure is not necessarily at the acme of the firm value function, as most empirical studies assume. Another important result is that a call provision on bonds may be wealth reducing: it may reduce the wealth of shareholders by inducing recapitalization in states in which recapitalization is suboptimal absent the call provision, thereby incurring flotation costs which could have been avoided. The gain in firm value resulting from recapitalization may be too small to justify the extra flotation costs, thus reducing overall firm value. / Business, Sauder School of / Graduate
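The recapitalization rule described above reduces to a simple inequality that defines the "tolerable boundary" of inaction: restructure only when the value gain exceeds the after-tax flotation cost. The numbers below are illustrative, not the thesis's simulated values.

```python
# Hedged sketch of the recapitalization trigger: the firm moves only when
# the gain in firm value exceeds after-tax flotation costs; otherwise it
# sits inside the tolerable boundary. Figures are invented.

def should_recapitalize(value_gain, flotation_cost, tax_rate):
    """True when the gain in firm value justifies the after-tax cost."""
    after_tax_cost = flotation_cost * (1.0 - tax_rate)
    return value_gain > after_tax_cost

# Inside the boundary: gain too small, firm does not recapitalize.
inside = should_recapitalize(value_gain=5.0, flotation_cost=10.0, tax_rate=0.4)
# Outside the boundary: gain exceeds after-tax cost, firm recapitalizes.
outside = should_recapitalize(value_gain=8.0, flotation_cost=10.0, tax_rate=0.4)
```

The call-provision result follows from the same inequality: a forced recapitalization in a state where it returns False destroys value by incurring avoidable flotation costs.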
476

A mathematical model for airfoils with spoilers or split flaps

Yeung, William Wai-Hung January 1985 (has links)
A flow model for a Joukowsky airfoil with an inclined spoiler or split flap is constructed based on the early work by Parkinson and Jandali. No restriction is imposed on the airfoil camber, the inclination and length of the spoiler or split flap, or the angle of incidence. The flow is assumed to be steady, two-dimensional, inviscid and incompressible. A sequence of conformal transformations is developed to deform the contour of the airfoil and the spoiler (split flap) onto the circumference of the unit circle, over which the flow problem is solved. The partially separated flow region behind these bluff bodies is simulated by superimposing suitable singularities in the transform plane. The trailing edge and the tip of the spoiler (flap) are made critical points in the mappings so that Kutta conditions are satisfied there. The pressures at these critical points are matched to the pressure inside the wake, the only empirical input to the model. Some studies of an additional boundary condition for solving the flow problem were carried out with considerable success. The chordwise pressure distributions and the overall lift force variations are compared with experiments. Good agreement in general is achieved. The model can be extended readily to airfoils of arbitrary profile with the application of the Theodorsen transformation. / Applied Science, Faculty of / Mechanical Engineering, Department of / Graduate
477

Bankruptcy : a proportional hazard approach

Betton, Sandra Ann January 1987 (has links)
The recent dramatic increase in the corporate bankruptcy rate, coupled with a similar rate of increase in the bank failure rate, has re-awakened investor, lender and government interest in the area of bankruptcy prediction. Bankruptcy prediction models are of particular value to a firm's current and future creditors, who often do not have the benefit of an actively traded market in the firm's securities from which to make inferences about the debtor's viability. The models commonly used by many experts in an endeavour to predict the possibility of disaster are outlined in this paper. The proportional hazard model, pioneered by Cox [1972], assumes that the hazard function, the risk of failure given that failure has not already occurred, is a function of various explanatory variables and estimated coefficients, multiplied by an arbitrary and unknown function of time. The Cox proportional hazard model is usually applied in medical studies but has recently been applied to the bank failure question [Lane, Looney & Wansley, 1986]. The model performed well in the narrowly defined, highly regulated banking industry. The principal advantage of this approach is that the model incorporates both the survival times observed and any censoring of data, thereby using more of the available information in the analysis. Unlike many bankruptcy prediction models, such as logit- and probit-based regression models, the Cox model estimates the probability distribution of survival times. The proportional hazard model would, therefore, appear to offer a useful addition to the more traditional bankruptcy prediction models mentioned above. This paper evaluates the applicability of the Cox proportional hazard model in the more diverse industrial environment. In order to test this model, a sample of 109 firms was selected from the Compustat Industrial and Research Industrial data tapes. Forty-one of these firms filed petitions under the various bankruptcy acts applicable between 1972 and 1985 and were matched to 67 firms which had not filed petitions for bankruptcy during the same period. In view of the dramatic changes in the bankruptcy regulatory environment caused by the Bankruptcy Reform Act of 1978, the legal framework of the bankruptcy process was also examined. The performance of the estimated Cox model was then evaluated by comparing its classification and descriptive capabilities to those of an estimated discriminant-analysis-based model. The results of this study indicate that while the classification capability of the Cox model was less than that of discriminant analysis, the model provides additional information beyond that available from the discriminant analysis. / Business, Sauder School of / Graduate
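The proportional-hazards assumption described above, h(t | x) = h₀(t)·exp(β·x), has the convenient property that the ratio of two firms' hazards cancels the unknown baseline h₀(t) entirely. The coefficients and covariates below are invented for illustration, not estimates from the thesis.

```python
import math

# Hedged sketch of the Cox model's core identity: the relative failure
# risk of two firms depends only on their covariates and the estimated
# coefficients, not on the arbitrary baseline hazard or on time.
# Covariates and beta values are illustrative.

def hazard_ratio(x1, x2, beta):
    """exp(beta . x1) / exp(beta . x2): firm 1's risk relative to firm 2."""
    score1 = sum(b * x for b, x in zip(beta, x1))
    score2 = sum(b * x for b, x in zip(beta, x2))
    return math.exp(score1 - score2)

beta = [-2.0, 1.5]        # hypothetical effects of (liquidity, leverage)
risky = [0.1, 0.9]        # low liquidity, high leverage
safe = [0.5, 0.2]         # high liquidity, low leverage
ratio = hazard_ratio(risky, safe, beta)   # > 1: the first firm is riskier
```

Estimating β is done by maximising Cox's partial likelihood over the observed (possibly censored) failure times, which is how the model uses censoring information that logit and probit approaches discard.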
478

A finite element model of ocean circulation

Bermejo-Bermejo, Rodolfo January 1986 (has links)
Preliminary results of a two-layer quasi-geostrophic box model of a wind-driven ocean are presented. The new aspects of this work in relation to conventional eddy models are a finite element formulation of the quasi-geostrophic equations and the use of a no-slip boundary condition on the horizontal solid boundaries. In contrast to eddy-resolving models that use free-slip boundary conditions, our results suggest that obtaining ocean eddies under the no-slip constraint requires a more restricted range of parameters, in particular much lower horizontal eddy viscosity coefficients AH and higher Froude numbers F₁ and F₂. We show explicitly that a given range of parameters, which is eddy-generating when the free-slip boundary condition is used, leads to a quasi-laminar flow in both the upper and lower layers. An analytical model to interpret the numerical results is put forth. It extends an earlier model of Ierley and Young (1983) in that the relative vorticity terms are of primary importance for the dynamics. Thus, it is shown that boundary-layer dynamics are active in the interior of the second layer, and it can be concluded from our method that, for given F₁ and F₂ such that the lower-layer geostrophic contours are closed, the existence of the western boundary layer will prevent the homogenization of the potential vorticity so long as AH is large enough to stabilize the northwestern undulations of the flow. / Science, Faculty of / Earth, Ocean and Atmospheric Sciences, Department of / Graduate
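The no-slip versus free-slip distinction that drives these results can be shown on a one-dimensional tangential-velocity profile next to a solid wall: no-slip forces the velocity to vanish at the wall, while free-slip forces zero shear there. The grid values are illustrative; the thesis applies these conditions within a finite element discretization, not this finite-difference toy.

```python
# Hedged sketch of the two wall boundary conditions discussed above,
# applied to a 1-D tangential-velocity profile with the wall at index 0.
# Values are invented for illustration.

def apply_no_slip(u):
    """No-slip: tangential velocity vanishes at the wall."""
    u = list(u)
    u[0] = 0.0
    return u

def apply_free_slip(u):
    """Free-slip: zero normal shear (du/dn = 0), so the wall value
    copies the adjacent interior value."""
    u = list(u)
    u[0] = u[1]
    return u

interior = [0.8, 0.8, 0.9, 1.0]
no_slip = apply_no_slip(interior)
free_slip = apply_free_slip(interior)
```

The no-slip profile carries a velocity jump at the wall that sources vorticity there, which is why the two conditions select such different (eddying versus quasi-laminar) regimes for the same parameters.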
479

Statistical modelling of sediment concentration

Thompson, Mai Phuong January 1987 (has links)
One technique commonly used to replace the costly daily sampling of sediment concentration in assessing sediment discharge is the "rating curve" technique. This technique relies on the form of the relationship between sediment concentration and water discharge to estimate long-term sediment loads. In this study, a regression/time-series approach to modelling the relationship between sediment concentration and water discharge is developed. The model comprises a linear regression of the natural logarithm of sediment concentration on the natural logarithm of water discharge, with an autoregressive time series of order one or two for the errors of the regression equation. The main inferences from the model are the estimation of annual sediment loads and the calculation of their standard errors. Correction factors for the bias resulting from the inverse transformation of the natural logarithm of sediment concentration are studied. The accuracy of the load estimates is checked by comparing them to figures published by the Water Survey of Canada. / Science, Faculty of / Statistics, Department of / Graduate
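The rating-curve step can be sketched as an ordinary least-squares fit of ln C on ln Q, followed by the usual lognormal back-transform bias correction exp(σ²/2). The data below are a synthetic exact power law for illustration; the thesis's model additionally fits AR(1)/AR(2) errors, which this sketch omits.

```python
import math

# Hedged sketch of the 'rating curve' regression ln(C) = a + b*ln(Q),
# with the lognormal bias-correction factor exp(sigma^2 / 2) applied on
# back-transformation. Data are synthetic, not from the thesis.

def fit_rating_curve(discharge, concentration):
    """Least-squares fit of ln(C) on ln(Q); returns (a, b, sigma^2)."""
    x = [math.log(q) for q in discharge]
    y = [math.log(c) for c in concentration]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    sigma2 = sum(r * r for r in resid) / n
    return a, b, sigma2

def predict(q, a, b, sigma2):
    """Back-transformed concentration with the bias correction factor."""
    return math.exp(a + b * math.log(q)) * math.exp(sigma2 / 2.0)

# Exact power law C = 2 * Q**1.5: the fit should recover b = 1.5, a = ln 2.
q = [1.0, 2.0, 4.0, 8.0]
c = [2.0 * qi ** 1.5 for qi in q]
a, b, s2 = fit_rating_curve(q, c)
```

Without the exp(σ²/2) factor, exponentiating the regression prediction systematically underestimates the mean concentration, which is the bias the thesis studies.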
480

Discrete-time closed-loop control of a hinged wavemaker

Hodge, Steven Eric January 1986 (has links)
The waves produced by a flap-type wavemaker, hinged in the middle, are modelled using first-order linear wavemaker theory. A simplified closed-loop, discrete-time system is proposed, comprising a proportional-plus-integral-plus-derivative (PID) controller and the wavemaker, in order to compare the actual wave spectral density with the desired wave spectral density at a single frequency. Conventional discrete-time control theory is used, the major difference being the use of a relatively long timestep duration between changes in waveboard motion. The system response is calculated for many controller gain combinations by the computer simulation program CBGANES. System stability is analyzed for these gain combinations using two different methods: one is an extension of the Routh criterion to discrete time, and the other is a state-space eigenvalue approach. The computer simulation and the stability analysis provide a means of selecting possible controller gains for use at a specific frequency in an actual wave tank experiment. The computer simulation performance response and the two stability analyses predict the same results for varying controller gains. It is evident that integral control is essential in order to achieve the desired response for this long-duration-timestep application. The variation in discrete timestep duration and in desired spectral density (an indirect indication of frequency variation) varies the constraints on controller gain selection. The controller gain combinations yielding the fastest stable response at a single frequency are those with large proportional gain and small integral and derivative gains. / Applied Science, Faculty of / Mechanical Engineering, Department of / Graduate
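A discrete-time PID law of the kind used above updates the waveboard command once per (long) timestep from the error between the desired and measured spectral density. The gains below are illustrative, chosen only to echo the abstract's conclusion of large proportional and small integral/derivative gains; they are not the thesis's selected values.

```python
# Hedged sketch of a discrete-time PID controller: at each timestep the
# command is the weighted sum of the current error, its running integral,
# and a backward-difference derivative. Gains and setpoint are invented.

class DiscretePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                # rectangular integration
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Large proportional gain, small integral and derivative gains.
pid = DiscretePID(kp=2.0, ki=0.1, kd=0.05, dt=1.0)
u = pid.step(setpoint=1.0, measured=0.0)
```

With a long dt, the integral term accumulates quickly per step, which is one way to see why integral action matters so much in this long-timestep setting.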
