461

Genetic algorithms, their applications and models in nonlinear systems identification

Wan, Frank Lup Ki January 1991 (has links)
A genetic algorithm (GA) was used to estimate the hydraulic compliance of the hydraulic system on the UBC teleoperated heavy-duty excavator. Using both recorded and simulated data from the excavator, the GA successfully identified the compliance of single-link and multi-link hydraulic systems. A parallel GA (PGA) was implemented on 16 T800 Transputers and achieved a speedup factor of 12 over a traditional GA; with such a high speedup factor, real-time monitoring of hydraulic compliance and other hydraulic parameters becomes possible. New mechanisms, such as a distributed fitness function and active error analysis, were used to enhance the performance of the PGA. A PGA incorporating these mechanisms outperformed a traditional GA in key areas such as the variance of the estimated parameter and parameter-tracking ability. Finally, a physical model that explains the fundamental properties of GAs was introduced. This model (a hypercube) not only provides an excellent explanation of the searching power of GAs, but also gives users insight into how to improve and predict GA performance in most applications. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
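As a rough sketch of the estimation idea described above (a GA fitting a single compliance-like parameter to recorded response data), the following minimal example may help. It is not the thesis code: the plant model, fitness definition, and parameter range are all hypothetical placeholders.

```python
# Minimal GA sketch for one-parameter identification (hypothetical names throughout).
import random

def simulate_response(compliance, inputs):
    # Placeholder plant model; stands in for the excavator hydraulics simulation.
    return [compliance * u for u in inputs]

def fitness(compliance, inputs, recorded):
    # Negative sum of squared errors between simulated and recorded responses.
    sim = simulate_response(compliance, inputs)
    return -sum((s - r) ** 2 for s, r in zip(sim, recorded))

def run_ga(inputs, recorded, pop_size=50, generations=100,
           lo=0.0, hi=1.0, mut_sigma=0.02):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda c: fitness(c, inputs, recorded),
                        reverse=True)
        elite = ranked[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)            # pick two parents
            child = 0.5 * (a + b)                     # blend crossover
            child += random.gauss(0.0, mut_sigma)     # Gaussian mutation
            children.append(min(hi, max(lo, child)))  # clamp to the range
        pop = elite + children
    return max(pop, key=lambda c: fitness(c, inputs, recorded))
```

A parallel version along the lines of the PGA would distribute the fitness evaluations in each generation across workers, which is where the reported speedup comes from.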
462

A strategic capacity planning tool for a firm in the newsprint industry

Booth, Darcie Lee January 1990 (has links)
A strategic planning tool has been developed to help a firm in the North American newsprint industry decide whether to expand its capacity. The tool can also be used as an industry model to forecast capacity decisions under various conditions. Key features of the model are the explicit consideration of the interdependence between firms and the recognition of the lumpiness of capacity expansion. Individual firms and groups of firms are modelled, and all firms are assumed to determine their best capacity option taking into account the capacity decisions of other firms. The model uses an open-loop Nash equilibrium concept to solve the capacity expansion problem. Firms also simultaneously determine their profit-maximizing production in each year, given their capacities. Demand functions for each year are specified, and demand scenarios may be subject to uncertainty. The model was applied to the newsprint industry for the 1979 to 1983 period. The top five firms in the industry were modelled as individual firms; the next eight were modelled as two groups of four identical firms; and the behaviour of the fringe (the remaining 20% of total industry capacity) was forecast exogenously. Historical firm and industry capacities, production levels and prices were compared to model simulations under three different assumptions about firm objectives: profit maximization, market share maximization subject to a profitability constraint, and maximization of expected utility assuming exponential utility functions for all firms (with different assumptions about firms' attitudes toward risk). The constrained market share maximization hypothesis best explained observed behaviour. Multiple equilibria were often computed, and methods for addressing this problem are discussed. / Business, Sauder School of / Graduate
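To illustrate the equilibrium concept (though not the thesis model itself), here is a toy best-response iteration for a one-shot capacity game with lumpy, discrete expansion options and linear demand; every number and function in it is an invented placeholder.

```python
# Toy best-response iteration for a lumpy capacity game (illustrative only).
def profit(q_i, q_others, a=100.0, b=1.0, unit_cost=20.0):
    # Linear inverse demand P = a - b * (total capacity), constant unit cost.
    price = max(0.0, a - b * (q_i + q_others))
    return (price - unit_cost) * q_i

def nash_capacities(n_firms=5, options=(0.0, 10.0, 20.0, 30.0), iters=100):
    # Repeatedly let each firm pick its best discrete capacity given the others.
    caps = [options[0]] * n_firms
    for _ in range(iters):
        new = []
        for i in range(n_firms):
            others = sum(caps) - caps[i]
            new.append(max(options, key=lambda q: profit(q, others)))
        if new == caps:   # mutual best responses: a Nash equilibrium
            return caps
        caps = new
    return caps
```

At a fixed point, no firm can improve its profit by unilaterally changing its capacity choice, which is exactly the Nash condition; the open-loop version in the thesis commits each firm to a whole time path of expansions rather than a single choice.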
463

Structured policies for complex production and inventory models

Sun, Daning January 1990 (has links)
For inventory models minimizing the long-run average cost over an infinite horizon, the existence of optimal policies was an open question for a long time. Consider a deterministic, continuous-time inventory system satisfying the following conditions: the production network is acyclic, the joint setup cost function is monotone, the holding and backlogging cost rates are nonnegative, the demand rates are constant over time, the production rates are infinite or finite non-increasing, and backlogging may or may not be allowed. For this very general extension of the Wilson-Harris EOQ model, we prove the existence of optimal policies. Very few properties of optimal policies have been discovered since the 1950s. Restricting the above inventory model to infinite production rates, we present some new properties of optimal policies, such as the Latest Ordering Property, and explicit expressions for echelon inventories and order quantities in terms of ordering instants. An assembly production system with n facilities has a constant external demand occurring at the end facility. Production rates at each facility are finite and non-increasing along any path in the assembly network. Associated with each facility are a set-up cost and a positive echelon holding cost rate. The lot-sizing problem is formulated in terms of integer-ratio lot size policies. This formulation unifies the integer-split policies formulation of Schwarz and Schrage [34] (1975) and the integer-multiple policies formulation of Moily [20] (1986), allowing either assumption to be operative at any point in the system. A relaxed solution to this unified formulation provides a lower bound on the cost of any feasible policy. The derivation of this Lower Bound Theorem is novel and relies on the notion of path holding costs, a generalization of echelon holding costs. An optimal power-of-two lot size policy is found by an O(n³ log n) algorithm, and its cost is within 2% of the optimum in the worst case. Mitchell [18] (1987) extended Roundy's 98%-effectiveness results for one-warehouse multi-retailer inventory systems to allow backlogging. We extend this 98%-effectiveness result to series inventory systems with backlogging. The nearly-integer-ratio policies still work, and the continuous relaxation provides a lower bound on the long-run average cost of any feasible policy. The backlogging model is also reduced in O(n) time to an equivalent model without backlogging; Roundy's results [27] (1983) are then applied to find a 98%-effective backlogging policy in O(n log n) time. In an EOQ model with n products, joint setup costs provide incentives for joint replenishment. These joint setup costs may be modelled as a positive, nondecreasing, submodular set function. A grouping heuristic partitions the n products into groups, and all products in the same group are always jointly replenished. Each group is then treated as a single "aggregate product" replenished independently of the other groups, according to the EOQ formula; possible savings when several groups are simultaneously replenished are simply ignored. Our main result is that the cost of the best such grouping solution cannot be worse than 44.8% above the optimum cost. Known examples show that it can be as bad as 22.4% above the optimum cost. These results contrast with earlier results for power-of-two policies, the best of which is never worse than about 2% above the optimum cost. / Business, Sauder School of / Graduate
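For readers unfamiliar with the policy class, here is a minimal sketch of the power-of-two idea in the single-product EOQ setting, a drastic simplification of the multi-facility models treated in the thesis; the parameter names are ours.

```python
import math

def eoq_interval(setup_cost, demand_rate, holding_rate):
    # The EOQ average cost K/T + h*d*T/2 is minimized at T* = sqrt(2K/(h*d)).
    return math.sqrt(2.0 * setup_cost / (holding_rate * demand_rate))

def power_of_two_interval(t_star, base_period):
    # Round T* to base_period * 2**k; the relative cost of using T instead of
    # T* is (T/T* + T*/T)/2, so test the neighbouring powers of two and keep
    # the cheaper one.
    k = round(math.log2(t_star / base_period))
    candidates = [base_period * 2 ** (k + d) for d in (-1, 0, 1)]
    return min(candidates, key=lambda t: 0.5 * (t / t_star + t_star / t))
```

Rounding to a power of two costs at most about 6% above the optimum for a fixed base period, and about 2% when the base period is also optimized; the latter is the 98%-effectiveness referred to above.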
464

The relationship between selected market indices and individual securities using Sharpe's beta coefficient

Chen, James C. L. January 1971 (has links)
This study attempts to determine the usefulness of Sharpe's beta coefficient in explaining the relationship between selected indices and individual securities. Basically, this involved a correlation-regression analysis of the returns of randomly selected securities against those of specific market indices. Returns for both variables were calculated in the traditional way: the difference between the closing prices at the end of the previous and present quarters, plus the quarterly dividend (where applicable), divided by the initial price. This was performed for six test periods. Generally, the tests yielded negative results: the amount of variation in individual security returns explained by the beta coefficient is negligible. The study concludes by offering some explanations and suggesting modifications. / Business, Sauder School of / Graduate
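The market-model regression behind Sharpe's beta is simple enough to state concretely. Here is a small sketch using the return convention described in the abstract; variable names are ours, and a real study would also report the alpha and R² of the regression.

```python
def quarterly_return(p_prev, p_now, dividend=0.0):
    # Traditional return: price change plus dividend, over the initial price.
    return (p_now - p_prev + dividend) / p_prev

def beta(security_returns, index_returns):
    # Sharpe's beta: cov(r_s, r_m) / var(r_m), the slope of the market-model
    # regression r_s = alpha + beta * r_m + e.
    n = len(index_returns)
    mean_s = sum(security_returns) / n
    mean_m = sum(index_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(security_returns, index_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in index_returns) / n
    return cov / var_m
```

A negligible R² in this regression is precisely the negative result the study reports: beta explains little of the variation in individual security returns.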
465

Lateral inhibition and the area operator in visual pattern processing

Connor, Denis John January 1969 (has links)
The static interaction of the receptor nerves in the lateral eye of the horseshoe crab, Limulus, is called lateral inhibition. It is described by the Hartline equations. A simulator has been built to study lateral inhibition with a view to applying it in a pre-processor for a visual pattern recognition system. The activity in a lateral inhibitory receptor network is maximal in regions of non-uniform illumination. This enhancement of intensity contours has been extensively studied for the case of black and white patterns. It is shown that the level of activity near a black-white boundary provides a measure of its local geometric properties. However, the level of activity depends on the boundary orientation, and a number of methods for reducing this orientation dependence are explored. The activity in a lateral inhibitory network adjacent to a boundary can be modelled by an area operator. It is shown that the value of this operator along an intensity boundary provides a description of the boundary that is related to its intrinsic description: curvature as a function of arc length. Since the operator is maximal on an intensity boundary, this description has been called the ridge function for the boundary. A ridge function can also be obtained using a lateral inhibitory network. The properties of this function are discussed, and it is shown how ridge functions might be incorporated into a pattern recognition algorithm. A novel method for detecting the bilateral and rotational symmetries in a pattern is described. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
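The Hartline equations mentioned here have a standard recurrent form, and the sketch below iterates them to a steady state for a small 1-D receptor array. The weights, thresholds, and iteration scheme are our illustrative choices, not the simulator described in the thesis.

```python
def lateral_inhibition(excitations, k, thresholds, iters=200):
    # Hartline-Ratliff recurrent inhibition:
    #   r_p = e_p - sum over j != p of k[p][j] * max(0, r_j - t_j)
    # solved here by synchronous fixed-point iteration.
    n = len(excitations)
    r = list(excitations)
    for _ in range(iters):
        r = [excitations[p]
             - sum(k[p][j] * max(0.0, r[j] - thresholds[j])
                   for j in range(n) if j != p)
             for p in range(n)]
    return r
```

Driving such a network with a step edge in illumination produces response peaks and troughs flanking the edge (Mach bands), which is the contour enhancement the abstract describes.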
466

Theoretical and empirical relationships among data matrices: difficulty, discrimination and similarity

Tindall, Albert Douglas January 1968 (has links)
Theoretical and empirical relationships between paired comparison (PC) and same-different choice times and perceived difficulty on a cartwheel task are investigated. An ordering of pairs of stimuli by discrimination choice time predicts the subject's ordering of these pairs according to difficulty of discrimination. Two general models are developed to predict unilateral similarity proportions from PC response latencies. Though both models predict that unilateral similarity proportions are related to directional PC choices, only the ratio-of-differences model predicts the obtained standard stimulus effect. / Arts, Faculty of / Psychology, Department of / Graduate
467

The evaluation of alternative airport plans

Smith, Margaret Aileen January 1968 (has links)
In the past, the planning of airports has largely been an intuitive process, often leading to serious misallocation of resources. It is the contention of this thesis that the adoption of a more economic and integrated method of evaluating alternative airport plans could eliminate some of this mis-investment, and that the groundwork for such an evaluation process has already been done in the field of port planning. The evaluation method proposed is the use of a mathematical model of the airport's operation and of the benefit and cost interrelationships arising from the activities performed. The model can then be used to simulate the value of the benefits and costs of a number of alternative plans. The purpose of this thesis is to discuss the applicability of the port model as a tool for airport planning and to point out the ease with which it could be applied, both in terms of the modifications required and in terms of data requirements and availability. As background to the evaluation process, Chapter 2 presents some general theory and problems of economic evaluation and of the measurement of benefits and costs. Chapter 3 describes the planning processes currently used by the Department of Transport in planning Canada's airports and points out some flaws in this approach. Chapter 4 then describes the type of port model now developed, insofar as it can be used to determine interrelationships between investment, the cost to ships of using the port, the cost of port operation, and net community benefits. The calculations derived from the model can be used to determine the net present value of the benefit and cost streams arising from alternative ways of achieving a given level of port output, and thus to select the best possible combination of facilities. Chapter 5 points out the similarities and differences between port and airport operation, and hence the applicability of the port model and the modifications required to apply it to airport planning situations. The remainder of the chapter delineates the type of data required to construct and use an airport model and the availability of this data to airport planners. Finally, Chapter 6 summarizes the findings and concludes that, while it has its limitations as a terminal model, as a representation of airport operation, and as an evaluation process, the port model can be adapted relatively easily to airport planning to provide a more integrated, more economic approach to the evaluation of alternative airport plans. / Business, Sauder School of / Graduate
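The net-present-value comparison described above reduces to a short computation. Here is a minimal sketch; the plan names, cash flows, and discount rate are invented for illustration.

```python
def npv(net_benefits, discount_rate):
    # Present value of annual (benefit - cost) figures, discounted to year 0.
    return sum(x / (1.0 + discount_rate) ** t
               for t, x in enumerate(net_benefits))

# Hypothetical plans: a year-0 capital outlay followed by annual net benefits.
plans = {
    "expand_runway": [-50.0, 8.0, 9.0, 10.0, 11.0, 12.0],
    "new_terminal":  [-80.0, 5.0, 12.0, 15.0, 16.0, 17.0],
}
best = max(plans, key=lambda name: npv(plans[name], 0.08))
print(best, {p: round(npv(cf, 0.08), 1) for p, cf in plans.items()})
```

Ranking alternatives by the NPV of their benefit and cost streams, rather than by intuition, is the core of the evaluation method the thesis borrows from port planning.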
468

Energy dissipation in paper tearing as time-dependent phenomenon

Sun, Bernard Ching-Huey January 1967 (has links)
The nature of ballistic-type internal paper tear test methods has been reviewed. The kinetic energy of the tester sector is considered to be the prime contributor to paper rupture. In agreement with energy dissipation concepts and the principle of energy conservation, a mathematical model expressing tearing energy was derived, based on kinetic energy variations in paper during tearing. It is shown that this model can be used to calculate the net energy of the tester sector, which is available for tearing paper, and the residual energy; the difference between the net and residual energy, the tearing energy, is the portion expended in the rupture process. Furthermore, the model relates tearing energy to velocity and hence can be used to examine the effect of tear rate and the time-dependent properties of paper subjected to tearing stress. A method was devised for measuring the time required to tear standard samples. From an oscilloscope trace, the relationship between tear distance and time was measured and represented by a quadratic equation. From this equation, sector swing and tearing velocities were calculated for computing various energy factors and their variation at any instant of the tearing process. Results have shown that ballistic-type tear test methods are time-dependent, in that the time required to tear paper varies with the sample condition: the higher the number of plies torn simultaneously, the longer the time required to tear a sheet. The energy required to tear paper was also time-dependent, increasing with decreasing tear rate. It was found that the direct relationship between tearing strength and the number of plies torn simultaneously does not always hold, but that a constant direct relationship exists between tearing strength and tearing energy. Although the ballistic-type tear test is time-dependent, inherent specimen properties may have a profound effect on results. Test results with an Elmendorf tear tester on five paper grades varying in tearing strength from 14 to 156 g/sheet confirmed that the energy dissipation concept is adequate. / Forestry, Faculty of / Graduate
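The computation the abstract describes (fit a quadratic to the distance-time trace, differentiate for velocity, and difference the kinetic energies) can be sketched as follows. We use a linear-motion equivalent mass for simplicity; the real pendulum sector would call for the angular form ½Iω², and all names here are ours.

```python
def tear_velocity(a, b, t):
    # Tear distance fitted as s(t) = a*t**2 + b*t + c, so velocity is s'(t).
    return 2.0 * a * t + b

def tearing_energy(equiv_mass, a, b, t_start, t_end):
    # Energy expended in rupture = kinetic energy at tear onset (net energy)
    # minus kinetic energy remaining when the tear completes (residual energy).
    v0 = tear_velocity(a, b, t_start)
    v1 = tear_velocity(a, b, t_end)
    return 0.5 * equiv_mass * (v0 ** 2 - v1 ** 2)
```

Because the velocity, and hence the energy balance, is evaluated at each instant of the tear, the same fit exposes the tear-rate dependence the results describe.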
469

A geometric model of skyline thinning damage

Ormerod, David William January 1971 (has links)
In thinning, physical damage to the residual trees may result from abrasion during felling and yarding. The amount of damage is primarily a function of the stand geometry, the thinning prescription, and the felling strategy, under a given extraction system. In terms of controlling the level of damage, there are two interdependent aspects to the problem: one is to prescribe a desirable thinning that is compatible with the extraction system; the other is to engineer the extraction efficiently under the given silvicultural constraints. For skyline thinning it is assumed that a geometric model of the stand and of the extraction system provides a framework for examining potential physical damage in terms of this interdependence. Based on a three-dimensional model, skyline thinning is simulated and indices of potential damage are enumerated. For a sample stand of Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco), the potential damage is studied as a function of the prescription and of the system parameter that determines the felling directions. Three different selection prescriptions are examined: a Low, a Crown, and a Graded. The synthetic data are discussed in terms of frequency distributions and as a function of the parameter mentioned. It is demonstrated that the system is very sensitive to this parameter. While the effect of the different prescribed thinnings might be thought intuitively obvious, some enigmatic phenomena are apparent. It is proposed that such a study is a means of examining both the silvicultural and engineering aspects of the problem of physical damage in the residual stand for a skyline thinning system. It is hoped that such deliberation will provide a rational framework to determine the effect of this damage upon thinning regimes. / Forestry, Faculty of / Graduate
470

Portfolio diversification: a theoretical and empirical analysis

Crawford, Graeme Frederick January 1970 (has links)
This thesis presents a technique for analysing the relationships between the number of securities in a diversified portfolio and the portfolio's return and variance of return. It includes an analytical and descriptive presentation of the concepts and objectives of portfolio analysis in a theoretical framework, and this material is used as a vehicle to introduce an empirical analysis of the portfolio selection process. For the empirical analysis, a model is developed to simulate the selection of the optimum portfolio, similar to that derived using the quadratic programming technique of the Markowitz model. The utility function used to select the optimal portfolio at each stage of diversification is of the form suggested by Farrar. The ex post data for the empirical analysis consist of ten samples of fifteen securities selected from the Financial Post Data Bank; only common stocks are considered. The period covered is 1959 to 1969, using annual data. The results show the form of the efficient portfolio frontier under varying degrees of risk aversion. Under the assumptions of the model, the optimum portfolio for either a mildly risk-averse or an extremely risk-averse investor should consist of approximately four to six securities. / Business, Sauder School of / Graduate
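To make the selection idea concrete, here is a toy sketch using a Farrar-style utility U = E[r] - A * Var(r) with equal weights, adding securities while utility improves. The thesis itself used quadratic programming per the Markowitz model, so this greedy scheme is only an illustration, and all names are ours.

```python
def portfolio_stats(weights, means, cov):
    # Mean and variance of a portfolio with the given weights.
    mean = sum(w * m for w, m in zip(weights, means))
    var = sum(weights[i] * weights[j] * cov[i][j]
              for i in range(len(weights))
              for j in range(len(weights)))
    return mean, var

def greedy_diversify(means, cov, risk_aversion, max_n):
    # Add one security at a time, keeping equal weights, while the
    # Farrar-style utility U = mean - A * variance keeps improving.
    def utility(idx):
        w = [1.0 / len(idx)] * len(idx)
        sub_means = [means[i] for i in idx]
        sub_cov = [[cov[i][j] for j in idx] for i in idx]
        m, v = portfolio_stats(w, sub_means, sub_cov)
        return m - risk_aversion * v

    chosen, remaining = [], list(range(len(means)))
    while remaining and len(chosen) < max_n:
        best = max(remaining, key=lambda k: utility(chosen + [k]))
        if chosen and utility(chosen + [best]) <= utility(chosen):
            break  # adding another security no longer raises utility
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

With correlated returns, the marginal utility of each added security shrinks quickly, which is the qualitative pattern behind the four-to-six-security finding reported above.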
