Log breakdown by sawing can be viewed as a multi-phase process that converts logs into boards through a series of cutting operations. In the primary phase, logs are sawn into slabs of wood known as flitches or cants. These are further processed by secondary operations that resaw, edge (cut lengthwise), and trim (cut widthwise) the raw material, yielding board products whose value is influenced by their dimensions and quality (as indicated by a grade). Board grade is in turn determined by the number, type, size, and location of defects. Owing to its biological origins, each log, and each subsequent board, is unique. Furthermore, as each sawmill, and each processing centre within a mill, has a unique configuration, the problem of determining how each log entering a mill should be sawn is very complex. Effective computer simulation of log breakdown processes must therefore entail detailed descriptions of both the geometry and the quality of individual logs, together with appropriate strategies at each breakdown phase. In this thesis, models for emulating log breakdown are developed in conjunction with an existing sawing simulation system that requires, as input, detailed three-dimensional descriptions of both internal and external log characteristics. Models based on heuristic and enumerative procedures, and models based on the principles of dynamic programming (DP), are formulated, encoded, and compared. Log breakdown phases are considered both independently and in a combined, integrated approach, working backwards from the board product through to the primary log breakdown phase. This approach permits methodology developed for the later processes to be embedded within the primary phase, allowing the determination of a global rather than a local solution to the log breakdown problem, whose objective is to achieve the highest possible solution quality in the minimum possible time.
Simulation results indicate that solution quality and processing speed are influenced by both the solution methodology and the degree of data complexity. When the structure of either factor is simplified, solutions are generated more rapidly, but with an accompanying reduction in solution quality. A promising compromise that combines DP techniques with mathematical functions based on a subset of the original data is presented.
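As an illustration of the dynamic-programming principle the abstract refers to (building an optimal overall cutting solution from optimal solutions to its sub-problems), here is a minimal one-dimensional sketch: choosing where to cross-cut a board to maximise total piece value. All lengths and prices below are invented for the example; the thesis itself works with detailed three-dimensional log geometry and grade data, not this simplified model.

```python
# Hypothetical 1-D cutting sketch of the DP idea: the best value for a
# board of length n is the best choice of first piece plus the best
# value of the remainder, i.e. best[n] = max(value + best[n - piece]).

def best_cut_value(length, piece_values):
    """Maximum total value obtainable from a board of `length` units,
    given a dict mapping piece length -> piece value (both illustrative)."""
    best = [0] * (length + 1)          # best[n]: optimal value for length n
    for n in range(1, length + 1):
        for piece_len, value in piece_values.items():
            if piece_len <= n:
                best[n] = max(best[n], value + best[n - piece_len])
    return best[length]

# Made-up prices for pieces of length 2, 3, and 5 units:
prices = {2: 3, 3: 5, 5: 9}
print(best_cut_value(10, prices))  # -> 18 (two length-5 pieces)
```

The "working backwards" strategy described above corresponds to the same principle applied across phases: once the optimal value of every downstream (secondary) sub-problem is known, the primary breakdown decision can select the cut that maximises the global, rather than local, value.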
Identifier | oai:union.ndltd.org:ADTP/274425 |
Date | January 1997 |
Creators | Todoroki, Christine Louisa |
Publisher | ResearchSpace@Auckland |
Source Sets | Australasian Digital Theses Program |
Language | English |
Detected Language | English |
Source | http://wwwlib.umi.com/dissertations/fullcit/9913695 |
Rights | Subscription resource available via Digital Dissertations only. Items in ResearchSpace are protected by copyright, with all rights reserved, unless otherwise indicated., http://researchspace.auckland.ac.nz/docs/uoa-docs/rights.htm, Copyright: The author |