About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Virtual process capability

Mackertich, Neal A 01 January 1998 (has links)
The quality cost of non-conformance associated with first-run production builds is typically more than five times that of later production runs. If a manufacturing organization is to gain market share and increase its profitability, it must explore methods of accelerating its learning curves through defect prevention. Current "Transition to Production" concept methodologies attempt with limited success to accelerate organizational learning through Design for Manufacturability (DFM), design phase dimensional management studies, manufacturing floor statistical methods (SPC, DOE, etc.), and various qualitative strategies. While each of these techniques is effective to some degree in reducing future nonconformances, an integrated, design-phase approach utilizing current technology is needed. "Virtual Process Capability" (VPC) is a methodology for integrating statistical process capability knowledge directly into the hardware design phase, resulting in the improved performance and reduced product costs typically associated with mature product manufacturing. The intent behind the methodology is to realistically simulate the manufacture of hardware products by understanding their underlying model equations and the statistical distributions of each contributing parameter. Once each product has been simulated and an expected percentage defective has been estimated, mathematical programming and statistical quality engineering techniques are then utilized for improvement purposes. Data from the practical application of this methodology at Raytheon Aircraft conservatively indicate that for each dollar invested, ten are saved. As a technical extension to this developed methodology, statistical insights and methods are provided as to how product and process improvement analysis is best accomplished. Included within this area of discussion is the statistical development and validation of improved measures that detect dispersion and mean effects more efficiently than traditional methods. Additionally, mathematical programming techniques are creatively employed as an improved mechanism for the optimization of nominal-the-best type problems.
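At its core, the VPC simulation described above is a Monte Carlo tolerance analysis: sample each contributing parameter from its statistical distribution, evaluate the product's model equation, and count the fraction of simulated units falling outside specification. A minimal sketch of that idea follows; the model equation, distributions, and specification limits are hypothetical illustrations, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N = 100_000  # simulated production units

# Hypothetical contributing parameters with fitted process distributions.
thickness = rng.normal(loc=2.00, scale=0.02, size=N)   # mm
width     = rng.normal(loc=10.0, scale=0.05, size=N)   # mm

# Hypothetical model equation relating the parameters to the response.
gap = width - 2.0 * thickness                          # mm

# Hypothetical specification limits on the response.
LSL, USL = 5.80, 6.20
defective = np.mean((gap < LSL) | (gap > USL))
print(f"estimated fraction defective: {defective:.4%}")
```

Once the expected percentage defective is estimated this way, the improvement step can, for instance, search over nominal settings or tolerance allocations to drive the estimate down, which is where the mathematical programming techniques mentioned above come in.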
2

Cycle decomposition, Steiner trees, and the facility layout problem

Keen Patterson, Margaret 01 January 2003 (has links)
The facility layout problem is modelled as a cycle decomposition process in which the maximum-weight clique and travelling salesman problems are utilized to extract cycles from the graph adjacency matrix. The space between cycles is triangulated so the graph is maximally planar. The adjacency graph is then systematically developed into a block plan layout. With use of the maximum-weight clique algorithm, the procedure addresses layout problems that are not 100% dense. Many examples are utilized to demonstrate the flexibility of the algorithm and the resulting adjacency graph and block plan layout drawings. The Steiner Circulation Network solution, derived from an adjacency graph solution and its dual graph, provides a minimum-cost system of hallways and connecting links for the material handling system. Using the flows between activities and departments in a layout problem, the circulation network provides the necessary link between the steps of finding the adjacency graph solution and finding a useful block plan layout. A case study demonstrates how the solution for the layout and its material handling system can be integrated. Computational results up to size n = 100 are presented along with a comparative study with a competitive algorithm.
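As a hedged illustration of the clique-extraction step: for small instances, the maximum-weight clique of a weighted adjacency matrix can be found by exhaustive search. The brute-force sketch below is a stand-in for whatever specialized algorithm the dissertation employs, and the flow matrix is hypothetical.

```python
from itertools import combinations

import numpy as np

def max_weight_clique(W, threshold=0.0):
    """Brute-force maximum-weight clique.

    W is a symmetric matrix of adjacency weights (e.g., material flows);
    an edge exists where the weight exceeds `threshold`. Returns the
    vertex set whose pairwise weights sum highest among complete
    subgraphs. Exponential time: small instances only.
    """
    n = len(W)
    best, best_w = (), 0.0
    for k in range(2, n + 1):
        for verts in combinations(range(n), k):
            pairs = list(combinations(verts, 2))
            if all(W[i][j] > threshold for i, j in pairs):
                w = sum(W[i][j] for i, j in pairs)
                if w > best_w:
                    best, best_w = verts, w
    return best, best_w

# Hypothetical flow matrix between five departments.
W = np.array([[0, 8, 3, 0, 2],
              [8, 0, 5, 6, 0],
              [3, 5, 0, 4, 0],
              [0, 6, 4, 0, 7],
              [2, 0, 0, 7, 0]], dtype=float)
print(max_weight_clique(W))  # ((0, 1, 2), 16.0)
```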
3

A unified complex network framework for environmental decision-making with applications to green logistics and electronic waste recycling

Toyasaki, Fuminori 01 January 2005 (has links)
In this dissertation, I developed a unified complex network framework for environmental decision-making. I focused on complex network systems arising in the context of green logistics, including global supply chains, and electronic waste recycling. The framework that I developed is able to handle many decision-makers at the tiers of the networks, and enables the prediction of material flows between tiers as well as prices at the tiers, along with the emissions generated and the costs and profits incurred. I first developed a theoretical framework for supply chain networks with environmental concerns in the context of decision-making in today's Information Age. I allowed different decision-makers to weight the criteria (including the environmental ones) differently. Subsequently, I generalized the original model to a global supply chain network model which included environmental criteria, electronic commerce, and risk management. I then developed an integrated reverse supply chain management framework that allows for the modeling, analysis, and computation of the material flows as well as the prices associated with the different tiers in the multitiered electronic recycling network. I also extended this model in order to deal with the environmental risk caused by hazardous material generated from the recycling processes. I assumed that the environmental risk due to hazardous material depends on the amount of residual hazardous material that is not extracted from electronic wastes by the processor, as well as on the amount of hazardous waste that is stored but not yet disposed of by the hazardous material disposer. The models and computational methods were based on the methodologies of variational inequality theory for the study of the statics (cf. Nagurney (1999)) and projected dynamical systems for the dynamics (cf. Nagurney and Zhang (1996)).
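The projected dynamical systems methodology cited above computes network equilibria by following a trajectory whose stationary points solve the governing variational inequality. A minimal Euler-discretization sketch for a problem with nonnegativity constraints (as is natural for material flows and prices) appears below; the affine cost operator is a hypothetical stand-in for the multitiered network's equilibrium conditions.

```python
import numpy as np

def project(x):
    # Projection onto the feasible set K; here the nonnegative orthant.
    return np.maximum(x, 0.0)

def solve_vi(F, x0, step=0.01, tol=1e-8, max_iter=100_000):
    """Euler scheme x_{k+1} = P_K(x_k - step * F(x_k)).

    Fixed points satisfy the variational inequality
    <F(x*), x - x*> >= 0 for all x in K.
    """
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        x_new = project(x - step * F(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical affine cost operator F(x) = A x + b (monotone: A is positive definite).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, 0.5])
print(solve_vi(lambda x: A @ x + b, x0=[1.0, 1.0]))  # ~ [0.5, 0.0]
```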
4

Left ventricular volume estimation from three-dimensional echocardiography

Shin, Il-Seop 01 January 2007 (has links)
The objective of this research is to extend the clinical utility of three-dimensional echocardiography (3DE) by developing automated means for left ventricular (LV) function estimation. This dissertation presents our work on a semi-automated algorithm that, without extensive supervision, tracks the LV boundary through the spatial and temporal sequences of two-dimensional frames that constitute a 3DE data set. The construction of the algorithm is based on a framework of factor graph representations and probability propagation. One key component is the derivation of "LV edge likelihoods" as image features that provide "soft" edge information rather than a "hard" edge map to make LV tracking more robust to frame-to-frame variations in feature sizes and intensity levels. The algorithm begins with the operator marking some highly visible landmark points along the LV boundary in a few spatially separated frames. This takes a few seconds to complete, and is the only operator input to initiate the procedure. Full boundary estimates in these initial frames are completed by spline fitting to the selected points. Using spatial continuity in the LV boundary, these estimates establish search regions for the intermediate frames, within which boundary points are specified as those having highest edge likelihood. A similar procedure, employing forward and backward tracking, is used for the temporal sequence of frames at each spatial location. LV volume as a function of time is calculated from the set of estimated boundaries using a modified version of planimetry. Our system's performance is tested on gated-rotational and real-time 3DE data from both normal and diseased hearts, obtained using Philips ultrasound systems. The results are validated by comparison to the "gold standard" of nuclear scan (Single Photon Emission Computed Tomography or SPECT) volumes for the same hearts. As a critical part of our algorithm, the development of a method for characterizing the mean square error (MSE) in LV volume estimates is presented, which serves as the quality indicator that can flag unreliable estimates. The utility of this self-verification information is also validated as part of the nuclear scan comparison studies. Preliminary results show that it does seem to distinguish between more and less reliable estimates.
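For context on the volume step: planimetry computes chamber volume by stacking the areas of traced cross-sections, commonly via the method of disks. The sketch below assumes boundary contours have already been estimated at evenly spaced parallel slices; it is an illustrative reconstruction of plain planimetry, not the modified formulation developed in the dissertation.

```python
import numpy as np

def polygon_area(xs, ys):
    """Shoelace formula for the area enclosed by a closed boundary contour."""
    return 0.5 * abs(np.dot(xs, np.roll(ys, 1)) - np.dot(ys, np.roll(xs, 1)))

def lv_volume(contours, slice_thickness):
    """Method-of-disks volume: sum of cross-sectional areas times slice spacing.

    contours: list of (xs, ys) boundary estimates for parallel slices, in cm.
    slice_thickness: spacing between slices, in cm. Returns volume in mL.
    """
    return slice_thickness * sum(polygon_area(xs, ys) for xs, ys in contours)

# Hypothetical example: ten circular cross-sections of radius 2 cm.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
contours = [(2.0 * np.cos(theta), 2.0 * np.sin(theta)) for _ in range(10)]
print(f"volume ~ {lv_volume(contours, slice_thickness=0.5):.1f} mL")  # ~62.8
```

Evaluating this at each time point of the temporal frame sequence yields the LV volume-versus-time curve used for function estimation.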
5

Stochastic facility layout planning and traffic flow network design

Li, Wu-ji 01 January 1993 (has links)
This dissertation is concerned with a facility layout and traffic flow network design problem where the focus is on assigning activities to locations in order to minimize the congestion within a circulation system. The dissertation first presents a series of mathematical models of the facility layout problem in which stochastic congestion within the movement system is captured. The models can be used to assist the design and placement of departments or activities within a facility or surrounding environment where the congestion of pedestrian or customer traffic flow is of major concern. Numerical examples are provided to illustrate the differences between the various formulations. Second, a new heuristic algorithm, STEP (Sample Test pair-wise Exchange Procedure), is developed to solve these complex problems. With its straightforward approach, the algorithm can solve large-scale Quadratic and Stochastic Quadratic Assignment Problems with efficient computing times and excellent solution performance. Computational experience for solving many test examples is presented. Extensive work has been carried out on selecting and evaluating appropriate facility network design configurations. This is done by evaluating the performance of alternative network designs such as star, grid, and ring topologies in terms of customers' sojourn time in the system. Finally, the dissertation concludes with a discussion of open problems and directions for future research.
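Pairwise exchange is the classical local-search move for quadratic assignment problems: starting from an assignment of activities to locations, repeatedly swap the pair of activities that reduces the total flow-times-distance cost. The sketch below shows plain improvement-based pairwise exchange on a hypothetical instance; it omits the sampling-and-testing refinements that distinguish STEP, whose details are in the dissertation.

```python
import numpy as np

def qap_cost(perm, F, D):
    """Total cost: sum over pairs of flow(i, j) * distance(loc(i), loc(j))."""
    return float(np.sum(F * D[np.ix_(perm, perm)]))

def pairwise_exchange(F, D):
    """Improvement-based pairwise exchange for the QAP (local search)."""
    n = len(F)
    perm = np.arange(n)  # initial assignment: activity i at location i
    improved = True
    while improved:
        improved = False
        base = qap_cost(perm, F, D)
        for i in range(n):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]   # trial swap
                trial = qap_cost(perm, F, D)
                if trial < base:
                    base, improved = trial, True      # keep the swap
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # undo it
    return perm, base

# Hypothetical 4-department instance: flows F and location distances D.
F = np.array([[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]])
D = np.array([[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]])
print(pairwise_exchange(F, D))
```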
6

Automated techniques for formal verification of SoCs

Sinha, Roopak January 2009 (has links)
System-on-a-chip (SoC) designs have gained immense popularity as they provide designers with the ability to integrate all components (called IPs) of an application-specific computer system onto a single chip. However, one of the main bottlenecks of the SoC design cycle is the validation of complex designs. As system size grows, validation time increases beyond manageable limits. It is desirable that design inconsistencies are found and fixed early in the design process, as validation overheads are significantly higher after IPs are integrated. This thesis presents a range of techniques for the automatic verification and design of SoCs that aim to reduce post-integration validation costs. Firstly, a local module checking algorithm, a practical implementation of module checking, is presented. This technique allows for the comprehensive verification of IPs such that they guarantee the satisfaction of critical specifications regardless of the SoC they are used in. Local module checking is shown to validate IPs in much less time on average than global module checking, and can help in handling many important validation tasks well before the integration stage. Next, a number of protocol conversion techniques that assist in the composition of IPs with incompatible protocols are presented. The inconsistencies between IP protocols, called mismatches, are bridged by the automatic generation of some extra glue-logic, called a converter. Converters generated by the proposed techniques can handle control, datawidth and clock mismatches between multiple IPs in a unified manner. These approaches ensure that the integration of IPs is correct-by-construction, such that the final system is guaranteed to satisfy key specifications without the need for further validation. Finally, a technique for automatic IP reuse using forced simulation is presented, which involves automatically generating an adaptor that guides an IP such that it satisfies desired specifications. The proposed technique can generate adaptors in many cases where existing IP reuse techniques fail. As it is guaranteed that reused IPs satisfy desired specifications, post-integration validation costs are significantly reduced. For each proposed technique, a comprehensive set of results is presented that highlights the significance of the solution. It is noted that the proposed approaches can help automate SoC design and achieve significant savings in post-integration validation costs.
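To give a flavour of what a generated converter does, consider the simplest datawidth mismatch: a producer IP emitting 8-bit words and a consumer IP expecting 16-bit words. The hand-written glue-logic sketch below is a hypothetical toy; the thesis's techniques generate such converters automatically and also handle control and clock mismatches, which this example does not.

```python
def width_converter(byte_stream):
    """Glue logic bridging a datawidth mismatch: packs pairs of 8-bit
    words from a producer into the 16-bit words a consumer expects."""
    held = None
    for byte in byte_stream:
        if held is None:
            held = byte                 # buffer the first byte of a pair
        else:
            yield (held << 8) | byte    # emit the assembled 16-bit word
            held = None

# Hypothetical producer output, consumed as 16-bit words.
print([hex(w) for w in width_converter([0x12, 0x34, 0xAB, 0xCD])])
# ['0x1234', '0xabcd']
```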
7

A study of the flow properties of New Zealand wood pulp suspensions

Duffy, Geoffrey G. January 1972 (has links)
One of the most important process operations in the pulp and paper industry is the transport of pulp in pipe lines. Because pipe friction losses are much higher than with water under comparable conditions, accurate design correlations for each pulp are important to the industry. The purpose of this investigation was to design and build a flow rig suitable for investigating a wide range of pulp conditions, to obtain pipe friction loss data for New Zealand pulps, and to produce design correlations and procedures for the industry. This thesis is therefore concerned primarily with describing the experimental equipment and procedures, presenting pipe friction loss data for a variety of New Zealand pulps, including a design correlation for them, and developing design methods for computing friction losses. It includes, in addition, data on drag reduction observed at high velocities of flow, and a discussion of flow mechanisms in each regime of flow. The equipment was designed to produce friction loss data from three pipe diameters simultaneously for each consistency of pulp. Flow rate was controlled without throttling the flow. Pipe friction loss data are presented for five Kraft pulps and one neutral sulphite semi-chemical pulp. Data were obtained from 1, 2, 3 and 4 in. diameter PVC pipes for a wide range of consistencies and flow rates up to 0.8 ft³/sec. Standard Lampen mill evaluations on hand sheets made from the pulps are presented, as well as data on the characteristics of the fibres. The Kraft pulps exhibited the characteristic maxima and minima, but the semi-chemical pulp did not exhibit these turning points. For Kraft pulps, head losses before the respective maxima were increased by refining the pulp and using rough pipe, and decreased by adding short-fibre Tawa and by drying and reslushing the pulp. In comparison with the maxima for the unbeaten Kraft pulp, the maxima of the head loss curves for all Kraft pulps were shifted to lower velocities by the above-mentioned operations. This would reduce the friction loss in many practical cases. In particular, rough pipe lowers the magnitude of friction loss in this regime, and can therefore yield a considerable economic advantage. A single design correlation for Kraft pulps is presented for the regime of flow before the maxima in the head loss curves. The limits of the correlation are given. Friction losses of New Zealand pulps were found to be lower than those previously reported in the literature. Two methods of design are presented for the regimes at velocities above the maxima in the head loss curves. A procedure is suggested for pulp and paper mills to obtain their own limits for the design correlation and to verify the correlation proposed in this investigation for their own pulps. A design correlation for the Tawa NSSC pulp is also presented. Mechanisms of flow are discussed for Kraft pulps and a semi-chemical pulp. Visual observations in an artificially roughened pipe for the regime of flow before the maxima of the head loss curves have confirmed fibre-wall contact in this regime. Data obtained at the first sign of permanent plug disruption have been correlated with data at the onset of drag reduction. Fully developed turbulence was found to occur at the maximum level of drag reduction. Some velocity profiles are reported for the transition regime using a modified annular-purge probe. In addition, the disruptive shear stress of fibre networks has been correlated by three different methods.
Data for the onset of drag reduction are presented and compared with data previously obtained from large diameter pipes from other investigations. This correlation is used as a method for designing piping systems at high flow rates.
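Design correlations of the kind referred to above are commonly expressed as a power law in velocity, consistency and pipe diameter and fitted by least squares in log space. The sketch below demonstrates that generic fitting procedure on synthetic data; the functional form, exponents and data are assumptions for illustration, not the correlation derived in this thesis.

```python
import numpy as np

def fit_power_law(dH, V, C, D):
    """Fit dH = K * V**a * C**b * D**c by linear least squares on logs."""
    X = np.column_stack([np.ones_like(V), np.log(V), np.log(C), np.log(D)])
    coef, *_ = np.linalg.lstsq(X, np.log(dH), rcond=None)
    logK, a, b, c = coef
    return np.exp(logK), a, b, c

# Synthetic head-loss data generated from an assumed correlation plus noise.
rng = np.random.default_rng(0)
V = rng.uniform(1.0, 10.0, 200)    # velocity, ft/s
C = rng.uniform(2.0, 4.0, 200)     # consistency, %
D = rng.uniform(1.0, 4.0, 200)     # pipe diameter, in.
dH = 5.0 * V**1.2 * C**2.3 * D**-1.1 * rng.lognormal(0.0, 0.05, 200)

print(fit_power_law(dH, V, C, D))  # recovers roughly (5.0, 1.2, 2.3, -1.1)
```

In spirit, the suggested mill procedure involves fitting such a correlation to a mill's own pulp data and verifying it against the one proposed in this investigation.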
8

The plug flow of paper pulp suspensions

Moller, Klaus January 1972 (has links)
The investigation reported in this thesis is part of a programme of research concerning the flow behaviour of paper pulp suspensions commenced at the University of Auckland in 1969. A primary aim of the research was to supply the industry with reliable pipe friction data for the pulps manufactured in New Zealand mills. Secondly, it was hoped to increase the fundamental understanding of the mechanisms of flow of the suspensions in pipes and so devise a more satisfactory method of correlation than the one used at present. Pipe friction data were obtained for two N.Z. groundwood pulps, two N.Z. Kraft pulps and one imported Kraft pulp in 1, 2, 3 and 4 inch pipes for a wide range of consistencies and velocities. The data were of the same form as previously reported in the literature, but for a given set of conditions the friction losses were lower for the N.Z. pulps. For Kraft pulps the curves of head loss versus velocity exhibited the usual maxima and minima, but for groundwoods the decrease in head loss from the maximum to the minimum and the subsequent rise were replaced by an approximately level portion. The data in the regime before the maxima in the head loss curves for Kraft pulps were correlated to allow extrapolation to the larger pipes used in the paper mill. This regime incorporates the majority of practical flow situations for consistencies over two per cent. The limits of the regime were approximately defined by values of the dimensionless friction factor. The correlation method used was a slight modification of that employed by previous authors. The data for groundwood pulps were correlated in a similar way. The head losses predicted by the new correlations were consistently lower than those calculated from previous equations. Observation of the flow in perspex pipes confirmed the mechanisms of flow proposed by some previous authors, but disagreed with the mechanisms proposed by others. The mechanisms of flow of groundwood pulps were found to be essentially similar to those of Kraft pulps except that the groundwoods exhibited a plug cleavage phenomenon at very low velocities. The different shapes of friction curve for the two types of pulp were attributed directly to their macroscopic properties. A flow model was developed on the basis of the observed flow behaviour in pipes in which the suspensions move as a fibre/water plug surrounded by a sheared water annulus. The model assumed that the annulus formed as a result of the action of the hydrodynamic shear stress on the fibre network comprising the plug. The analysis resulted in an expression relating the average velocity and the longitudinal pressure gradient in the pipe and also incorporated the pipe radius, the viscosity of the suspending medium μ and a pseudo shear modulus for the fibre network G. The plug flow model was found to apply to the data in the regime before the maximum in the head loss curve. The relation between the pressure gradient and the pipe diameter as predicted by the model was slightly erroneous for some pulps, although it was the same as that in the standard empirical correlation used in design by the industry. This led to the conclusion that the deflection of fibre ends on the plug surface also contributed to the formation of the annulus, as proposed by previous authors. The relative importance of the two mechanisms of annulus formation was used to explain the occurrence of the maxima and minima in the head loss curves for chemical pulps. 
The plug flow model was found to be closely related to both the direct correlation method used in the past and to the standard pseudoplastic model for non-Newtonian pipe flow. The model was also applied to analogous flow in a rotational viscometer. The values of the pseudo shear modulus G calculated from the rotational viscometry data were the same as those calculated from pipe flow data under certain conditions. However, limitations in the equipment and the effect of gravitational settling restricted the results to a narrow range. The behaviour of the pulp suspensions in batch settling tests varied markedly from pulp to pulp. There was a high correlation between the pseudo shear modulus G obtained from pipe flow data and the final height of the suspension in a settling test. Likewise there was a relationship between the effective viscosity of the suspending medium μ (as modified by the proportion of fines in the pulp) and the initial settling rate in a batch test. This suggested that a simple and accurate method of determining pipe friction data from batch settling test data is possible. Settling tests also showed that air content and the presence of acidic and basic ions, but not the viscosity of the suspending medium, increased the strength of fibre networks. A further correlation method to incorporate all flow regimes was suggested from the results of the present investigation and from indications in the literature that fibre networks behave like Bingham plastics when they are sheared.
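The closing observation that sheared fibre networks behave like Bingham plastics invites a worked illustration: for a Bingham plastic in laminar pipe flow, the textbook Buckingham-Reiner equation relates flow rate to pressure gradient through a yield stress (playing a role analogous to the network strength) and a plastic viscosity. This is a standard relation evaluated for hypothetical parameter values, not the plug flow model developed in the thesis.

```python
import math

def buckingham_reiner(R, dPdL, tau_y, mu):
    """Volumetric flow rate of a Bingham plastic in laminar pipe flow.

    R: pipe radius (m); dPdL: pressure gradient (Pa/m);
    tau_y: yield stress (Pa); mu: plastic viscosity (Pa*s).
    """
    tau_w = R * dPdL / 2.0          # shear stress at the pipe wall
    if tau_w <= tau_y:
        return 0.0                  # wall stress below yield: no flow
    ratio = tau_y / tau_w
    return (math.pi * R**4 * dPdL / (8.0 * mu)) * (
        1.0 - (4.0 / 3.0) * ratio + (1.0 / 3.0) * ratio**4)

# Hypothetical values: 50 mm pipe, 2 kPa/m gradient, 5 Pa yield stress.
Q = buckingham_reiner(R=0.025, dPdL=2000.0, tau_y=5.0, mu=0.1)
print(f"Q = {Q * 1000:.2f} L/s")   # ~2.25 L/s
```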
9

Bridge deck analysis

Buckle, I. G. January 1968 (has links)
In this thesis the structural analysis of two basic types of bridge deck systems is discussed: I. the multibeam bridge deck; II. the skewed anisotropic bridge deck. The major difficulty in the analysis of I, the multibeam deck, arises from its lack of transverse bending stiffness; load distribution occurs by shear transference at interlocking shear keys. An analysis method developed from transfer matrix theory is proposed and shown to be satisfactory for such a structure. Model studies on a quarter-scale multibeam bridge deck are described, together with field tests on the prototype decks, the southern motorway bridges crossing Slippery Creek. Agreement between theory, model studies and field tests is illustrated. The satisfactory analysis of II, the skewed anisotropic deck, is complicated by its anisotropic elastic properties and skewed geometry. An analysis procedure is introduced which is an extension of the finite element technique already established in other plate bending and plane stress problems. Using the matrix displacement method and finite element discretization, the procedure has been programmed for solution by digital computer. Comparison of the computed displacements with those obtained by experiment on skewed isotropic and anisotropic steel plates is given. The finite element method is seen to be a powerful analytical tool, particularly because of its ability to handle elastic anisotropy and arbitrary geometric shapes.
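As a hedged sketch of the transfer matrix idea underlying the multibeam analysis: the state of an Euler-Bernoulli beam (deflection, slope, moment, shear) propagates across a uniform segment through a field matrix, and successive segment matrices multiply together to relate one end of a member to the other. The matrix below uses one common sign convention and is purely illustrative; the thesis's formulation, which must also handle the shear-key coupling between adjacent beams, is more elaborate.

```python
import numpy as np

def field_matrix(L, EI):
    """Transfer (field) matrix for a uniform Euler-Bernoulli beam segment.

    State vector: [deflection w, slope theta, moment M, shear V].
    Sign conventions vary between texts; this is one common choice.
    """
    return np.array([
        [1.0, L, L**2 / (2 * EI), L**3 / (6 * EI)],
        [0.0, 1.0, L / EI, L**2 / (2 * EI)],
        [0.0, 0.0, 1.0, L],
        [0.0, 0.0, 0.0, 1.0],
    ])

# Three 2 m segments of a beam with hypothetical stiffness EI = 5e6 N*m^2.
segments = [field_matrix(L=2.0, EI=5.0e6) for _ in range(3)]
T = np.linalg.multi_dot(segments[::-1])   # leftmost segment applied first

state_left = np.array([0.0, 0.0, 0.0, 1.0e3])  # hypothetical end state
print(T @ state_left)  # state carried to the far end of the member
```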
10

Hydroelastic excitation of cylinders.

Small, Arthur Francis, 1946- January 1971 (has links)
The transverse vibration of a bluff body in a steady fluid flow is a phenomenon that has been observed and discussed through the ages. Named after the Greek god of wind, Aeolus, the Aeolian tones, such as are emitted by the wind in telephone or power lines, were known to the Greeks, who produced musical sounds from an Aeolian harp by hanging it in a current of wind. The first recorded association of the transverse vibration with a periodic wake pattern was made by Leonardo da Vinci, who observed and sketched the process of alternate periodic vortex shedding from a bluff body to form the staggered vortex trail in its wake. It is unlikely that prior to the nineteenth century many structural failures occurred due to hydrodynamic excitation. Wood, stone and brick were the main construction materials, and the elementary design methods used were very conservative, ensuring that the structure had high frequency, large mass and a large damping factor. A very high flow velocity was therefore needed to initiate structural oscillations by vortex shedding and, if initiated, these structural oscillations would usually be quickly damped out. During the nineteenth century the rapid advancement in the art of civil engineering design and the introduction and development of concrete and steel as construction materials led to the design of streamlined structures with more economical dimensions and consequently lower frequencies, smaller masses and smaller damping factors. Although design codes made a reasonable allowance for static loadings, dynamic loadings caused by earthquakes and hydrodynamic excitation were either ignored or underestimated.
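The reasoning in the final paragraphs can be made concrete with the standard Strouhal relation: vortices shed from a cylinder of diameter D in a flow of velocity U at frequency f_s = St * U / D, and oscillations lock in when f_s approaches the structure's natural frequency f_n, i.e. near the critical velocity U ≈ f_n * D / St. The sketch below evaluates this for hypothetical values, with St ≈ 0.2, the usual figure for a circular cylinder at subcritical Reynolds numbers.

```python
def critical_velocity(f_n, D, St=0.2):
    """Flow velocity at which the vortex shedding frequency matches the
    structure's natural frequency (onset of lock-in): St * U / D = f_n."""
    return f_n * D / St

# Hypothetical members of equal diameter: a stiff, heavy, well-damped one
# versus a slender modern one with a much lower natural frequency.
print(critical_velocity(f_n=20.0, D=0.3))  # 30.0 m/s (rarely reached)
print(critical_velocity(f_n=1.0, D=0.3))   # 1.5 m/s (easily reached)
```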
