261

The De Saint-Venant equations in curved channels

Nalder, Guinevere Vivianne January 1998 (has links)
After introducing the subject of curvilinear flow, particularly in the context of meandering natural channels, this thesis describes the three conventional models for unsteady flow in open channels, namely kinematic, diffusion and dynamic. These descriptions are in terms of the straight-channel de Saint-Venant equations. The discussion also considers some aspects of the diffusion model that raise questions about the appropriateness of the usual engineering approach to this model. Because, to date, these models have treated curvature cursorily, if at all, they are then expanded to incorporate curvature in a more systematic manner. This is done by deriving the de Saint-Venant equations in terms of curvilinear coordinates. The models are then presented in terms of the curvilinear mass-conservation equation and various forms of the curvilinear momentum equation. The new models are found to be expressed by equations of the form 'linear model + curvilinear correction', thus allowing the engineer to estimate the size of any curvature effect. The derived dynamic model is compared with a laboratory study, and the results indicate that the new curvilinear model is a reasonable description of dam-break flow. Subsequent calculations of the celerity of the dynamic wave, based on field data, illustrate how large the corrections can be.
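For reference, a standard straight-channel form of the de Saint-Venant equations that such models start from (written here in terms of cross-sectional area A and discharge Q; the notation used in the thesis itself may differ) is:

```latex
% Straight-channel de Saint-Venant equations (standard form; thesis notation may differ)
\begin{aligned}
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} &= 0
  && \text{(mass conservation)} \\
\frac{\partial Q}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A}\right)
  + gA\,\frac{\partial h}{\partial x} &= gA\,(S_{0} - S_{f})
  && \text{(momentum)}
\end{aligned}
```

Neglecting the two inertia terms of the momentum equation gives the diffusion model, while retaining only the balance between bed slope S_0 and friction slope S_f gives the kinematic model; the full set is the dynamic model referred to above.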
262

Artificial intelligence in a hybrid system with an industrial application

Cammell, Geoffrey Martin January 1994 (has links)
This dissertation introduces the Artificial Computer Expert (ACE), a system designed to help personnel without specialist computer skills use complex computer software, through the use of artificial intelligence in the creation, selection and execution of programs. The system comprises three parts: 1. A graphical interface and conceptual framework which allows an expert to define the structure of his knowledge relating to his field. 2. A compiler to work through such a structure, forming partial solution paths which indicate the relationships that exist in the structure. 3. An interpreter to run through the solution process, joining together the partial solution paths and creating instances of data files as required in order to reach the overall goal. The ACE system is presented in the context of an industrial application, demonstrating how it may be used to form sawmill cutting patterns (which indicate how lumber is to be milled from a set of logs). This application belongs to a class of scheduling problems known as 'cutting stock problems', which for anything other than small or simple cases typically require the presence of an on-site scheduling expert. The application developed produces acceptable cutting patterns without the need for such a scheduling expert, using the same software tools currently used by mill management to plan their production. / Whole document restricted, but available by request; use the feedback form to request access.
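As a purely illustrative sketch of the interpreter's role (the abstract does not give ACE's actual data structures, so every name below is hypothetical), partial solution paths can be thought of as steps that turn available data files into new ones, joined greedily until the overall goal is reachable:

```python
# Hypothetical illustration only - not the ACE implementation.
# A "partial solution path" is modelled as (required inputs, produced outputs, program label);
# the interpreter joins paths whose inputs are already available until the goal is produced.

PARTIAL_PATHS = [
    ({"raw_log_data"}, {"log_classes"}, "classify_logs"),
    ({"log_classes", "order_book"}, {"cutting_pattern"}, "generate_pattern"),
    ({"cutting_pattern"}, {"mill_schedule"}, "schedule_mill"),
]

def interpret(available, goal, paths):
    """Greedily chain partial solution paths until `goal` can be produced."""
    facts, plan, remaining = set(available), [], list(paths)
    progress = True
    while goal not in facts and progress:
        progress = False
        for step in list(remaining):
            inputs, outputs, program = step
            if inputs <= facts:          # all required data files exist
                plan.append(program)     # record (or execute) this program
                facts |= outputs         # its outputs become available data
                remaining.remove(step)
                progress = True
    return plan if goal in facts else None

print(interpret({"raw_log_data", "order_book"}, "mill_schedule", PARTIAL_PATHS))
# -> ['classify_logs', 'generate_pattern', 'schedule_mill']
```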
263

An investigation on diffuser augmented wind turbine design

Phillips, Derek Grant January 2003 (has links)
Diffuser Augmented Wind Turbines (DAWTs) are one of many concepts to have been proposed to reduce the cost of renewable energy. As the most commercially viable, they have been the focus of numerous theoretical, computational, and experimental investigations. Although these studies intimated that a diffuser can augment the power output of a wind turbine, the extent of this power increase, or augmentation, the factors influencing DAWT performance, the optimal geometric form and the economic benefit remained unanswered. It is these issues that have been addressed in this investigation. In reviewing historic investigations of DAWTs, it has been identified that excessive wind tunnel blockage, inappropriate measurement technique, varied definitions of augmentation, and the inclusion of predicted performance based on incorrect assumptions have in general led to the overstatement of DAWT performance in those studies. In reassessing the performance of the most advanced of those DAWT designs, Grumman's DAWT 45, it has been calculated that the actual performance figures for the 2.62 exit-area-ratio and 0.488 length-to-diameter-ratio DAWT were an available augmentation of 2.02, a shaft augmentation of 0.64 and a diffuser efficiency of 56%. By contrast, the multi-slotted DAWT developed in this investigation achieves a shaft augmentation of 1.38 with a diffuser of exit-area-ratio only 2.22 and overall length-to-diameter ratio 0.35. This performance improvement has been obtained by gaining an understanding of both the flow characteristics of DAWTs and the geometric influences on them. More specifically it has been shown that: the velocity across the blade-plane is greater than the free-stream velocity and increases towards the rotor periphery; the rotor thrust, or disc loading, affects diffuser performance by altering the flow behaviour through it; and DAWTs are able to maintain an exit pressure coefficient more negative than that attainable by a conventional bare turbine. The net result is that DAWTs encourage a greater overall mass-flow, as well as extracting more energy per unit of mass-flow passing through the blade-plane, than a conventional bare turbine. The major drivers of DAWT performance have been shown to be the ability of the design to maximise diffuser efficiency and to produce the most sub-atmospheric exit pressure possible. Parametric investigation of the various DAWT geometric components has shown that peak performance is obtained when: the external flow is directed radially outward by maximising the included angle of the external surface in conjunction with a radially orientated exit flap; boundary-layer control is applied to a trumpet-shaped diffuser via a pressurised cavity within the double-skin design of the multi-slotted DAWT; the exit-area-ratio is of the order of 2.22; and an inlet contraction is employed with inlet-area-ratio matched to the mass-flow passing through the DAWT under peak operating conditions. To translate the available augmentation into shaft power, a modified blade element method has been developed using an empirically derived axial velocity equation. The resulting blade designs, whose efficiencies reached 77% (twice those of Grumman), highlight the accuracy of the modified blade element method in calculating the flow conditions at the blade-plane of the multi-slotted DAWT. It was also noted that the rotor efficiencies remain below 'best practice' and therefore offer the potential for further increases in shaft augmentation. However, in order to achieve such gains, a number of limitations present in the current method must be addressed. In assessing the likely commercial suitability of the multi-slotted DAWT, a number of real-world influences have been examined. Reynolds number, ground proximity and wind shear were shown to have little, if any, effect on DAWT performance. Turbulence in the onset flow, on the other hand, had the beneficial effect of reducing separation within the diffuser. Finally, DAWT performance was assessed under yaw misalignment, where the multi-slotted DAWT was shown to perform favourably in comparison with a conventional bare turbine. The major drawback identified in the DAWT concept by this investigation was its drag loading and the fact that drag and augmentation are interdependent. The result is that the cost of a conventional DAWT is dictated by the necessity to withstand an extreme wind event, despite the fact that augmentation is only required up to the rated wind speed. The overall conclusion drawn was that, in order to optimise a DAWT design economically and therefore make the DAWT concept a commercial reality, a creative solution that minimises drag under an extreme wind event would be required.
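Since the abstract identifies varied definitions of augmentation as a source of overstated performance in earlier work, it may help to state one common convention (a generic definition, not necessarily the exact one adopted in the thesis): augmentation is the ratio of the DAWT's power coefficient, referenced to blade-plane area, to that of a bare turbine of the same rotor area at the same wind speed,

```latex
% Generic augmentation definition (conventions vary; not necessarily the thesis's exact form)
r \;=\; \frac{C_{P,\mathrm{DAWT}}}{C_{P,\mathrm{bare}}},
\qquad
C_{P} \;=\; \frac{P_{\mathrm{shaft}}}{\tfrac{1}{2}\,\rho\,A_{r}\,U_{\infty}^{3}},
```

where A_r is the blade-plane area and U_infinity the free-stream wind speed; taking the ideal Betz value C_P = 16/27 for the bare turbine gives one of the stricter baselines. On the abstract's usage, "available augmentation" would then describe the potential offered by the flow field, while "shaft augmentation" is what the rotor actually delivers after blade losses.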
264

Bridge abutment scour countermeasures

Van Ballegooy, Sjoerd January 2005 (has links)
The use of riprap and cable-tied blocks as scour countermeasures at bridge abutments is investigated. Riprap is the most common armouring scour protection method used at bridge abutments and approach embankments. Despite the widespread use of riprap protection, the guidelines for its use at bridge abutments are based on limited research. The aim of the experimental study was to determine the requirements of riprap and cable-tied block apron countermeasures to protect bridge abutments from scour damage, and to produce design guidelines for their use. The two types of bridge abutments used in the experimental study were a spill-through abutment situated on the floodplain of a compound channel, and a wing-wall abutment sited at the edge of the main channel. The spill-through abutment experiments were run under clear-water conditions, and the variations in the scour hole geometry were measured for different abutment and compound channel geometries, apron widths and apron types. The wing-wall abutment experiments were run under live-bed conditions, and the settled apron geometries were measured for different flow depths, flow velocities, apron widths, apron types and apron placement levels. The flow fields around the abutments were also measured for both abutment types. The clear-water spill-through abutment results show that the protection aprons do not significantly reduce the scour depth at abutments, but instead deflect the scour hole further away from the abutment, protecting it from scour failure. The experiments also show that cable-tied block aprons allow the scour hole to form much closer to the abutment compared to equivalent riprap aprons. Equations were developed to predict the scour hole position and size, and the minimum apron extent required to prevent the scour hole from undermining the abutment. For the live-bed wing-wall abutment experiments, the troughs of the propagating bed-forms undermined the outer edges of the aprons, causing them to settle. Equations were developed to predict the settled apron geometry at the equilibrium scour conditions. The predicted scour hole depth and position for clear-water scour conditions, or the predicted apron settlement geometry for live-bed scour conditions can be used in a geotechnical stability analysis of the abutment. The geotechnical stability analysis forms the basis of the abutment scour countermeasure design procedure, which was developed from the experimental study. Further experimental work is required to increase the robustness of the bridge abutment scour countermeasure design procedure and make it applicable to a wider range of situations.
265

Nonlinear structural analysis using strut-and-tie models

To, Nicholas Hin Tai January 2005 (has links)
The increasing popularity of the strut-and-tie methodology among research communities and practising engineers is due to its rational analytical approach and its superiority over the conventionally employed empirical methods for analysing disturbed regions in structural systems. Nevertheless, this analysis methodology is not used as a routine procedure in design offices, primarily because of the perceived ambiguity and complexity involved in appropriate model formulation. In addition, until recently application of the strut-and-tie methodology has been limited to the prediction of strength, with utilisation of this modelling technique to capture nonlinear structural deformation being rather minimal [ACI Bibliography (1997)]. The research project reported herein represents an original contribution to the development of the strut-and-tie methodology by providing a systematic approach for applying this modelling technique to nonlinear structural concrete analyses. The study proposes an originally developed computer-based strut-and-tie model formulation procedure that permits prediction of the nonlinear monotonic and cyclic response of structural systems with distinct reinforcement details. The procedure presented in this thesis is a refined version of that reported previously [To et al. (2001 & 2002b)], and the accuracy of the analytical modelling is verified using experimental data. Several issues pertaining to model formulation are thoroughly investigated. These issues include the strategy of model formulation for Bernoulli (or beam) and disturbed regions of structural systems, the satisfactory positioning of model elements, the appropriate stress-strain material models for concrete and reinforcing steel, the suitable effective strength of model elements, the inclined angle of diagonal concrete struts in beam and column members, and the concrete tension-carrying capacity and associated tension-stiffening effect. In addition, the seismic response of various prototype structures when subjected to the experimentally employed cyclic forces and time-history earthquake loadings was predicted using the originally developed cyclic strut-and-tie models. A summary encapsulating the findings of this project and recommendations for future research work in the area of nonlinear strut-and-tie modelling is also presented.
266

The oxidation reactions of heterogeneous carbon cathodes used in the electrolytic production of aluminium

James, Bryony Joanne January 1997 (has links)
A technique has been developed that allows the gasification reaction rates of representative samples of carbon-carbon composite materials to be examined. The technique involves heating the sample in a controlled and monitored environment; the product gases of the reaction are then analysed by a mass spectrometer, allowing their identification and quantification. The technique was used to characterise the oxidation reactions of cathode carbons. These materials are composites, and so the oxidation reactions of their constituent raw materials were also examined. Surface area was determined for each sample, allowing specific rates of reaction to be determined and surface area effects to be normalised. The anodes, cathodes and sidewalls of aluminium smelting cells are made of composite carbon materials comprising filler materials (such as coke, anthracite and graphite) and a binder (almost exclusively coal tar pitch). Whilst the oxidation of anode carbons has received extensive study, the oxidation reactions of cathodes have been largely neglected because they have not been a cause of smelting cell failure. However, with the longer lives now being achieved from smelting cells, long-term degradation reactions, such as oxidation, will have to be considered. Oxidation of cathodes in the area of the collector bar will increase resistance and affect the heat balance of the cell. Gasification reactions of carbon materials are frequently characterised using techniques such as thermal gravimetric analysis (TGA). These techniques are accurate for examining such reactions when the sample is small and of a single carbon type. To characterise composite carbons, correlations have been made between the overall oxidation resistance (determined by weight loss) and the ignition temperature of one of the constituent materials (determined by TGA). The results obtained using the new technique of product gas analysis (PGA) revealed an exponential dependence of oxidation rate on temperature for the carbons examined. At higher rates the limiting condition appeared to be mass transfer through the pores of the sample. Arrhenius plots of reaction rates allowed the activation energy of oxidation to be determined for each material. When the rate was controlled by the chemical reactivity of the material, the activation energies determined agreed well with values obtained from the literature. The two graphites examined had activation energies of 164 and 183 kJ/mol; the Ea of graphite has been reported in the range 175-281 kJ/mol, the latter figure being for a highly pure graphite. For the two anthracites Ea was 113 and 118 kJ/mol; literature values lie between 100 and 151 kJ/mol. The pitches, used as binders of cathode carbons, had Ea equal to 112 and 123 kJ/mol; values from the literature range from 121 to 165 kJ/mol. Activation energies were determined for the cathode materials and were clearly influenced by the reactivity of the constituent materials. An amorphous cathode carbon, having nominally 30% graphite, had an activation energy of 121 kJ/mol. A semigraphitic cathode material comprising 100% graphite in a pitch binder had an activation energy of 123 kJ/mol. The similarity of these values to the Ea of the pitch and anthracite indicates that the binder phase has a strong influence on cathode reactivity. These values of Ea accord well with values determined for similar samples reported in the literature, which range from 114 to 138 kJ/mol.
A semigraphitised cathode material had an activation energy of 176 kJ/mol, in the same range as that of graphite. This sample oxidised significantly less rapidly at all temperatures. The variation in reactivity of the constituent materials of cathode carbons accounts for the highly selective oxidation behaviour observed in these materials. Porosity development is rapid as the binder matrix is preferentially oxidised, leading to an acceleration of the oxidation rate with increasing burnoff. The rate begins to decelerate once all the binder matrix has been oxidised, the residue being less reactive than the starting material. The structure of the materials was quantified using X-ray diffraction (XRD). A peak ratio method was employed, comparing the intensity of the (002) peaks of cathode carbons and a standard electrographite. Once the effects of cathode porosity had been normalised, an excellent correlation between increasing peak intensity ratio and increasing oxidation resistance was found.
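The activation energies above come from Arrhenius analysis of the measured rates. As a generic illustration of that step (the rate values below are invented for the example and are not thesis data), Ea follows from the slope of ln(rate) plotted against 1/T:

```python
# Generic Arrhenius analysis: the slope of ln(rate) vs 1/T equals -Ea/R.
# The rate values below are invented for illustration; they are not thesis data.
import numpy as np

R = 8.314                                            # gas constant, J/(mol K)
T = np.array([773.0, 823.0, 873.0, 923.0])           # sample temperatures, K
rate = np.array([2.1e-6, 7.9e-6, 2.6e-5, 7.5e-5])    # measured specific rates (arbitrary units)

slope, _ = np.polyfit(1.0 / T, np.log(rate), 1)      # linear fit of the Arrhenius plot
Ea_kJ_per_mol = -slope * R / 1000.0
print(f"Ea = {Ea_kJ_per_mol:.0f} kJ/mol")            # ~141 kJ/mol for these invented rates
```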
267

Control and optimisation of coagulant dosing in drinking water treatment

Edney, Daniel B. L. January 2005 (has links)
Whole document restricted, see Access Instructions file below for details of how to access the print copy. / Correct coagulant dosage is necessary for the efficient operation of conventional drinking water treatment plants, yet no accurate or automated way of determining it exists. Streaming current (SC) is a measurement of the charge on particles in water and is useful in feedback control of coagulant dosage. Analysis of the movement of charge within an SC sensor can provide some explanation of its slow response, while signal processing utilising Fourier analysis improves the instrument's bandwidth. At present, inaccurate manual jar tests are the only way to determine the SC required for best coagulation. An online automated jar tester is presented to improve on this. It uses an automatic sampling system that takes a sample from the process stream. An optimisation algorithm makes repeated step adjustments to the SC set point and gradually moves it in the direction of improving jar test results. The system was evaluated on both a small-scale model and a full-scale plant. Noise in the test measurements means the optimal set point cannot yet be located with sufficient accuracy, but the results indicate that doing so is feasible. Greater accuracy would allow optimisation of turbidity and of costs for multiple chemicals. A representative neural network model can be made of the dynamic relationship between coagulant dosage and streaming current in a scale model, with an alkali dosed to simulate a disturbance. In a rapid mixer, the measured response is significantly slower than the true response. Several common types of linear controller are designed and their performance at set-point tracking and disturbance rejection is compared on this system. Model predictive control with a Kalman filter performs best in these tests, while the self-tuning regulator has benefits when the rate of set-point change is slower. A non-linear feed-forward radial basis function network that adapts to the system's steady-state inverse can effectively augment a linear controller for this system. Adaptation rules based on vector eligibility are derived from dynamic back-propagation and extended to the general dynamic non-linear case. This can result in a useful and efficient feed-forward neural controller for dosing systems that can be represented by a Wiener model.
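A minimal sketch of the kind of set-point optimisation loop described above (step the SC set point, keep the direction while jar-test results improve, reverse and shrink the step otherwise); the function names are hypothetical and the real system's logic is not given in the abstract:

```python
# Hypothetical sketch of the set-point optimisation idea - not the actual algorithm.
# run_jar_test(sc_setpoint) stands in for the automated online jar tester and is
# assumed to return a quality score where lower is better (e.g. residual turbidity).

def optimise_sc_setpoint(run_jar_test, sc_setpoint, step=0.5, n_iters=20):
    """Step the streaming-current set point in the direction of improving jar-test results."""
    best = run_jar_test(sc_setpoint)
    direction = +1.0
    for _ in range(n_iters):
        candidate = sc_setpoint + direction * step
        score = run_jar_test(candidate)
        if score < best:                       # improvement: accept and keep stepping
            sc_setpoint, best = candidate, score
        else:                                  # worse: reverse direction and shrink the step
            direction, step = -direction, step * 0.5
    return sc_setpoint
```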
268

Implementation of a bubble model in FLAC and its application in dynamic analysis

Ni, Bing January 2007 (has links)
Methodologies for implementing nonlinear constitutive models of soil in FLAC are studied in order to reduce numerical distortion, which has been found to occur in nonlinear dynamic analysis when a nonlinear soil model is implemented using an 'apparent modulus' approach. Analyses undertaken using several simple nonlinear soil models indicate that use of a 'plastic correction' approach can eliminate or minimise the problem. This approach is therefore adopted in the thesis to implement in FLAC a bounding-surface bubble soil model, i.e. the Bubble model. Satisfactory performance of the Bubble model has been obtained in dynamic analysis without using any of the additional mechanical damping provided in FLAC. An analytical study of the Bubble model is carried out with FLAC. On the basis of this study, the hardening function is modified to better incorporate size-ratio effects of the yield surface and to eliminate the abrupt transition in stiffness from the elastic region to yielding. Pore water pressure is formulated on the assumption that it is generated as a response to the constant-volume constraint, which prevents the tendency for volume change when plastic volumetric strain takes place. The formulation is added to the Bubble model so that pore water pressure can be generated automatically by the model for fully saturated, undrained soil. FLAC analyses indicate that the Bubble model is generally in good agreement with published experimental data. The parameters and initial conditions associated with the Bubble model are studied with FLAC analyses in triaxial stress space to investigate their influence on the model and their effective ranges. Both large- and small-strain behaviour of the model is explored in the parametric study. Finally, the Bubble model is applied to the modelling of vertical vibration of rigid strip foundations. The influence of soil nonlinearity on the vertical compliance of rigid foundations is investigated. Some major factors are considered, including the initial stress level in the soil, the level of excitation and the mass ratio of the foundation.
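As a hedged aside on the pore-pressure formulation just described: one standard way of expressing the pressure generated when an undrained, constant-volume constraint suppresses the tendency for plastic volume change is through the stiffness of the pore fluid (a generic relation, not necessarily the exact form coded in the thesis):

```latex
% Generic undrained pore-pressure relation (not necessarily the thesis's exact form)
\Delta u \;\approx\; \frac{K_w}{n}\,\Delta\varepsilon_v^{p},
```

where K_w is the bulk modulus of the pore water, n the porosity and the strain increment is the plastic volumetric strain whose realisation the constraint prevents.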
269

Modelling cardiac activation from cell to body surface

Buist, Martin L. January 2001 (has links)
In this thesis, the forward problem of electrocardiography is investigated from a cellular level through to potentials on the surface of the torso. This integrated modelling framework is based on three spatial scales. At the smallest spatial resolution, several cardiac cellular models are implemented that are used to represent the underlying cellular electrophysiology. A bidomain framework is used to couple multiple individual cells together and this provides a mathematical model of the myocardial tissue. The cardiac geometry is described using finite elements with high order cubic Hermite basis function interpolation. An anatomically based description of the fibrous laminar cardiac microstructure is then defined relative to the geometric mesh. Within the local element space of the cardiac finite elements, a fine collocation mesh is created on which the bidomain equations are solved. Each collocation point represents a continuum cell and contains a cellular model to describe the local active processes. This bidomain implementation works in multiple coordinate systems and over deforming domains, in addition to having the ability to spatially vary any parameter throughout the myocardium. On the largest spatial scale the passive torso regions surrounding the myocardium are modelled using a generalised Laplace equation to describe the potential field and current flows. The torso regions are discretised using either finite elements or boundary elements depending on the electrical properties of each region. The cardiac region is coupled to the surrounding torso through several methods. A traditional dipole source approach is implemented that creates equivalent cardiac sources through the summation of cellular dipoles. These dipoles are then placed within a homogeneous cardiac region and the resulting potential field is calculated throughout the torso. Two new coupling techniques are developed that provide a more direct path from cellular activation to body surface potentials. One approach assembles all of the equations from the passive torso regions and the equations from the extracellular bidomain region into a single matrix system. Coupling conditions based on the continuity of potential and current flow across the myocardial surfaces are used to couple the regions and therefore solving the matrix system yields a solution that is continuous across all of the solution points within the torso. The second approach breaks the large system into smaller subproblems and the continuity conditions are imposed through an iterative approach. Across each of the myocardial surfaces, a fixed point iteration is set up with the goal of converging towards zero potential and current flow differences between adjacent regions. All of the numerical methods used within the integrated modelling framework are rigorously tested individually before extensive tests are performed on the coupling techniques. Large scale simulations are run to test the dipole source approach against the new coupling techniques. Several sets of simulations are run to investigate the effects of using different ionic current models, using different bidomain model simplifications, and the role that the torso inhomogeneities play in generating body surface potentials. The main question to be answered by this study is whether or not the traditional approach of combining a monodomain heart with an equivalent cardiac source in a two step approach is adequate when generating body surface potentials.
Comparisons between the fully coupled framework developed here and several dipole-based approaches demonstrate that the resulting sets of signals have different magnitudes and different waveform shapes on both the torso and the epicardial surface, clearly illustrating the inadequacy of the equivalent cardiac source models. It has been found that altering the modelling assumptions on each spatial scale produces noticeable effects. At the smallest scale, the use of different cell models leads to significantly different body surface potential traces. At the next scale, the monodomain approach is unable to accurately reproduce the results from a full bidomain framework, and at the largest level the inclusion of different torso inhomogeneities has a large effect on the magnitude of the torso and epicardial potentials. Adding a pair of lungs to the torso model changes the epicardial potentials by an average of 16%, which is consistent with the experimentally reported range of 8 to 20%. This provides evidence that only a complex, coupled, biophysically based model will be able to properly reproduce clinical ECGs.
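For context, a standard statement of the bidomain equations used for the myocardial tissue, together with the generalised Laplace equation governing the passive torso regions mentioned above, is (generic notation that may differ from the thesis):

```latex
% Standard bidomain + passive torso equations (generic notation)
\begin{aligned}
\nabla\cdot\bigl(\sigma_i \nabla V_m\bigr) + \nabla\cdot\bigl(\sigma_i \nabla \phi_e\bigr)
  &= A_m\!\left(C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}\right), \\
\nabla\cdot\bigl((\sigma_i + \sigma_e)\,\nabla \phi_e\bigr)
  &= -\,\nabla\cdot\bigl(\sigma_i \nabla V_m\bigr), \\
\nabla\cdot\bigl(\sigma_t \nabla \phi\bigr) &= 0 \quad \text{in the passive torso regions,}
\end{aligned}
```

where V_m = phi_i - phi_e is the transmembrane potential, sigma_i and sigma_e are the intracellular and extracellular conductivity tensors (aligned with the fibrous laminar microstructure), A_m is the surface-to-volume ratio, C_m the membrane capacitance, and I_ion is supplied by the cellular models. Continuity of potential and of normal current across the myocardial surfaces provides the coupling conditions used in both the single-matrix and fixed-point schemes described above.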
270

Saturated pool boiling and subcooled flow boiling of mixtures at atmospheric pressure

Wenzel, Ulrich. January 1992 (has links)
An experimental and theoretical investigation of heat transfer to liquid mixtures has been performed using binary and ternary mixtures of acetone, isopropanol and water. Two databases were established which contain measurements of the heat transfer coefficient under saturated pool boiling and subcooled flow boiling conditions. A third database comprises measurements of heat transfer and pressure drop in a plate heat exchanger. The performance of two heat transfer enhancement techniques, namely the coating of the heat transfer surface with Teflon and a perforated brass foil, was studied under saturated pool boiling conditions. A model was developed which can be used to predict the heat transfer coefficient. The model is based on the additive superposition of convective and boiling heat transfer coefficients. It is applicable to heat transfer to mixtures and single-component fluids under saturated and subcooled boiling conditions. The empirical parameters in the correlations used in the model were not altered to fit the measurements of this study. The predictions of the model were compared to the experimental data, which cover the convective heat transfer regime, the transition region and the fully developed nucleate boiling regime. It was found that the best agreement between predicted and measured values was achieved if the linear mixing law was used to calculate the ideal heat transfer coefficient rather than the correlations by Stephan-Preußer or Stephan-Abdelsalam. The heat transfer coefficient under saturated pool boiling conditions could be predicted with an accuracy of 12.6%. A comparison between over 2000 measured heat transfer coefficients under subcooled flow boiling conditions in an annulus and the predictions of the model showed good agreement, with a mean error of 10.3%. The accuracy of the model was found to be independent of the fluid velocity and composition, as well as of the magnitude and mechanism of heat transfer. The heat flux in a plate heat exchanger could be predicted with a mean error of 6.9% for a wide range of fluid velocities, subcoolings and compositions. The heat transfer coefficient on the test-liquid side of the exchanger could be predicted with a mean error of 10%. The heat transfer model was used for a theoretical study of heat transfer to mixtures boiling on a finned surface. It was found that the fin geometry and thermal conductivity have a distinct influence on the local and mean heat transfer coefficients. The results indicate that the application of fins is more effective for boiling of mixtures than for boiling of single-component liquids.
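The additive superposition on which the model is based can be written generically as

```latex
% Generic superposition of convective and nucleate-boiling contributions
h \;=\; h_{\mathrm{conv}} + h_{\mathrm{nb}},
```

where, for a mixture, the nucleate-boiling term starts from an ideal coefficient obtained here from the linear mixing law over the pure-component coefficients (rather than from the Stephan-Preußer or Stephan-Abdelsalam correlations), typically corrected for mixture mass-transfer effects. This is a generic statement of the model type; the specific correlations used for each term are those detailed in the thesis.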
