  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
381

Design and implementation of a high strain rate biaxial tension test for elastomeric materials and biological soft tissue

Graham, Aaron 11 September 2020 (has links)
The mechanical properties of biological tissues are of increasing research interest to fields as varied as protective equipment design, medical research and forensic Finite Element Analysis (FEA). The mechanical properties of biological tissue such as skin are relatively well known at low strain rates and strains, but there is a paucity of data on the high rate, high strain behaviour of skin, particularly under biaxial tension. Biaxial tensile loading mimics in vivo conditions more closely than uniaxial loading [1, 2], and is necessary in order to characterise a hyper-elastic material model [3]. Furthermore, biaxial loading allows one to detect the anisotropy of the sample without introducing noise from inter-sample variability, unlike uniaxial tensile testing. This work develops a high strain rate bulge test device capable of testing soft tissue or polymer membranes at high strain rates. The load history as well as the full field displacement data are captured via a pressure transducer and high speed 3D Digital Image Correlation (DIC). Strain rates ranging from 0.26 s⁻¹ to 827 s⁻¹ are reliably achieved and measured. Higher strain rates of up to 2500 s⁻¹ are achieved, but are poorly measured due to limitations of the high speed cameras used. The strain rates achieved had some variability, but were significantly more consistent than those achieved by high rate biaxial tension tests found in the literature. In addition to control of the apex strain rate, the biaxial strain ratio is controlled via the geometry of the specimen fixture. This allowed for strain ratios of up to 2 to be achieved at the apex. When testing anisotropic membranes, the use of full field 3D DIC allowed for accurate and efficient detection of the principal axis of anisotropy in the material. No skin is tested; instead, three types of polydimethylsiloxane (PDMS, "silicone") skin simulant are tested.
These simulants were chosen to span the full range of mechanical behaviour expected from skin: their stiffnesses, strain hardening exponents and degrees of anisotropy lie significantly above or below the behaviour exhibited by skin. This ensured that the device was validated over a wider range of conditions than expected when testing skin. A novel approach to specimen fixation and speckling for silicone membranes is developed, as well as a fibre reinforced skin simulant that closely mimics the rate hardening and anisotropic behaviour of skin. In addition to bulge tests, uniaxial tensile tests are conducted on the various simulant materials in order to characterise their low strain rate behaviour. The composite skin simulant is characterised using a modified version of the anisotropic skin model developed by Weiss et al. (1996) [4], and the pure silicone membranes are characterised using the Ogden hyper-elastic model.
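The Ogden characterisation mentioned above can be sketched in a few lines. This is a hedged illustration only: a one-term incompressible Ogden model evaluated under uniaxial tension, with illustrative parameter values, not the fitted values from this work.

```python
def ogden_uniaxial_stress(lam, mu, alpha):
    """Cauchy stress at stretch `lam` for a one-term incompressible Ogden solid
    under uniaxial tension: sigma = mu * (lam**alpha - lam**(-alpha/2)).
    `mu` and `alpha` are material parameters fitted to test data."""
    return mu * (lam**alpha - lam**(-alpha / 2.0))
```

At `lam = 1` (the undeformed state) the stress is zero, as required of any hyper-elastic model; fitting consists of choosing `mu` and `alpha` to match measured stress-stretch curves.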
382

An aircraft-borne system to provide information about flight performance and local microclimate

Johnson, Bruce Edward January 2013 (has links)
Includes abstract. / Includes bibliographical references / The application of Unmanned Aerial Vehicles (UAVs) to locate thermal updraft currents is a relatively new topic. It was first proposed in 1998 by John Wharington, and, subsequently, several researchers have developed algorithms to search for and exploit thermals. However, few have physically implemented a system and performed field testing. The aim of this project was to develop a low cost system, carried on a glider, to detect thermals effectively. The system was developed from the ground up and consisted of custom hardware and software designed specifically for aircraft. Data fusion was performed to estimate the attitude of the aircraft using a direction cosine matrix (DCM) based method. Altitude and airspeed data were fused by estimating potential and kinetic energy respectively, thus determining the aircraft's total energy. This data was then interpreted to locate thermal activity. The system comprised an Inertial Measurement Unit (IMU), airspeed sensor, barometric altitude sensor, Global Positioning System (GPS) receiver, temperature sensor, SD card and a real-time telemetry link. These features allowed the system to determine aircraft position, height, airspeed and air temperature in real time. A custom-designed radio controlled (RC) glider was constructed from composite materials, in addition to a second 3.6 m production glider that was used during flight testing. Sensor calibration was done in a wind tunnel with custom designed apparatus that allowed a complete wing with its pitot tube to be tested in one operation. Flight testing was conducted in the field at several different locations over the course of six months. A total of 25 recorded flights were made during this period. Both thermal soaring and ridge soaring were performed to test the system under varying weather conditions.
A telemetry link was developed to transfer data in real time from the aircraft to a custom ground station. The recorded results were post-processed using MATLAB and showed that the system was able to detect thermal updrafts. The sensors used in the system were shown to provide acceptable performance once some calibration had been performed. Sensor noise proved to be problematic, and time was spent alleviating its effects.
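The total-energy fusion of altitude and airspeed described above is the standard total-energy-compensated variometer idea; a minimal sketch, assuming the specific (per-unit-mass) energy form so that aircraft mass cancels:

```python
G = 9.81  # gravitational acceleration, m/s^2

def specific_total_energy(altitude_m, airspeed_ms):
    """Potential plus kinetic energy per unit mass: g*h + v^2/2 (J/kg)."""
    return G * altitude_m + airspeed_ms**2 / 2.0

def te_climb_rate(e_prev, e_curr, dt):
    """Total-energy-compensated climb rate (m/s): the rate of change of
    specific energy expressed as an equivalent height rate. A genuine
    thermal raises total energy; trading airspeed for height does not."""
    return (e_curr - e_prev) / (dt * G)
```

A steady 1 m/s climb at constant airspeed yields a compensated climb rate of 1 m/s, while a zoom climb that merely converts kinetic to potential energy yields zero.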
383

The use of GIS for the development of a fully embedded predictive fire model

Sibolla, Bolelang H January 2009 (has links)
Fire is very important for maintaining balance in ecosystems and is used by fire managers across the world to regulate the growth of vegetation in natural conservation areas. However, improper management of fire may lead to hazardous behaviour. Fire modelling tools are implemented to provide fire managers with a platform to test and plan fire management activities. Fire modelling occurs in two parts: fire behaviour models and fire spread models, where fire behaviour models account for the behaviour of fire that is used in fire spread models to model the propagation of a fire event. Since fire is a worldwide phenomenon, a number of fire modelling approaches have been developed across the world. Most existing fire models model either fire behaviour or fire spread, but not both; hence full integration of fire models into GIS is not completely implemented. Full integration of environmental modelling in GIS refers to the case where an environmental model such as a fire model is implemented within a GIS environment, without requiring any transfer of data from other external environments. Most existing GIS based fire spread models account for fire propagation in the direction of prevailing winds (or defined fire channels) as opposed to full fire spread in all directions. The purpose of this study is to illustrate the role of GIS in fire management through the development of a fully integrated, predictive, wind driven, surface fire model. The fire model developed in this study models both the risk of fire occurring (fire behaviour model) and the propagation of a fire in case of an ignition incident (fire spread model), hence full integration of fire modelling in a GIS environment. The fire behaviour model is based on prevailing meteorological conditions, the type of vegetation in an area, and the topography.
The spread of a fire in this model is determined by the transfer of heat energy and the rate of spread of fire, and is developed based on the Cellular Automata (CA) modelling approach. This model considers the spread of fire in all directions instead of the forward wind direction only, as is the case in most fire spread models. The fire behaviour model calculates fire intensity and rate of spread, which are used in the fire spread model, hence demonstrating the full integration of fire modelling in GIS. No external data exchange with the model occurs except for acquisition of input data such as measured values of environmental conditions. This cellular automata based fire spread model is developed in the ArcGIS ModelBuilder geoprocessing environment, and requires the development of a custom geoprocessing function tool to facilitate the fast and effective performance of the model. The test study area used in this research is the Kruger National Park, because of the frequent fire activity that occurs in the park as a result of management activities and accidental fires, and also because these fires are recorded by park fire ecologists. Validation of the model is achieved by comparison of simulated fire areas after a certain period of time with the known location of the fire at that particular time. This is achieved by the mapping of fire scars and active fire areas acquired from MODIS Terra and Aqua images; fire scars are also acquired from the Kruger National Park Scientific Services. Upon evaluation, the results of the fire model show successful simulation of fire area with respect to time. The implementation of the model within the ArcGIS environment is also performed successfully. The study thus concludes that GIS can be successfully used for the development of a fully integrated (embedded) fire model.
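The cellular-automata spread idea can be sketched generically. This is a hedged toy version, not the thesis's ArcGIS implementation: a uniform-fuel grid with no wind, heat-transfer or intensity terms, where a burning cell ignites all eight neighbours and then burns out, illustrating spread in all directions.

```python
def ca_fire_step(grid):
    """One cellular-automata step on a 2D grid.
    Cell states: 0 = unburnt fuel, 1 = burning, 2 = burnt out.
    A burning cell ignites its 8 neighbours, then burns out."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                new[r][c] = 2  # burning cell transitions to burnt
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == 0:
                            new[rr][cc] = 1  # ignite neighbouring fuel
    return new
```

A real model of this kind would make the ignition probability of each neighbour a function of fuel type, wind direction, slope and the calculated fire intensity.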
384

The flotation of pyrite using xanthate collectors

Dimou, Anna January 1986 (has links)
Bibliography: leaves 117-123. / The flotation properties of pyrite were found to be significantly influenced by variations in pH. In acidic solutions the floatability of pyrite is very high, and recoveries of 95% could be achieved using only a frother. A sharp decrease in floatability was observed in alkaline solutions, possibly due to the formation of hydrophilic ferric hydroxide. The addition of a xanthate collector improved the flotation properties of pyrite at all pH values. In acidic solutions the main effect observed was on the rate of pyrite recovery and on the grade of the concentrates. In alkaline solutions the addition of a xanthate collector improved the final recovery, the rate of flotation and the grades. Variations in pH had no effect on the recovery of pyrite to which xanthate was added. There was, however, a continual decrease in the final grade of the concentrates with increasing pH, due to the increased recovery of the gangue mineral.
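Recovery and flotation rate, as distinguished in the abstract above, are commonly related through first-order flotation kinetics. A hedged sketch, assuming the standard first-order model (not stated in the abstract itself), where `r_inf` is the ultimate recovery and `k` the rate constant:

```python
import math

def flotation_recovery(t, r_inf, k):
    """Cumulative recovery (as a fraction) after flotation time t (min)
    under first-order kinetics: R(t) = R_inf * (1 - exp(-k * t)).
    A collector can raise R_inf (final recovery), k (rate), or both."""
    return r_inf * (1.0 - math.exp(-k * t))
```

In these terms, the acidic-solution effect described above is mainly an increase in `k`, while in alkaline solutions the collector raises both `k` and `r_inf`.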
385

A development study for a short range, low capacity digital microwave link

Watermeyer, Ivan R January 1987 (has links)
Includes bibliographical references. / A specific request for the development of a short-range, low capacity digital microwave transmission system was received from the South African Department of Posts and Telecommunications. The aim of this project is to initiate development work by determining the optimum system configuration and modulation technique to meet the design specifications. In addition, it is proposed to develop and construct an I.F. modulator/demodulator module with which simulation tests of the chosen modulation scheme may be performed, in order to assess its feasibility in this specific application.
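The abstract does not name the modulation scheme finally chosen; as a hedged illustration of the kind of digital modulation under consideration in links of this era, here is a minimal Gray-coded QPSK mapper (two bits per symbol, adjacent symbols differing in one bit):

```python
import math

# Gray-coded QPSK constellation: a single bit error in the received
# symbol corrupts only one of the two transmitted bits.
QPSK_MAP = {
    (0, 0): complex(1, 1),
    (0, 1): complex(-1, 1),
    (1, 1): complex(-1, -1),
    (1, 0): complex(1, -1),
}

def qpsk_modulate(bits):
    """Map an even-length bit sequence to unit-energy QPSK symbols."""
    scale = 1.0 / math.sqrt(2.0)
    return [QPSK_MAP[(bits[i], bits[i + 1])] * scale
            for i in range(0, len(bits), 2)]
```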
386

Extensions to the data reconciliation procedure

Seager, Mark Thomas January 1996 (has links)
Bibliography: leaves 148-155. / Data reconciliation is a method of improving the quality of data obtained from automated measurements in chemical plants. All measuring instruments are subject to error. These measurement errors degrade the quality of the data, resulting in inconsistencies in the material and energy balance calculations. Since important decisions are based on the measurements, it is essential that the most accurate data possible be presented. Data reconciliation attempts to minimize these measurement errors by fitting all the measurements to a least-squares model, constrained by the material and energy balance equations. The resulting set of reconciled measurements does not cause any inconsistencies in the balance equations and contains minimum measurement error. Two types of measurement error can occur: random noise and gross errors. If gross errors exist in the measurements they must be identified and removed before data reconciliation is applied to the system. The presence of gross errors invalidates the statistical basis of data reconciliation and corrupts the results obtained. Gross error detection is traditionally performed using statistical tests coupled with serial elimination search algorithms. The statistical tests are based on either the measurement adjustment performed by data reconciliation or the balance equations' residuals. A by-product of data reconciliation, obtained with very little additional effort, is the classification of the system variables. Unmeasured variables may be classified as either observable or unobservable. An unmeasured variable is said to be unobservable if a feasible change in its value is possible without being detected by the measurement instruments. Unmeasured variables which are not unobservable are observable. Measured variables may be classified as either redundant, nonredundant or having a specified degree of redundancy.
Nonredundant variables are those which upon deletion of the corresponding measurements, become unobservable. The remaining measured variables are redundant. Measured variables with a degree of redundancy equal to one, are redundant variables that retain their redundancy in the event of a failure in any one of the remaining measurement instruments.
387

An investigation into the performance of a power-of-two coefficient transversal equalizer in a 34Mbit/s QPSK digital radio during frequency-selective fading conditions

Archer, Brindsley Broughton January 1997 (has links)
Bibliography: leaves 82-91. / Under certain atmospheric conditions, multipath propagation can occur. The interaction of radio waves arriving at a receiver, having travelled via paths of differing length, results in the phenomenon of frequency-selective fading. This phenomenon manifests as a notch in the received spectrum and causes a severe degradation in the performance of a digital radio system. As the total power in the received bandwidth may be unaffected, the Automatic Gain Control is not able to correct for this distortion, and so other methods are required. The dissertation commences with a summary of the phenomenon of multipath, as this provides the context for the investigations which follow. The adaptive equalizer was developed to combat the distortion introduced by frequency-selective fading. It achieves this by applying an estimate of the inverse of the distorting channel's transfer function. The theory of adaptive equalizers is well established, and a summary of this theory is presented in the form of Wiener Filter theory and the Wiener-Hopf equations. An adaptive equalizer located in a 34 Mbit/s QPSK digital radio is required to operate at very high speed, and its digital hardware implementation is not a trivial task. In order to reduce the cost and complexity, a compromise was proposed: if the tap weights of the equalizer could be represented by power-of-two binary numbers, the equalizer circuitry could be dramatically simplified. The aim of the dissertation was to investigate the performance of this simplified equalizer structure and to determine whether a power-of-two equalizer was a viable consideration.
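The power-of-two compromise can be sketched directly. A minimal illustration, assuming the simplest quantisation rule (nearest power of two in the log domain); the dissertation's actual coefficient format may differ:

```python
import math

def quantize_power_of_two(w):
    """Round a tap weight to the nearest signed power of two (or zero).
    Multiplying a sample by such a weight reduces to a binary shift
    (plus a sign change), eliminating hardware multipliers."""
    if w == 0.0:
        return 0.0
    exp = round(math.log2(abs(w)))  # nearest exponent in the log domain
    return math.copysign(2.0 ** exp, w)
```

The performance question the dissertation investigates is precisely how much the quantisation error introduced by this rounding degrades the equalizer's ability to invert the faded channel.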
388

External nitrification in biological nutrient removal activated sludge systems

Moodley, Rajan January 1999 (has links)
Includes bibliography. / In conventional nitrification-denitrification biological excess phosphorus removal (NDBEPR) activated sludge systems, such as the UCT system, both nitrification and phosphorus uptake (P uptake) occur simultaneously in the usually large aerobic reactor. In the UCT system the nitrate load to the anoxic reactor is limited by the a-recycle (the recycle from the aerobic to the anoxic reactor) and the aerobic nitrification performance. The latter process, mediated by nitrifiers with a slow maximum specific growth rate of 0.45/d, governs the sludge age of the biological nutrient removal activated sludge (BNRAS) system and thus results in long (20-25 day) sludge ages and large aerobic mass fraction requirements to nitrify completely. However, if stable nitrification could be achieved outside the BNRAS system (external nitrification, EN), then nitrification and the suspended solids sludge age become uncoupled, introducing greater flexibility into the BNRAS system.
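The link between nitrifier growth rate and sludge age stated above follows from the standard washout condition. A hedged sketch; the endogenous decay rate and safety factor below are illustrative design values, not figures from this thesis:

```python
def minimum_sludge_age(mu_max, b, safety_factor=1.25):
    """Minimum solids retention time (days) to avoid nitrifier washout:
    SRT_min = 1 / (mu_max - b), scaled by a design safety factor.
    mu_max: maximum specific nitrifier growth rate (1/d);
    b: endogenous decay rate (1/d) - an assumed illustrative value."""
    if mu_max <= b:
        raise ValueError("nitrifiers cannot be sustained: mu_max <= b")
    return safety_factor / (mu_max - b)
```

At 0.45/d the unfactored minimum is only a few days; the much longer 20-25 day design sludge ages quoted above reflect low-temperature growth rates, aerobic mass fractions well below unity, and generous safety factors.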
389

Corrosion of reinforcement in concrete : the effectiveness of organic corrosion inhibitors

Rylands, Thaabit January 1999 (has links)
Includes bibliographical references. / Reinforcement corrosion in concrete has presented engineers with the challenge of finding ways of prolonging the service life of structures built in aggressive environments. One method of increasing the durability of concrete in aggressive environments is the use of corrosion inhibitors. In this work, two organic corrosion inhibitors were tested to observe their effectiveness in decreasing the rate of corrosion or delaying the onset of corrosion. One of the inhibitors was a migrating corrosion inhibitor (MCI) while the other was an admixed inhibitor. The corrosion rate of reinforcement in the concrete specimens used in this evaluation was measured using the Linear Polarisation Resistance method. The performance of the admixed inhibitor was also measured in aqueous phase tests. Results of the tests conducted indicate that the admixed inhibitor does delay the onset of corrosion. The MCI caused short- to medium-term inhibition when the chloride concentration was less than 1.5%.
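The Linear Polarisation Resistance method mentioned above converts a measured polarisation resistance into a corrosion rate via the Stern-Geary relation. A hedged sketch; the Tafel slopes are illustrative defaults, not values from this work:

```python
def corrosion_current_density(rp_ohm_cm2, b_anodic=0.12, b_cathodic=0.12):
    """Stern-Geary relation: i_corr = B / Rp, where
    B = ba*bc / (2.303*(ba + bc)) is computed from the anodic and
    cathodic Tafel slopes (V/decade; illustrative values assumed here).
    rp_ohm_cm2: measured polarisation resistance (ohm*cm^2).
    Returns corrosion current density in A/cm^2."""
    b = (b_anodic * b_cathodic) / (2.303 * (b_anodic + b_cathodic))
    return b / rp_ohm_cm2
```

A higher measured `Rp` (as an effective inhibitor should produce) translates directly into a lower corrosion current density and hence a lower corrosion rate.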
390

Spatial information system for public transport

Dondo, Chiedza January 2003 (has links)
Bibliography: leaves 72-79. / One way of reducing traffic congestion is through the promotion of public transport over private cars. Many countries, South Africa included, have set up policies to prioritise this issue. In accordance with these policies, public transport service planners are working to improve public transport services. This requires the collection of data on public transport usage, public transport timetables and the location of the routes, stops and termini. This data needs to be managed and integrated for use in decision-making on public transport services planning. As some of the data is spatial in nature, a spatial information system is proposed as the best tool for capturing, storing and analysing the data.
