About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
421

Analysis of Q-learning based game playing agents for abstract board games with increasing state-space complexity

Upadhyay, Indrima 03 August 2021 (has links)
No description available.
422

Development of a least cost energy supply model for the SADC region

Alfstad, Thomas January 2004 (has links)
Includes bibliographical references. / Energy plays a pivotal role in economic growth and improving livelihoods. Although better supply of energy does not automatically guarantee an acceleration of human development, it is a prerequisite for it. It is essential for preparation and conservation of food, for sanitation and for all productive activity. Finding effective means of providing safe, affordable and reliable energy services is therefore of critical importance to governments and organisations endeavouring to promote sustainable development. Energy also places excessive strain on investment capital in developing countries. It is not uncommon for an African country to spend over 30% of its development budget on the energy sector. Limiting the need for capital expenditure in the energy sector could therefore free up resources for other pressing needs. To address these issues, this dissertation develops an energy system model for the SADC region using the TIMES framework. The model is an optimisation tool designed to find least cost energy supply strategies. It has an individual representation of each country in the region, but allows them to trade in energy. This makes it possible to evaluate coordinated strategies and pooling of resources, and thus to identify solutions that benefit the region as a whole. Because of the uneven distribution of energy resources there is significant scope for cost reductions through trade and cooperative efforts, if appropriate strategies are developed. Short country profiles that describe each country's energy sector were compiled from the data available in the public domain and are presented. It was found that energy statistics for the region are generally poor, especially on the demand side, and only available at an aggregated level. Due to data constraints the model does not include a detailed description of the demand side. It targets the electricity supply sector and focuses on the expansion of the regional generation and transmission infrastructure. The analysis is scenario based and examines the impact of changes in economic growth, discount rate and trade policies. The results from the scenarios are distilled into a robust expansion plan that is sufficient to sustain economic growth at a rate equal to that estimated in the short to medium term by the World Bank. The plan is presented in some detail along with the corresponding investment capital requirements. The analysis supports the hypothesis that increased trade can reduce the cost of energy supply in the SADC region.
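The least-cost character of such a model comes from posing supply planning as an optimisation problem: minimise total system cost subject to demand, capacity and trade constraints, with inter-country trade entering as additional decision variables. The sketch below is a toy illustration of that idea only, not the thesis's TIMES model; the two-country system, costs, demands and capacity limits are all invented.

```python
# Illustrative least-cost supply sketch (NOT the TIMES model): two countries,
# each with local generation plus an option to import from the other.
# All costs (USD/MWh), demands and capacities (MWh) are made-up numbers.
from scipy.optimize import linprog

# Decision variables: [gen_A, gen_B, trade_A_to_B, trade_B_to_A]
cost = [40.0, 55.0, 5.0, 5.0]          # generation costs plus a wheeling charge on trade

# Demand balance (equality): local generation + imports - exports = demand
A_eq = [
    [1, 0, -1,  1],                    # country A: gen_A - exports + imports = demand_A
    [0, 1,  1, -1],                    # country B: gen_B + imports - exports = demand_B
]
b_eq = [100.0, 80.0]                   # demands for A and B

# Bounds: generation limited by installed capacity, trade by line capacity
bounds = [(0, 150), (0, 60), (0, 50), (0, 50)]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
gen_a, gen_b, a_to_b, b_to_a = res.x
print(f"gen A = {gen_a:.1f} MWh, gen B = {gen_b:.1f} MWh, "
      f"A->B = {a_to_b:.1f} MWh, total cost = {res.fun:.0f} USD")
```

Because the cheaper country's capacity cannot cover both demands alone, the optimum combines local generation with trade, which is exactly the kind of coordinated, pooled solution the regional model is built to identify.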
423

Investigation of the stress corrosion cracking resistance of SAF2205 and AISI304 weldments for the marine environment application

Matjee, Mapula Regina 17 August 2021 (has links)
Stainless steels are used for many industrial applications because of their strength and fabrication characteristics. The stainless steel grades SAF2205 and AISI304 can readily meet a wide range of design criteria for service life, maintenance, load and corrosion resistance.
424

Tender evaluation : a means of assessing the true value to the client.

Kipps, Shirwell Barry January 1984 (has links)
Includes bibliography. / The advent of large multidisciplinary projects has necessitated an in-depth evaluation of tenders to ensure that the tenderer awarded the contract has submitted the lowest acceptable evaluated tender sum and has convinced the evaluation team that, by adequate resourcing and programming, he has appreciated the technical implications. The objectives of this thesis were threefold: * to discuss the need for a new approach to tender evaluation; * to propose amendments to the traditional tender document to provide a basis for a detailed tender evaluation; and * to propose methods of evaluating the information received from the tenderers so that the tender most suitable from both financial and technical considerations is recommended to the client. An extensive literature survey revealed little relevant reference material and, as a result, the author's experience in the evaluation of tenders, together with input from engineers knowledgeable in this field, has formed the basis of this thesis. To obtain the information necessary for the evaluation phase, the tender document must be so structured as to provide the tenderer with sufficient detail to adequately assess the complexity of the project and to provide the evaluation team with sufficient pertinent information to adequately evaluate the tender.
425

The influence of different chemical treatments on the mechanical properties of hemp fibre-filled polymer composites

Mayembo, Evrard 18 August 2021 (has links)
The fluctuation of engineering and general-purpose polymer prices, the rapid exhaustion of world-wide fossil fuel reserves and heightened awareness about the environment have led the research community to explore the use of natural biodegradable raw materials as substitutes for man-made resources. Natural fibres are considered substitutes for synthetic fibres in reinforced polymer matrix composites. Increased interest has been shown in natural fibres from plants such as cotton, jute and hemp as replacements for aramid, glass and carbon fibres. This is due to their biodegradability, low cost, low density and satisfactory strength-to-weight ratio. However, they present certain disadvantages compared to synthetic fibres, which include high moisture sorption rates, low durability and weak fibre/matrix bonding strength. The poor adhesion between natural fibres and polymer matrices leads to poor mechanical properties for natural fibre reinforced composites. Improvement of the fibre/matrix interface is required to increase the mechanical properties of natural fibre-filled polymer composites. In this study, the influence of selected chemical treatments on the mechanical properties of hemp-filled epoxy composites was investigated. The aim of this study was to enhance the fibre/matrix interface, and hence the mechanical properties, of hemp yarn-reinforced epoxy composites by modifying the chemical nature of a high-crystallinity hemp yarn through chemical treatments such as alkalization, silanization (3-aminopropyltriethoxysilane) and a maleic anhydride treatment. The effectiveness of the chemical treatments was assessed by means of XRD, FTIR and TGA. Density measurements of as-received yarns (1.42-1.45 g cm⁻³) were within the range reported in the literature. Crystallinity measurements revealed the as-treated yarns as having high crystallinity indices (87% for weft and 84.7% for warp yarns); the surface treatments used increased the crystallinity index only slightly. A decision was taken to use weft yarns (UTS = 799 MPa) rather than warp yarns (UTS = 503 MPa). Silane treatment reduced the tensile strength of the yarns slightly (753 MPa), while treatment of the fibres with maleic anhydride (562 MPa) and alkali treatment (518 MPa) had a much more significant effect on ultimate tensile strength. By contrast, the moduli of the treated yarns all increased compared to the as-received yarns. Silanization was confirmed by energy dispersive X-ray spectroscopy, while maleation was confirmed by the presence of characteristic absorbances in FTIR spectra. TGA revealed that silanization improved fibre thermal stability while maleic anhydride treatment did the opposite, possibly due to decarboxylation reactions. Four types of fibre/matrix interfaces, based on the treated and non-treated fibres, were generated through the production of the hemp-reinforced epoxy composite plates. The results showed insignificant variations in the mechanical and thermal properties compared with the as-received hemp-filled epoxy composites, which showed high mechanical properties and thermal stability. Silanization and alkalization slightly decreased the respective composite properties, although this was deemed statistically insignificant. The maleic anhydride treatment worsened the mechanical properties significantly. Scanning electron microscopy revealed appreciable fibre-matrix debonding, which is indicative of a weak fibre/matrix interface.
This was postulated as a reason for the lack of any significant reinforcement of the epoxy composites by the maleic anhydride-treated fibres. The tensile properties were also predicted, and no statistically significant differences were observed, although the experimental strength values appeared to be lower than the predicted strengths. In general, the lack of appreciable improvement in mechanical properties was concluded to be due to the initially high crystallinity of the as-received fibres. This provided little scope for further alkalization to change the surface significantly, as little further removal of hemicellulose and lignin could occur.
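A common first-order way of predicting such composite tensile properties is the rule of mixtures; the thesis does not state which micromechanical model was used, so the expressions below are only a generic illustration.

```latex
% Rule-of-mixtures sketch (illustrative; not necessarily the model used in the thesis)
E_c = V_f E_f + (1 - V_f) E_m
\qquad
\sigma_c = \eta \, V_f \sigma_f + (1 - V_f) \sigma_m^{\prime}
% E: Young's modulus, \sigma: tensile strength, V_f: fibre volume fraction,
% \eta \le 1: fibre efficiency factor (orientation and length effects),
% \sigma_m': matrix stress at the fibre failure strain.
```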
426

The design, implementation and analysis of a wavelet-based video codec

Servais, Marc Paul January 1998 (has links)
Includes bibliographical references. / The Wavelet Transform has been shown to be highly effective in image coding applications. This thesis describes the development of a new wavelet-based video compression algorithm which is based on the 3D wavelet transform, and requires no complicated motion estimation techniques. The proposed codec processes a sequence of images in a group of frames (GOF) by first transforming the group spatially and temporally, in order to obtain a GOF of 3D approximation and detail coefficients. The codec uses selective prediction of temporal approximation coefficients in order to decorrelate transformed GOFs. Following this, a modified version of Said and Pearlman's image coding technique of Set Partitioning in Hierarchical Trees is used as a method for encoding the transformed GOF. The compression algorithm has been implemented in software, and tested on seven test sequences at different bit-rates. Experimental results indicate a significantly improved performance over MPEG 1 and 2 in terms of picture quality, for sequences filmed with a stationary camera. The codec also performs well on scenes filmed with a moving camera, provided that there is not a large degree of spatial detail present. In addition, the proposed codec has several attractive features. It performs well without entropy coding, and does not require any computationally-expensive motion estimation methods, such as those used by MPEG. Finally, a substantial advantage is that the encoder generates a bit-stream which allows for the progressive transmission of video, making it well-suited to use in video applications over digital networks.
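To make the transform stage concrete, the sketch below applies a separable 3D wavelet decomposition to a synthetic group of frames using the PyWavelets library. It only illustrates the spatio-temporal decomposition idea; the wavelet, decomposition depth, thresholding rule and random test data are arbitrary choices, and the codec's prediction and SPIHT coding stages are not reproduced.

```python
# Illustrative 3D (temporal + spatial) wavelet decomposition of a group of frames (GOF).
# The wavelet, level and synthetic data are arbitrary; this is not the thesis's codec.
import numpy as np
import pywt

# A GOF of 8 frames, each 64x64 pixels, filled with random values as a stand-in for video.
gof = np.random.rand(8, 64, 64).astype(np.float32)

# One-level 3D decomposition: axis 0 is time, axes 1-2 are the spatial dimensions.
coeffs = pywt.wavedecn(gof, wavelet="haar", level=1)
approx = coeffs[0]                 # 3D approximation subband (4 x 32 x 32 for 'haar')
details = coeffs[1]                # dict of detail subbands keyed 'aad', 'ada', ..., 'ddd'

print("approximation subband shape:", approx.shape)
print("detail subbands:", sorted(details.keys()))

# Crude compression stand-in: discard small detail coefficients, then reconstruct.
for key, band in details.items():
    band[np.abs(band) < 0.1] = 0.0
reconstructed = pywt.waverecn(coeffs, wavelet="haar")
print("mean reconstruction error:", float(np.abs(reconstructed - gof).mean()))
```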
427

The hydrated lime dissolution kinetics in acid mine drainage neutralization

Mgabhi, Senzo Mntukhona 19 August 2021 (has links)
Hydrated lime, Ca(OH)₂, has been rediscovered as an environmentally sustainable product that could be of help in the remediation of acid mine drainage (AMD), especially in the AMD neutralization process. This is due to its ease of acquisition, affordable price and unique versatile properties such as reactivity and neutralization efficiency. AMD is an acidic wastewater containing high concentrations of sulphates and dissolved heavy metals, mainly ferrous iron. The dissolution of Ca(OH)₂ in aqueous solution is complex, which makes its kinetics in AMD neutralization difficult to understand. The aim of this study was therefore to understand the Ca(OH)₂ kinetics in simplified solutions such as de-ionized water and CH₃COOH. The neutralization process is an acid-base reaction; therefore, pH was used as a critical parameter in determining the Ca(OH)₂ dissolution rate. The determination of the dissolution rate was attempted in two ways: measurement of dissolved calcium and determination of the change in particle size distribution. Two methods of determining calcium assays were investigated, namely the EDTA-EBT titration method and the OCPC spectrophotometric method. Both methods worked successfully for a Ca(OH)₂-H₂O system. The EDTA-EBT titration method worked better even at higher concentrations of calcium (up to 100 ppm), while the complexometric spectrophotometric method was consistent with the Beer-Lambert Law over a narrow calcium concentration range of 1 to 2 ppm when a small amount of magnesium was introduced. However, both methods failed in the presence of appreciable quantities of magnesium, sulphates and ferric ion. The investigation into particle characterization found that image analysis of SEM images was a better particle-size characterization option than laser diffraction measurement, which tended to cause blinding of the instrument window, but it still yielded only qualitative results. Four reactor configurations were used: a batch reactor for determining the effect of the hydrodynamics (stirring rate and powder addition) and three types of slurry CSTRs. The jacketed chemostat was found to be the optimal reactor configuration, while the other two were used as base cases. The Ca(OH)₂ dissolution rate in de-ionized water decreased from 4.0×10⁻⁵ to 1.6×10⁻⁵ mol‧L⁻¹‧s⁻¹ when the temperature was increased from 26 °C to 42 °C. Correspondingly, the pH decreased with the Ca(OH)₂ dissolution rate from 11.89 to 11.6. The dissolution rate expression was first order and consistent with the Nernst-Brunner equation, with a dissolution rate constant of 2.34×10⁻³ s⁻¹ and an activation energy of 18.1 kJ‧mol⁻¹. The overall Ca(OH)₂ dissolution rate in CH₃COOH solution decreased from 2.6×10⁻⁴ to 1.7×10⁻⁴ mol‧L⁻¹‧s⁻¹ when the temperature was increased from 25 °C to 44 °C. At constant ambient temperature (22 °C), the Ca(OH)₂ dissolution rate increased with the decrease in pH from 12.1 to 4.38, then decreased with the decrease in pH from 4.38 to 3.5. Using pH to correlate dissolved calcium data and then to determine the rate of reaction, it was found that the dissolution rate is zeroth-order with respect to the hydrogen ion and first-order with respect to the calcium concentration, with a dissolution rate constant of 1.2×10⁻² s⁻¹ and an activation energy of 5.7 kJ‧mol⁻¹. These results confirmed that the dissolution of Ca(OH)₂ in DI water and the acetic acid solution is complex.
The low values of the activation energies (5.7-18.1 kJ‧mol⁻¹) signify that the kinetics of Ca(OH)₂ dissolution are mass-transfer controlled. These results were further confirmed by the weak dependence of the dissolution rate on temperature. It was also found that the slurry CSTR is an efficient reactor system for studying the effect of pH on the kinetics of hydrated lime at steady-state conditions.
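For reference, the first-order dissolution behaviour described above is usually written in the Nernst-Brunner form, with an Arrhenius expression for the temperature dependence of the rate constant. The equations below are the standard textbook forms rather than expressions taken from the thesis.

```latex
% Nernst-Brunner first-order dissolution (standard form)
\frac{dC}{dt} = k \left( C_s - C \right), \qquad k = \frac{D A}{V h}
% C: dissolved Ca(OH)2 concentration, C_s: saturation concentration,
% D: diffusivity, A: particle surface area, V: solution volume, h: boundary-layer thickness.

% Arrhenius temperature dependence of the rate constant
k = k_0 \exp\!\left( -\frac{E_a}{R T} \right)
% E_a: activation energy (reported as 5.7-18.1 kJ/mol), R: gas constant, T: absolute temperature.
```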
428

Implementation of wide area protection system (WAPS) for electrical power system smart transmission grids

Tetteh, Bright 22 September 2021 (has links)
The planning, operation and control of the power system have been evolving since its inception. These changes are due to advancement in science and technology, and to changes in energy policy and customer demands. The envisioned power system - the smart grid (SG) - is expected to have functional and operational capabilities that maximize reliability and minimize generation deficits and cost issues in the power system. However, many power systems in the world today still operate traditionally, with one-way communication and one-way power flow. Transitioning to a smart grid influences the protection schemes of the power system, as the smart grid is to leverage distributed energy resources (DERs) using distributed generation (DG) units and allow for bi-directional flow of power and information. Therefore, there is a need for advanced protection schemes. Wide-area protection (WAP) techniques are proposed as one of the solutions to the protection challenges in the smart grid because of their reliance on wide-area information instead of local information. This dissertation considered three WAP techniques, differentiated by the data used for faulted-zone detection: (A) Positive sequence voltage magnitude (PSVM), (B) Gain in momentum (GIM) and (C) Sum of positive and zero sequence currents (SPZSC). The dissertation investigated their performance in terms of accuracy in detecting the faulted zones and the faulted lines, and in terms of fault clearing time. The investigation was done using three simulation platforms: MATLAB/Simulink, real-time (Software-in-the-Loop (SIL)) and Hardware-in-the-Loop (HIL) implementation using Opal-RT and an SEL-351A relay. The results show that, in terms of detecting the faulted zones, all the techniques investigated have 100% accuracy in all 36 tested fault cases. However, in terms of identifying the faulted line within the faulted zone, the algorithms were not able to detect all 36 tested cases accurately. In some cases, the adjacent line was detected instead of the actual faulted line; in those scenarios, the detected line and the faulted line present similar characteristics, causing the algorithms to detect the wrong line. For faulted-line detection accuracy, algorithm (A) has an accuracy of 86%, (B) 94% and (C) 92%. The fault clearing times of the algorithms were similar for both the MATLAB/Simulink and the real-time simulation without the actual control hardware, which was the SEL-351A relay. When the simulation was done with the control hardware through Hardware-in-the-Loop, a communication delay was introduced which increased the fault clearing times. The maximum fault clearing times for the techniques investigated through the HIL simulation are 404 ms, 256 ms and 150 ms for techniques (A), (B) and (C) respectively; this variation is due to the different fault detection methods used in the three algorithms. The fault clearing time includes communication between the Opal-RT real-time simulator and the SEL-351A relay using an RJ45 Ethernet cable; these fault clearing times can change if a different communication medium is used. From the performance data presented, it is evident that these algorithms will perform better when used as backup protection, since the common timer settings for backup protection schemes range from 1200 ms to 1800 ms, while primary protection is expected to respond almost instantaneously, that is, with no initial time delay.
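As a rough illustration of the positive-sequence voltage magnitude idea behind technique (A), the sketch below computes the positive-sequence voltage at one bus per zone from three-phase phasors via the symmetrical-component transform and flags the zone with the lowest magnitude as the suspected faulted zone. The phasor values, the single-bus-per-zone simplification and the "lowest magnitude wins" rule are invented for demonstration and are not the dissertation's algorithm or settings.

```python
# Illustrative faulted-zone screening using positive-sequence voltage magnitude (PSVM).
# Bus phasors and the decision rule are made up; real WAP schemes use PMU data,
# thresholds and timers that are not reproduced here.
import cmath
import math

A = cmath.exp(1j * 2 * math.pi / 3)      # the 'a' operator, a 120-degree rotation

def positive_sequence(va, vb, vc):
    """Positive-sequence component V1 = (Va + a*Vb + a^2*Vc) / 3."""
    return (va + A * vb + A**2 * vc) / 3

def polar(mag, deg):
    return cmath.rect(mag, math.radians(deg))

# Hypothetical per-unit phase voltages measured at one bus in each protection zone.
zones = {
    "zone 1": (polar(1.00, 0), polar(1.00, -120), polar(1.00, 120)),   # healthy
    "zone 2": (polar(0.45, 0), polar(0.95, -118), polar(0.97, 121)),   # depressed phase A
    "zone 3": (polar(0.98, 1), polar(0.99, -121), polar(0.98, 119)),   # healthy
}

v1_mag = {name: abs(positive_sequence(*v)) for name, v in zones.items()}
suspect = min(v1_mag, key=v1_mag.get)     # lowest |V1| indicates the suspected faulted zone

for name, mag in v1_mag.items():
    print(f"{name}: |V1| = {mag:.3f} pu")
print("suspected faulted zone:", suspect)
```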
429

An investigation into dynamical bandwidth management and bandwidth redistribution using a pool of cooperating interfacing gateways and a packet sniffer in mobile cloud computing

Shuuya, Lukas 29 September 2021 (has links)
Mobile communication devices are increasingly becoming an essential part of almost every aspect of our daily life. However, compared to conventional communication devices such as laptops, notebooks and personal computers, mobile devices still lack resources such as processing power, storage and network bandwidth. Mobile Cloud Computing is intended to augment the capabilities of mobile devices by moving selected workloads away from resource-limited mobile devices to resource-intensive servers hosted in the cloud. Services hosted in the cloud are accessed by mobile users on demand via the Internet using standard thick or thin applications installed on their devices. Nowadays, users of mobile devices are no longer satisfied with best-effort service and demand QoS when accessing and using applications and services hosted in the cloud. The Internet was originally designed to provide best-effort delivery of data packets, with no guarantee on packet delivery. Quality of Service has been implemented successfully in provider and private networks since the Internet Engineering Task Force introduced the Integrated Services and Differentiated Services models. These models have their legacy, but they do not adequately address the Quality of Service needs in Mobile Cloud Computing, where users are mobile, traffic differentiation is required per user, device or application, and packets are routed across several independently administered network domains. This study investigates QoS and bandwidth management in Mobile Cloud Computing and considers a scenario in which a virtual test-bed made up of the GNS3 network software emulator, a Cisco IOS image, the Wireshark packet sniffer, Solar-Putty and a Firefox web browser appliance is set up on a laptop virtualized with VMware Workstation 15 Pro. The virtual test-bed is in turn connected to the real-world Internet via the host laptop's Ethernet Network Interface Card. Several virtual Firefox appliances are set up as end-users and generate traffic by launching web applications such as video streaming, file download and Internet browsing. The traffic generated by the end-users and the bandwidth used are measured, monitored and tracked using a Wireshark packet sniffer installed on all interfacing gateways that connect the end-users to the cloud. Each gateway aggregates the demand of connected hosts and delivers Quality of Service to connected users based on the Quality of Service policies and mechanisms embedded in the gateway. Analysis of the results shows that a packet sniffer deployed at a suitable point in the network can identify, measure and track traffic usage per user, device or application in real time. The study has also demonstrated that, when deployed in the gateway connecting users to the cloud, the sniffer provides network-wide monitoring, and the traffic statistics collected can be fed to other functional components of the gateway, where a dynamical bandwidth management scheme can be applied to instantaneously allocate and redistribute bandwidth to target users as they roam around the network from one location to another. This approach is, however, limited: ensuring end-to-end Quality of Service requires mechanisms and policies to be extended across all network layers along the traffic path between the user and the cloud in order to guarantee consistent treatment of traffic.
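As a simplified illustration of per-host traffic accounting at a gateway, the sketch below uses the Scapy library to sniff packets on an interface and accumulate byte counts per source IP address over a short window. Scapy (rather than Wireshark/tshark, as used in the study), the interface name and the window length are assumptions for demonstration only; capturing packets normally requires elevated privileges.

```python
# Illustrative per-host bandwidth accounting with Scapy (a stand-in for the
# Wireshark-based measurement described above). Interface name and capture
# window are arbitrary choices.
from collections import defaultdict
from scapy.all import sniff, IP

bytes_per_host = defaultdict(int)
CAPTURE_SECONDS = 10
IFACE = "eth0"                      # hypothetical gateway-facing interface

def account(pkt):
    """Accumulate the packet length against the sending host."""
    if IP in pkt:
        bytes_per_host[pkt[IP].src] += len(pkt)

# Capture for a fixed window without storing packets in memory.
sniff(iface=IFACE, prn=account, store=False, timeout=CAPTURE_SECONDS)

# Report average throughput per host over the window; a gateway could feed these
# figures into a bandwidth-allocation component instead of printing them.
for host, nbytes in sorted(bytes_per_host.items(), key=lambda kv: -kv[1]):
    kbps = nbytes * 8 / CAPTURE_SECONDS / 1000
    print(f"{host}: {nbytes} bytes (~{kbps:.1f} kbit/s)")
```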
430

Design of a backend system to integrate health information systems – case study: ministry of health and social services (MoHSS)-Namibia

Shoopala, Anna-Liisa 29 September 2021 (has links)
Information systems are key to institutional organization and decision-making. In the health care field there is a large flow of data, from patient demographic information (through electronic medical records), pharmaceutical data on how patients' medication is dispensed, and laboratory data, to hospital organization information such as bed allocation. A healthcare information system is a system that manages, stores, transmits and displays healthcare data. Most of the healthcare data in Namibia are unstructured, and there is a heterogeneous environment in which different health information systems are distributed across different departments [1][2]. A lot of data is generated but never used in decision-making due to this fragmentation. The integration of these systems would bring a flood of big data into a centralized database. With information technology and new-generation networks becoming called-for innovations in everyday operations, accessing big data through information applications and systems in an integrated way will facilitate practical work in health care. The aim of this dissertation is to find a way in which these vertical Health Information Systems can be integrated into a unified system. A prototype of a back-end system is used to illustrate how the healthcare systems presently in place at Ministry of Health and Social Services facilities in Namibia can be integrated to promote more unified system usage. The system uses prototypes of subsystems that represent the current systems to illustrate how they operate and, in the end, how the integration can improve service delivery in the ministry. The proposed system is expected to benefit the ministry in its daily operations, as it enables instant authorized access to data without passing through middlemen. It will improve and preserve data integrity by eliminating multiple handling of data through a single data admission point. With one entry point to the systems, manual work will be reduced, hence also reducing cost. Overall, it will improve efficiency and increase the quality of service provided.
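As a purely hypothetical sketch of such a back-end integration layer, the code below exposes a single REST endpoint that aggregates one patient's record from several subsystem APIs. The subsystem URLs, endpoint paths and response fields are invented and do not correspond to MoHSS systems or to the prototype described in the dissertation.

```python
# Hypothetical back-end aggregation endpoint: one entry point that pulls a patient's
# record from separate subsystem APIs (EMR, pharmacy, laboratory). URLs and fields
# are invented for illustration only.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Assumed subsystem base URLs (placeholders only).
SUBSYSTEMS = {
    "demographics": "http://emr.example.local/api/patients",
    "pharmacy": "http://pharmacy.example.local/api/dispensing",
    "laboratory": "http://lab.example.local/api/results",
}

@app.route("/patients/<patient_id>/summary")
def patient_summary(patient_id):
    """Aggregate one patient's data from each subsystem into a single response."""
    summary = {"patient_id": patient_id}
    for name, base_url in SUBSYSTEMS.items():
        try:
            resp = requests.get(f"{base_url}/{patient_id}", timeout=5)
            resp.raise_for_status()
            summary[name] = resp.json()
        except requests.RequestException as exc:
            # A failing subsystem should not break the whole aggregated view.
            summary[name] = {"error": str(exc)}
    return jsonify(summary)

if __name__ == "__main__":
    app.run(port=8080)
```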
