  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
411

Notes on steam-electric power engineering, with special reference to the civil engineer

Olivier, Henry 22 November 2016 (has links)
No description available.
412

An investigation of colour rendering preferences by means of synthetic spectra

Naude, David Eduard Hugo 15 December 2016 (has links)
No description available.
413

Using machine learning techniques in developing an autonomous network orchestration scheme in 5G networks

Mohamad, Anfar Mohamad Rimas 11 February 2021 (has links)
Network orchestrators are the brains of 5G networks. The orchestrator is responsible for the orchestration and management of the Network Function Virtualisation Infrastructure (NFVI), understanding the network services running on the NFVI and its software resources. The International Telecommunication Union (ITU) has categorised three main 5G network services for orchestration: Enhanced Mobile Broadband (eMBB), Ultra-Reliable and Low-Latency Communications (uRLLC) and Massive Machine-Type Communications (mMTC). Categorising the network in this way is achieved in 5G by a method called network slicing. In the future, a device connecting to a 5G network will be placed in one of these three slices (eMBB, uRLLC or mMTC) based on its network characteristics. The focus of this dissertation is the eMBB slice. Ordinary day-to-day internet users will use the eMBB slice, so daily internet activities such as watching YouTube videos, making Skype video calls, calling via WhatsApp, downloading files and listening to online radio will all happen via the eMBB slice. However, this approach neglects the importance of the web application a user is running within the eMBB slice. For example, a family doctor may give first-aid assistance via a Skype video call in an emergency, in which case the doctor's call should be prioritised over other routine web tasks. There is therefore a need to prioritise ordinary web tasks in certain scenarios, which the eMBB slice neglects. It is possible to detect websites or web applications with modern technologies, and such website-detection algorithms can be extended to detect web tasks (Skype voice calling, Skype video calling, etc.) so that a separate slice within the eMBB slice can be provided, for instance upon a doctor's request. The goal of this study is to identify web tasks by capturing the network data packets flowing in and out of the system and performing an application-based classification using machine learning techniques. After classification, the data is fed to the 5G Orchestrator or to the 5G Core, and the Orchestrator allocates a number of Network Function Virtual Machines to provide the best quality of service (QoS) based on the generated slice information. In this research, a Website Task Finger Printing (WTFP) algorithm is introduced to identify web traffic, for example identifying that a user is watching a video on Facebook rather than just detecting which website they are viewing. Possible applications of the developed algorithm range from 5G ultra slicing to network security. This study delves deeper into Website Finger Printing (WFP): traditional papers only describe how to identify websites using statistical analysis, whereas this study shows how to identify what task a user is performing rather than just which website they are visiting. The identifier captures the inbound and outbound data and uses the packet length histogram as the main feature. Application-based features are then extracted using heuristic logical filters to prepare a feature vector for the machine learning (ML) algorithm. A trained Multi-Layer Perceptron (MLP) based Artificial Neural Network (ANN) was selected as the classifier after comparing results with a Support Vector Machine (SVM), a Recurrent Neural Network (RNN) and a Convolutional Neural Network (CNN). The MLP algorithm was able to classify website tasks with 95.50% accuracy.
After classification, the classified class is sent to the 5G Orchestrator, which refers to a programmed Network Service Descriptor and, based on our specifications, generates a new slice using the Network Slice Engine (NSE). It then monitors the present bitrate of the slice using Zabbix, and either increases or decreases the bitrate to give the optimum QoS using the Auto Scaling Engine (ASE). The algorithm was also used to generate specific QoS using the Open5G Core. This study therefore shows that it is possible to allocate slices based on web tasks in a 5G mobile network, and proposes further investigation into enabling web-task-based slicing for future mobile networks.
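The feature-extraction and classification idea described in this abstract (a packet-length histogram fed to a multi-layer perceptron) can be sketched in a few lines. The example below is a minimal illustration using scikit-learn; the bin edges, flow data and web-task labels are assumptions for demonstration, not the thesis's actual WTFP pipeline or dataset.

```python
# Minimal sketch of the packet-length-histogram + MLP idea described in the
# abstract, using scikit-learn. The bin edges, flow data and web-task labels
# are illustrative assumptions, not the thesis's actual WTFP pipeline.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def packet_length_histogram(lengths, n_bins=64, max_len=1514):
    """Turn a flow's packet lengths (bytes) into a normalised histogram feature vector."""
    hist, _ = np.histogram(lengths, bins=n_bins, range=(0, max_len))
    return hist / max(hist.sum(), 1)

# Hypothetical captured flows: (list of packet lengths, web-task label).
flows = [
    ([120, 1514, 1514, 60, 1200, 1514], "video_call"),
    ([60, 80, 75, 90, 60, 85],          "voice_call"),
    ([1514] * 40 + [60] * 5,            "file_download"),
] * 50  # replicated only so the train/test split below has enough samples

X = np.array([packet_length_histogram(lengths) for lengths, _ in flows])
y = np.array([label for _, label in flows])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```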
414

The effect of the surface condition of Aluminium ingot (AA3003) during roll bonding with clad Aluminium alloy (AA4045) to form an Aluminium brazing material

Mutsakatira, Innocent 16 February 2021 (has links)
Hulamin is the leading producer of aluminium products in South Africa. One of the products made at Hulamin is the aluminium brazing sheet. The aluminium brazing sheet is made from two aluminium alloys, AA3003 and AA4045. The main alloying element in the 3XXX series alloy is manganese, and the main alloying element in the 4XXX series alloy is silicon. An aluminium brazing sheet is manufactured during an industrial process called "accumulative roll bonding", where AA4045 is termed "the clad" and AA3003 "the core". The two materials are stacked together with the core sandwiched between two clad layers. Before the materials are stacked together, they undergo surface preparation. At Hulamin, the surface roughness of the core is kept at 10 µm and the surface roughness of the clad at 1 µm. After surface preparation, the stacked material is put into a hot rolling mill, where it undergoes reduction through several passes until it reaches the desired gauge. The aim of this project is to determine the effect of the surface roughness of both the clad and the core on the quality of the bond after roll bonding. While the relevant literature specifies that an increase in surface roughness increases bond strength, the surface finishes currently implemented at Hulamin have been obtained through trial and error, with no validated experimental work to support them. This research aims to find the optimum surface finish in order to streamline the process of surface preparation. A design matrix was constructed based on the surface finish being used at Hulamin, where the core was at 10 µm and the clad at 1 µm. Fourteen surface conditions were formulated and three tests were performed on each surface condition. The samples were manually ground on different grit papers to an average surface roughness of 0.5 µm, 1 µm and 3 µm for the clad and 7 µm, 10 µm, 15 µm and 25 µm for the core. Simulation of the hot rolling at the University of Cape Town's (UCT's) Centre for Materials Engineering (CME) laboratory was achieved using plane strain compression (PSC) testing on the Gleeble 3800. The PSC sample geometry of 30 mm x 50 mm x 10 mm was achieved by stacking a 5 mm sample from the clad liner plate and a 5 mm thick sample from the as-cast core material. To simulate the hot roll bonding, the tests were run at 450 °C at a strain rate of 1.5 s⁻¹. The test parameters were obtained from the Hulamin mill log data. In order to assess the strength of the bond after the PSC test, tensile shear testing was performed on specimens wire-cut from the gauge of the deformed PSC sample. The tensile shear specimens were designed according to ASTM D3165. The tensile shear tests were performed on a Zwick universal testing machine, in conjunction with single-camera Digital Image Correlation (DIC). The purpose of the DIC was to monitor the strain localisation at the interface. The tensile test was run at 0.0012 mm/min at room temperature. The shear test results confirmed that surface roughness played a major role in the bond strength formed between these two dissimilar materials. It was found that the Hulamin benchmark surface preparation, set at 10 µm and 1 µm, could be improved by increasing the surface roughness of the core to 15 µm while keeping the clad surface finish constant. Specimens were sectioned in the rolling direction (RD), mounted and polished for microstructural characterisation, using light microscopy and scanning electron microscopy (SEM) with backscattered electron (BE) imaging.
In order to characterise the bond further, energy-dispersive X-ray spectroscopy (EDS) was performed across the interface of the samples to show the diffusion of Si. Microstructural analysis revealed that a poor bond was associated with the presence of large voids, while a high-integrity metallurgical bond contained very small voids. A good metallurgical bond also allowed for the diffusion of Si across the bond, although these results were qualitative because diffusion of Si across the interface is largely time- and temperature-dependent. Combined strain and microstructural results showed that finer surface roughnesses yielded poorer bonds because of minimal frictional force, and that rougher surface finishes also yielded poorer bonds, owing to larger troughs on the surface of the material that led to void formation at the interfaces, which in turn caused sites of delamination. An optimum surface finish therefore exists for the two alloys at which a metallurgical bond of optimum strength is obtained. Should this optimum finish be exceeded, the strain level would inevitably increase during tensile shear testing, with the induced voids increasing in size and Si diffusion across the interface decreasing, indicating a compromise in the quality of the bond. The findings of this research could be of significant value to Hulamin in improving the quality and cost of the end product under consideration.
415

Assessment of a Shredding Technology of Waste Printed Circuit Boards in preparation for Ammonia-based Copper leaching

Prestele, Marc Patrick 24 February 2021 (has links)
The electronic waste (e-waste) stream grows at a global annual rate of 3-5%, with an expected 50 Mt to be discarded worldwide in 2020 alone. These large amounts of e-waste pose considerable environmental and health problems while also presenting socio-economic opportunities to most nations, especially to developing countries such as South Africa. E-waste presents a particular challenge to developing nations: they suffer the problems associated with e-waste, but do not have sufficient waste volumes to adopt the business models used in developed countries to harness the economic opportunities presented by the growth of this waste stream. Recycling of e-waste requires large capital and operating costs to run integrated recycling facilities, and developing countries generally lack this funding. Furthermore, developing countries lack the adequate infrastructure, legislation and capital investment necessary for the processing of e-waste, whether it is regarded as a secondary resource or as waste. Printed circuit boards (PCBs) are a valuable fraction of e-waste, made up of tightly laminated metal-polymer composites containing several base and precious metals, which makes them attractive to recyclers. Hydrometallurgy is a widely explored technology that allows for scalable operations for recovering metals from PCBs. However, for it to be effectively employed, the metals in PCBs need to be liberated or be accessible to leach agents. To date, this still relies heavily on energy-intensive pulverisation prior to the leaching and subsequent metal recovery stages. This paper explores the structure of the PCB, developing an understanding of how the structural design of the board translates into the difficulty of liberating or exposing the metals for leaching. The paper goes further to test and compare metal liberation techniques, as well as the energy consumption and costs associated with each, with a view to identifying a low-energy, low-capital-investment method suitable for adoption by the small-scale recyclers typical of those operating in South Africa. The structural design of the PCBs was explored through an intensive literature survey, a case study of the PCB manufacturing process of a local company, and tensile tests, drop weight impact tests and three-point bending tests on a batch of custom-made PCBs supplied by the local company. The metal liberation methods tested included the use of an industrial grab shredder to size-reduce and delaminate the PCBs, the use of a planetary ball mill and, in some instances, precursor treatments such as freezing the PCBs in liquid nitrogen or soaking the boards in NaOH to remove the upper- and lowermost epoxy layers. The effectiveness of each method was then evaluated using a diagnostic ammoniacal leach test in which the extent of copper dissolution from the PCB is used as an indicator of the performance of the liberation method. Results on the structural design of the PCBs showed that it would be suitable to use size reduction mechanisms based on impact stresses, as the fibreglass and epoxy could absorb all other stresses at high intensity without failing. In general, all treated or untreated PCBs underwent a maximum of six shredding passes, with results generally producing poor recoveries, not exceeding 27.5%.
“Untreated” PCBs, referring to PCBs that had only undergone shredding in the industrial grab shredder, showed increasingly higher copper recoveries with consecutive shredding cycles. The sixth cycle produced the highest copper recovery of 6.80 g (23.5%) after 72 hours. PCBs that had been soaked in NaOH and had undergone six passes through the industrial grab shredder recovered a maximum of 27.5%. Interestingly, using a similar process but shredding the PCBs in only four passes showed similar results, at 26.14% Cu recovery. Shredding the PCBs in four passes and subsequently milling them for 60 min (without NaOH treatment) showed lower Cu recoveries, at 13.29%, and this was not improved by extending the milling time to 120 min. This showed that the NaOH treatment was more effective in exposing the outer layers of copper than the shredding and milling. It can be seen that, apart from size reduction, there is delamination of some of the shredded PCB pieces. However, this delamination is not always complete and Cu metal can still be seen covered by fibreglass and hence inaccessible to leach agents. It is concluded that the combination of the shredding and NaOH methods has potential, and it is recommended to incorporate a second NaOH stage to further delaminate the inner layers of the PCB, exposing the copper.
416

Three-phase five limb transformer responses to geomagnetically induced currents

Murwira, Talent Tafadzwa 14 September 2021 (has links)
Geomagnetically induced currents (GIC) are quasi-DC currents that result from space weather events arising from the sun. The sun ejects hot plasma in events termed 'coronal mass ejections', which can be directed towards the earth. This plasma interacts with the magnetic field of the magnetosphere and ionosphere, and the magnetic field is subsequently distorted. The distortions in these regions result in variations of potential on the earth's surface and distortions in the earth's magnetic field. The potential difference between two points on the earth's surface leads to the flow of direct current (DC) of very low frequency, in the range 0.001 to 0.1 Hz. Geomagnetically induced currents enter the power system through the grounded neutrals of power transformers. The potential effects of GIC on transformers are asymmetrical saturation, increased harmonics, noise, magnetisation current, hot-spot temperature rise and reactive power consumption. Transformer responses to GIC were investigated in this research, focussing on a three-phase five-limb (3p5L) transformer. Practical tests and simulations were conducted on 15 kVA, 380/380 V 3p5L transformers. The results were extended to large power transformers in FEM, using equivalent circuit parameters to show the response of grid-level transformers. A review of the literature on the thresholds of GIC that can initiate damage in power transformers was also done; it was noted that small magnitudes of DC may cause saturation and harmonic generation in power transformers, which may lead to the gradual failure of transformers conducting GIC. Two distinct methods, the conventional method and the General Power Theory, were used to measure the reactive power consumed by the transformers under DC injection. The results show that the conventional method underestimates the reactive power consumed by transformers under the influence of DC injection, which may mislead system planners in calculating the reactive power reserves required to mitigate the effects of GIC on the power system.
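The gap between the two power-measurement methods mentioned above can be illustrated numerically. The sketch below assumes a sinusoidal supply feeding a distorted, harmonic-rich current (as drawn by a saturating transformer) and compares a fundamental-frequency-only reactive power figure with the total non-active power; it is an illustration of why the definitions diverge under distortion, not a reproduction of the thesis's General Power Theory formulation or its measurements.

```python
# Illustrative comparison (not the thesis's measurement method): fundamental-only
# reactive power Q1 versus total non-active power N = sqrt(S^2 - P^2) for a
# distorted current, such as a transformer magnetising current under DC bias.
import numpy as np

f, fs, T = 50.0, 10_000.0, 0.2             # mains frequency, sample rate, window (10 cycles)
t = np.arange(0, T, 1 / fs)

v = np.sqrt(2) * 230 * np.sin(2 * np.pi * f * t)              # sinusoidal supply voltage
i = (np.sqrt(2) * 5 * np.sin(2 * np.pi * f * t - np.pi / 3)   # fundamental, lagging 60 deg
     + np.sqrt(2) * 2 * np.sin(2 * np.pi * 3 * f * t)         # assumed 3rd-harmonic content
     + np.sqrt(2) * 1 * np.sin(2 * np.pi * 5 * f * t))        # assumed 5th-harmonic content

P = np.mean(v * i)                          # active power
S = np.sqrt(np.mean(v**2) * np.mean(i**2))  # apparent power (rms V x rms I)
N = np.sqrt(S**2 - P**2)                    # total non-active power
Q1 = 230 * 5 * np.sin(np.pi / 3)            # fundamental-frequency reactive power only

print(f"P = {P:.0f} W, S = {S:.0f} VA, Q1 = {Q1:.0f} var, N = {N:.0f} var")
# Q1 < N here: a fundamental-only figure misses the harmonic (distortion) contribution.
```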
417

Design and Implementation of an RFI Direction Finding System for SKA Applications

Gowans, James Zekkai Middlemost 04 August 2021 (has links)
For radio astronomy telescopes to be able to perform observations of weak signals from space, they need to operate in a radio-quiet environment. Any radio frequency interference (RFI) will interfere with the ability of the telescope to collect data. With the proliferation of electrical and electronic devices, RFI management is one of the major challenges facing radio astronomy reserves. This thesis details the design, construction and testing of a system which is able to find the direction from which a source of interference is coming. User requirements for the system are captured, and one of the key requirements is the ability to direction-find two classes of RFI: weak narrow-band continuous signals, and strong impulsive signals. Both of these classes of signals pose problems for radio telescopes. The primary focus of the thesis is implementing the algorithms to direction-find those signals, and evaluating whether the algorithms perform as expected on real RFI sources in the field. An analysis of various prior direction-finding techniques from the existing literature is done to select the most suitable technique for this system. A combination of phase interferometry and time difference of arrival is selected, due to their suitability for the classes of signals, the operating environment and the hardware that will be used. Simulations are done to show how the system should operate and to highlight potential challenges. A key challenge is phase ambiguity, and special attention is paid to mitigating it. After design and simulation, a full system is implemented containing a number of subsystems linked together. A four-element deformed circular antenna array and RF front end pick up signals from the environment. These signals are digitised together, in phase, by fast analogue-to-digital converters (ADCs). The output of the ADCs goes into a field programmable gate array (FPGA) on a ROACH development board, which performs high-speed digital signal processing (DSP) including Fourier transforms, spectrum cross-correlations, accumulations, power detection and time-domain capture. The output of the DSP done on the FPGA is received by a computer running a Python application which performs the final angle-of-arrival calculations in real time. The application has a mathematical model of the antenna array, which it combines with the received baseline time-difference or phase-shift measurements to ascertain the direction of the signal source. When designing and building the system, emphasis is put on making it flexible and reconfigurable, allowing it to be used with arbitrary array configurations or frequency ranges. The system is first put together and tested in the lab using signal generators, noise sources and impulse generators. These signals are fed into the ROACH board to simulate an RF environment and hence ensure that the design is working as expected. Next, the system is made portable and taken out for field trials. The field trials demonstrate that the system is able to provide accurate tracks for a number of different RFI sources, both impulsive and narrowband. It is able to maintain a track over the full 360° field of view as required.
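The phase-interferometry calculation at the heart of the narrowband case can be sketched with a single two-element baseline: the phase difference measured at the tone frequency is 2πd·sin(θ)/λ, from which the angle of arrival follows, and the phase stays unambiguous while the baseline is shorter than half a wavelength. The frequency, baseline, noise level and signal below are assumptions; the actual system uses a four-element deformed circular array with FPGA-based cross-correlation.

```python
# Minimal two-element phase-interferometry sketch (not the thesis's four-element
# ROACH/FPGA pipeline): estimate angle of arrival from the phase of the cross-
# spectrum at the signal frequency. Geometry and signal values are assumptions.
import numpy as np

c = 3e8                       # propagation speed (m/s)
f = 150e6                     # assumed narrowband RFI frequency (Hz)
lam = c / f                   # wavelength (2 m here)
d = 0.8                       # antenna baseline (m); d < lam/2 keeps the phase unambiguous
fs = 1e9                      # sample rate (Hz)
theta_true = np.deg2rad(25.0) # true angle of arrival, measured from broadside

t = np.arange(0, 20e-6, 1 / fs)
tau = d * np.sin(theta_true) / c                  # extra path delay at the second antenna
rng = np.random.default_rng(0)
x1 = np.cos(2 * np.pi * f * t) + 0.05 * rng.standard_normal(t.size)
x2 = np.cos(2 * np.pi * f * (t - tau)) + 0.05 * rng.standard_normal(t.size)

# Phase difference at the tone frequency via the cross-spectrum at bin k = f/fs*N.
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
k = int(round(f / fs * t.size))
dphi = np.angle(X1[k] * np.conj(X2[k]))           # phase of antenna 1 relative to antenna 2

theta_est = np.arcsin(dphi * lam / (2 * np.pi * d))
print(f"true {np.rad2deg(theta_true):.1f} deg, estimated {np.rad2deg(theta_est):.1f} deg")
```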
418

An investigation into the performance and problems of first-year engineering students at the University of Cape Town

Jawitz, Jeff January 1992 (has links)
Bibliography: leaves 203-208. / The first- and second-year results of the 1989 engineering student intake were analysed and revealed that matriculants from Black Education Departments performed significantly worse in the first year than those from White Education Departments. Matric point scores were found to be good predictors for White Education Department matriculants, but less so for Black Education Department matriculants, with matric Physical Science a better predictor than matric Maths for both first- and second-year courses. Using interviews and a survey of students, a set of academic and non-academic problems experienced by first-year engineering students was identified, with black students found to have experienced a particular set of problems to a greater degree than white students. The data produced a portrait of the interaction between first-year engineering students and the academic and social systems of the university. The dominant feature that emerged was one of distance between the individual students and elements of the university environment, including staff, fellow students and the academic material. Factors from the students' personal and educational backgrounds that appeared to accentuate this experience of distance were identified. Recommendations to the Engineering Faculty were compiled on the basis of this analysis, together with student suggestions for improving the first-year engineering programme.
419

Selected technology options for sanitation provision to developing communities in urban South Africa

Peters, Craig Russell January 1993 (has links)
Includes bibliography. / In 1990 the Water Research Commission initiated an evaluation of sanitation entitled "Technical, socio-economic and environmental evaluation of sanitation systems for developing urban areas in South Africa". The research project was undertaken jointly by Palmer Development Group and the University of Cape Town. The project culminated in 26 reports submitted under the collective title "Urban Sanitation Evaluation" in December 1992. This thesis is based principally on the research work that the Water Research Group, Department of Civil Engineering, University of Cape Town, contributed to the project and, in particular, the overview document entitled "Technology options for sanitation provision to developing communities" (PDG/UCT Document AS, 1992). Since 1986, after the removal of legal restrictions on urbanisation, a high rate of population movement to the cities and towns commenced, with housing of the urban poor a focal point. The three essential aspects of housing are location in reasonable proximity to work, provision of services (in particular water supply and sanitation), and the house or shelter itself. Water supply and sanitation are basic health requirements. This thesis investigates selected technology options for sanitation provision to developing communities in urban South Africa.
420

Value chain diversification in the sugar industry using quantitative economic forecasting models

Ghafeer, Amna 10 August 2021 (has links)
The South African sugar industry is facing increasing pressure from global sugar markets, where the price of sugar is significantly lower than in domestic markets, as well as from the implementation of the health levy, which has resulted in beverage manufacturers replacing sugar with non-taxable sweeteners. To maintain the industry infrastructure and to increase the demand for sugar, a diversification route for sucrose is needed. Most of the studies focused on identifying a diversification solution for bioproducts are survey- or experience-based, so one of the main aims of this study was to use mathematical modelling of industrial manufacturing data to identify a single industry in which to explore sucrose-based chemicals. Datasets published by Statistics South Africa, The World Bank, Trading Economics and the Organisation for Economic Co-operation and Development were considered, from which the monthly manufacturing industries' sales data published by Statistics South Africa were selected for model building. Seven different types of models were considered: the naïve method, simple and weighted moving averages, simple exponential smoothing, Holt's method, Holt-Winters' method and Auto-Regressive Integrated Moving Average (ARIMA) models. Each type of model was analysed in the context of the eight industries' data, from which ARIMA models were identified as broad enough to cater for the varying degrees of trend and seasonality in the data without oversimplifying the data's behaviour. The other models were not suitable, either because their narrow applicability did not suit most of the datasets at hand or because they would provide an oversimplified model that would not be robust for future data points. The models were then built using training and test data splits with the auto.arima function in RStudio. From these, selection matrices were constructed to evaluate the industries' forecasts on sales growth and revenue-generating potential, the results of which identified the beverages industry as the best option for investment. One of the objectives of the study was to identify a sucrose-based chemical for investment that is not highly commercialised, in order to widen the range of investment options available. To this end, only four of the less commercialised chemicals explored showed significant advancement based on published research and patents, namely caprolactam, dodecanedioic acid, adipic acid and muconic acid. However, all four chemicals would feature mainly in the textiles industry, which the model identifies as not being a high-growth industry and which would thus limit the revenue-generating potential. The main constituents of common beverages were then explored, from which non-nutritive sweeteners were chosen based on their wide applicability. Of the six sweeteners considered, sucralose is the most widely used, with the fewest reported serious health risks; this is thought to compensate for sucralose being a mid-price-range product. Sucralose would also allow the sugar industry to leverage beverage manufacturers' replacement of sugar with sweeteners to comply with the Health Protection Levy. The techno-economic analysis performed for the selected synthetic sucralose production process proved profitable in the first year of operation, as did a refined configuration using a lower ethyl acetate flow rate. This is largely due to the retail price of sucralose being close to eight times the purchase cost of the most expensive raw material used.
Although this profitability analysis is promising, further investigation into the fixed capital costs involved should be done before the sugar industry invests in sucralose. Recommendations for further work to improve the profitability of this scenario include considering a strategic partnership with key players in the beverages industry, exploring alternative production routes, and using other time series models to validate the results achieved here.
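The model-selection workflow described above, fitting ARIMA models on a training split and evaluating forecasts on a held-out test split, can be sketched in Python. The example below uses pmdarima's auto_arima (the closest Python analogue of the R auto.arima function used in the study) on a synthetic monthly series, since the Statistics South Africa sales data are not reproduced here.

```python
# Sketch of the train/test ARIMA workflow: the thesis used R's auto.arima;
# pmdarima's auto_arima is the closest Python analogue. The monthly "sales"
# series below is synthetic, not the Statistics South Africa data.
import numpy as np
import pandas as pd
import pmdarima as pm

rng = np.random.default_rng(1)
months = pd.date_range("2010-01-01", periods=120, freq="MS")
trend = np.linspace(100, 180, 120)                     # upward sales trend
season = 10 * np.sin(2 * np.pi * np.arange(120) / 12)  # yearly seasonality
sales = pd.Series(trend + season + rng.normal(0, 4, 120), index=months)

train, test = sales[:-24], sales[-24:]                 # hold out the last two years

model = pm.auto_arima(train, seasonal=True, m=12,      # m=12 for monthly seasonality
                      stepwise=True, suppress_warnings=True)
forecast = np.asarray(model.predict(n_periods=len(test)))

mape = np.mean(np.abs((test.values - forecast) / test.values)) * 100
print("selected orders:", model.order, model.seasonal_order)
print(f"hold-out MAPE: {mape:.1f}%")
```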
