601 |
Oxides in the dehydration of Magnesium Chloride Hexahydrate. Kashani-Nejad, Sina. January 2005 (has links)
No description available.
|
602 |
Modeling of Slag Entraining Funnel Formation ('Vortex') during liquid metal transfer operations. Sankaranarayanan, Ramani. January 1994 (has links)
|
603 |
Designing and developing a robust automated log file analysis framework for debugging complex system failure. Van Balla, Tyrone Jade. 29 June 2022
As engineering and computer systems become larger and more complex, additional challenges around the development, management and maintenance of these systems materialise. While these systems afford greater flexibility and capability, debugging failures that occur during their operation has become more challenging. One such system is the MeerKAT Radio Telescope's Correlator Beamformer (CBF), the signal processing powerhouse of the radio telescope. The majority of software and hardware systems generate log files detailing system operation during runtime. These log files have long been the go-to source of information for engineers when debugging system failures. As these systems become increasingly complex, the log files generated have exploded in both volume and complexity, as log messages are recorded for all interacting parts of a system. Manually using log files to debug system failures is no longer feasible. Recent studies have explored data-driven, automated log file analysis techniques that aim to address this challenge and have focused on two major aspects: log parsing, in which unstructured, free-form text log files are transformed into a structured dataset by extracting a set of event templates that describe the various log messages; and log file analysis, in which data-driven techniques are applied to this structured dataset to model the system behaviour and identify failures. Previous work has yet to combine these two aspects into an end-to-end framework for automated log file analysis. The objective of this dissertation is to design and develop a robust, end-to-end Automated Log File Analysis Framework capable of analysing log files generated by the MeerKAT CBF to assist in system debugging. The Data Miner, the Inference Engine and the complete framework are the major subsystems developed in this dissertation. State-of-the-art, data-driven approaches to log parsing were considered and the best-performing approaches were incorporated into the Data Miner. The Inference Engine implements an LSTM-based multi-class classifier that models the system behaviour and uses this model to perform anomaly detection, identifying failures from log files. The complete framework links these two components in a software pipeline capable of ingesting unstructured log files and outputting assistive system debugging information. The performance and operation of the framework and its subcomponents are evaluated for correctness on a publicly available, labelled dataset consisting of log files from the Hadoop Distributed File System (HDFS). Given the absence of a labelled dataset for the MeerKAT CBF, the applicability and usefulness of the framework in that context is subjectively evaluated through a case study. The framework is able to correctly model system behaviour from log files, but anomaly detection performance is greatly impacted by the nature and quality of the log files available for tuning and training the framework. When analysing log files, the framework is able to identify anomalous events quickly, even when large log files are considered. While the design of the framework primarily considered the MeerKAT CBF, a robust and generalisable end-to-end framework for automated log file analysis was ultimately developed.
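To make the Inference Engine's approach concrete, below is a minimal sketch of one common way an LSTM-based multi-class classifier is used for log anomaly detection: the model predicts the ID of the next event template in a sliding window, and an observed event is flagged as anomalous when it is not among the top-k predictions. The class and function names, window handling and layer sizes are illustrative assumptions, not details taken from the dissertation.

```python
# Hedged sketch: LSTM next-event predictor over parsed log-event templates.
import torch
import torch.nn as nn

class NextEventLSTM(nn.Module):
    def __init__(self, num_templates: int, embed_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_templates, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_templates)  # multi-class over templates

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, seq_len) of integer event-template IDs
        out, _ = self.lstm(self.embed(window))
        return self.head(out[:, -1, :])  # logits for the next event in the sequence

def is_anomalous(model: NextEventLSTM, window: torch.Tensor,
                 actual_next: int, top_k: int = 3) -> bool:
    # Flag the observed next event as anomalous if it is not among the
    # model's top-k most likely continuations of the window.
    with torch.no_grad():
        logits = model(window.unsqueeze(0))
        topk = torch.topk(logits, top_k, dim=-1).indices.squeeze(0)
    return actual_next not in topk.tolist()
```

In such a setup the log parser supplies the template IDs, so parsing quality directly bounds what the classifier can learn, consistent with the sensitivity to log-file quality reported above.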
|
604 |
An Information-Potential Approach to Airborne LiDAR Point Cloud Sampling and Assessment. Damkjer, Kristian. 01 January 2020 (has links) (PDF)
In the last decade, airborne laser scanning (ALS) systems have evolved to provide increasingly high-fidelity topographic mapping data. Point clouds and derivative models now rival photogrammetrically derived equivalents. Yet, despite technological advancement and widespread adoption of light detection and ranging (LiDAR) data, sampling guidance and data quality (DQ) assessment remain an open area of research due to the volumetric and irregularly sampled nature of point clouds and the persistent influence of assumptions from early point scanning LiDAR systems on assessment methods. This dissertation makes several contributions to the research area by considering point cloud sampling strategies and DQ assessment from an information potential perspective. First, a method is developed to estimate the quantifiable information content of each point in a cloud based on localized analysis of structure and attribution. This salience measure is leveraged to significantly reduce the population of points in a cloud while minimizing information content loss, demonstrating the importance of structure and attribution to the information potential of the cloud. Next, a method is developed to efficiently perform stratified sampling under constraints that preserve specific reconstruction guarantees. The developed approach leverages the previously established salience findings to provide general guidance for efficient sampling that maximizes the information potential of point clouds and derivative levels of detail (LODs). Third, current point cloud sample spacing and density DQ assessment methods are evaluated to expose potential biases, and alternative methods are developed that efficiently measure both metrics while mitigating the discovered biases. Finally, an initial treatment of additional factors perceived as remaining gaps in the current LiDAR DQ assessment landscape is presented. Several proposed assessments follow directly from the methods developed to support sample spacing and density assessment, and initial direction is provided for addressing the remaining identified factors.
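As a rough illustration of structure-driven salience, the sketch below scores each point by the eigen-entropy of its local neighbourhood covariance, a generic stand-in for the localized structure-and-attribution analysis described above. The function name and neighbourhood size are assumptions, and the dissertation's measure also incorporates point attribution, which this sketch omits.

```python
# Hedged sketch: per-point structural salience via neighbourhood eigen-entropy.
import numpy as np
from scipy.spatial import cKDTree

def eigen_entropy_salience(points: np.ndarray, k: int = 16) -> np.ndarray:
    # points: (N, 3) array of LiDAR returns
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)           # k nearest neighbours per point
    salience = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)           # 3x3 local covariance
        evals = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
        p = evals / evals.sum()                # normalised eigenvalue spectrum
        salience[i] = -(p * np.log(p)).sum()   # Shannon-style eigen-entropy
    return salience
```

Low-salience points (locally planar or linear structure) are natural candidates for decimation, while high-salience points carry more of the cloud's information content.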
|
605 |
Evaluating Different Multi-Criteria Decision Methods for the Comparison and Investigation of Public Transport Projects. Papathanasiou, Shameez Patel. 14 April 2023 (links) (PDF)
There is a need for affordable, reliable, and safe public transport in South Africa. In Cape Town, the most popular modes of public transport are rail, bus, Bus Rapid Transit (BRT) and minibus taxis. At this stage, the various modes are not integrated and, in some instances, are running in parallel. Many research papers have focused on comparing the capital costs and benefits of public transport investments, and the results often exclude the effects of criteria that are not easily monetised. In South Africa, Cost-Benefit Analysis (CBA) is often used to evaluate public transport projects, whereas in this research Multi-Criteria Decision Analysis (MCDA) methods were investigated and used. The objective of this research was to evaluate the available MCDA methods and establish MCDA as an alternative or supplementary method, or tool, that public transport planners could use when evaluating public transport projects. In order to test the MCDA methods, Cape Town's existing public transport was used as a case study, with each mode assumed to be operating exclusively. The five scenarios analysed are therefore: Rail (MetroRail); Bus (Golden Arrow); BRT (MyCiTi); Minibus Taxis; and an Integrated Public Transport System (theoretical). These modes were evaluated against a number of criteria including economic, social, and environmental impacts. The research focused on qualitative methods, supplemented by quantitative methods, in order to gain in-depth insight into public transport management and operations, as well as the costs and benefits involved, both direct and indirect. Research on public transport practices locally, nationally, and internationally was performed. From this, the alternatives for the case study, as well as the assessment criteria, were established. The research also included investigating multi-criteria analysis methods, ultimately leading to the methods chosen for the analysis. In order to perform the analyses using the alternatives and assessment criteria, the criteria needed to be weighted. The scenarios were analysed using an UNWEIGHTED viewpoint, where each criterion was equally weighted; a WEIGHTED-S viewpoint, where each criterion was weighted by key players (specialists) in the public transport discipline; and a WEIGHTED-P viewpoint, where each criterion was weighted by members of the general public who have used public transport in Cape Town. As these viewpoints may lead to differing results, aggregation methods were also included in the research. The aim of this investigation was thus to improve the way public transport projects are evaluated by establishing a multi-criteria analysis method which is reliable, simple, and capable of including a variety of criteria: monetary, qualitative, and quantitative. A variety of comparative evaluation methods exist. Within these, the popular methods for public transport appraisal are Cost-Benefit Analysis and a variety of Multi-Criteria Decision Analysis methods. Cost-Benefit Analysis (CBA) is the most used evaluation method for assessing infrastructural investments; in the transport field, it is the basic tool in most countries (Beria et al., 2012). The CBA is based on monetisation and inter-temporal discounting. Money is the unit of measure used as a common numeraire to translate all costs and benefits associated with an investment or a policy. 
Once all relevant effects of an investment are quantified, the concept of inter-temporal discounting is used to translate future costs and benefits to the present day by means of a social discount rate. In this way, the future can be compared with the present (Beria et al., 2012). CBA weighs the pros and cons of a project in a rational and systematic process. It inherently requires the creation and evaluation of at least two options, "do it or not", and it requires an evaluation at several different scales (nothing, minimum and all, as the least requirements) (OECD, 2006; EC, 2008; Ninan, 2008 as cited in Jones et al., 2014). Costs generally associated with a cost-benefit analysis include those related to construction and future maintenance, such as capital, major rehabilitation and annual maintenance costs over the life-cycle of the project. Other considerations include discounting of future costs and benefits, dealing with opportunity costs, inflation, avoidance of double counting, avoidance of sunk costs, dealing with joint costs and dealing with the sensitivity analysis (Kentucky Transportation Center, 2016). The limitations often associated with CBA include omitting costs or key benefits, as well as measuring factors like travel time savings and safety improvements, which are not easily monetised (Kentucky Transportation Center, 2016). In an attempt to mitigate the weaknesses of the CBA, Multi-Criteria Decision Analysis (MCDA) methods were investigated. Generally speaking, a multiple criteria decision problem is a scenario in which, having defined a set of actions/solutions (do nothing / upgrade rail / additional buses, etc.) and a consistent family of criteria (cost / accessibility / safety, etc.), the Decision Maker (DM) seeks to determine the best subset of actions and solutions according to the criteria (choice problem), divide the solutions into subsets representing specific classes of solutions according to concrete classification rules (sorting problem), or rank the actions and solutions from best to worst according to the criteria (ranking problem) (Zak, 2010). As previously mentioned, there are many MCDA methods available; Macharis & Bernardini (2015) performed an investigation to establish the methods most commonly used for transport project analysis. The three most popular methods are AHP/ANP (Analytic Hierarchy/Network Process), often used in combination with another method such as the Evaluation of Mixed Data method (EVAMIX); TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution); and Fuzzy Set approaches, often used as part of another method such as the Simple Additive Weighting method (SAW, also known as Weighted Sum). The SAW method follows the school of thought of unified scores across alternatives, applying weights and summing the results per alternative. The EVAMIX method follows the second school of thought and takes it one step further: after the unification of scores, the alternatives are compared pairwise (Vanderschuren & Frieslaar, 2008). In order to compare the outcomes of different methods without the use of specialised software, and thus make the approach accessible, the SAW and EVAMIX methods were used in conjunction with the AHP method, thereby appealing to both schools of thought. The AHP method, as developed by Saaty (1980), is a helpful tool for managing the qualitative and quantitative criteria involved in decision-making. As its name states, it is based on a hierarchical structure (Taherdoost, 2017). 
The AHP method also develops a linear additive model but, in its standard format, uses procedures for deriving weights and the scores achieved by alternatives which are based, respectively, on pair-wise comparisons between criteria and/or options (Department of Communities and Local Government, 2009). The fundamental input to the AHP method is the decision makers' answers to a series of questions of the general form, 'How important is criterion A relative to criterion B?' These pair-wise comparisons can be used to establish the weights for criteria and the performance scores for the options on the different criteria (Department of Communities and Local Government, 2009). The SAW method, also known as weighted linear combination, weighted summation or scoring, is a simple and often used multi-attribute decision technique. The method is based on the weighted average. An evaluation score is calculated for each alternative by multiplying the scaled value given to the alternative on each attribute by the weight of relative importance directly assigned by the decision maker, followed by summing the products over all criteria. The advantage of the method is that it is a proportional linear transformation of the raw data, which means the relative order of magnitude of the standardised scores remains equal (Afshari et al., 2010). The EVAMIX method was first introduced by Voogd (1982, 1983) and developed by Nijkamp et al. (1990) and Martel and Matarazzo (2005), as cited in Tuş Işık & Aytaç Adalı (2016). A key feature of the method is that it includes and combines both ordinal and cardinal, beneficial and non-beneficial data within the same evaluation matrix, hence the name. The EVAMIX method applies different computations to the data in the evaluation matrix depending on whether it is ordinal or cardinal (Hajkowicz & Higgins, 2008, as cited in Tuş Işık & Aytaç Adalı, 2016). EVAMIX is a simple decision support tool that requires pairwise comparison of alternatives: for each pair of alternatives, dominance scores for the ordinal and cardinal criteria are calculated, and these dominance scores are then combined into an overall dominance score for each alternative (Hinloopen et al., 2004, as cited in Tuş Işık & Aytaç Adalı, 2016). Finally, the alternatives are ranked based on the appraisal scores (Chatterjee & Chakraborty, 2013, as cited in Tuş Işık & Aytaç Adalı, 2016). The two chosen MCDA methods both rank the alternatives; however, the resulting rankings may not be the same, because of the different assumptions made in each method as well as the difference in criteria weights between the weighted and unweighted analyses. In this case, aggregation of the methods may be needed. In this research, it is proposed that the Borda and Copeland methods be used, as well as the Average Ranking Procedure. The Average Ranking Procedure ranks the alternatives by their mean values, as opposed to the Borda and Copeland methods, which rank alternatives by voting (Cheng & Saskatchewen, 2000). As mentioned, there are many ways in which public transport projects are evaluated, and part of the reason that a structured methodology is not used is the complex nature of public transport. The potential impacts are directly related to the range of effects (e.g., economic, financial, environmental, social, direct/indirect) and the affected groups (users, non-users, as well as government and private operators) (Ferreira & Lake, 2002). 
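As a minimal sketch of the AHP-to-SAW pipeline just described, the snippet below derives criterion weights from a pairwise comparison matrix via its principal eigenvector, computes Saaty's consistency ratio, and ranks alternatives by weighted sum. The random index values are Saaty's commonly tabulated figures; all other numbers and names are illustrative, not the study's data.

```python
# Hedged sketch: AHP weight derivation followed by SAW ranking.
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    evals, evecs = np.linalg.eig(pairwise)
    i = int(np.argmax(evals.real))
    w = np.abs(evecs[:, i].real)
    w = w / w.sum()                                   # normalised criterion weights
    n = pairwise.shape[0]
    ci = (evals.real[i] - n) / (n - 1)                # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}.get(n, 1.45)
    return w, ci / ri                                 # accept if CR < 0.10

def saw_rank(scores: np.ndarray, weights: np.ndarray):
    # scores: (alternatives x criteria), pre-scaled to a common range
    totals = scores @ weights                         # weighted sum per alternative
    return np.argsort(-totals), totals                # best alternative first
```

The CR < 0.10 acceptance test in `ahp_weights` is the same consistency check applied to the survey responses later in this abstract.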
For the purposes of this thesis, a multi-actor multi-criteria analysis was adopted and three views were analysed (specialist, academic and transport users). Using the existing public transport in Cape Town as a case study, the following scenarios were analysed: the existing rail service (MetroRail); the existing bus service (Golden Arrow); the existing BRT service (MyCiTi); the existing minibus taxi service; and a theoretical integrated public transport system. It should be noted that for the theoretical integrated transport system, it was assumed that the existing rail, BRT and bus services continued operating and the minibus taxis would operate as feeders to the rest of the system; the services would not operate in parallel. In addition, it was assumed that the BRT system would not expand and that the funds available would instead be used to upgrade the existing public transport along the proposed routes. The above scenarios were evaluated against a set of criteria. To establish the criteria, the most important ones were identified by evaluating official statements and government documents to establish the focus regarding public transport in South Africa. The criteria were as follows: Cost, Land-Use, Affordability for Users, Accessibility, Estimated Speed, Convenience & Reliability, Environmental Effects, and Safety & Security. Two MCDA methods were used with the three alternative weightings previously described: specialist, general public and academic (unbiased). The AHP method used to establish weightings was simple to use for both the planners/engineers and the general public; as the consistency ratio was under 10%, it can be concluded that the general public were consistent in their answers and thereby understood the questions and the survey method. The general public rated 'Accessibility' as the top criterion, whereas the specialists in the private and public sectors agreed that 'Safety & Security' is the top criterion, which was the second most important criterion to the general public. Tied with 'Safety & Security' as the second most important criterion, the general public also voted for 'Affordability'; the private sector specialists agreed, whereas the public sector rated 'Accessibility' as the second most important criterion. In third place, the general public as well as the specialists in the public sector agreed that 'Cost' is important, whereas the private sector rated 'Accessibility' as the third most important criterion. While the three perspectives differed in ranking, the top four criteria across the board, in no particular order, were 'Accessibility', 'Affordability', 'Safety & Security' and 'Cost'. At the other end of the scale, the lowest-weighted criteria were 'Speed' for the general public and 'Environmental' for engineers in both the public and private sectors. The engineers in both sectors agreed that 'Speed' was the second least important criterion and, conversely, the general public listed 'Environmental' as the second least important criterion. All three perspectives agreed that 'Convenience & Reliability' was the third least important criterion. The bottom three criteria, in no particular order, were therefore 'Speed', 'Environmental' and 'Convenience & Reliability'. The SAW method using the specialist weighting (public and private combined) and the general public weighting resulted in the same conclusion. 
The theoretical integrated transport system was the best choice, and the BRT system was considered the least favourable. The academic perspective resulted in minibus taxis being the best choice, and its worst choice coincided with the specialist and general public perspectives, i.e. BRT. The EVAMIX method results differ slightly between the three perspectives; however, all three agreed that the theoretical integrated transport system was the best alternative. The specialist perspective resulted in the trains being the worst option, while the general public and academic perspectives resulted in the BRT being the least favourable option. The results were aggregated using three aggregation methods. These methods resulted in the same rankings, with the theoretical integrated system being the best option for investment and the BRT being the least desirable option. It should be noted that this evaluation was based on a theoretical approach to the integrated transport system; once the system is designed, further evaluation using accurate data should be performed, which may change the outcome. In conclusion, both MCDA methods were implemented with feasible results and, therefore, both are readily applicable to the evaluation of public transport projects. It is recommended that, as far as possible, primary data be collected when implementing public transport evaluations. It is also recommended that public transport projects be evaluated over the lifecycle of the chosen project. Generally, public transport projects are evaluated by, or at the order of, the City of Cape Town or the Western Cape Government, and should this be the case, access to more accurate data should be achievable. It is further recommended that, should an integrated transport system be considered, the analysis be repeated with the detailed design of the integrated transport system, which would provide more precise data and may change the results.
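For illustration, a Borda-count aggregation of the kind used above to combine the SAW and EVAMIX rankings might look like the following sketch; the alternative labels are placeholders rather than the study's results.

```python
# Hedged sketch: Borda-count aggregation of several method/weighting rankings.
def borda_aggregate(rankings: list[list[str]]) -> list[str]:
    # rankings: each inner list orders the alternatives from best to worst
    n = len(rankings[0])
    points: dict[str, int] = {}
    for ranking in rankings:
        for pos, alt in enumerate(ranking):
            points[alt] = points.get(alt, 0) + (n - 1 - pos)  # best earns n-1
    return sorted(points, key=points.get, reverse=True)

# e.g. two hypothetical rankings, both placing the integrated system first:
combined = borda_aggregate([["IPT", "Taxi", "Rail", "Bus", "BRT"],
                            ["IPT", "Rail", "Taxi", "Bus", "BRT"]])
```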
|
606 |
Development of an Integrated Thermal Hydrolysis Process - Anaerobic Digestion (THPAD) Model. Olando, Alexander. 11 April 2023 (links) (PDF)
Historically, anaerobic digestion is one of the most common processes used to treat sludge generated from wastewater treatment plant (WWTP) processes. However, with the exponential increase in populations, which implies an increase in WWTP loads, the amount of waste generated poses an imminent problem for the handling capacity of current anaerobic digesters. Consequently, there has been considerable research into various physical and chemical processes that would allow for a more efficient sludge handling mechanism. Studies have reported various advantages associated with digesting sludge at elevated, thermophilic temperatures. These advantages include increased sludge handling capacity, a higher degree of sludge biodegradability and consequently increased methane production, and better sludge dewatering characteristics implying lower sludge transportation costs, to mention a few. However, despite the advantages associated with thermal treatment, this technology has not yet been proven in a South African context. This project involved the development of an integrated thermal hydrolysis process (THP) and anaerobic digestion (AD) model capable of simulating these processes at elevated temperatures. A comparative desktop case study of the existing AD facility at the Cape Flats wastewater treatment works (CFWWTW) in the Western Cape, South Africa was undertaken, following the City of Cape Town's (CCT) initiative to retrofit a THP unit to the anaerobic digesters to help deal with the increase in sludge handling capacity. A comparison was therefore carried out, investigating the base case scenario of maintaining the existing conventional mesophilic anaerobic digesters (MAD) against retrofitting a THP unit to the conventional anaerobic digesters (THPAD). A steady-state THP and AD model was developed and used in conjunction with an integrated dynamic THP and modified AD model (termed the Extended-UCTSDM3P) for simulating both the conventional MAD and THPAD processes. This allowed for a comparison of results not only between the two processes, but also between the two types of models. These models were then used to simulate the treatment of a mixture of primary sludge (PS) and waste activated sludge (WAS) at a ratio of 60:40, with the WAS obtained from a Nitrification-Denitrification Biological Excess Phosphorus Removal (NDBEPR) activated sludge system. The AD models therefore accounted for the increased phosphorus concentration resulting from polyphosphate (PP) breakdown and, consequently, the possible precipitation of struvite (MgNH4PO4) from the AD liquor. The results showed that the THPAD configuration allowed the digesters to process 2.3 times more sludge than the conventional mesophilic anaerobic digesters. Furthermore, the methane production in the THPAD was conservatively calculated to be 2.5 times higher than in the MAD. This implies an increased potential for use of the methane gas as an alternative source of energy in wastewater treatment plants. Given that no laboratory experiments were carried out, the results were based on theoretical scenarios and knowledge collected from an extensive literature review. However, given the capacity, flexibility and level of detail to which the model has been developed, different scenarios in the anaerobic digestion process can be investigated and valuable practical insight extracted. 
Furthermore, through calibration with accurate, meaningful data from a pilot or full-scale plant, the developed model is a tool that could be used to predict digester performance.
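As a hedged back-of-envelope illustration of the kind of steady-state balance such a model performs, the sketch below estimates methane production from the biodegradable COD converted in the digester, using the commonly cited yield of roughly 0.35 m³ CH4 per kg COD at STP. The sludge load and biodegradability figures are assumptions chosen only to mirror the reported ratios, not CFWWTW case-study data.

```python
# Hedged sketch: steady-state methane estimate from converted COD.
def methane_m3_per_day(cod_load_kg_d: float, biodegradable_fraction: float) -> float:
    YIELD_M3_PER_KG_COD = 0.35   # ~0.35 m3 CH4 (STP) per kg COD converted in AD
    return cod_load_kg_d * biodegradable_fraction * YIELD_M3_PER_KG_COD

# Illustrative only: if THP lets the digesters take 2.3x the sludge load and
# modestly raises biodegradability, methane rises by roughly the reported 2.5x.
mad = methane_m3_per_day(10_000, 0.55)           # assumed MAD figures
thpad = methane_m3_per_day(10_000 * 2.3, 0.62)   # assumed THPAD figures
ratio = thpad / mad                              # ~2.6
```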
|
607 |
Design of the aerobic HAIL reactor - towards improved energy efficiency. Shaer, Gianluca Sasha Salvatore Ganter. 21 April 2023 (links) (PDF)
This dissertation presents the results of an investigation into the design of a novel low aspect ratio reactor, dubbed the HAIL (horizontal air-injected loop) reactor. Current industrial high cell density aerobic reactors for the cultivation of bacteria and yeast are typically stirred tank reactors (STRs), bubble column reactors (BCRs) or airlift reactors (ALRs). These systems can attain high mass transfer rates and short mixing times; however, their energy efficiency remains a concern. Many studies have attempted to further optimise these reactors, but they are ultimately limited by their high aspect ratios, which lead to large pressure heads that the air compressor needs to overcome on sparging, contributing significantly to energy costs. Low aspect ratio (LAR) reactors, such as the wave bag, orbital shaker and raceway reactors, offer an alternative to these systems, providing superior energy efficiency for both mixing and aeration. However, each has core issues preventing its usage in high cell density aerobic culture: their maximum mass transfer coefficient is typically too low to support high cell density cultures, and these reactors tend to have poor scalability, making them unfeasible for large-scale industrial usage. To overcome these challenges, the HAIL reactor makes use of a tubular loop design. The anticipated benefit of the loop design was that it forces the air to travel the length of the reactor before leaving the system, enabling significant surface aeration and residence time in the reactor, both of which impact the mass transfer coefficient. Additionally, the loops can be stacked upon one another, overcoming the scalability issue. The reactor would also be energy efficient owing to its LAR. To establish target performance ranges, a literature review of the gas-liquid mass transfer coefficient, mixing time and efficiency of current low and high aspect ratio (HAR) reactors was conducted. This was supplemented with experimental results (including mass transfer coefficients, cell density and viscosity) from the fed-batch STR cultivation of Saccharomyces cerevisiae, a highly aerobic yeast that is easy to work with; a fed-batch feeding profile was developed for this purpose. To better compare reactor performance, a term called the mass transfer energy efficiency was introduced, with units m³·h⁻¹·W⁻¹, obtained as the quotient of the kLa and the power input per unit volume. The literature mass transfer energy efficiency ranges for the STR, BCR and ALR were found to be 0.022-0.236 m³·h⁻¹·W⁻¹, 0.084-0.317 m³·h⁻¹·W⁻¹ and 0.142-0.493 m³·h⁻¹·W⁻¹ respectively, with maximum kLa values ranging up to 1000 h⁻¹ depending on the power input. Mixing times for these systems differ depending on scale and configuration, ranging from below a minute up to 20 minutes. The experimental fed-batch and sterile water systems had efficiency ranges of 0.044-0.245 m³·h⁻¹·W⁻¹ and 0.059-0.285 m³·h⁻¹·W⁻¹ respectively, with maximum kLa values of 240 h⁻¹ and 226 h⁻¹. Based on cellular growth results, the theoretical minimum kLa required was calculated as 372 h⁻¹. The most notable literature efficiencies for LAR reactors were held by the travelling loop, raceway, and wave reactors, with ranges of 0.286-0.295 m³·h⁻¹·W⁻¹, 0.034-0.867 m³·h⁻¹·W⁻¹, and 0.112-0.742 m³·h⁻¹·W⁻¹. For the wave and travelling loop reactors, mixing times below a minute were attainable. A 6.2 L proof-of-concept and a 31.4 L laboratory-scale prototype of the HAIL reactor were developed. 
In the proof-of-concept prototype, preliminary studies were carried out on the impact of sparger depth and angle on circulation time. Using the laboratory-scale system, a range of sparger designs, including different angled jets, outlet areas and a circular sparger design, were investigated; the circular sparger design was found to be the ideal sparger type. A mixing time of 7-19 minutes, depending on the power input, was found for the 31.4 L configuration. The power efficiency range determined was 0.120-0.281 m³·h⁻¹·W⁻¹; however, the calculation used to determine this is an under-approximation. The maximum kLa of 13.84 h⁻¹ is one to two orders of magnitude (between 10 and 100 times) lower than the values that can be obtained in HAR reactors for industrial aerobic culture. It was found that HAIL reactor performance did not change substantially with an increase in viscosity from 1 to 1.4 cP. In its current configuration, the HAIL reactor did not compete with existing low and high aspect ratio reactors in terms of mass transfer. Additional research on the design is recommended to enhance gas-liquid contacting and the associated mass transfer. These ongoing studies will enable the potential relevance and application of the novel reactor to be determined.
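The mass transfer energy efficiency metric defined above reduces to a one-line calculation, sketched below; the power and volume figures are illustrative assumptions chosen to land near the top of the reported HAIL range.

```python
# Hedged sketch: mass transfer energy efficiency = kLa / (P / V).
def mass_transfer_energy_efficiency(kla_per_h: float, power_w: float,
                                    volume_m3: float) -> float:
    # Units: (1/h) / (W/m3) = m3.h-1.W-1, matching the ranges quoted above.
    return kla_per_h / (power_w / volume_m3)

# e.g. a kLa of 13.84 1/h at an assumed 1.55 W input into the 0.0314 m3
# laboratory-scale reactor gives ~0.28 m3.h-1.W-1.
efficiency = mass_transfer_energy_efficiency(13.84, 1.55, 0.0314)
```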
|
608 |
The Force Floor: Design and Development of a Low-Cost 3D Force Sensing Area Which Utilises Machine Learning to Estimate 3D GRF and CoP from Single-Axis Loadcells. Stickells, Devin. 30 July 2023 (links) (PDF)
Along with motion capture tools, ground reaction force (GRF) sensors form the crux of objective biomechanical analysis. Advances in computer vision have significantly lowered the costs associated with 3D motion capture, but the same cannot be said of 3-axis force plates, the gold standard for GRF capture. If holistic biomechanics analysis is to become more accessible, a more affordable method of 3D GRF measurement is needed. Single-axis loadcells are significantly cheaper than their 3-axis equivalents, though when axes are not mechanically isolated there is the possibility of crosstalk and the absorption of forces which cannot be measured, leading to a system that cannot be fully described analytically, and is possibly nonlinear in its behaviour. This research investigates the design and small-scale manufacture (to 20 units) of a low-cost force plate that utilises a machine learning model to overcome these limitations and estimate 3D GRF and centre of pressure from a series of single-axis loadcells. A literature review was performed to understand and compare the relevant approaches to the core aspects of the project. An early proof-of-concept plate was built and tested along with a simple neural network to establish the feasibility of the idea. Following further investigation, it was discovered that the internal geometry of the plate played an integral role in its accuracy. To this end, the force plate was simulated, and an extensive hardware design process undertaken prior to the design of a full-scale prototype. It was subsequently hypothesised that the ease of replicating the design could be aided by the development of an automated data creation rig, as well as the use of recently developed machine learning techniques which reduce data dependency, such as Sim2Real transfer learning and physics-informed residual networks. A data creation rig was built for this purpose. Twenty prototype plates were built, with sixteen of them interlinked to create the prototype Force Floor, a large force sensing area. The performance of a subset of these plates and their corresponding models was tested against an Advanced Mechanical Technology Inc. (AMTI) BMS6001200 force plate, with the best obtaining average measurement disagreements in the X-, Y- and Z-directions of 1.23, 1.08, and 1.11 percent of the full-scale force respectively (with full-scale deflections of 600 N, 600 N and 2000 N respectively). The project's results were encouraging as far as the viability of this design and approach for use in the production of an affordable 3-axis force plate is concerned.
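To make the regression at the heart of the Force Floor concrete, the sketch below maps one plate's single-axis loadcell readings to the five estimated outputs (Fx, Fy, Fz, CoPx, CoPy) with a small feed-forward network and computes the percent-of-full-scale disagreement metric quoted above. The loadcell count, architecture and training details are assumptions, not the dissertation's final design.

```python
# Hedged sketch: loadcell-to-GRF/CoP regression and the accuracy metric.
import torch
import torch.nn as nn

N_LOADCELLS = 4  # assumed number of single-axis loadcells per plate

model = nn.Sequential(
    nn.Linear(N_LOADCELLS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 5),            # outputs: Fx, Fy, Fz, CoPx, CoPy
)

def full_scale_error_pct(pred: torch.Tensor, truth: torch.Tensor,
                         full_scale: torch.Tensor) -> torch.Tensor:
    # Mean absolute disagreement per channel, as a percentage of that
    # channel's full-scale value (e.g. 600 N, 600 N, 2000 N for the forces).
    return 100.0 * (pred - truth).abs().mean(dim=0) / full_scale
```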
|
609 |
Investigating the performance requirements for proprietary concrete repair materials with respect to durability and cracking resistance. Vukindu, Brian. 17 September 2021
The premature deterioration of recently constructed concrete structures leads to the need for remedial measures to reinstate their safety and/or serviceability. Bonded concrete overlays (BCOs) are the most widely used concrete repair technique. The premature failure of these overlays, often manifested by cracking and/or debonding, is common despite their widespread use. There are many repair standards, codes and technical guidelines for BCOs, and the performance requirements for BCOs stated in these standards vary. This makes the specification of repair materials difficult. The problem is further compounded by the existence of many proprietary concrete repair materials. The objective of this study was to investigate the performance requirements for proprietary repair mortars with respect to cracking resistance and durability, in terms of EN 1504-3:2005. This was achieved through an investigation of the mechanical, durability and transport properties of proprietary repair mortars in the hardened state. The mechanical properties that were tested comprised: compressive strength, tensile strength, elastic modulus, tensile relaxation, restrained shrinkage cracking and drying shrinkage. Durability index tests (OPI, CCI and WSI) were also performed. Twelve proprietary repair mortars were tested in the laboratory and their chemical and physical characteristics, based on the aforementioned material properties, were determined. The mortars under investigation exhibited significant differences in their physical properties and chemical composition. A review of the existing performance criteria, as stipulated in EN 1504-3:2005, was also conducted to determine whether the repair mortars under investigation conform to the requirements of this code. From the test results it was noted that the tested proprietary repair materials achieved the compressive strengths stated in EN 1504-3:2005. Eleven of the tested repair materials were categorised as "structural", with only mix P2 being a "non-structural" repair mortar. These results also confirmed the specifications/categorisation from the manufacturers. Mixes PS, PFS, SA, S1, S2, G1, PF1, G2, P1, PF2 and A were categorised as high strength mortars to be used for structural repairs; mix P2, having a low compressive strength, is to be used as a cosmetic repair mortar. Furthermore, it was observed that high compressive and tensile strengths of the overlay do not necessarily translate into a high bond strength. The proprietary repair mortars exhibited low permeability. A review of EN 1504-3:2005 showed that this code does not specify important crack-determining material parameters such as elastic modulus, tensile relaxation and shrinkage, despite the critical role they play in the cracking performance of repair mortars. Further research into the microstructural properties of the proprietary repair materials is recommended to give additional insights into the causes of their different physical properties. This should be combined with on-site observation and testing to identify any potentially problematic macro-scale issues associated with repair mortars, particularly in relation to moisture transmission and retention. Understanding these factors, amongst others, is essential to prevent damage to repaired structures through the use of incompatible repair materials.
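As an illustration of the structural/non-structural categorisation reported above, the sketch below classifies a mortar by the EN 1504-3 compressive strength classes as commonly cited (R4 and R3 structural; R2 and R1 non-structural). The threshold values are stated from general familiarity with the standard and should be verified against EN 1504-3:2005 itself.

```python
# Hedged sketch: EN 1504-3 repair mortar class from 28-day compressive strength.
def en1504_3_class(compressive_strength_mpa: float) -> str:
    if compressive_strength_mpa >= 45:
        return "R4 (structural)"       # assumed thresholds; verify vs standard
    if compressive_strength_mpa >= 25:
        return "R3 (structural)"
    if compressive_strength_mpa >= 15:
        return "R2 (non-structural)"
    if compressive_strength_mpa >= 10:
        return "R1 (non-structural)"
    return "below EN 1504-3 class limits"

# e.g. a 52 MPa mortar classifies as R4, while a low-strength cosmetic mix
# such as the study's P2 would fall into a non-structural class.
```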
|
610 |
Interference Management of Inband Underlay Device-to-Device Communication in 5G Cellular Networks. Boamah, Sharon Ampomaa. 29 July 2021
The explosive growth of data traffic, emanating from smart mobile devices and bandwidth-consuming applications on the cellular network, creates the need to drastically modify the cellular network architecture. A challenge faced by network operators is the inability of the finite spectral resources to support the growing data traffic. The Next Generation Network (NGN) is expected to meet defined requirements such as massively connecting billions of devices with heterogeneous applications and services through enhanced mobile broadband networks, which provide higher data rates with improved network reliability and availability, lower end-to-end latency and increased energy efficiency. Device-to-Device (D2D) communication is one of several emerging technologies proposed to support the NGN in meeting these requirements. D2D communication leverages the proximity of users to provide direct communication with or without traversing the base station. Hence, the integration of D2D communication into cellular networks provides potential gains in terms of throughput, energy efficiency, network capacity and spectrum efficiency. D2D communication underlaying a cellular network provides efficient utilisation of the scarce spectral resources; however, it introduces interference arising from the reuse of cellular channels by D2D pairs. This dissertation therefore focuses on the technical challenge of interference management in underlay D2D communication. In order to tackle this challenge and exploit the potential of D2D communication, some important research questions must be answered. The study thus aims to find out how cellular channels can be efficiently allocated to D2D pairs for reuse as an underlay to the cellular network, and how mode selection and power control approaches influence the degree of interference caused by D2D pairs to cellular users. The study also seeks to determine how the quality of D2D communication can be maintained under factors such as bad channel quality or increased distance. In addressing these research questions, the resource management techniques of mode selection, power control, relay selection and channel allocation are applied to minimise the interference caused by D2D pairs when reusing cellular channels, guaranteeing the Quality of Service (QoS) of cellular users while maximising the number of D2D pairs permitted to reuse channels. The open loop power control scheme is examined in D2D communication underlaying a cellular network, and the effect of the fractional open loop power control components on SINR is studied. The simulation results showed that the conventional open loop power control method provides increased compensation for the path loss, with higher D2D transmit power, when compared with the fractional open loop power control method. Furthermore, the problem of channel allocation to minimise interference is modelled in two system model scenarios, consisting of cellular users coexisting with D2D pairs with or without relay assistance. The channel allocation problem is solved as an assignment problem using a proposed heuristic channel allocation algorithm, random channel allocation, and the Kuhn-Munkres (KM) and Gale-Shapley (GS) algorithms. 
A comparative performance evaluation of the algorithms is carried out in the two system model scenarios, and the results indicate that D2D communication with relay assistance outperforms conventional D2D communication without relay assistance. It can be concluded that the introduction of relay-assisted D2D communication can improve the quality of a network while utilising the available spectral resources without additional infrastructure deployment costs. The research work can be extended by applying an effective relay selection approach in a user mobility scenario.
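A brief sketch of two of the techniques combined above: the LTE-style fractional open loop power control rule, P = min(Pmax, P0 + α·PL) in dBm/dB, and channel allocation posed as an assignment problem solved with the Kuhn-Munkres algorithm. The utility matrix below is an illustrative stand-in for the SINR-based rates used in the dissertation.

```python
# Hedged sketch: fractional open loop power control plus KM channel assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def d2d_tx_power_dbm(p0_dbm: float, alpha: float, pathloss_db: float,
                     p_max_dbm: float = 23.0) -> float:
    # alpha = 1 reproduces conventional (full) path loss compensation;
    # alpha < 1 gives the fractional variant, lowering D2D transmit power.
    return min(p_max_dbm, p0_dbm + alpha * pathloss_db)

# utility[i, j]: achievable rate if D2D pair i reuses cellular channel j;
# reuses that would breach the cellular user's QoS get a large penalty.
FORBIDDEN = -1e9
utility = np.array([[3.1, 1.2, FORBIDDEN],
                    [0.8, 2.7, 1.9],
                    [2.0, FORBIDDEN, 2.4]])

pairs, channels = linear_sum_assignment(utility, maximize=True)  # KM algorithm
```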
|