41

Coupling Radio Frequency Energy Via the Embedded Rebar Cage in a Reinforced Concrete Structure for the Purpose of Concrete Degradation Sensing

Campiz, Ryan 01 January 2018 (has links)
This study focuses on utilizing an energy harvesting system in which a dedicated Radio Frequency (RF) power source transmits RF power via rebar in a reinforced concrete column. The RF power is received and decoupled by a receiver, then rectified, boosted, and stored as electrical energy in a supercapacitor, later to be used to make measurements, process data, and communicate to the source via rebar. Two design attempts are presented in this study: (a) one uses single-line conduction at 2.4 GHz for RF power transfer; (b) the other uses a more conventional two-line conduction at 8.0 kHz. Both designs were unsuccessful: (a) the 2.4 GHz attempt demonstrated that no detectable RF power propagated through the concrete medium; (b) the 8.0 kHz attempt demonstrated that too much of the RF power was attenuated through the concrete medium for the energy harvesting circuitry to work properly. A potential third design approach is posited in the conclusion of this study. In addition to investigating power transfer designs, a study on the energy harvesting circuitry was performed. A Two-Stage Dickson Multiplier was utilized in conjunction with a Texas Instruments BQ25504 Ultra-Low Power Energy Harvesting Circuit. For these two components to function best, it was shown that the BQ25504’s input filtering capacitor needed to be on the same order of magnitude as the charging capacitors of the Dickson Multiplier; if the filtering capacitor was comparatively too large, it would short the output of the multiplier. With that configuration, the lowest input power at which the circuit operated was 7.83 dBm, although lower input powers are expected to be achievable. Nevertheless, since the second design attempt showed that power losses through the concrete were too significant, it was concluded that, unless the power transfer design is improved, contemporary commercial off-the-shelf energy harvesting approaches are insufficient.
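A quick arithmetic sketch of two details in this abstract: converting the 7.83 dBm minimum observed input power to milliwatts, and checking the capacitor-sizing rule for the BQ25504 input filter. The capacitor values below are hypothetical placeholders, not values from the thesis.

```python
import math

def dbm_to_mw(p_dbm):
    """Convert power in dBm to milliwatts: P[mW] = 10^(P[dBm]/10)."""
    return 10 ** (p_dbm / 10.0)

# Lowest input power at which the harvesting circuit operated in this study.
p_min_dbm = 7.83
print(f"{p_min_dbm} dBm = {dbm_to_mw(p_min_dbm):.2f} mW")  # about 6.07 mW

# Illustrative sizing check (hypothetical values): the BQ25504 input filtering
# capacitor should stay on the same order of magnitude as the Dickson multiplier
# charging capacitors, or it will load down the multiplier output.
c_charge = 100e-9   # hypothetical Dickson-stage charging capacitor, 100 nF
c_filter = 220e-9   # hypothetical BQ25504 input filter capacitor, 220 nF
same_order = abs(math.log10(c_filter / c_charge)) < 1.0
print("Filter capacitor within one order of magnitude of charging capacitors:", same_order)
```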
42

A Study of Walkway Safety and Evaluation of Tribological Test Equipment

Baker, Henry Thomas 01 January 2014 (has links)
A walkway tribometer measures the coefficient of friction between flooring material and a test foot. The value of the coefficient of friction (COF) is an indicator of whether the flooring surface is slippery and has a propensity to cause slips and falls. This study determined that one style of tribometer, the XL Tribometer, mimics the heel-to-floor interaction of the human heel strike. High-speed video footage revealed that the test foot strikes the surface and rotates so that full engagement occurs before sliding, thus mimicking the effect of a human ankle. The test foot accelerates forward as would be expected during a human slip event. The manufacturer’s reported impact speed of 11 in/s, at the operating pressure of 25 psi, was found to be much lower than the speeds measured on three calibrated tribometers. The three XL tribometers tested produced a range of impact speeds from 17.4 to 22.7 in/s (n = 540) when set to the operating pressure of 25 psi. The pressure setting was found to have a significant effect on the impact speed, while the mast angle had an insignificant effect. A review of human walking studies revealed a range of pedestrian heel impact speeds on the order of 19.4 to 45.3 in/s during normal ambulation activities; the tested tribometers fell on the low side of this speed range. A sensitivity study showed that the measured value of the coefficient of friction tends to decrease with a higher impact speed. This COF decrease was on the order of 0.02, below the machine resolution, and is considered inconsequential within the walkway safety community.
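As a small illustration of the speed comparison above, the sketch below checks whether the measured tribometer impact-speed range overlaps the cited pedestrian heel impact-speed range, and whether the manufacturer's stated speed falls inside the measured range. It only restates numbers already given in the abstract.

```python
# All speeds in in/s.
tribometer_range = (17.4, 22.7)   # measured over three XL tribometers at 25 psi (n = 540)
pedestrian_range = (19.4, 45.3)   # range reported in human walking studies
manufacturer_speed = 11.0         # manufacturer's stated impact speed

overlap_lo = max(tribometer_range[0], pedestrian_range[0])
overlap_hi = min(tribometer_range[1], pedestrian_range[1])
print("Measured range overlaps pedestrian range:", overlap_lo <= overlap_hi)   # True (19.4-22.7)
print("Manufacturer spec falls inside measured range:",
      tribometer_range[0] <= manufacturer_speed <= tribometer_range[1])        # False
```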
43

Soil Improvement Using Microbial Induced Calcite Precipitation and Surfactant Induced Soil Strengthening

Davies, Matthew P. 01 January 2018 (has links)
Microbially induced calcite precipitation (MICP) has been used for a number of years as a technique for improving various geological materials. MICP has been used in a limited capacity in organic-rich soils with varying degrees of success. The investigators hypothesized that microbially induced cementation could be improved in organic soils by using a surfactant. Varying amounts of sodium dodecyl sulfate (SDS) were added to soils of varying organic content, and a mixing procedure was used to treat these soils via MICP. Treated specimens were tested for unconfined compressive strength (UCS). Results appeared to show direct relationships between SDS content and treated specimen strength, although significant variability was present in the data. Results also indicated that while the addition of SDS during MICP treatment strengthens soil, the strengthening likely comes from the formation of a calcium dodecyl sulfate (CDS) complex in which the CDS surrounds the soil in a matrix, and the formation of MICP-induced calcite has very little to do with overall soil performance. As such, a new method for stabilizing loose soils, dubbed surfactant-induced soil stabilization (SISS), was further explored by treating additional soil specimens. Samples treated using this technique showed increases in strength when compared to untreated specimens. In addition, preliminary data indicated that SISS-treated specimens were insoluble. The SISS technique presents a number of advantages when compared to traditional soil stabilization techniques. In particular, it should be relatively low-cost and simple to administer, since its only components are SDS and calcium chloride. Additionally, these constituents are more sustainable than the chemicals associated with more traditional loose-soil stabilization techniques.
44

The operational and safety effects of heavy duty vehicles platooning

Alzahrani, Ahmed 01 January 2019 (has links)
Although researchers have studied the effects of platooning, most of the work done so far has focused on fuel consumption; only a few studies have targeted the impact of platooning on highway operations and safety. This thesis focuses on the impact of heavy-duty vehicle (HDV) platooning on highway characteristics. Specifically, this study aims at evaluating the effects of HDV platooning on capacity, safety, and CO2 emissions. The study is based on a hypothetical model created using the VISSIM software, a microscopic simulation package designed to mimic field traffic flow conditions. For model validity, the model outputs were compared with recommended values from guidelines such as the Highway Capacity Manual (HCM) (Transportation Research Board, 2016). VISSIM was used to obtain the simulation results regarding capacity. In addition to VISSIM, two other software packages were used to obtain outputs that cannot be assessed in VISSIM: MOVES and SSAM, which were used for emission and safety metrics, respectively. Both packages depended on input from VISSIM for analysis. It was found that with the presence of HDV platoons in the model, the capacity, CO2 emissions, and safety of the roadway would improve. A capacity of 4,200 PCE/h/ln could be achieved when there are enough HDVs in platoons. Furthermore, a reduction of more than 3% in CO2 emissions from the traffic flow is possible when 100% of the HDVs in the model are in platoons. In addition, a reduction of more than 75% in the total number of conflicts might be obtained. Finally, a full factorial analysis and a Design of Experiments (DOE), conducted using Excel and Minitab respectively, made it possible to investigate the impact of the platoon factors on the highway parameters. Most of these factors affect the parameters significantly; however, the change in desired speed was found to have an insignificant effect on the highway parameters, due to the high penetration rate.
Keywords: VISSIM, MOVES, SSAM, COM-interface, HDVs, Platooning, Number of Conflicts
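The abstract describes a full factorial experiment on the platoon factors, analyzed with Excel and Minitab. The sketch below shows how such a two-level full factorial design could be enumerated in Python; the factor names and levels are hypothetical illustrations, not the settings used in the thesis.

```python
# Hypothetical two-level full factorial design for platoon factors.
from itertools import product

factors = {
    "platoon_size":      (3, 5),       # vehicles per platoon
    "intra_platoon_gap": (0.5, 1.0),   # seconds
    "penetration_rate":  (0.5, 1.0),   # fraction of HDVs travelling in platoons
    "desired_speed":     (55, 65),     # mph
}

design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(design)} runs in the 2^{len(factors)} full factorial design")  # 16 runs
for run in design[:3]:
    print(run)
```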
45

Testing COULWAVE for use in modeling cross-shore sand transport and beach profile evolution

Cooper, Patrick Michael 01 January 2019 (has links)
Realistic, reliable, and effective modeling of cross-shore sediment transport is not present in the current literature. Building such a model requires the accurate recreation of breaking-wave processes in the nearshore. To develop that first step for an as-yet-to-be-designed model, multiple phase-resolving wave transformation algorithms are reviewed for in-depth investigation, and the COULWAVE model is selected for robust testing. Testing of the COULWAVE model shows that, although it is capable of recreating realistic results, it does not describe the major wave characteristics in the surf zone, across a wide range of conditions, adequately enough to warrant use in a future cross-shore sediment transport model.
46

Enhancing the Existing Microscopic Simulation Modeling Practice for Express Lane Facilities

Machumu, Kelvin S 01 January 2017 (has links)
The implementation of managed lanes (MLs), also known as dynamically priced express lanes, to improve freeway traffic flow and person throughput is on the rise. Congestion pricing is increasingly becoming a common strategy for congestion management, often requiring microscopic simulation during both the planning and operational stages. VISSIM is a recognized microscopic simulation software package used for analyzing the performance of MLs. This thesis addresses two important microscopic simulation issues that affect the evaluation results of MLs. The first issue, which has not yet been addressed by previous studies, is the required minimum managed lane routing decision (MLRD) distance upstream of the ingress point of the MLs. The decision distance is the optimal distance upstream of the ingress at which drivers decide to use the MLs and begin changing lanes toward the side of the ML ingress. To answer this question, this study used a VISSIM model simulating the proposed I-295 MLs in Jacksonville, Florida, United States (U.S.), varying the MLRD point at regular intervals from 500 feet to 7,000 feet for different level of service (LOS) inputs. Three measures of effectiveness (MOEs) - speed, the number of vehicles changing lanes, and following distance - were used for the analysis. These MOEs were measured in the 500-foot zone prior to the ingress. The results indicate that as the LOS deteriorates, speed decreases, the number of vehicles changing lanes increases, and the following distance decreases. When the LOS was held constant, an increase in the MLRD distance from the ingress point was associated with higher speeds in the 500-foot zone prior to the ingress, fewer lane changes, and a larger following gap. However, the MOEs approached constant values after reaching a certain MLRD distance. LOS D was used to determine the minimum MLRD distance to the ingress of the MLs. The determined minimum MLRD distances were 4,000 and 3,000 feet for six- and three-lane segments prior to the ingress point, respectively. The second issue addressed in this thesis concerns the managed lane evaluation (MLE) outputs, which include speed, travel time, density, and tolls. In computing the performance measures, the existing VISSIM managed lane evaluation (EVMLE) tool is designed to use the section starting at the point where vehicles are assigned to use the MLs, also known as the MLRD point, which is located upstream of the ingress. The longer the MLRD distance from the ingress, the more the EVMLE tool incorporates the conditions of ML-bound traffic before it enters the ML into its computations. This study evaluates the impact of the MLRD distance on the EVMLE outputs and presents a proposed algorithm that addresses the EVMLE shortcomings. In order to examine the influence of the MLRD distance on the outputs of these two algorithms, simulation scenarios with MLRD distances varying from 500 to 7,000 feet from the ingress were created. For demonstration purposes, only speed was used to represent the performance measures. An analysis of variance (ANOVA) test was performed to determine whether there was a significant difference in the speed results as the MLRD distance changed. According to the ANOVA results, the EVMLE tool produced ML speeds that are MLRD dependent, yielding lower speeds with an increased MLRD distance. In contrast, the ML speed results from the proposed algorithm were fairly constant, regardless of the MLRD distance.
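A minimal sketch of the ANOVA comparison described above: testing whether the ML speed outputs differ significantly across MLRD distances. The speed samples are synthetic placeholders, not the I-295 simulation results.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
mlrd_distances_ft = [500, 2000, 4000, 7000]
# One array of per-run ML speeds (mph) for each MLRD distance (synthetic data).
speeds_by_distance = [rng.normal(loc=60 - 0.001 * d, scale=1.5, size=10)
                      for d in mlrd_distances_ft]

f_stat, p_value = f_oneway(*speeds_by_distance)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
print("Speed depends on MLRD distance" if p_value < 0.05 else "No significant difference")
```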
47

The Effect of Corrosion Defects on the Failure of Oil and Gas Transmission Pipelines: A Finite Element Modeling Study

Orasheva, Jennet 01 January 2017 (has links)
The transportation of oil, gas, and their products through pipelines is safe and economically efficient when compared with other methods of transportation, such as tankers, rail, and trucks. Although pipelines are usually well designed, they are subjected to a variety of risks during construction and later in service. Eventually, some sections may experience corrosion, which can affect the integrity of the pipeline and poses a risk in high-pressure operations. Specifically, in pipelines with a long history of operation, the size and location of corrosion defects need to be determined so that pressure can be kept at safe levels or, alternatively, a decision to repair or replace the pipe section can be made. To make this decision, several assessment techniques are available to engineers, such as ASME B31G, modified B31G (MB31G), DNV-RP, and a software code called RSTRENG. These assessment techniques help engineers predict the remaining strength of the wall in a pipe section with a corrosion defect. The corrosion assessment codes in the United States, Canada, and Europe are based on the ASME B31G criterion for the evaluation of corrosion defects, which was established on the basis of full-scale burst experiments on pipes containing longitudinal machined grooves, initially conducted in the 1960s. Because actual corrosion defects have more complex geometries than machined grooves, an in-depth study to validate the effectiveness of these techniques is necessary; this study is motivated by that need. The current study was conducted in several stages, starting with the deformation behavior of pipe steels. In Phase 1, true stress-true plastic strain data from the literature for X42 and X60 steel specimens were used to evaluate how well four commonly used constitutive equations, namely those developed by Hollomon, Swift, Ludwik, and Voce, fit the experimental data. Results showed that all equations provided acceptable fits; for simplicity, the Hollomon equation was selected for the rest of the study. In Phase 2, a preliminary finite element modeling (FEM) study was conducted to determine which of two failure criteria, stress-based or strain-based, performed better. Using data from the literature for X42 and X60 pipe steels, experimental burst pressures were compared with the burst pressures predicted by the two failure criteria. Based on this preliminary analysis, the stress-based criterion was chosen for further FEM studies. In Phase 3, failure data from real corrosion pits with detailed profiles in X52 pipe steels were used to develop a FEM scheme that included a simplified representation of the defect. Comparison of actual and predicted burst pressures indicated a good fit, with a coefficient of determination (R2) of 0.959. In Phase 4, burst pressures were estimated for real corrosion pits from the same study as in Phase 3, but using only corrosion pit depths and lengths, without corrosion widths; widths were estimated from the Phase 3 data using an empirical equation as a function of pit length. There was significant error between experimental and predicted burst pressures. The errors in Phases 3 and 4 were compared statistically, and the results showed a statistically significant difference in the error when the width of the corrosion pit is unknown. This finding is significant because none of the assessment techniques in the literature takes width into consideration.
Subsequently, a parametric study was performed on three defect geometries from the same study used in Phase 3. The pit depths and lengths were held constant while the widths were changed systematically; in all cases, the effect of the pit width on burst pressure was confirmed. In Phase 5, the three assessment techniques, ASME B31G, MB31G, and DNV-RP, were evaluated using experimental test results for X52 pipe. Synthetic data for deeper pits were developed by FEM and used along with the experimental data in this phase. Two types of error were distinguished to classify defects: Type I errors (α) and Type II errors (β), defined using the Level 0 evaluation method. Results showed that although ASME B31G is the most conservative technique, it is more reliable for short defects than MB31G and DNV-RP. The least conservative technique was DNV-RP, but it yielded β errors, i.e., the method predicted an operating pressure to be safe when the pipe section would in fact fail. Therefore, DNV-RP is not recommended for the assessment of steel pipes, specifically X52 pipes.
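As an illustration of the Phase 1 constitutive-equation work, the sketch below fits the Hollomon relation, sigma = K * eps_p^n, to true stress-true plastic strain data with scipy. The data points are synthetic, generated only to demonstrate the fitting procedure, not the X42/X60 measurements used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hollomon(eps_p, K, n):
    """Hollomon equation: true stress as a power law of true plastic strain."""
    return K * eps_p ** n

# Synthetic true stress-true plastic strain data (MPa), for demonstration only.
eps_p = np.linspace(0.002, 0.10, 25)
sigma = 760.0 * eps_p ** 0.17 + np.random.default_rng(1).normal(0.0, 2.0, eps_p.size)

(K_fit, n_fit), _ = curve_fit(hollomon, eps_p, sigma, p0=(500.0, 0.2))
residuals = sigma - hollomon(eps_p, K_fit, n_fit)
r_squared = 1.0 - np.sum(residuals**2) / np.sum((sigma - sigma.mean())**2)
print(f"K = {K_fit:.0f} MPa, n = {n_fit:.3f}, R^2 = {r_squared:.4f}")
```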
48

Characterization of Tensile Deformation in AZ91 Mg Alloy Castings

Unal, Ogun 01 January 2016 (has links)
Tensile deformation characteristics of cast aluminum alloys have been investigated extensively, while cast Mg alloys have remained mostly neglected by researchers despite their potential for weight savings. The present study is motivated by this gap in the literature and consists of two stages: in Stage 1, tensile data gathered from the literature were reanalyzed, and in Stage 2, data generated from tensile testing of 60 specimens of AZ91 Mg alloy castings in both the T4 and T6 conditions were analyzed to characterize work hardening behavior. In Stage 1, more than 1600 data points were collected from the literature for various Mg alloy families. After plotting these data in yield strength-elongation charts, the highest points were identified and interpreted as the maximum ductility, i.e., the ductility potential, eF(max). The trend in the maximum points indicated a linear relationship with yield strength (σY), expressed as eF(max) = 41.8 - 0.106 σY. This ductility potential equation can be used as a metric against which the elongation obtained from tensile specimens is compared to measure the structural quality of Mg alloy castings. Moreover, results indicated that the ductility potential was not affected by heat treatment, grain size (within 30-120 μm), casting geometry, size, type of casting process, or chemical composition. In Stage 2, AZ91 cast Mg alloy specimens in the T4 and T6 conditions were tested in tension to obtain stress-strain data for each specimen. Fits of four constitutive equations, namely the Hollomon, Voce, Ludwik, and Swift equations, to true stress-true plastic strain data in the elastoplastic region were characterized for the specimens with the highest elongation values in the T4 and T6 conditions. The coefficient of determination, R2, exceeded 0.99 for all equations, suggesting that all four provide excellent fits to tensile data in both conditions. The change in work hardening rate with true stress was investigated for all specimens by using Kocks-Mecking (KM) plots. It was determined that the work hardening behavior of Mg alloy castings in the T4 and T6 conditions is distinctly different. In T4 specimens, there is a plateau in work hardening rate at approximately E/25, which was observed in all specimens; the presence of this plateau is consistent with results given in the literature for pure Mg. However, this plateau was not observed in any of the T6 specimens, and the reasons for its absence are unknown at this time. In both T4 and T6 specimens, the KM work hardening model, in which the work hardening rate changes linearly with true stress, was found to be applicable. This is the first time that the KM model has been found to be valid for Mg alloys. Moreover, in all specimens there was a sudden drop in work hardening rate just prior to final fracture. This drop was first hypothesized to be due to structural defects in the specimens, which was subsequently validated via fractography: structural defects were found in all specimens whose fracture surfaces were investigated, indicating low to medium levels of quality. The quality index method, originally developed for cast aluminum alloys as the ratio of elongation to ductility potential, was found not to be applicable to Mg alloys, at least in its original form. This is because the work hardening behavior of cast aluminum alloys follows the KM model with no plateau of constant work hardening rate; hence the work hardening behavior of cast aluminum alloys and of AZ91 specimens in the T6 condition was similar.
However, the plateau of constant work hardening rate had a strong effect on elongation in T4 specimens. Therefore, the quality index analysis, which is supposed to be independent of alloy condition, showed that T4 and T6 specimens had different quality index levels. This finding contradicted the Stage 1 result that aging has no effect on ductility potential. However, because of the presence of structural defects in all specimens, quality index levels were low (0.30-0.45). Therefore, it is unclear at this point whether the work hardening behavior of T4 and T6 specimens would still differ if elongation values were near the ductility potential line. More research is needed to characterize the work hardening behavior of cast Mg alloys in the absence of major structural defects and to address the other questions raised in this study.
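A worked example of the Stage 1 relation and the quality index it supports, using illustrative numbers rather than measured data: the ductility potential eF(max) = 41.8 - 0.106 σY and the quality index Q, defined as the ratio of measured elongation to ductility potential.

```python
def ductility_potential(yield_strength_mpa):
    """Ductility potential eF(max) in percent, from eF(max) = 41.8 - 0.106 * sigma_Y (MPa)."""
    return 41.8 - 0.106 * yield_strength_mpa

def quality_index(elongation_pct, yield_strength_mpa):
    """Quality index Q: ratio of measured elongation to the ductility potential."""
    return elongation_pct / ductility_potential(yield_strength_mpa)

# Hypothetical AZ91 specimen: yield strength 90 MPa, measured elongation 10%.
sigma_y, elongation = 90.0, 10.0
print(f"eF(max) = {ductility_potential(sigma_y):.1f} %")   # 32.3 %
print(f"Q = {quality_index(elongation, sigma_y):.2f}")      # 0.31, in the low range reported above
```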
49

Neural Network Based Control of Integrated Recycle Heat Exchanger Superheaters in Circulating Fluidized Bed Boilers

Biruk, David D 01 January 2013 (has links)
The focus of this thesis is the development and implementation of a neural network model predictive controller for the integrated recycle heat exchanger (Intrex) in a 300 MW circulating fluidized bed (CFB) boiler. Discussion of the development of the controller includes data collection and preprocessing, controller design, and controller tuning. The controller is programmed directly into the plant distributed control system (DCS) and does not require the continuous use of any third-party software. The Intrexes serve as the loop seal in the CFB as well as the intermediate and finishing superheaters. Heat is transferred to the steam in the Intrex superheaters from the circulating ash, which can vary in consistency, quantity, and quality. Fuel composition can have a large impact on ash quality and, in turn, on Intrex performance. Variations in MW load and airflow settings also affect Intrex performance through their impact on the quantity of ash circulating in the CFB. Insufficient Intrex heat transfer results in low main steam temperature, while excessive heat transfer results in high superheat attemperator sprays and/or loss of unit efficiency. The controller automatically optimizes Intrex ash flow, by controlling the Intrex air flows, to compensate for changes in the other ash properties. It allows the operator to enter a target Intrex steam temperature increase, which causes all of the Intrex air flows to adjust simultaneously to achieve the target temperature. The result is stable main steam temperature and, in turn, stable and reliable operation of the CFB.
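A minimal sketch of the control idea described above: a small neural-network surrogate maps Intrex air-flow settings to a predicted steam temperature rise, and the controller picks the candidate air-flow setting whose prediction is closest to the operator's target. The network weights, input scaling, and candidate grid are hypothetical placeholders; the thesis implementation lives in the plant DCS and is trained on plant data.

```python
import numpy as np

rng = np.random.default_rng(42)

def nn_predict_temp_rise(airflows, W1, b1, W2, b2):
    """One-hidden-layer network: normalized air flows -> predicted steam temperature rise."""
    hidden = np.tanh(airflows @ W1 + b1)
    return hidden @ W2 + b2

# Hypothetical pre-trained weights; in practice these would come from plant historian data.
W1, b1 = rng.normal(size=(3, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=8), 0.0

def choose_airflows(target_rise, candidates):
    """Pick the candidate air-flow setting whose predicted rise is closest to the target."""
    predictions = np.array([nn_predict_temp_rise(c, W1, b1, W2, b2) for c in candidates])
    return candidates[np.argmin(np.abs(predictions - target_rise))]

# Candidate normalized settings for three Intrex air-flow dampers (0.0 to 1.0 in steps of 0.25).
candidates = np.array(list(np.ndindex(5, 5, 5))) / 4.0
best = choose_airflows(target_rise=2.0, candidates=candidates)
print("Selected normalized air-flow setting:", best)
```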
50

Factors Affecting Storm Characteristics in the Battery and Vicinity

Kay, Shannon A 01 January 2014 (has links)
Tropical cyclones (TCs) Irene and Sandy caused major damage in back-to-back years to the most densely populated city in the United States, stunning residents with storms linked to seemingly impossible probabilities. Such activity has raised questions about the effect of non-stationary aspects of atmospheric circulation on storm behavior, and about some assumptions inherent in previous hazard studies of the New York City (NYC) area. This study analyzes statistical aspects of hazard quantification for this area related to this non-stationarity and to statistical characterization. In particular, it investigates the presence of multiple populations of storms; tests current assumptions inherent in previous studies, which produce surge hazards that differ significantly; and investigates the relationship between storm characteristics and large-scale climate variations through Empirical Orthogonal Functions (EOFs) of the sea surface pressure. The findings show a statistically significant influence of climate variability on storm frequency, intensity, and direction within the Battery and vicinity (BAV, Battery Park and the surrounding region). Variations in large-scale atmospheric pressure patterns, as well as sea surface temperature, appear to significantly affect the surge hazard for this region. The study also shows a statistically significant relationship between storm heading and intensity, as well as the presence of multiple storm populations, driven by different atmospheric states, that behave with distinct characteristics. These multiple populations appear to significantly influence the overall average of storm behavior, leading to inaccurate assumptions in hazard quantification and to misestimation of risk.
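A minimal sketch of the EOF analysis mentioned above: the EOFs of a sea surface pressure field are the spatial patterns obtained from a singular value decomposition of the time-by-space anomaly matrix. The pressure field here is synthetic noise, used only to show the mechanics, not the data behind the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_times, n_gridpoints = 120, 400                       # e.g. 120 months on a 20 x 20 pressure grid
pressure = rng.normal(size=(n_times, n_gridpoints))    # synthetic sea surface pressure field

anomalies = pressure - pressure.mean(axis=0)           # remove the time-mean field at each point
U, S, Vt = np.linalg.svd(anomalies, full_matrices=False)

eofs = Vt                                              # rows are the spatial EOF patterns
pcs = U * S                                            # columns are the principal-component time series
explained_var = S**2 / np.sum(S**2)                    # fraction of variance captured by each EOF
print("Variance explained by the first three EOFs:", np.round(explained_var[:3], 3))
```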
