161

Service Life Modeling of Virginia Bridge Decks

Williamson, Gregory Scott 09 April 2007 (has links)
A model to determine the time to the End of Functional Service Life (EFSL) for concrete bridge decks in Virginia was developed. The service life of Virginia bridge decks is controlled by chloride-induced corrosion of the reinforcing steel. Monte Carlo resampling techniques were used to integrate the statistical nature of the input variables into the model. This is an improvement on previous deterministic models in that the effect of highly variable input parameters is reflected in the service life estimations. The model predicts the time required for corrosion to initiate on 2% of the reinforcing steel in a bridge deck and then a corrosion propagation time period, determined from empirical data, is added to estimate the EFSL for a given bridge deck or set of bridge decks. Data from 36 Virginia bridge decks was collected in order to validate the service life model as well as to investigate the effect of bridge deck construction specification changes. The bridge decks were separated into three distinct groups: 10 bare steel reinforcement decks – 0.47 water/cement (w/c), 16 Epoxy-Coated Reinforcement (ECR) decks – 0.45 w/c, and 10 ECR decks – 0.45 w/(c+pozzolan). Using chloride titration data and cover depth measurements from the sampled bridge decks and chloride corrosion initiation values determined from the literature for bare steel, service life estimates were made for the three sets of bridge decks. The influence of the epoxy coating on corrosion initiation was disregarded in order to allow direct comparisons between the three sets as well as to provide conservative service life estimates. The model was validated by comparing measured deterioration values for the bare steel decks to the estimated values from the model. A comparison was then made between the three bridge deck sets and it was determined that bridge decks constructed with a 0.45 w/(c+p) will provide the longest service life followed by the 0.47 w/c decks and the 0.45 w/c decks, respectively. From this it can be inferred that the addition of pozzolan to the concrete mix will improve the long-term durability of a bridge deck while a reduction in w/c appears to be of no benefit. / Ph. D.
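A minimal sketch of the Monte Carlo approach described above, assuming Fickian chloride diffusion and assumed (illustrative) distributions for cover depth, diffusion coefficient, surface chloride, and initiation threshold; none of the numbers below are the study's calibrated inputs:

```python
# Hedged sketch of a Monte Carlo service-life estimate, assuming Fickian chloride
# diffusion; all distributions and values are illustrative, not the study's inputs.
import numpy as np
from scipy.special import erfinv

rng = np.random.default_rng(0)
n = 100_000                                            # Monte Carlo samples (one per bar location)

cover = rng.normal(65.0, 10.0, n) / 1000.0             # cover depth [m]
D     = rng.lognormal(np.log(1e-12), 0.4, n)           # diffusion coefficient [m^2/s]
Cs    = rng.lognormal(np.log(3.5), 0.3, n)             # surface chloride [kg/m^3]
Ct    = rng.lognormal(np.log(0.7), 0.2, n)             # initiation threshold [kg/m^3]

# Invert C(x,t) = Cs*(1 - erf(x / (2*sqrt(D*t)))) for the time t at which the
# chloride concentration at the bar depth reaches the threshold Ct.
ratio = np.clip(1.0 - Ct / Cs, 1e-6, 1 - 1e-6)
t_init = (cover / (2.0 * erfinv(ratio))) ** 2 / D      # seconds
t_init_years = t_init / (365.25 * 24 * 3600)

propagation_years = 6.0                                # assumed corrosion propagation period
# Time for corrosion to initiate on 2% of the reinforcement ~ 2nd percentile of t_init.
efsl = np.percentile(t_init_years, 2) + propagation_years
print(f"Estimated EFSL: {efsl:.1f} years")
```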
162

Understanding Practical Limitations of Lead Certified Point of Use (POU) Filters

Rouillier, Rusty Jordan 27 July 2020 (has links)
There has been a recent increase in the adoption of point-of-use (POU) household water filters as an alternative to untreated tap water or bottled water. POU filters certified for lead removal have recently been distributed by the hundreds of thousands in communities amid water lead crises, as a temporary solution to protect consumers from elevated water lead levels. This thesis rigorously examines the efficacy of lead-certified POU filters in removing lead under a wide range of conditions, and evaluates premature clogging due to iron and associated impacts on the cost analysis of using filters instead of bottled water. In testing ten brands of POU devices against up to four different waters for lead removal, most devices consistently removed lead to below the 5 µg/L FDA bottled water standard. However, several failures were documented, including manufacturing flaws, premature clogging, and inconsistency between duplicate filters. When waters containing more difficult-to-treat lead particulates were synthesized, treated water often had lead concentrations greater than the 5 µg/L bottled water standard and sometimes even above the 15 µg/L EPA action level. In some cases, less than 50% of the particulate lead was removed by the filter, thereby replicating some problems with these devices identified in the field. While POUs usually reduced water lead concentrations by at least 80%, a combination of manufacturing issues and difficult-to-treat waters can cause lead in treated water to exceed expected levels. Consumers often purchase POU devices to remove particles and lead in waters that also contain high iron, prompting studies to examine the role of iron in filter performance. When we exposed two brands of pour-through POUs to waters with both high lead and iron, lead removal performance was generally not compromised, as treated water typically had lead concentrations less than 5 µg/L. One case was observed in which lead passed through a set of filters at high levels in association with iron, confirming expectations that in some waters iron could cause formation of lead particulates that are difficult to remove. High levels of iron sometimes rapidly clogged the POU filters, preventing them from reaching their rated capacity and increasing operational costs and time to filter water. Specifically, 50% (3/6) of the filters tested clogged prematurely at an iron concentration of 0.37 mg/L, 66% (4/6) at 1 mg/L and 100% (6/6) at 20 mg/L. A cost analysis for POUs vs. bottled water demonstrated that in waters with higher iron, store-brand bottled water was often the more cost-effective option, especially when iron levels were significantly higher than the EPA Secondary Maximum Contaminant Level (0.3 mg/L). The lower costs of bottled water in these situations were even more apparent if consumer time was factored into the analysis. / Master of Science / There has been a recent increase in the use of household water filters as an alternative to tap water or bottled water. Filters that are certified for lead removal have recently been distributed by the hundreds of thousands in communities amid water lead crises, as a temporary solution to protect consumers from elevated water lead levels. This thesis rigorously examines the effectiveness of these filters under a wide range of conditions. When tested against up to four different waters for lead removal, most filters consistently reduced lead to below the concentrations allowed in bottled water. 
In cases where the filters did not perform as expected, several filter failure modes were identified, including manufacturing flaws, filter clogging, and inconsistency between duplicate filters. In addition to these failures, when a water containing particulate lead that was difficult to filter was tested, as little as 50% of the lead was removed. While household filters often significantly reduce water lead concentrations, a combination of manufacturing issues and difficult-to-treat waters can cause poor performance. In many cases, consumers purchase filters to remove particles or lead in waters that also contain iron, which caused us to investigate the effect of iron on filter performance. When two brands of pour-through filters were tested against waters with both lead and iron, lead removal performance was generally not compromised. One exceptional case was observed where both high levels of lead and iron passed through the filters, leading us to believe that iron in some waters could create conditions where lead is more difficult to remove. In many cases, the presence of iron caused filters to dramatically slow down or clog. Premature clogging due to iron prevented filters from reaching their rated capacity and, in doing so, significantly increased costs and filtering time. A cost analysis for filters vs. bottled water demonstrated that in waters with higher iron, store-brand bottled water was often the more cost-effective option. The lower costs of bottled water in these situations were even more apparent if consumer time was factored into the analysis.
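A rough sketch of the kind of filter-vs-bottled-water cost comparison described above, assuming hypothetical prices, a hypothetical rated cartridge capacity, and a clogging factor that shortens filter life in high-iron water; none of the figures are the thesis's measured values:

```python
# Hedged cost-comparison sketch: POU filter cartridges vs. store-brand bottled water.
# All prices, capacities, and the clogging factor are illustrative assumptions.

def annual_filter_cost(liters_per_year: float, rated_capacity_l: float,
                       cartridge_price: float, clog_fraction: float) -> float:
    """Cartridge cost per year when clogging cuts usable capacity to a fraction of rated."""
    usable = rated_capacity_l * clog_fraction          # e.g. premature clogging at 50% of capacity
    cartridges = liters_per_year / usable
    return cartridges * cartridge_price

def annual_bottled_cost(liters_per_year: float, price_per_liter: float) -> float:
    return liters_per_year * price_per_liter

liters = 8.0 * 365                                     # assumed household drinking/cooking water per year
filters = annual_filter_cost(liters, rated_capacity_l=151.0,   # assumed ~40 gal rated capacity
                             cartridge_price=15.0, clog_fraction=0.5)
bottled = annual_bottled_cost(liters, price_per_liter=0.25)    # assumed store-brand price
print(f"POU cartridges: ${filters:.0f}/yr   Bottled water: ${bottled:.0f}/yr")
```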
163

Computational Cost Analysis of Large-Scale Agent-Based Epidemic Simulations

Kamal, Tariq 21 September 2016 (has links)
Agent-based epidemic simulation (ABES) is a powerful and realistic approach for studying the impacts of disease dynamics and complex interventions on the spread of an infection in the population. Among many ABES systems, EpiSimdemics comes closest to the popular agent-based epidemic simulation systems developed by Eubank, Longini, Ferguson, and Parker. EpiSimdemics is a general framework that can model many reaction-diffusion processes besides the Susceptible-Exposed-Infectious-Recovered (SEIR) models. This model allows the study of complex systems as they interact, thus enabling researchers to model and observe the socio-technical trends and forces. Pandemic planning at the world level requires simulation of over 6 billion agents, where each agent has a unique set of demographics, daily activities, and behaviors. Moreover, the stochastic nature of epidemic models, the uncertainty in the initial conditions, and the variability of reactions require the computation of several replicates of a simulation for a meaningful study. Given the hard timelines to respond, running many replicates (15-25) of several configurations (10-100) of these compute-heavy simulations is only possible on high-performance clusters (HPC). These agent-based epidemic simulations are irregular and show poor execution performance on high-performance clusters due to the evolutionary nature of their workload, large irregular communication and load imbalance. For increased utilization of HPC clusters, the simulation needs to be scalable. Many challenges arise when improving the performance of agent-based epidemic simulations on high-performance clusters. Firstly, large-scale graph-structured computation is central to the processing of these simulations, where the star-motif quality nodes (natural graphs) create large computational imbalances and communication hotspots. Secondly, the computation is performed by classes of tasks that are separated by global synchronization. The non-overlapping computations cause idle times, which introduce the load balancing and cost estimation challenges. Thirdly, the computation is overlapped with communication, which is difficult to measure using simple methods, thus making the cost estimation very challenging. Finally, the simulations are iterative and the workload (computation and communication) may change through iterations, thereby introducing load imbalances. This dissertation focuses on developing a cost estimation model and load balancing schemes to increase the runtime efficiency of agent-based epidemic simulations on high-performance clusters. While developing the cost model and load balancing schemes, we perform the static and dynamic load analysis of such simulations. We also statically quantified the computational and communication workloads in EpiSimdemics. We designed, developed and evaluated a cost model for estimating the execution cost of large-scale parallel agent-based epidemic simulations (and more generally for all constrained producer-consumer parallel algorithms). This cost model uses computational imbalances and communication latencies, and enables the cost estimation of those applications where the computation is performed by classes of tasks, separated by synchronization. It enables the performance analysis of parallel applications by computing their execution times on a number of partitions. Our evaluations show that the model is helpful in performance prediction, resource allocation and evaluation of load balancing schemes. 
As part of load balancing algorithms, we adopted the Metis library for partitioning bipartite graphs. We have also developed lower-overhead custom schemes called Colocation and MetColoc. We performed an evaluation of Metis, Colocation, and MetColoc. Our analysis showed that the MetColoc scheme gives performance similar to Metis, but with half the partitioning overhead (runtime and memory). On the other hand, the Colocation scheme achieves a similar performance to Metis on a larger number of partitions, but at much lower partitioning overhead. Moreover, the memory requirements of the Colocation scheme do not increase as we create more partitions. We have also performed the dynamic load analysis of agent-based epidemic simulations. For this, we studied the individual and joint effects of three disease parameters (transmissibility, infection period, and incubation period). We quantified the effects using an analytical equation with separate constants for SIS, SIR and SI disease models. The metric that we have developed in this work is useful for cost estimation of constrained producer-consumer algorithms; however, it has some limitations. The applicability of the metric is application-, machine- and data-specific. In the future, we plan to extend the metric to increase its applicability to a larger set of machine architectures, applications, and datasets. / Ph. D.
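A toy sketch of the kind of per-partition cost estimate the abstract describes, assuming that each phase separated by a global synchronization is only as fast as its slowest partition (compute plus communication); this simplified cost form and the numbers below are illustrative assumptions, not EpiSimdemics' actual model:

```python
# Hedged sketch: estimate per-iteration execution cost of a bulk-synchronous,
# partitioned simulation as the max over partitions of (compute + communication),
# summed over the phases separated by global synchronization.
from typing import List

def iteration_cost(compute: List[List[float]], comm: List[List[float]]) -> float:
    """compute[phase][partition], comm[phase][partition] -> estimated wall time per iteration."""
    total = 0.0
    for phase_compute, phase_comm in zip(compute, comm):
        # Each phase ends with a global barrier, so its duration is set by the slowest partition.
        total += max(c + m for c, m in zip(phase_compute, phase_comm))
    return total

# Two phases (hypothetical "visit" and "interact" task classes), four partitions,
# with imbalanced workloads; all timings are made-up seconds.
compute = [[1.0, 1.4, 0.9, 2.1], [0.8, 0.7, 1.6, 0.9]]
comm    = [[0.2, 0.3, 0.2, 0.5], [0.1, 0.1, 0.4, 0.2]]
print(f"Estimated cost per iteration: {iteration_cost(compute, comm):.2f} s")
```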
164

Calculation of the actual cost of engine maintenance

Ezik, Oguz. January 2003 (has links)
Thesis (M.S.)--Air Force Institute of Technology, 2003. / Title from title screen (viewed July 1, 2004). "March 2003." Vita. "AFIT/GOR/ENS/03-06." "ADA412960"--URL. Includes bibliographical references (p. 87-90). Also issued in paper format.
165

Economic Performance Assessment of Three Renovated Multi-Family Houses with Different HVAC Systems

Khadra, Alaa January 2018 (has links)
Since the building sector is responsible for 40% of the energy consumption and 36% of CO2 emissions in the EU, the reduction of energy use has become a priority in this sector. The EU has adopted several policies to improve energy efficiency. One of these policies aims to achieve energy efficient renovations in at least 3% of buildings owned and occupied by governments annually. In Sweden, a large part of existing buildings was built between 1965 and 1974, a period commonly referred to as ‘miljonprogrammet’. Stora Tunabyggen AB, the public housing company in Borlänge municipality, began a renovation project in the Tjärna Ängar neighborhood, the part of the municipality with the greatest share of building stock from this period. The pilot project started in 2015. The aim of this project was to renovate three buildings with similar measures, that is, by adding 150 mm of attic insulation, replacing windows with higher-performing ones (U-value 1 W/m²K), adding 50 mm of insulation to the infill walls, and installing flow-reducing taps. The essential difference between the three renovation packages is the HVAC systems. The selected HVAC systems are (1) exhaust air heat pump, (2) mechanical ventilation with heat recovery and (3) exhaust ventilation. Life cycle cost analysis was conducted for the three buildings, and a sensitivity analysis for different values of the discount rate and energy price escalation was performed. The study found that the house with exhaust ventilation has the lowest life cycle cost and the highest energy cost. The house with exhaust air heat pump has 3% higher life cycle cost and 18% lower energy use at 3% discount rate and 3% energy price escalation. The study found that mechanical ventilation with heat recovery is not profitable, although it saves energy. The sensitivity analysis showed that a possible increase in energy prices and a lower discount rate give greater weight to future costs in life cycle cost analysis. This leads to the main finding of this thesis: that the exhaust air heat pump is the best choice for the owner according to the available data and the assessed parameters.
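A minimal sketch of the life cycle cost calculation described, discounting annually escalating energy costs to present value; the investment sums, energy use, prices, and analysis period below are illustrative assumptions, not the thesis's renovation data:

```python
# Hedged LCC sketch: present value of investment plus annually escalating energy costs.
# All inputs are illustrative placeholders.

def life_cycle_cost(investment: float, annual_energy_kwh: float, energy_price: float,
                    discount_rate: float, price_escalation: float, years: int) -> float:
    lcc = investment
    for t in range(1, years + 1):
        cost_t = annual_energy_kwh * energy_price * (1 + price_escalation) ** t
        lcc += cost_t / (1 + discount_rate) ** t       # discount each year's energy cost to present value
    return lcc

# Compare two hypothetical renovation packages over 30 years at 3% discount / 3% escalation.
exhaust_vent = life_cycle_cost(investment=400_000, annual_energy_kwh=180_000,
                               energy_price=0.10, discount_rate=0.03,
                               price_escalation=0.03, years=30)
heat_pump = life_cycle_cost(investment=520_000, annual_energy_kwh=150_000,
                            energy_price=0.10, discount_rate=0.03,
                            price_escalation=0.03, years=30)
print(f"Exhaust ventilation LCC: {exhaust_vent:,.0f}   Exhaust air heat pump LCC: {heat_pump:,.0f}")
```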
166

RELIABILITY AND COST ANALYSIS OF POWER DISTRIBUTION SYSTEMS SUBJECTED TO TORNADO HAZARD

Braik, Abdullah Mousa Darwish 29 January 2019 (has links)
No description available.
167

LIFE-CYCLE COST ANALYSIS OF REINFORCED CONCRETE BRIDGES REHABILITATED WITH CFRP

Smith, Jeffrey L. 01 January 2015 (has links)
The deterioration of highway bridges and structures and the cost of repairing, rehabilitating, or replacing deteriorated structures is a major issue for bridge owners. An aging infrastructure as well as the need to upgrade structural capacity for heavier trucks adds to the problem. Life-cycle cost analysis (LCCA) is a useful tool for determining when the deployment of fiber-reinforced polymer (FRP) composite components is an economically viable alternative for rehabilitating deteriorated concrete bridges. The use of LCCA in bridge design and rehabilitation has been limited. The use of LCCA for bridges on a project level basis has often been limited to the non-routine design of major bridges where the life-cycle cost model is customized. LCCA has historically been deterministic. The deterministic analysis uses discrete values for inputs and is fairly simple and easy to do. It does not give any indication of risk, i.e. the probability that the input values used in the analysis and the resulting life-cycle cost will actually occur. Probabilistic analysis accounts for uncertainty and variability in input variables. It requires more effort than a deterministic analysis because probability distribution functions are required, random sampling is used, and a large number of iterations of the life-cycle cost calculations are carried out. The data needed is often not available. The significance of this study lies in its identification of the parameters that had the most influence on life-cycle costs of concrete bridges and how those parameters interacted. The parameters are: (1) Time to construct the new bridge; (2) traffic volume under bridge (when applicable); (3) value of time for cars; and (4) delay time under the bridge during new bridge construction (when applicable). Using these parameters, the analyst can now “simulate” a probabilistic analysis by using the deterministic approach and reducing the number of iterations. This study also extended the use of LCCA to bridge rehabilitations and to bridges with low traffic volumes. A large number of bridges in the United States have low traffic volumes. For the highway bridge considered in the parametric study, rehabilitation using FRP had a lower life-cycle cost when compared to the new bridge alternative.
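A brief sketch of how the user-delay component driven by the four parameters above (construction time, traffic volume under the bridge, value of time, and delay time) might enter an LCCA, contrasting a deterministic point estimate with a small Monte Carlo run over assumed input distributions; all figures are hypothetical:

```python
# Hedged sketch: user delay cost during construction, deterministic vs. probabilistic.
# All values and distributions are hypothetical, not the study's data.
import numpy as np

def user_delay_cost(days, vehicles_per_day, delay_hours, value_of_time):
    # Total user cost = construction duration x daily traffic x delay per vehicle x value of time.
    return days * vehicles_per_day * delay_hours * value_of_time

# Deterministic estimate with single-point inputs.
det = user_delay_cost(days=120, vehicles_per_day=800, delay_hours=0.15, value_of_time=15.0)

# Probabilistic estimate: sample the same inputs from assumed distributions.
rng = np.random.default_rng(1)
n = 50_000
prob = user_delay_cost(rng.normal(120, 20, n), rng.normal(800, 150, n),
                       rng.uniform(0.05, 0.25, n), rng.normal(15.0, 3.0, n))
print(f"Deterministic: ${det:,.0f}   Probabilistic mean: ${prob.mean():,.0f} "
      f"(90th percentile ${np.percentile(prob, 90):,.0f})")
```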
168

Biomass Potential for Heat, Electricity and Vehicle Fuel in Sweden

Hagström, Peter January 2006 (has links)
The main objective of this thesis was to determine how far a biomass quantity, equal to the potential produced within the Swedish borders, could cover the present energy needs in Sweden with respect to economic and ecological circumstances. Three scenarios were studied where the available biomass was converted to heat, electricity and vehicle fuel. Three different amounts of biomass supply were studied for each scenario: 1) potential biomass amounts derived from forestry, non-forest land, forest industry and community; 2) the same amounts as in Case 1, plus the potential biomass amounts derived from agriculture; 3) the same amounts as in Case 1, plus 50% of the potential pulpwood quantity. For evaluating the economic and ecological circumstances of using biomass in the Swedish energy system, the scenarios were complemented with energy, cost and emergy analysis. The scenarios indicated that it may be possible to produce 170.2 PJ (47.3 TWh) per year of electricity from the biomass amounts in Case 2. From the same amount of biomass, the maximum production of hydrogen was 241.5 PJ (67.1 TWh) per year, or 197.2 PJ (54.8 TWh) per year of methanol. The energy analysis showed that the ratio of energy output to energy input for large-scale applications ranged from 1.9 at electric power generation by gasification of straw to 40 at district heating generation by combustion of recovered wood. The cost of electricity at gasification ranged from 7.95 to 22.58 €/GJ. The cost of vehicle work generated by using hydrogen produced from forestry biomass in novel fuel cells was economically competitive compared to today’s propulsion systems. However, the cost of vehicle work generated by using methanol produced from forestry biomass in combustion engines was higher than that of using petrol in petrol engines. The emergy analysis indicated that the only biomass assortment studied with a larger emergy flow from the local environment, in relation to the emergy flow invested from society after conversion, was fuel wood from non-forest land. However, even use of this biomass assortment for production of heat, electricity or vehicle fuels had smaller yields of emergy output in relation to emergy invested from society compared to alternative conversion processes; thus, the net contribution of emergy generated to the economy was smaller compared to these alternative conversion processes.
169

Nákladová analýza léčby výdutí břišní aorty ve Fakultní nemocnici Olomouc / Cost Analysis of Treatment of Abdominal Aortic Aneurysms in the Olomouc Hospital

Radmacher, Erich January 2011 (has links)
An aneurysm of the abdominal aorta is a pathological enlargement of the diameter of this artery. It is a serious illness which affects 2–6% of men and 1–2% of women over 60. In the case of rupture, mortality is 80–90%. If the aneurysm is diagnosed in time, it must be addressed with adequate treatment. The surgical treatment consists of replacing the affected segment with a vascular graft. Thanks to the development of medical technologies, the aortic aneurysm is more and more often treated with a stentgraft, by which the affected part of the aorta is excluded from the circulation. The theoretical (first) part of this work deals with the issues of abdominal aortic aneurysms and also describes the methods of their treatment. The work then describes the cost analyses used in health care. The practical part of the work is dedicated to a cost analysis of the treatment by means of a cost-minimization method. The work processes data from a group of patients treated during a certain period of time at the Olomouc University Hospital in the Department of Vascular Surgery and in the Department of Interventional Radiology. The aim of this work is to evaluate and objectively compare the costs of abdominal aortic aneurysm treatment by the individual methods, and to compare the results with foreign studies.
170

Návrh komunikační strategie jazykové školy Jipka Butovice / Proposed communication strategy for Jipka Butovice language school

Marešová, Veronika January 2011 (has links)
The objective of the diploma thesis is to draw up a proposal for a communication strategy for the Jipka Butovice language school. A comprehensive analysis of marketing communication costs, broken down by individual communication disciplines, is applied as the working method. A partial objective is to create a comprehensive overview of the structure and development of marketing costs within the monitored period. By analyzing the characteristics of the language school's students, their satisfaction and loyalty, and the discounts offered, the satisfaction factors are determined and patterns of customer behaviour are defined. The output is a summary of the analysis conclusions and a subsequent series of recommendations for company management, which forms the basis for strategic decision making and future planning. The role of the communication proposal, derived from the conclusions of the selected analyses, is to increase the level of customer relationship management and to allocate costs effectively across the various marketing communication tools of the language school.
