
Market Acceptance of Renewable Energy Technologies for Power Generation

Elizabeth A Wachs (9181997) 29 July 2020 (has links)
The perception of climate change as an emergency has provided the primary impetus for a transition from conventional fossil-based energy sources to renewables. The use of renewable energy sources is essential to sustainable development, since it is the only way that quality of life can remain high while greenhouse gas emissions are cut. Still, at the time of writing, renewables contribute a small part of total primary energy use worldwide. Much research has gone into understanding barriers to the full-scale adoption of renewable energy sources. However, many of the tools used have focused primarily on optimal paths, which are useful in the long term but problematic in non-equilibrium markets. In the shorter term, behavior is thought to be governed more by existing institutions and commitments until those frameworks can be changed. This means that understanding people's attitudes towards renewables is key to understanding how adoption will take place and how best to incentivize it. In particular, decisions are made by investors, who serve as intermediaries between what customers and the public want and the existing institutions (what is possible). Understanding their responses to the current state of affairs, as well as to perturbations in the form of policy changes, is important in order to effect change and to make sure that policies work as intended. <br> <br> First, the shifting demand landscape is considered, specifically in Indiana cities. Heating is shrinking as a driver of primary energy use over time due to climate change, while transport increases in relative terms. Electricity demand continues to grow, and the electrification of transport can add to this growth. This led to a focus on the electricity sector for further work. Noticing that adoption lags public support led to a comparison of levelized cost of electricity and net present value metrics for 18 dominant technologies in two power markets in the US.
Capacity markets and solar renewable energy credits lead to differences between cost and net present value in PJM, making natural gas the most attractive technology there. The difference in electricity price between the two markets also provides a caution regarding the employment of carbon pricing in PJM, since that is an additional cost to consumers who are already paying twice for fossil-based generation in that region, once for energy provision and once for reliability. <br> <br> Individual technologies represent only part of the question, however, since generation capacity is added to bolster existing supplies. In order to study the portfolio, historical risk is considered along with levelized costs to identify optimal portfolios in CAISO and PJM. Then electricity is treated as a social good, and a sustainability profile is built for each technology, balancing current equity against risks to future generations. This allowed quantification and identification of barriers to market acceptance of renewables, but it also led to a recognition of where useful metrics are still lacking. For example, land use is already an important barrier to the adoption of renewables and a potent potential barrier to future acceptance. It is not well understood, however, which led to a critical review of existing technologies. <br> <br> The work in this dissertation provides one of the first mixed-methods attempts to assess energy demand for cities including the end use of cooling. It provides a simple model that demonstrates the importance of capacity markets in determining the profitability of different energy technologies. It provides a guide to the emerging issue of land use by energy systems, a key consideration for the study of the food-energy-water nexus. It is the first use of portfolio optimization for sustainability studies.
This is an important methodological tool since it allows a comprehensive sustainability analysis while providing a sense of the difference between immediate and future risks. The tool also allows users to diagnose which technologies are incentivized and which are deterred by market factors, as well as the strength of the deterrence. This is helpful for policymakers in understanding how incentives should be structured.
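The cost-versus-value comparison described above can be sketched in a few lines. This is a minimal illustration of why levelized cost and net present value can rank technologies differently once capacity-market revenue is counted; the discount rate, cost figures, and capacity payment below are invented assumptions, not values from the dissertation.

```python
def lcoe(capex, annual_om, annual_mwh, rate, years):
    """Levelized cost: discounted lifetime costs over discounted lifetime output ($/MWh)."""
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    costs = capex + sum(annual_om * d for d in disc)
    energy = sum(annual_mwh * d for d in disc)
    return costs / energy

def npv(capex, annual_om, annual_mwh, price, capacity_pay, rate, years):
    """NPV counts revenue streams that LCOE ignores, e.g. capacity payments."""
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    cash = annual_mwh * price + capacity_pay - annual_om  # net cash per year
    return -capex + sum(cash * d for d in disc)

# Illustrative natural-gas-like plant in a market with capacity payments:
gas = dict(capex=9e8, annual_om=4e7, annual_mwh=6e6, rate=0.07, years=25)
lcoe_gas = lcoe(**gas)
npv_with_capacity = npv(price=35.0, capacity_pay=5e7, **gas)
npv_energy_only = npv(price=35.0, capacity_pay=0.0, **gas)
```

A plant can look unattractive on LCOE alone yet carry the higher NPV once capacity revenue (as in PJM) enters the cash flow, which is the gap between cost and value that the abstract highlights.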

IMPROVING NUTRIENT TRANSPORT SIMULATION IN SWAT BY DEVELOPING A REACH-SCALE WATER QUALITY MODEL

Femeena Pandara Valappil (6703574) 02 August 2019 (has links)
<p>Ecohydrological models are extensively used to evaluate land use, land management and climate change impacts on hydrology and in-stream water quality conditions. The scale at which these models operate influences the complexity of the processes incorporated within them. For instance, a large-scale hydrological model such as the Soil and Water Assessment Tool (SWAT) that runs on a daily time step may ignore sub-daily in-stream processes. The key processes affecting in-stream solute transport, such as advection, dispersion and transient storage (dead zone) exchange, can have a considerable effect on predicted stream solute concentrations, especially for localized studies. To represent realistic field conditions, it is therefore necessary to modify the in-stream water quality algorithms of SWAT to include these additional processes. Existing reach-scale solute transport models like OTIS (One-dimensional Transport with Inflow and Storage) consider these processes but exclude the actual biochemical reactions occurring in the stream, modeling nutrient uptake with an empirical first-order decay equation. Alternatively, comprehensive stream water quality models like QUAL2E (The Enhanced Stream Water Quality Model) incorporate actual biochemical reactions but neglect the transient storage exchange component, which is crucial in predicting the peak and timing of solute concentrations. In this study, these two popular models (OTIS and QUAL2E) are merged to integrate all essential solute transport processes into a single in-stream water quality model known as the ‘Enhanced OTIS model’. A generalized model with an improved graphical user interface was developed in MATLAB that performed reasonably well for both experimental data and previously published data (R<sup>2</sup>=0.76).
To incorporate this model into large-scale hydrological models, it was necessary to find an alternative way to estimate transient storage parameters, which are otherwise derived through calibration against experimental tracer tests. Through a meta-analysis approach, simple regression models were therefore developed for the dispersion coefficient (D), storage zone area (A<sub>s</sub>) and storage exchange coefficient (α) by relating them to easily obtainable hydraulic characteristics such as discharge, velocity, flow width and flow depth. For experimental data from two study sites, breakthrough curves and storage potential of conservative tracers were predicted with good accuracy (R<sup>2</sup>>0.5) using the new regression equations. These equations were hence recommended as a tool for obtaining preliminary, approximate estimates of D, A<sub>s</sub> and α when reach-specific calibration is unfeasible. </p> <p> </p> <p>The existing water quality module in SWAT was replaced with the newly developed ‘Enhanced OTIS model’ along with the regression equations for storage parameters. Water quality predictions using the modified SWAT model (Mir-SWAT) for a study catchment in Germany showed that the improvements in process representation yield better results for dissolved oxygen (DO), phosphate and Chlorophyll-a. While the existing model simulated extremely low values of DO, Mir-SWAT improved these values with a 0.11 increase in the R<sup>2</sup> value between modeled and measured values. No major improvement was observed for nitrate loads, but modeled phosphate peak loads were much closer to measured values with the Mir-SWAT model. A qualitative analysis of Chl-<i>a</i> concentrations also indicated that average and maximum monthly Chl-<i>a</i> values were better predicted with Mir-SWAT than with the original SWAT model, especially for winter months.
The newly developed in-stream water quality model is expected to serve as a stand-alone model or be coupled with larger models to improve their representation of solute transport processes and nutrient uptake. The improvements made to the SWAT model will increase model confidence and widen its applicability to short-term and localized studies that require an understanding of fine-scale solute transport dynamics. </p>
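The regression idea described above, predicting transient storage parameters from easily measured hydraulics instead of tracer-test calibration, can be sketched as power-law relations. The functional forms and every coefficient below are placeholders for illustration, not the fitted equations from the thesis.

```python
def storage_params(Q, v, w, h, coeffs):
    """Estimate D (m^2/s), As (m^2) and alpha (1/s) from discharge Q (m^3/s),
    velocity v (m/s), flow width w (m) and flow depth h (m)."""
    a, b = coeffs["D"]
    D = a * (v * h) ** b            # dispersion grows with the shear scale v*h
    c, d = coeffs["As"]
    As = c * (w * h) ** d           # storage area scales with channel cross-section
    e, f = coeffs["alpha"]
    alpha = e * (Q / (w * h)) ** f  # exchange rate scales with mean velocity
    return D, As, alpha

# Hypothetical coefficients, for illustration only:
coeffs = {"D": (0.05, 1.2), "As": (0.3, 0.9), "alpha": (1e-4, 0.5)}
small = storage_params(Q=0.5, v=0.3, w=4.0, h=0.4, coeffs=coeffs)
large = storage_params(Q=5.0, v=0.8, w=12.0, h=1.0, coeffs=coeffs)
```

In this shape, a preliminary estimate of all three parameters needs only routine flow measurements, which is what makes the approach usable inside a catchment-scale model.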

A framework for domestic supply chain analysis of critical materials in the United States: an economic input-output-based approach

Miriam Chrisandra Stevens (11272506) 13 August 2021 (has links)
The increasing demand for mineral-based resources that face supply risks calls for managing the supply chains for these resources at the regional level. Cobalt is a widely used cathode material in lithium-ion batteries, which form the major portion of batteries used for renewable energy storage - a necessary technology for electrifying mobility and overcoming the challenge of intermittency, thus making renewable energy more reliable and energy generation more sustainable. This necessitates understanding cobalt's supply risks and, for the United States, identifying sources of cobalt available for future use via recycling or mining. These needs are addressed in this work using single-region and multiregional input-output (MRIO) analysis in combination with graph theory. An MRIO-based approach is developed to obtain the trade network of cobalt and offer a more expedient way to identify potential critical material sources embodied in commodities made domestically. Commodities containing cobalt were disaggregated from two input-output (IO) models, and the trade structure of cobalt at the national and state level was observed and compared. The significance of identified key sectors is measured according to several criteria, and differences in the sectors highlighted in the national versus subnational networks suggest that analysis at the two regional aggregation levels provides alternative insights. Results from mining the IO networks for cobalt highlight the geographical distribution of its use and the industries to investigate further as potential sources of secondary feedstock.
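The input-output backbone of this approach is the Leontief relation x = (I - A)^-1 y: total sector output x needed to satisfy final demand y, through which material embodied in upstream sectors can be traced. Below is a minimal sketch using a made-up three-sector technical coefficient matrix and a hypothetical cobalt intensity; only the mechanics, not the data, reflect the work described above.

```python
def total_output(A, y, iters=200):
    """Solve x = (I - A)^-1 y via the Neumann series x = y + A y + A^2 y + ..."""
    n = len(y)
    x = y[:]
    term = y[:]
    for _ in range(iters):
        term = [sum(A[i][j] * term[j] for j in range(n)) for i in range(n)]
        x = [x[i] + term[i] for i in range(n)]
    return x

# Illustrative sectors: cobalt refining, battery manufacturing, vehicle assembly.
A = [[0.10, 0.02, 0.00],   # inputs of refined cobalt per unit output
     [0.30, 0.15, 0.05],   # inputs of batteries per unit output
     [0.00, 0.20, 0.10]]   # inputs of vehicles per unit output
y = [0.0, 0.0, 100.0]      # final demand only for vehicles

x = total_output(A, y)
direct_intensity = [0.8, 0.0, 0.0]       # hypothetical tonnes cobalt per unit output
embodied_cobalt = sum(direct_intensity[i] * x[i] for i in range(3))
```

Even with final demand only for vehicles, the Leontief inverse pulls output (and hence cobalt) through the upstream battery and refining sectors, which is how embodied critical-material sources are identified in commodities made domestically.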

Dynamic Behavior Of Water And Air Chemistry In Indoor Pool Facilities

Lester Ting Chung Lee (11495881) 22 November 2021 (has links)
<p>Swimming is the second most common form of recreational activity in the U.S. Swimming pool water and air quality should be maintained to allow swimmers, pool employees, and spectators to use the pool facility safely. One of the major concerns regarding the health of swimmers and other pool users is the formation of disinfection by-products (DBPs) in swimming pools. Previous research has shown that volatile DBPs can adversely affect the human respiratory system. DBPs are formed by reactions between chlorine and other compounds that are present in water, most of which are introduced by swimmers, including many that contain reduced nitrogen. Some of the DBPs formed in pools are volatile, and their transfer to the gas phase in pool facilities is promoted by mixing near the air/water interface, caused by swimming and pool features.</p> <p>Swimming pool water treatment processes can play significant roles in governing water and air quality. Thus, it is reasonable to hypothesize that water and air quality in a swimming pool facility can be improved by renewing or enhancing one or more components of water treatment.</p> <p>The first phase of the study was designed to identify and quantify changes in water and air quality that are associated with changes in water treatment at a chlorinated indoor pool facility. Reductions of aqueous NCl<sub>3</sub> concentration were observed following the use of a secondary oxidizer with its activator. This inclusion also resulted in significant decreases in the concentrations of cyanogen chloride (CNCl) and dichloroacetonitrile (CNCHCl<sub>2</sub>) in pool water.
The concentration of urea, a compound that is common in swimming pools and that functions as an important precursor to NCl<sub>3</sub> formation, as well as a marker compound for introduction of contaminants by swimmers, was also reduced after the addition of activator.</p> <p>The second phase of this study involved field measurements to characterize and quantify the dynamic behavior of indoor air quality (IAQ) in indoor swimming pool facilities, particularly as related to volatile compounds that are transferred from swimming pool water to air. Measurements of water and air quality were conducted before, during, and after periods of heavy use at several indoor pool facilities. The results of a series of measurements at different swimming pool facilities allowed for examination of the effects of swimmers on liquid-phase DBPs and gas-phase NCl<sub>3</sub>. Liquid-phase NCl<sub>3</sub> concentrations were observed to gradually increase during periods of high swimmer numbers (<i>e.g.</i>, swimming meets), while liquid-phase CHCl<sub>3</sub> concentration was nearly constant in the same period. Concentrations of urea displayed a steady increase each day during these periods of intensive use. In general, the highest urea concentrations were measured near the end of each swimming meet. </p> <p>Measurements of IAQ dynamics during phase 2 of the study demonstrated the effects of swimmers on the concentrations of gas-phase NCl<sub>3 </sub>and CO<sub>2</sub>, especially during swimming meets. The measured gas-phase NCl<sub>3</sub> concentration often exceeded the suggested upper limits of 300 µg/m<sup>3</sup> or 500 µg/m<sup>3 </sup>during swimming meets, especially during and immediately after warm-up periods, when the largest numbers of swimmers were in the pool. 
Peak gas-phase NCl<sub>3</sub> concentrations were observed when large numbers of swimmers were present in the pools; measured gas-phase concentrations were as high as 1400 µg/m<sup>3</sup>. Concentrations of gas-phase NCl<sub>3</sub> rarely reached above 300 µg/m<sup>3</sup> during regular hours of operation. Furthermore, the types of swimmers were shown to affect the transfer of volatile compounds, such as NCl<sub>3</sub>, from water to air in pool facilities. In general, adult competition swimmers promoted more rapid transfer of these compounds than youth competition swimmers or adult recreational swimmers. The measured gas-phase CO<sub>2</sub> concentration often exceeded 1000 ppm<sub>v</sub> during swimming meets, whereas the gas-phase CO<sub>2</sub> concentration during periods of non-use of the pool tended to be close to the background (ambient) CO<sub>2</sub> concentration or slightly more than 400 ppm<sub>v</sub>. This phenomenon was largely attributed to the activity of swimmers (mixing of water and respiratory activity) and the normal respiratory activity of spectators. </p> <p>IAQ models for gas-phase NCl<sub>3</sub> and CO<sub>2</sub> were developed to relate the characteristics of the indoor pool environment to measurements of IAQ dynamics. Several assumptions were made to develop these models. Specifically, pool water and indoor air were assumed to be well-mixed. The reactions that were responsible for the formation and decay of the target compounds were neglected. Two-film theory was used to simulate the net mass-transfer rate of volatile compounds from the liquid phase to the gas phase. Advective transport into and out of the air space of the pool was accounted for. The IAQ model was able to simulate the dynamic behavior of gas-phase NCl<sub>3</sub> during regular operating hours.
Predictions of gas-phase NCl<sub>3</sub> dynamics were generally less accurate during periods of intensive pool use; however, the model did yield predictions of behavior that were qualitatively correct. Strengths of the model include that it accounts for the factors that are believed to have the greatest influence on IAQ dynamics and that it is simple to use. Weaknesses include that the model did not account for the liquid-phase reactions that are responsible for formation and decay of the target compounds. The IAQ model for NCl<sub>3</sub> dynamics could still be a useful tool to form the basis for recommendations regarding the design and operation of indoor pool facilities so as to optimize IAQ.</p><p>Measurements of CO<sub>2</sub> dynamics indicated dynamic behavior qualitatively similar to that of NCl<sub>3</sub>. Because of this, it was hypothesized that CO<sub>2</sub> may serve as a surrogate for NCl<sub>3</sub> for monitoring and control of IAQ dynamics. To examine this issue in more detail, a conceptually similar model of CO<sub>2</sub> dynamics was developed and applied. The model was developed to allow for an assessment of the relative contributions of liquid-to-gas transfer and respiration by swimmers and spectators to CO<sub>2</sub> dynamics. The results of this modeling effort indicated that the similarity of CO<sub>2</sub> transfer behavior to NCl<sub>3</sub> may allow use of CO<sub>2</sub> as a surrogate during periods with few to no spectators in the pool; however, when large numbers of spectators are present, the behavior of CO<sub>2</sub> dynamics may not be representative of NCl<sub>3</sub> dynamics because of spectator respiration.</p>
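The well-mixed IAQ mass balance described above (two-film liquid-to-gas transfer plus advective air exchange, reactions neglected) can be sketched as a single ordinary differential equation, V dCg/dt = KL A (Cl/H - Cg) + Q (Cin - Cg), integrated explicitly. All parameter values below are invented for illustration and are not the facility measurements from the study.

```python
def simulate_gas_phase(Cl, H, KL, A, Q, V, C_in, Cg0, dt, steps):
    """Explicit Euler integration of the gas-phase concentration (µg/m^3).

    Cl: liquid-phase concentration (µg/m^3 water), H: dimensionless Henry constant,
    KL: overall mass-transfer coefficient (m/s), A: pool surface area (m^2),
    Q: ventilation flow (m^3/s), V: air volume (m^3), C_in: supply-air concentration.
    """
    Cg = Cg0
    series = []
    for _ in range(steps):
        transfer = KL * A * (Cl / H - Cg)   # two-film liquid-to-gas flux, µg/s
        advection = Q * (C_in - Cg)         # ventilation exchange, µg/s
        Cg += dt * (transfer + advection) / V
        series.append(Cg)
    return series

# Toy comparison: higher liquid NCl3 (as during a meet) sustains a higher gas level.
common = dict(H=0.5, KL=2e-5, A=1250, Q=10, V=15000, C_in=0.0, Cg0=100.0, dt=60.0, steps=600)
quiet = simulate_gas_phase(Cl=300, **common)
meet = simulate_gas_phase(Cl=900, **common)
```

The same structure, with a respiration source term added on the right-hand side, gives the companion CO<sub>2</sub> model, which is why spectator respiration can decouple CO<sub>2</sub> from NCl<sub>3</sub> behavior.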

A Study of Additive manufacturing Consumption, Emission, and Overall Impact With a Focus on Fused Deposition Modeling

Timothy Simon (9746375) 28 July 2021 (has links)
<p>Additive manufacturing (AM) can be an advantageous substitute for various traditional manufacturing techniques. Due to its ability to rapidly create products, AM has traditionally been used for more efficient prototyping. As the industry has progressed, however, use cases have gone beyond prototyping into production of complex parts with unique geometries. Amongst the most popular AM processes is fused deposition modeling (FDM). FDM fabricates products through an extrusion technique in which plastic filament is heated to the glass transition temperature and extruded layer by layer onto a build platform to construct the desired part. The purpose of this research is to elaborate on the potential of this technology while considering its environmental impact as it becomes more widespread throughout industry, research, and academia.</p> <p>Although AM consumes resources more conservatively than traditional methodologies, it is not free from environmental impacts. Several studies have shown that additive manufacturing can affect human and environmental health by emitting particles of a dynamic size range into the surrounding environment during a print. To begin this study, the opening chapters investigate emission profiles and characterization of emissions from FDM 3D printers with the intention of developing a better understanding of the impact of such devices. Background work is done to confirm the occurrence of particle emission from FDM using acrylonitrile butadiene styrene (ABS) plastic filament. An aluminum-bodied 3D printer is enclosed in a chamber and placed in a Class 1 cleanroom, where measurements are conducted using a high-temporal-resolution electrical low-pressure impactor (ELPI), scanning mobility particle sizer (SMPS), and optical particle sizer (OPS), which combined measure particles in a size range of 6-500 nm. Tests were done using the NIST standard test part and a honeycomb-infill cube.
Results from this study show that particle emissions are closely related to filament residence time in the extruder and less related to extrusion speed. An initial spike in particle concentration is observed immediately after printing begins, likely a result of the long time required to heat the extruder and bed to the desired temperatures. Upon conclusion of this study, it is theorized that particles may be formed through vapor condensation and coagulation after being released into the surrounding environment.</p> <p>With confirmation of FDM ultrafine particle emission at notable concentrations, effort was consequently placed on diagnosing the primary causes of emission and energy consumption based on developed hypotheses. Experimental data suggest that particle emission is mainly the result of condensing and agglomerating semi-volatile organic compounds. The initial emission spike occurs when there is dripping of semi-liquid filament from the heated nozzle and/or residue left in the nozzle between prints; this supports the previously stated hypothesis regarding residence time. However, the study shows that while printing speed and material flow influence the particle emission rate, the effects of these factors are relatively insignificant. Power profile analysis indicates that print bed heating and component temperature maintenance are the leading contributors to energy consumption for FDM printers, making time the primary variable driving energy input.</p> <p>To better understand the severity of FDM emissions, further investigation is necessary to determine the makeup of the process output flows. By collecting exhaust discharge from a Makerbot Replicator 2x printing ABS filament and diffusing it through a Type 1 water solution, we are able to investigate the chemical makeup of these compounds. Additional exploration is done by performing a filament wash to investigate emissions that may already be present before extrusion.
Using solid-phase micro-extraction, contaminants are studied using gas chromatography-mass spectrometry (GCMS) thermal desorption. Characterization of the collected emissions offers more comprehensive knowledge of the environmental and human health impacts of this AM process.</p> <p>Classification of the environmental performance of various manufacturing technologies can be achieved by analyzing their input and output material and energy flows. The unit process life cycle inventory (UPLCI) is a proficient approach to developing reusable models capable of calculating these flows. UPLCI models can be connected to estimate the total material and energy consumption of, and emissions from, product manufacturing based on a process plan. The final chapter focuses on using the knowledge gained from this work to develop a UPLCI model methodology for FDM, and on applying it further to the second most widely used AM process: stereolithography (SLA). The model created for the FDM study considers material input/output flows from ABS plastic filament. Energy input/output flows come from the running printer, step motors, heated build plate, and heated extruder. SLA also fabricates parts layer by layer, but uses a photosensitive liquid resin that solidifies when cured under ultraviolet light. Model material input/output flows are sourced from the photosensitive liquid resin, while energy input/output flows are generated from (i) the projector used as the ultraviolet light source and (ii) the step motors. As shown in this work, energy flow is mostly time dependent; material flows, on the other hand, depend more on the nature of the fabrication process. While the focus on FDM is maintained throughout this study, the developed UPLCI models show how conclusions drawn from this work can be applied to other AM processes in future work.</p>
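A time-driven UPLCI energy estimate of the kind described above can be sketched very simply: since bed heating and temperature maintenance dominate, energy scales mainly with print time. The power draws and warm-up duration below are illustrative assumptions, not the measured values from the study.

```python
def fdm_energy_wh(print_time_h, warmup_h=0.15,
                  p_bed=180.0, p_extruder=40.0, p_motors=25.0, p_base=10.0):
    """Energy (Wh) = warm-up phase + steady printing phase.

    Powers (W) are hypothetical: heated bed, extruder heater, step motors,
    controller baseline. Motors draw power only during the printing phase.
    """
    warmup = (p_bed + p_extruder + p_base) * warmup_h
    printing = (p_bed + p_extruder + p_motors + p_base) * print_time_h
    return warmup + printing

short_print = fdm_energy_wh(1.0)
long_print = fdm_energy_wh(4.0)
```

The same inventory shape carries over to SLA by swapping the energy terms for projector and motor draws, which is how the FDM model methodology generalizes to other AM processes.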

Optimisation of permeable reactive barrier systems for the remediation of contaminated groundwater

Painter, Brett D. M. January 2005 (has links)
Permeable reactive barriers (PRBs) are one of the leading technologies being developed in the search for alternatives to the pump-and-treat method for the remediation of contaminated groundwater. A new optimising design methodology is proposed to aid decision-makers in finding minimum cost PRB designs for remediation problems in the presence of input uncertainty. The unique aspects of the proposed methodology are considered to be: design enhancements to improve the hydraulic performance of PRB systems; elimination of a time-consuming simulation model by determination of approximating functions relating design variables and performance measures for fully penetrating PRB systems; a versatile, spreadsheet-based optimisation model that locates minimum cost PRB designs using Excel's standard non-linear solver; and the incorporation of realistic input variability and uncertainty into the optimisation process via sensitivity analysis, scenario analysis and factorial analysis. The design methodology is developed in the context of the remediation of nitrate contamination due to current concerns with nitrate in New Zealand. Three-dimensional computer modelling identified significant variation in capture and residence time, caused by up-gradient funnels and/or a gate hydraulic conductivity that is significantly different from the surrounding aquifer. The unique design enhancements to control this variation are considered to be the customised down-gradient gate face and emplacement of funnels and side walls deeper than the gate. The use of velocity equalisation walls and manipulation of a PRB's hydraulic conductivity within certain bounds were also found to provide some control over variation in capture and residence time. Accurate functional relationships between PRB design variables and PRB performance measures were shown to be achievable for fully penetrating systems. 
The chosen design variables were gate length, gate width, funnel width and the reactive material proportion. The chosen performance measures were edge residence, centreline residence and capture width. A method for laboratory characterisation of reactive and non-reactive material combinations was shown to produce data points that could realistically be part of smooth polynomial interpolation functions. The use of smooth approximating functions to characterise PRB inputs and determine PRB performance enabled the creation of an efficient spreadsheet model that ran more quickly and accurately with Excel's standard non-linear solver than with the LGO global solver or Evolver genetic-algorithm based solver. The PRB optimisation model will run on a standard computer and only takes a couple of minutes per optimisation run. Significant variation is expected in inputs to PRB design, particularly in aquifer and plume characteristics. Not all of this variation is quantifiable without significant expenditure. Stochastic models that include parameter variability have historically been difficult to apply to realistic remediation design due to their size and complexity. Scenario and factorial analysis are proposed as an efficient alternative for quantifying the effects of input variability on optimal PRB design. Scenario analysis is especially recommended when high quality input information is available and variation is not expected in many input parameters. Factorial analysis is recommended for most other situations as it separates out the effects of multiple input parameters at multiple levels without an excessive number of experimental runs.
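The optimisation idea described above, smooth approximating functions mapping design variables to performance measures inside a cheap solver, can be sketched with a grid search standing in for Excel's non-linear solver. The polynomial surrogates, costs, and constraint levels below are invented placeholders, not the thesis's fitted functions.

```python
def capture_width(gate_len, funnel_w):
    return 0.8 * gate_len + 0.5 * funnel_w      # m, hypothetical surrogate

def residence_time(gate_len, funnel_w):
    return 4.0 * gate_len - 0.3 * funnel_w      # days, hypothetical surrogate

def cost(gate_len, funnel_w):
    return 5000 * gate_len + 1200 * funnel_w    # $, hypothetical unit costs

def optimise(min_capture=20.0, min_residence=10.0):
    """Cheapest design meeting capture and residence constraints on a coarse grid."""
    best = None
    for gl in [x * 0.5 for x in range(2, 41)]:       # gate length 1-20 m
        for fw in [x * 0.5 for x in range(0, 61)]:   # funnel width 0-30 m
            if capture_width(gl, fw) < min_capture:
                continue
            if residence_time(gl, fw) < min_residence:
                continue
            c = cost(gl, fw)
            if best is None or c < best[0]:
                best = (c, gl, fw)
    return best

best_cost, best_gl, best_fw = optimise()
```

Because the surrogates are smooth and cheap to evaluate, a gradient-based solver (as in the spreadsheet model) or even this brute-force scan finds a feasible minimum-cost design in seconds, which is the practical payoff of replacing the simulation model with approximating functions.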

LINKING INFANT LOCOMOTION DYNAMICS WITH FLOOR DUST RESUSPENSION AND EXPOSURE

Neeraja Balasubrahmaniam (8802989) 07 May 2020 (has links)
<p>Infant exposure to the microbial and allergenic content of indoor floor dust has been shown to play a significant role in both the development of, and protection against, allergies and asthma later in life. Resuspension of floor dust during infant locomotion induces a vertical transport of particles to the breathing zone, leading to inhalation exposure to a concentrated cloud of coarse (> 1μm) and fine (≤ 1μm) particles. Resuspension, and subsequent exposure, during periods of active infant locomotion is likely influenced by gait parameters. This dependence has been little explored to date and may play a significant role in floor dust resuspension and exposure associated with forms of locomotion specific to infants. This study explores associations between infant locomotion dynamics and floor dust resuspension and exposure in the indoor environment. Infant gait parameters for walking and physiological characteristics expected to influence dust resuspension and exposure were identified, including: contact frequency (steps min<sup>-1</sup>), contact area per step (m<sup>2</sup>), locomotion speed (m s<sup>-1</sup>), breathing zone height (cm), and time-resolved locomotion profiles. Gait parameter datasets for standard gait experiments were collected for infants in three age groups: 12, 15, and 19 months-old (m/o). The gait parameters were integrated with an indoor dust resuspension model through a Monte Carlo framework to predict how age-dependent variations in locomotion affect the resuspension mass emission rate (mg h<sup>-1</sup>) for five particle size fractions from 0.3 to 10 μm. 
Eddy diffusivity coefficients (m<sup>2</sup> s<sup>-1</sup>) were estimated for each age group and used in a particle transport model to determine the vertical particle concentration profile above the floor.</p><p>Probability density functions of contact frequency, contact area, locomotion speed, breathing zone height, and size-resolved resuspension mass emission rates were determined for infants in each group. Infant standard gait contact frequencies were generally in the range of 100 to 300 steps min<sup>-1</sup> and increased with age, with median values of 186 steps min<sup>-1</sup> for 12 m/o, 207 steps min<sup>-1</sup> for 15 m/o, and 246.2 steps min<sup>-1</sup> for 19 m/o infants. Similarly, locomotion speed increased with age, from 67.3 cm s<sup>-1</sup> at 12 m/o to 118.83 cm s<sup>-1</sup> at 19 m/o, as did the breathing zone height, which varied between 60 and 85 cm. Resuspension mass emission rates increased with both infant age and particle size. A 19 m/o infant will resuspend considerably more particles from the same indoor settled dust deposit than a 15 m/o or 12 m/o infant. Age-dependent variations in the resuspension mass emission rate and eddy diffusivity coefficient drove changes in the vertical particle concentration profile within the resuspended particle cloud. For all particle size fractions, there is, on average, a 6% increase in the resuspended particle concentration at a height of 1 m above the floor for a 19 m/o compared to a 12 m/o infant. Time-resolved locomotion profiles were obtained for infants in natural gait during free play to establish the transient nature of walking-induced particle resuspension and associated exposures for infants, with variable periods of active locomotion, no motion, and impulsive falls. This study demonstrates that floor dust resuspension and exposure can be influenced by the nature of infant locomotion patterns, which vary with age and are distinctly different from those of adults.</p>
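The Monte Carlo coupling described above, sampling gait parameters from age-specific distributions and propagating them through a resuspension model, can be sketched as follows. The median contact frequencies are taken from the abstract; the spreads, contact areas, dust loading, and resuspension fraction are illustrative assumptions, not the study's fitted values.

```python
import random

def emission_rate_mg_per_h(age_group, dust_load_mg_m2=100.0, resusp_frac=1e-4):
    """Sample one resuspension mass emission rate (mg/h) for an age group."""
    gait = {  # (median contact frequency steps/min, assumed contact area m^2/step)
        "12mo": (186, 0.008),
        "15mo": (207, 0.009),
        "19mo": (246, 0.010),
    }[age_group]
    freq = random.gauss(gait[0], 20)        # steps per minute, assumed spread
    area = random.gauss(gait[1], 0.001)     # floor area contacted per step
    steps_per_h = max(freq, 0.0) * 60
    return steps_per_h * max(area, 0.0) * dust_load_mg_m2 * resusp_frac

random.seed(1)
samples_19 = [emission_rate_mg_per_h("19mo") for _ in range(5000)]
samples_12 = [emission_rate_mg_per_h("12mo") for _ in range(5000)]
mean_19 = sum(samples_19) / len(samples_19)
mean_12 = sum(samples_12) / len(samples_12)
```

Repeating the sampling per particle size fraction yields the probability density functions of emission rates reported above, with older infants shifted toward higher rates through both faster stepping and larger contact area.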

THE GAME CHANGER: ANALYTICAL METHODS FOR ENERGY DEMAND PREDICTION UNDER CLIMATE CHANGE

Debora Maia Silva (10688724) 22 April 2021 (has links)
<div>Accurate prediction of electricity demand is a critical step in balancing the grid. Many factors influence electricity demand. Among these factors, climate variability has been the most pressing one in recent times, challenging the resilient operation of the grid, especially during climatic extremes. In this dissertation, fundamental challenges related to accurate characterization of the climate-energy nexus are presented in Chapters 2--4, as described below. </div><div><br></div><div>Chapter 2 explores the cost of neglecting the role of humidity in predicting summer-time residential electricity consumption. Analysis of electricity demand in the CONUS region demonstrates that even though surface temperature---the most widely used metric for characterising heat stress---is an important factor, it is not sufficient for accurately characterizing cooling demand. The chapter proceeds to show significant underestimations of the climate sensitivity of demand, both in the observational space as well as under climate change. Specifically, the analysis reveals underestimations as high as 10-15% across CONUS, especially in high energy-consuming states such as California and Texas. </div><div><br></div><div>Chapter 3 takes a critical look at one of the most widely used metrics, namely, Cooling Degree Days (CDD), often calculated with an arbitrary set point temperature of 65°F (18.3°C), ignoring possible variations due to different patterns of electricity consumption across regions and climate zones. In this chapter, updated values are derived based on historical electricity consumption data across the country at the state level. The Chapter 3 analysis demonstrates significant variation, as high as ±25%, between derived set point values and the conventional value of 65°F. Moreover, the CDD calculation is extended to account for the role of humidity, in the light of lessons learnt in the previous chapter. 
Our results reveal that under climate change scenarios, the air-temperature based CDD underestimates thermal comfort by as much as ~22%.</div><div><br></div><div>The predictive analytics conducted in Chapter 2 and Chapter 3 revealed a significant challenge in characterizing the climate-demand nexuses: the ability to capture the variability at the upper tails. Chapter 4 explores this specific challenge, with the specific goal of developing an algorithm to increase prediction accuracy at the higher quantiles of the demand distributions. Specifically, Chapter 4 presents a data-centric approach at the utility level (as opposed to the state-level analyses in the previous chapters), focusing on high-energy consuming states of California and Texas. The developed algorithm shows a general improvement of 7% in the mean prediction accuracy and an improvement of 15% for the 90th quantile predictions.</div>
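The conventional degree-day calculation discussed in the abstract can be sketched in a few lines. The set-point value of 65 F is the conventional one named above; the humidity adjustment below is purely illustrative (the dissertation's actual formulation is not given here), and all function names and parameter values are hypothetical.

```python
# Sketch: cooling degree days (CDD) from daily mean temperatures.
# The humidity adjustment is an illustrative stand-in, not the
# dissertation's actual humidity-aware formulation.

def cdd(daily_mean_temps_f, set_point_f=65.0):
    """Sum of positive exceedances of daily mean temperature over the set point."""
    return sum(max(t - set_point_f, 0.0) for t in daily_mean_temps_f)

def effective_temp(temp_f, rel_humidity_pct):
    """Hypothetical humidity adjustment: nudges warm, humid days upward."""
    return temp_f + 0.01 * rel_humidity_pct * max(temp_f - 65.0, 0.0)

temps = [60, 68, 75, 82, 90]        # daily mean temperatures, degrees F
humidities = [40, 55, 70, 80, 85]   # relative humidity, percent

print(cdd(temps))                    # air-temperature CDD at the conventional 65 F
print(cdd(temps, set_point_f=70.0))  # with a regionally derived set point
print(cdd([effective_temp(t, h) for t, h in zip(temps, humidities)]))
```

Shifting the set point changes the CDD total directly, and the humidity adjustment raises it further on hot, humid days, which is the qualitative effect the chapter quantifies.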

Application of diffusion laws to composting: theory, implications, and experimental testing

Chapman, P. D. January 2008 (has links)
Understanding the fundamentals of composting science from a pragmatic perspective necessarily involves mixtures of different sizes and types of particles in constantly changing environmental conditions, in particular temperature. This environmental variation adds to the complexity of composting. With so much "noise" in the system, a question arises as to the need to understand the detail of this complexity: understanding any part of composting with more precision than this level of noise is unlikely to yield greater understanding of the system. Yet some compost piles generate offensive odours while others don't, and science should be able to explain this difference. A driver for this research was a greater understanding of potential odour, which is assumed to arise from the anaerobic core of a composting particle. It follows that the size of this anaerobic core could be used as an indicator of odour potential. A first step is to determine which parts of a composting particle are aerobic, from which the anaerobic proportion can be determined by difference. To this end, this thesis uses a finite-volume method of analysis to determine the distribution of oxygen at sub-particle scales. Diffusion laws were used to determine the thickness of each finite volume. The resulting model, called micro-environment analysis, was applied to a composting particle to determine onion-ring-type volumes of compost (called micro-environments) containing substrates (further subdivided into substrate fractions) whose concentrations could be determined to high precision by applying first-order degradation kinetics to each of these finite volumes. The oxygen concentration at a micro-environment's inner boundary was determined using the Stępniewski equation.
The Stępniewski model was derived originally for application to soil aeration and enables each micro-environment to have its own oxygen uptake rate and diffusion coefficient. This first version of micro-environment analysis was derived from the simpler solution to the diffusion laws, based on the assumption of a non-diffusible substrate. It was tested against three sets of experimental data with two different substrates: (1) particle-size trials using dog sausage as substrate, where the peak composting rate was successfully predicted as a function of particle size; (2) temperature trials using pig faeces and a range of particle sizes, whose results showed the potential of micro-environment analysis to identify intriguing temperature effects; in particular, a different temperature effect (Q10) and fraction proportion was indicated for each substrate fraction, although smaller particle sizes, and possibly outward diffusion of substrate, confounded a clear experimental signal; and (3) diffusion-into-a-pile trials, which showed that the composting time course of particles deeper in the pile could be predicted from the physics of oxygen distribution. A fully computed prediction would need an added level of computational complexity in micro-environment analysis, arising from there being two intertwined phases, the gas phase and the substrate (particle) phase; each phase needs its own micro-environment calculations, which cannot be done in isolation from each other. Unexplained parts of the composting time course are likely to be partly explained by the outward diffusion of substrate towards the inward-moving oxygen front, although the possibility of alternative electron acceptors cannot be discounted as a partial explanation. To test the theory, a new experimental reactor was developed using calorimetry.
With an absolute sensitivity of 0.132 J hr-1 L-1 and a measurement interval of 30 minutes, the reactor was able to detect the energy required to humidify the input air and to "see" when composting begins to decline as oxygen is consumed. The benefit of optimising the aeration pumping frequency from the evidence in the data was strikingly apparent immediately after the optimum frequency was set. Micro-environment analysis provides a framework by which several physical effects can be incorporated into compost science.
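The anaerobic-core idea and the Q10 temperature effect described in this abstract can be illustrated with a minimal sketch. The thesis computes oxygen profiles per finite volume via the Stępniewski equation; the closed-form zero-order diffusion-reaction sphere solution below is a simpler textbook stand-in for that calculation, and every parameter value is illustrative rather than taken from the thesis.

```python
# Minimal sketch: steady-state oxygen in a spherical particle with uniform
# (zero-order) uptake, giving an anaerobic core where oxygen is exhausted,
# plus Q10 scaling of a first-order degradation rate. This is a simplified
# stand-in for the thesis's finite-volume micro-environment analysis.

def oxygen_at(r, c_surface, uptake, diffusivity, radius):
    """O2 concentration at radius r (classic zero-order diffusion-reaction sphere)."""
    return c_surface - (uptake / (6.0 * diffusivity)) * (radius**2 - r**2)

def anaerobic_core_radius(c_surface, uptake, diffusivity, radius):
    """Radius inside which oxygen is exhausted (0 if the particle is fully aerobic)."""
    r_sq = radius**2 - 6.0 * diffusivity * c_surface / uptake
    return max(r_sq, 0.0) ** 0.5

def q10_rate(rate_ref, q10, temp_c, temp_ref_c=20.0):
    """First-order rate constant scaled by Q10: the rate is multiplied by
    q10 for every 10 C rise above the reference temperature."""
    return rate_ref * q10 ** ((temp_c - temp_ref_c) / 10.0)

R = 1e-3    # particle radius, m (illustrative)
D = 2e-9    # O2 diffusivity in the wet particle, m^2/s (illustrative)
c_s = 0.25  # O2 concentration at the particle surface, mol/m^3
q = 6e-3    # volumetric O2 uptake rate, mol/(m^3 s)

r0 = anaerobic_core_radius(c_s, q, D, R)
print("anaerobic volume fraction:", (r0 / R) ** 3)
print("rate at 35 C with Q10 = 2:", q10_rate(0.05, 2.0, 35.0))
```

The anaerobic volume fraction `(r0/R)**3` is the kind of odour-potential indicator the abstract motivates: it grows with particle size and uptake rate and shrinks with diffusivity and surface oxygen.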

Unstable equilibrium: modelling waves and turbulence in water flow

Connell, R. J. January 2008 (has links)
This thesis develops a one-dimensional version of a new data-driven model of turbulence that uses the Karhunen-Loève (KL) expansion to provide a spectral solution of the turbulent flow field, based on analysis of Particle Image Velocimetry (PIV) turbulence data. The analysis derives a second-order random field over the whole flow domain that gives better turbulence properties in areas of non-uniform flow, and where flow separates, than present models based on the Navier-Stokes equations. These latter models need simplifying assumptions to decrease the number of calculations so that they can run on present-day computers or supercomputers, and these assumptions reduce their accuracy. The improved flow field is gained at the expense of the model not being generic: the new data-driven model can only be used for the flow situation of the data, as the analysis shows that the kernel of the turbulent flow field of an undular hydraulic jump could not be related to the surface waves, a key feature of the jump. The kernel developed has two parts, called the outer and inner parts. A comparison shows that the ratio of the outer kernel to the inner kernel primarily reflects the ratio of turbulent production to turbulent dissipation. The outer part, with a larger correlation length, reflects the larger structures of the flow that contain most of the turbulent energy production; the inner part reflects the smaller structures that contain most of the turbulent energy dissipation. The new data-driven model can use a kernel with changing variance and/or regression coefficient over the domain, necessitating the use of both numerical and analytical methods. The model allows the use of a two-part regression-coefficient kernel, the solution being the sum of the results from each part of the kernel. This research highlighted the need to assess the size of the structures calculated by models based on the Navier-Stokes equations in order to validate those models.
At present most studies use mean velocities and turbulent fluctuations to validate a model's performance. As the new data-driven model gives better turbulence properties, it could be used in complicated flow situations, such as around a rock groyne, to give a better assessment of the forces and pressures in the water flow resulting from turbulent fluctuations for the design of such structures. Further development to make the model usable includes: solving the numerical problem associated with the double kernel; reducing the number of modes required; obtaining a solution for the kernel of two-dimensional and three-dimensional flows; including the change in correlation length with time, as presently the model gives instantaneous realisations of the flow field; and including third- and fourth-order statistics to improve the data-driven model's velocity field, which currently has Gaussian distribution properties. As the third- and fourth-order statistics are Reynolds-number dependent, this will enable the model to be applied to PIV data from physical scale models. In summary, this new data-driven model is complementary to models based on the Navier-Stokes equations, providing better results in complicated design situations. Further research to develop the new model is viewed as an important step forward in the analysis of river control structures such as rock groynes, which are prevalent on New Zealand rivers protecting large cities.
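The KL-expansion machinery this abstract builds on can be sketched in its simplest discrete form: an eigendecomposition of the spatial covariance of fluctuation snapshots (the proper orthogonal decomposition). The snapshot data here is synthetic Gaussian noise standing in for PIV measurements, and the 90% energy threshold is an illustrative choice, not the thesis's.

```python
import numpy as np

# Sketch: discrete Karhunen-Loeve (POD) expansion of velocity-fluctuation
# snapshots, as one might extract from PIV data. Synthetic random data
# stands in for real measurements; all sizes are illustrative.
rng = np.random.default_rng(0)
n_points, n_snaps = 64, 200
snapshots = rng.standard_normal((n_points, n_snaps))  # u(x, t) samples

mean = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean
cov = fluct @ fluct.T / (n_snaps - 1)        # spatial covariance kernel

eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric: ascending eigenvalues
order = np.argsort(eigvals)[::-1]            # reorder largest-energy mode first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Retain the leading modes capturing ~90% of the fluctuation energy.
energy = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(energy, 0.90)) + 1

coeffs = eigvecs[:, :k].T @ fluct            # modal time coefficients
reconstruction = mean + eigvecs[:, :k] @ coeffs
print(k, float(energy[k - 1]))
```

Because the covariance here is built directly from the data, the expansion reproduces the measured second-order statistics by construction, which is the sense in which such a data-driven model complements Navier-Stokes-based solvers; capturing third- and fourth-order statistics, as the abstract notes, requires going beyond this Gaussian construction.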
