  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Measuring the Effectiveness of Advanced Traveler Information Systems (ATIS)

Hu, Hyejung 19 December 2008 (has links)
The objective of this study was to develop valid methodologies for addressing several limitations of current Advanced Traveler Information Systems (ATIS) evaluation tools. The study focused mainly on three enhancements. First, the queue propagation algorithm of the selected tool (DYNASMART-P) was modified to model traffic congestion more realistically. The author proposed adding transfer flow capacity and backward gated flow constraints to calculate the transfer flow rate more accurately. Second, the study modeled the natural diversion behaviors of drivers who do not receive traveler information. Lastly, statistical models of user responses to traveler information were developed using binary and multinomial logit methods to understand and model the relationship between drivers' socio-economic characteristics and their responses to traveler information. Of these three enhancements, the first two (the improved queue propagation and natural diversion behavior algorithms) were implemented in the enhanced model. The user behavior models, however, were not implemented because their predictive power was unacceptable due to limitations in the data set. The enhanced model was applied to two case studies: 1) verifying the capabilities of the model under a recurring bottleneck scenario on the I-40 corridor in the Triangle region of North Carolina, and 2) demonstrating the capability of the enhanced model to measure the effectiveness of U-Transportation (similar to the Vehicle Infrastructure Integration [VII] program in the USA), which has been under development in Korea. The first case study showed that the improved queue propagation algorithm simulated the bottleneck queue much more closely to the real data than the original model did. The simulation results also indicated that the actual diversion rate under recurring congestion in the study network was very low.
The results of the second case study demonstrated that the enhanced model can evaluate the network impact of new advanced technology in flooding situations and can evaluate the effect of market penetration of the communication technology.
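The binary logit formulation used for the user-response models can be sketched in a few lines; the explanatory variables and coefficient values below are hypothetical placeholders, not the fitted model from the study:

```python
import math

def divert_probability(delay_min, knows_alt_route,
                       beta0=-2.5, beta_delay=0.15, beta_alt=1.2):
    """Binary logit: probability that a driver diverts from the usual route.

    The coefficients and the two explanatory variables (expected delay in
    minutes, familiarity with an alternate route) are illustrative only.
    """
    utility = beta0 + beta_delay * delay_min + beta_alt * (1.0 if knows_alt_route else 0.0)
    return 1.0 / (1.0 + math.exp(-utility))

# Diversion probability should rise with expected delay and route familiarity.
p_short = divert_probability(5, False)
p_long = divert_probability(30, True)
```

In a multinomial extension, each alternative (stay, divert, delay departure) would get its own utility function, with probabilities given by the softmax of the utilities.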
262

Operational Evaluation of In-Use Emissions and Fuel Consumption of B20 Biodiesel versus Petroleum Diesel-Fueled Onroad Heavy-duty Diesel Dump Trucks and Nonroad Construction Vehicles

KIM, KANGWOOK 18 December 2007 (has links)
Diesel vehicles contribute substantially to statewide emissions of NOx, an ozone precursor, and of particulate matter. The North Carolina Department of Transportation (NCDOT) is conducting a pilot study to demonstrate the use of B20 biodiesel fuel in approximately 1,000 vehicles in selected areas of the state; there are plans to extend the use of B20 fuel to a much larger number of vehicles in all 100 counties in North Carolina. Real-world in-use onroad and nonroad emissions of selected heavy-duty diesel vehicles, fueled with B20 biodiesel and with petroleum diesel, were measured during normal duty cycles using a portable emissions measurement system (PEMS). Each vehicle was tested for one day on B20 biodiesel and for one day on petroleum diesel, for a total of 68 days of field measurements. The vehicles were operated by drivers assigned by NCDOT. Each test was conducted over the course of an entire workshift, with approximately 2 to 10 duty cycles per shift. Each duty cycle comprises a uniquely weighted combination of operating modes based on vehicle speed, acceleration, and typical modes of activity. Average emission rates on a mass-per-time basis varied substantially among the operating modes. Average fuel use and emission rates increased 26 to 35 percent when vehicles were loaded versus unloaded. The use of B20 instead of petroleum diesel led to a slight decrease (approximately 2 to 10 percent, depending on the vehicle) in the NO emission rate and significant decreases (approximately 10 to 30 percent, depending on the vehicle) in opacity, HC, and CO. Similar trends were observed for the nonroad vehicles. Factors responsible for the observed variability in fuel use and emissions include operating mode, vehicle size, engine tier and size, vehicle weight, and fuel. In particular, emission rates were found to decrease significantly for newer, higher-tier vehicles compared to older ones.
Recommendations were made regarding operating strategies to reduce emissions, choice of fuel, and the need for future work to collect real-world duty cycle data for other vehicle types.
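The modal aggregation described above (binning second-by-second PEMS records by speed and acceleration, then averaging within each mode) can be sketched as follows; the mode thresholds are illustrative assumptions, not the study's definitions:

```python
from collections import defaultdict
from statistics import mean

def operating_mode(speed_mph, accel_mph_per_s):
    """Classify one second of vehicle activity; thresholds are illustrative."""
    if speed_mph < 1.0:
        return "idle"
    if accel_mph_per_s > 1.0:
        return "acceleration"
    if accel_mph_per_s < -1.0:
        return "deceleration"
    return "cruise"

def modal_emission_rates(seconds):
    """Average mass-per-time emission rate for each operating mode.

    seconds: iterable of (speed_mph, accel_mph_per_s, grams_per_s) tuples,
    i.e. one record per second of PEMS data.
    """
    by_mode = defaultdict(list)
    for speed, accel, rate in seconds:
        by_mode[operating_mode(speed, accel)].append(rate)
    return {mode: mean(rates) for mode, rates in by_mode.items()}
```

A duty-cycle average is then a weighted sum of these modal rates, with weights equal to each mode's share of time in the cycle.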
263

Development of a Gas Chromatography-Tandem Mass Spectrometry Method for the Simultaneous Analysis of 19 Taste and Odor Compounds.

Viswakumar, Anjali 29 April 2010 (has links)
An analytical method was developed to detect and quantify 19 compounds commonly associated with taste and odor (T&O) problems in drinking water. The method can be used by utilities during T&O episodes to quickly and reliably detect T&O compounds and determine their concentrations. Knowledge of the identity and concentration of T&O compounds will greatly aid utilities in selecting appropriate treatment strategies. Headspace solid-phase microextraction (SPME) was used to concentrate T&O compounds, and gas chromatography (GC) followed by tandem mass spectrometry (MS/MS) was used to separate, detect, and quantify them. Method development included (1) determination of parent ion mass and retention time, (2) optimization of the ion trap MS/MS instrumental parameters, (3) development of calibration curves, and (4) determination of the method detection limits (MDLs) and limits of quantitation (LOQs) for each compound. For 12 of the 17 targeted T&O compounds with known odor threshold concentrations (OTCs), the LOQ was below the OTC. This result suggests that the developed method is capable of detecting developing T&O problems for these 12 compounds, allowing utilities to implement treatment strategies before consumers can detect objectionable tastes and odors in their water. The developed method was tested by analyzing three water samples from North Carolina ponds and lakes that experienced algae/cyanobacteria blooms. In addition, Raleigh source and tap water samples were analyzed. Of the 19 targeted T&O compounds, all but 2,4,6-tribromoanisole were detected in the collected samples. In the bloom samples, geosmin, β-ionone, and trans-2,cis-6-nonadienal most frequently occurred at concentrations that exceeded their OTCs, sometimes by a factor of more than 100. A comparison of results for non-filtered and filtered (0.45 µm membrane) samples suggests that many T&O compounds were predominantly present inside algae/cyanobacteria cells.
Only geosmin and β-cyclocitral were present at measurable levels in Raleigh source and tap water. Concentrations in the raw and tap water samples were similar and at levels that were at or below the OTCs of geosmin and β-cyclocitral.
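The LOQ-versus-OTC screening logic can be expressed compactly; the concentration values below are illustrative placeholders, not the LOQs or OTCs reported in the thesis:

```python
def flag_detectable_early(loq_ng_per_L, otc_ng_per_L):
    """A compound can be caught before consumers notice it when LOQ < OTC.

    Both arguments map compound name to a concentration in ng/L.
    """
    return {c: loq_ng_per_L[c] < otc_ng_per_L[c]
            for c in loq_ng_per_L if c in otc_ng_per_L}

# Illustrative numbers only, not the thesis' measured LOQs or OTCs.
loq = {"geosmin": 1.0, "MIB": 2.0, "2,4,6-tribromoanisole": 15.0}
otc = {"geosmin": 4.0, "MIB": 9.0, "2,4,6-tribromoanisole": 0.03}
flags = flag_detectable_early(loq, otc)
```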
264

Using Modern Photogrammetric Techniques to Map Historical Shorelines and Analyze Shoreline Change Rates: Case Study on Bodie Island, North Carolina.

Zink, Jason Michael 27 December 2002 (has links)
The efficacy of coastal development regulations in North Carolina depends on accurately calculated shoreline erosion rates. North Carolina's current methodology for regulatory erosion rate calculation does not take advantage of emerging GIS, photogrammetric, and engineering technologies. Traditionally, historical shoreline positions from a database created in the 1970s have been coupled with a modern shoreline position to calculate erosion rates. The photos from which these historical shorelines were derived were subject to errors of tilt, variable scale, lens distortion, and relief displacement. Most of these errors can be removed using modern photogrammetric methods. In this study, an effort was made to acquire and rectify, using digital image processing, prints of the original historical photography for Bodie Island, North Carolina. The photography was rectified using the latest available desktop photogrammetry technology. Digitized shorelines were then compared to shorelines of similar date created without the benefit of this modern technology. Uncertainty associated with shoreline positions was documented throughout the process. The newly created shorelines were found to be significantly different from their counterparts created by analog means. Many factors contributed to this difference, including choice of basemaps, number of tie points between photos, quality of ground control points, method of photo correction, and shoreline delineation technique. Using both linear regression and the endpoint method, a number of erosion rates were calculated with the available shorelines. Despite the differences in position between shorelines of the same date, some of the calculated erosion rates were not significantly different. Specifically, the rate found using all shorelines available prior to this study was very similar to the rate found using all shorelines created in this study.
As a result of this and other factors, it was concluded that a complete reproduction of North Carolina's historical shoreline database may not be warranted. The new rectification procedure does have obvious value, however, and should be used in locations where no historical data exist or where existing data are thought to be of poor quality. This would especially be the case near inlets and other historically unpopulated areas.
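The two rate-calculation methods named above can be sketched directly; the shoreline positions in the example are invented for illustration:

```python
def endpoint_rate(shorelines):
    """Erosion rate from only the first and last shoreline positions.

    shorelines: list of (year, position_m) pairs, with position measured
    seaward along a transect; a negative rate indicates shoreline retreat.
    """
    (t0, x0), (t1, x1) = shorelines[0], shorelines[-1]
    return (x1 - x0) / (t1 - t0)

def regression_rate(shorelines):
    """Least-squares slope of position versus year, using all shorelines."""
    n = len(shorelines)
    t_bar = sum(t for t, _ in shorelines) / n
    x_bar = sum(x for _, x in shorelines) / n
    num = sum((t - t_bar) * (x - x_bar) for t, x in shorelines)
    den = sum((t - t_bar) ** 2 for t, _ in shorelines)
    return num / den
```

The regression rate uses every available shoreline, so it is less sensitive to positional error in any single photo date than the endpoint rate, which depends on only two positions.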
265

STRENGTHENING OF STEEL STRUCTURES WITH HIGH MODULUS CARBON FIBER REINFORCED POLYMERS (CFRP) MATERIALS: BOND AND DEVELOPMENT LENGTH STUDY

Stanford, Kirk Alan 28 April 2009 (has links)
Cost-effective solutions for the rehabilitation and strengthening of steel structures, such as steel bridges and the steel monopole towers used for cellular phone antennas, are greatly needed by government transportation departments and industry. Rehabilitation is often required due to loss of cross-section from corrosion and/or changes in the demand on or use of a structure. Current techniques for strengthening steel structures have several drawbacks, including the heavy equipment required for installation, poor fatigue performance, and the need for ongoing maintenance due to continued corrosion attack. The current research program proposed the use of a new high modulus carbon fiber reinforced polymer (CFRP) for strengthening steel structures. The program includes extensive research to select the resin for wet lay-up of carbon fiber sheets and the adhesives for bonding of pre-cured laminate strips. The bond behavior of FRP materials on steel structures is quite different from that on concrete structures. Preliminary test results showed very high bond stresses for most strengthening applications, due to the amount of strengthening required to develop the material for steel structures and bridges.
266

Evolutionary Algorithms to Aid Watershed Management

Dorn, Jason Liam 31 December 2004 (has links)
Watershed management is a complex process involving multiple uses, diverse stakeholders, and a variety of computer-based hydrologic and hydraulic simulation models. Exploring for efficient solutions and making decisions about the best integrated management strategies to implement can be improved through the use of quantitative systems analytic techniques. In addition to identifying mathematically optimal solutions, these techniques should also be able to consider issues that may not be properly represented in the models or may be in conflict with one another. As the complexities of the system models grow, contemporary heuristic search methods, including evolutionary algorithms (EAs), are becoming increasingly common in quantitative analysis of such challenging decision-making problems. More research is needed to enhance and extend the capabilities of these newer search methods to meet the growing challenges. Further, these new systems analytic capabilities are best made accessible to practitioners through a generic computational framework that integrates the system simulation models with the suite of search techniques. Therefore, the purpose of this research is to develop new EA-based system analytic methods for addressing integrated watershed management problems and a computational framework within which their capabilities are enabled for watershed management applications. EA-based methods to generate good alternative solutions and for multiobjective optimization have been developed and tested, and their performances compare well with those of other procedures. These new methods were also demonstrated through successful applications to realistic problems in watershed management. 
These techniques were integrated into and implemented within a new computer-based decision support framework that supports the integration of the user's preferred watershed models, methods to perform uncertainty and/or sensitivity analyses thereon, and multiple state-of-the-art heuristic optimization search procedures to identify good management strategies that meet the problem-specific (e.g., fiscal or environmental) objectives and constraints. The design of the software framework is described with a demonstration of its capabilities via a case study involving several scenarios of a watershed management problem.
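A minimal evolutionary search of the kind such a framework orchestrates can be sketched as follows; this generic elitist mutation loop on a single bounded decision variable is a simplified stand-in for, not a reproduction of, the EA-based methods developed in the thesis:

```python
import random

def evolve(fitness, bounds, pop_size=20, gens=60, sigma=0.3, seed=1):
    """Minimize `fitness` over [lo, hi] with a simple elitist EA.

    Keeps the best half of the population each generation and creates
    children by Gaussian perturbation of those parents.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        parents = sorted(pop, key=fitness)[:pop_size // 2]
        children = [min(max(p + rng.gauss(0.0, sigma), lo), hi) for p in parents]
        pop = parents + children
    return min(pop, key=fitness)

# Example: find the cheapest setting of a hypothetical management variable
# whose cost is minimized at 2.0.
best = evolve(lambda x: (x - 2.0) ** 2, (0.0, 5.0))
```

In a watershed application, `fitness` would wrap a call to the user's hydrologic simulation model, and the decision variable would be a vector of management choices rather than a scalar.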
267

MODELING AND ANALYSIS OF NOX EMISSION TRADING TO ACHIEVE OZONE STANDARDS

Gillon, Dana Lee 14 May 1999 (has links)
Emission trading programs are incentive-based policy instruments implemented to achieve environmental targets cost-effectively. In these programs, also known as transferable discharge permit (TDP), emission-reduction trading, and cap-and-trade programs, participants are required to meet established emission reduction goals through control measures or by acquiring TDPs from sources in the market that over-control. TDP programs encourage the development and application of innovative control technologies and allow pollution sources more flexibility in complying with regulations. One potential drawback of a market-driven policy such as TDP is that the geographical distribution of emissions resulting from trades could locally degrade air quality if the market is not designed properly. Since such an outcome is generally undesirable, the ability of regulators to predict the environmental impacts of trading prior to implementation is very important. The goal of this thesis is to present a general framework for using mathematical optimization to model and analyze different market design features for TDP programs, including the potential use of trading restrictions to control the geographic distribution of permits. This framework will provide regulators with a way to identify effective market designs and implement more robust and reliable TDP programs. An important component of this framework is the use of Modeling to Generate Alternatives (MGA) to identify the range of trading outcomes that may occur in response to a TDP program. A case study using this framework was conducted for NOx emission trading in the Charlotte, North Carolina region. The study analyzed alternative trading outcomes generated using MGA, investigated limitations on source size and type in the trading program, and tested the use of zoning restrictions as a way to control the geographical distribution of permits. Trading outcomes were evaluated with respect to cost, air quality, robustness, and reliability.
The results showed that TDPs could be used to meet both emission limits for NOx and an ambient standard for ozone, with either all or a limited number of sources trading. Additionally, trading restrictions in the form of geographic zones were not particularly effective at reducing local air quality impacts in the Charlotte region, although this result is believed to be attributable, in part, to the limited size of the trading region.
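The cost-minimizing logic that a well-functioning TDP market approximates, in which reductions flow to the sources that can control most cheaply, can be sketched with a greedy allocation; this illustrates the economic intuition only, not the mathematical optimization or MGA procedures used in the thesis:

```python
def least_cost_allocation(sources, required_tons):
    """Assign emission reductions cheapest-first until the cap is met.

    sources: list of (source_id, tons_reducible, cost_per_ton) tuples.
    Returns ({source_id: tons_assigned}, total_cost); raises ValueError
    if the available reductions cannot meet the requirement.
    """
    plan, remaining, total = {}, required_tons, 0.0
    for sid, tons, cost in sorted(sources, key=lambda s: s[2]):
        if remaining <= 0:
            break
        take = min(tons, remaining)
        plan[sid] = take
        total += take * cost
        remaining -= take
    if remaining > 0:
        raise ValueError("cap cannot be met by the available sources")
    return plan, total
```

Geographic zoning restrictions of the kind tested in the thesis would add constraints to this allocation (e.g., a per-zone cap), generally raising total cost in exchange for a more even spatial distribution of emissions.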
268

Quantification of Variability and Uncertainty in Emission Factors and Emission Inventories

Bharvirkar, Ranjit 26 May 1999 (has links)
The purpose of this research is to demonstrate a methodology for quantifying the variability and uncertainty in emission factors and emission inventories. Emission inventories are used for various policy-making purposes, such as characterization of temporal emission trends, emissions budgeting for regulatory and compliance purposes, and the prediction of ambient pollutant concentrations using air quality models. Failure to account for variability and uncertainty in emission inventories may lead to erroneous conclusions regarding source apportionment, compliance with emission standards, emission trends, and the impact of emissions on air quality. Variability is the heterogeneity of values of a quantity over time, over space, or across a population, while uncertainty arises from lack of knowledge about the true value of a quantity. The sources of variability and uncertainty are distinct, and hence variability and uncertainty affect policy-making in different ways. For example, variability in emissions arises from differences in operating conditions among power plants, whereas uncertainty arises from measurement errors, systematic errors, and random sampling errors. It is possible to reduce uncertainty by taking more accurate and precise measurements (i.e., reducing measurement error) or by taking a larger number of measurements (i.e., reducing random sampling error). However, it is not possible to reduce variability. Therefore, in this research variability and uncertainty are treated separately. A methodology for the simultaneous characterization of variability and uncertainty in emission and activity factors, and for their propagation through an emission inventory model, is described. Variability was characterized using probability distributions developed from data analysis. Uncertainty due to random sampling error was characterized using parametric bootstrap simulation.
A methodology for the quantification of variability and uncertainty in censored data sets containing below-detection-limit values was developed. This methodology is demonstrated in three case studies. In Case Study 1, the variability and uncertainty in the activity and emission factors for NOx emissions from selected coal-fired power plant systems were quantified based on data obtained from the U.S. Environmental Protection Agency. An illustrative partial probabilistic NOx emission inventory was developed for the state of North Carolina. In Case Study 2, the variability and uncertainty in the total short-term average emissions and in the annual emissions of nine hazardous air pollutants (HAPs) from a power plant were quantified by propagating the probability distributions for coal concentrations, boiler partitioning factors, and fabric filter partitioning factors through an emissions model. In Case Study 3, the effect of various levels of censoring on the variability and uncertainty in CO and HC emission factor data sets for diesel transit buses was studied. The main findings regarding the methodology demonstrated in this research are: (1) uncertainty due to random sampling error is substantial and in many cases was found to be of the same order of magnitude as the variability in the data set; and (2) the methodology developed for quantifying the variability and uncertainty in censored data sets is reasonably robust and accurate.
The main insights obtained from applying the methodology include: (1) the uncertainty in the total NOx emissions from selected power plants in North Carolina is ±25 percent around the nominal value; (2) the uncertainty in the short-term average emissions of all HAPs from a power plant is substantially higher in the upper percentiles (e.g., the width of the 95 percent confidence interval on the 95th percentile is 385 lb) than in the lower percentiles (e.g., the width of the 95 percent confidence interval on the median value is 60 lb); (3) the range of uncertainty in the annual average emissions is much wider than the range of variability in annual average emissions from one year to another; and (4) the uncertainty in the median value of censored CO and HC emission factor data sets increases as the level of censoring increases.
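The resampling idea behind the uncertainty characterization can be sketched as follows; this simple nonparametric bootstrap of a mean is a stand-in for the parametric bootstrap simulation actually used in the research, and the emission-factor values are invented:

```python
import random
from statistics import mean

def bootstrap_ci(data, stat=mean, n_boot=2000, alpha=0.05, seed=7):
    """Percentile confidence interval for a statistic via bootstrap resampling.

    Draws n_boot resamples (with replacement, same size as `data`),
    computes `stat` on each, and reads off the alpha/2 and 1 - alpha/2
    percentiles of the resulting distribution.
    """
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Example: uncertainty in a mean emission factor from a small sample
# (illustrative values, arbitrary units).
sample = [9.1, 10.4, 8.7, 11.2, 9.8, 10.9, 9.5, 10.1]
low, high = bootstrap_ci(sample)
```

The width of such an interval shrinks as the sample grows, which is the sense in which random sampling error, unlike variability, can be reduced by taking more measurements.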
269

MEASUREMENT, ANALYSIS, AND MODELING OF ON-ROAD VEHICLE EMISSIONS USING REMOTE SENSING

Unal, Alper 27 May 1999 (has links)
The main objectives of this research are: to develop on-road emission factor estimates for carbon monoxide (CO) and hydrocarbon (HC) emissions; to collect traffic and vehicle parameters that might be important in explaining variability in vehicle emissions; and to develop an empirical traffic-based model that can predict vehicle emissions from observable traffic and vehicle parameters. Remote sensing technology was employed to collect exhaust emissions data. Traffic parameters were collected using an area-wide traffic detector, MOBILIZER. During the measurements, license plates were also recorded to obtain information on vehicle parameters. Data were collected at two sites with different road grades and site geometries, over 10 days of field work in the Research Triangle area of North Carolina. A total of 11,830 triggered measurement attempts were recorded. After post-processing, 7,056 emissions records were retained in the database as valid measurements. A database was then developed by combining these records with the traffic and license plate-derived vehicle parameters. Exploratory analysis was conducted to find variables that are important in explaining the variability of the emission estimates. Statistical methods were used to compare the mean emission estimates for different sub-populations. For example, multi-comparison analysis was conducted to compare the mean emission estimates from vehicles of different model years. This analysis showed that the mean emissions from older vehicles were statistically different from the mean emissions of recent model year vehicles. One contribution of the research was the development of an empirical traffic-based emission estimation model. For this purpose, data collected during the study were used to develop a novel model that combines the Hierarchical Tree-Based Regression method with Ordinary Least Squares regression.
The key findings from this research include: (1) the measured mean CO emission estimate for the Research Triangle Park area of North Carolina is 340 grams/gallon, whereas the mean HC emission estimate is 47 grams/gallon; (2) inter-vehicle variability in vehicle emissions can be as high as two orders of magnitude; (3) intra-vehicle variability is low compared to inter-vehicle variability; (4) some vehicle variables, such as vehicle model year and vehicle type, are important factors in explaining the inter-vehicle variability in emission estimates; and (5) the emission estimation model developed in this research can be applied to estimate the emissions from on-road vehicles.
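The inter- versus intra-vehicle comparison in findings (2) and (3) can be sketched as follows; the grams-per-gallon readings are invented for illustration:

```python
from statistics import mean, stdev

def variability_summary(readings_by_vehicle):
    """Separate between-vehicle and within-vehicle emissions variability.

    readings_by_vehicle: {vehicle_id: [grams-per-gallon readings]},
    with at least two readings per vehicle.
    Returns per-vehicle means, the ratio of the highest to the lowest
    per-vehicle mean (inter-vehicle spread), and the per-vehicle
    coefficient of variation (intra-vehicle spread).
    """
    per_vehicle_means = {v: mean(r) for v, r in readings_by_vehicle.items()}
    inter = max(per_vehicle_means.values()) / min(per_vehicle_means.values())
    intra = {v: stdev(r) / mean(r) for v, r in readings_by_vehicle.items()}
    return per_vehicle_means, inter, intra

# Two hypothetical vehicles whose means differ by two orders of magnitude
# while each vehicle's own readings stay within about 10% of its mean.
means, inter, intra = variability_summary({
    "older": [900.0, 1100.0],
    "newer": [9.0, 11.0],
})
```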
270

A Quantitative Assessment of Air Pollutant Releases and Costs Associated with Increased Recycling in Urban and Rural Settings

Kusa, Jonathon Joseph 28 June 1999 (has links)
Using a model that calculates the life cycle inventory of solid waste management alternatives, this study quantifies the cost effectiveness and marginal damage of several solid waste management strategies that involve recycling. Although the findings of this study are not valid for any specific city, they are intended to provide decision-makers with a template upon which to base future case studies. The air emissions tracked in this study include carbon dioxide from fossil and biomass sources (CO2), nitrogen oxides (NOx), and sulfur oxides (SOx). The research was conducted in two parts. First, the maximum potential tons avoided and the marginal avoidance cost resulting from expanding recycling programs in two settings, an urban and a rural area, were compared to emission control costs at a hypothetical coal-fired power plant. Second, the marginal damage associated with each recycling program expansion was calculated using published marginal damage functions. The study's findings indicate that although solid waste management (SWM) strategy upgrades are not as cost effective as additional coal-fired power plant controls for reducing the specified pollutants, marginal benefits accrue from upgrading most SWM strategies to include drop-off recycling of waste material, because its collection costs are relatively low.
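The marginal avoidance cost comparison in the first part of the study reduces to a simple ratio; the dollar and tonnage figures below are invented for illustration:

```python
def marginal_avoidance_cost(base_cost, alt_cost, base_tons_emitted, alt_tons_emitted):
    """Dollars spent per ton of pollutant avoided by switching strategies.

    Compares a baseline SWM strategy to an alternative (e.g., one with
    expanded recycling); raises ValueError if the alternative avoids nothing.
    """
    tons_avoided = base_tons_emitted - alt_tons_emitted
    if tons_avoided <= 0:
        raise ValueError("alternative strategy avoids no emissions")
    return (alt_cost - base_cost) / tons_avoided

# Illustrative: a recycling upgrade costs $40,000 more per year and cuts
# annual CO2-equivalent emissions from 500 to 420 tons.
cost_per_ton = marginal_avoidance_cost(100_000.0, 140_000.0, 500.0, 420.0)
```

Comparing this ratio against the cost per ton of additional power plant controls is what allows the study to rank the two abatement options on a common basis.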
