About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
521

An energy balance analysis of the climate sensitivity to variations in the rate of upwelling in the world oceans

January 1989 (has links)
The climate system of the Earth has been under investigation for many years, and the greenhouse effect has introduced a sense of urgency into the effort. The globally averaged temperature of the Earth undergoes what are commonly referred to as natural fluctuations in the climate signal. One effort of climate modellers is to isolate the responses to particular climate forcings in order to better understand each effect. Energy balance climate models (EBMs) have been one of the major tools in this respect. Studies of the response of the environment to the greenhouse effect predict a warming trend. After experiencing such a trend in the early 1900s, however, the globally averaged temperature of the Earth began to decrease in the 1940s and continued this trend for approximately 20 years before resuming its increase. It will be shown that a reduction of approximately 10% in the upwelling rate in the oceans could produce a decrease in the globally averaged temperature sufficient to explain this departure from the expected trend. The analysis of paleoclimatic indicators has produced strong evidence that the orbital forcing with periods of approximately 21,000, 41,000 and 93,000 years predicted by the Milankovitch theory is the primary cause of the glacial cycles known to have occurred on the Earth. However, there is a dynamic interaction between the environment and the ice caps that is not completely understood at this time. The paleoclimatic indicators available for the last deglaciation are abundant and well preserved (relative to the evidence of previous glacial periods), and analysis of the evidence indicates that during the most recent deglaciation a pulsation in the polar front occurred on such a small time scale that Milankovitch forcing is ruled out as a possible cause.
It will be shown that an abrupt shutdown in the deep-water formation process which feeds the upwelling in the oceans could produce an influence of appropriate magnitude and time scale to be the source of the dynamic interaction responsible for this abrupt climatic event. The dimension reduction used in the formulation of lower-order EBMs will be illustrated through the development of the equations, pointing out the assumptions that must be made when developing one- and two-dimensional models. One-, two- and three-dimensional energy balance models will be analyzed, and the results of climate sensitivity to upwelling variations will be presented graphically for each case.
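As a point of reference for the model hierarchy described above, the zero-dimensional energy balance model reduces the whole climate to one globally averaged temperature at which absorbed solar radiation balances emitted infrared. The sketch below is a minimal illustration with textbook parameter values, not the dissertation's formulation; in particular, the effective emissivity `EPS` is a crude stand-in for greenhouse absorption, and there is no ocean or upwelling term.

```python
# Zero-dimensional energy balance: C dT/dt = S0*(1 - albedo)/4 - eps*sigma*T^4.
# Illustrative parameter values only.
S0 = 1361.0        # solar constant, W/m^2
ALBEDO = 0.30      # planetary albedo
EPS = 0.61         # effective emissivity (crude greenhouse parameterization)
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temperature(s0=S0, albedo=ALBEDO, eps=EPS):
    """Temperature at which absorbed shortwave equals emitted longwave."""
    return ((s0 * (1.0 - albedo) / 4.0) / (eps * SIGMA)) ** 0.25

T_eq = equilibrium_temperature()
print(f"Equilibrium temperature: {T_eq:.1f} K")  # ~288 K with these parameters
```

Lower-order EBMs of this kind gain latitude bands (one-dimensional) and then longitude (two-dimensional) by averaging the energy balance over fewer coordinates, which is the dimension-reduction process the abstract refers to.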
522

An investigation of natural climate variability, sensitivity, and poleward flux using the COADS data set

January 1994 (has links)
Upwelling-diffusion climate models have shown that radiative forcing changes in the ocean surface temperature penetrate only very slowly into the intermediate ocean, whereas changes in deep water formation and basin-scale upwelling can affect the water temperatures at intermediate depths substantially and quickly. Hence, both mechanisms could be involved in the observed warming of the oceans at intermediate depths. Seeing how the heat flux at the ocean-air interface has varied through time would give us an idea of the degree to which it accounts for the rise in intermediate water temperatures and hence the lack of a marked greenhouse warming signal in the temperature record. This study also looks at the overall poleward heat transport in the oceans as well as in individual ocean basins, and its variation over the years. To achieve these goals, the heat flux at the ocean-air interface was calculated for the 1946-1991 period using the Comprehensive Ocean Atmosphere Data Set (COADS). The heat flux was broken down into four components (shortwave flux, longwave flux, latent heat, and sensible heat), and the individual components were calculated by the bulk parameterization method. The magnitudes of the individual heat flux calculations were found to depend critically on the parameterization scheme adopted. There was, however, no difference in the temporal variation or spatial pattern of the individual flux components due to the parameterization scheme adopted. The net heat flux values, in turn, depended on the choice of parameterization schemes and therefore have a high uncertainty in comparison to the greenhouse radiative forcing signal that is expected to be hiding in the ocean. There seems to have been a period of high heat flux into the ocean that was tapering off at the beginning of the record being analyzed. From the early 1960s the net heat flux seems to have increased until about 1980 and resumed its decreasing trend since then.
One of the three net flux calculations carried out seems to be close to a zero global average for much of the period being analyzed, and is therefore likely to be the real scenario. The poleward heat flux calculations show that the Pacific Ocean has higher magnitudes in the mid-latitudes than the Atlantic and Indian Oceans. There seems to be a significant variation in the poleward heat transport in the individual ocean basins over the years, with an apparent shift occurring around 1980. (Abstract shortened by UMI.)
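The bulk parameterization mentioned above estimates the turbulent flux components from routine ship observations of wind, temperature, and humidity. The following is a generic sketch: the transfer coefficients `CH` and `CE` are illustrative round numbers, and the abstract's central finding is precisely that results depend critically on which coefficient scheme is chosen.

```python
RHO_AIR = 1.2      # air density near the surface, kg/m^3
CP_AIR = 1004.0    # specific heat of air at constant pressure, J/(kg K)
LV = 2.5e6         # latent heat of vaporization of water, J/kg
CH = 1.1e-3        # bulk transfer coefficient for sensible heat (illustrative)
CE = 1.2e-3        # bulk transfer coefficient for latent heat (illustrative)

def sensible_heat_flux(wind, t_sea, t_air, ch=CH):
    """Sensible heat flux (W/m^2), positive from ocean to atmosphere."""
    return RHO_AIR * CP_AIR * ch * wind * (t_sea - t_air)

def latent_heat_flux(wind, q_sea, q_air, ce=CE):
    """Latent heat flux (W/m^2) from surface and air specific humidities (kg/kg)."""
    return RHO_AIR * LV * ce * wind * (q_sea - q_air)
```

For typical mid-latitude values (8 m/s wind, a sea-air temperature difference of 1.5 K) the sensible flux is of order tens of W/m^2, while the latent flux is usually several times larger, which is why the choice of `CE` dominates the net-flux uncertainty.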
523

On the statistical analysis of trend in tropospheric ozone levels

January 1997 (has links)
This paper is a study of methodology to investigate trends in ozone levels in urban areas. Three methods are studied: two parametric (NHPP and GPD) and one nonparametric.
The nonhomogeneous Poisson process (NHPP) approach: this method is based on the idea that the number of exceedances over a high threshold follows a Poisson distribution. The detection of trend is approached by estimating the intensity function of the process, using both parametric and nonparametric methods. A general parametric function over time for the rate of a NHPP is proposed in order to test for nonexponential patterns.
The generalized Pareto distribution (GPD) approach: here the detection of trend in ozone is approached by considering that the magnitude of the exceedances over a high threshold follows a generalized Pareto distribution.
The nonparametric statistical approach: a test for trend and a nonparametric estimator of a trend parameter were studied, and the asymptotic distribution of the estimator is provided. The nonparametric estimator is compared with the least squares estimator.
The three methods are studied empirically using a Monte Carlo method, giving some insight into how well they perform, and the use of each method is illustrated with ozone data. The results of this study show that the NHPP approach performs very well as a method to detect ozone trends. The GPD and nonparametric approaches had low power for detecting trends in the simulation experiments.
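For the GPD approach, the quantities being modeled are the excesses over a high threshold. As a hypothetical sketch (a method-of-moments fit, not necessarily the paper's estimation procedure), the two GPD parameters can be recovered from the sample mean and variance of the excesses:

```python
def gpd_moment_fit(excesses):
    """Method-of-moments fit of the generalized Pareto distribution to
    threshold excesses. Uses mean = scale/(1-shape) and
    var = scale^2 / ((1-shape)^2 (1-2*shape)); valid for shape < 1/2."""
    n = len(excesses)
    mean = sum(excesses) / n
    var = sum((x - mean) ** 2 for x in excesses) / (n - 1)
    shape = 0.5 * (1.0 - mean * mean / var)
    scale = mean * (1.0 - shape)
    return shape, scale
```

A trend analysis would then compare fitted parameters across sub-periods of the ozone record; a drift in scale or shape indicates changing exceedance magnitudes.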
524

Urbanization of mesoscale models

January 2004 (has links)
This dissertation addresses two important aspects of the urbanization of mesoscale models: the urban short-wave radiation budget and anthropogenic heat emission in urban areas. An urban canopy radiation model was developed to take into account the diurnal variation of short-wave radiation, including the effects of surface shading and multiple reflections within urban canyons. This model is able to calculate the time-dependent effective albedo as well as the daily mean energy-weighted albedo for any urban domain. Monte Carlo style simulations for four typical urban land use categories indicate that the traditional nadir-view albedo overestimates the reflected short-wave radiation from a city by 11-26%. With the Monte Carlo ensemble method, this radiation model can also provide the statistical mean urban albedo for urban mesoscale modeling. The effect of anthropogenic heating (Qf) was incorporated into the MM5 mesoscale atmospheric model, and several typical release cases of anthropogenic heat in the urban environment were considered. Among the multiple Planetary Boundary Layer (PBL) scheme options available within the MM5 modeling system, we have enabled Qf within two commonly used PBL modules, Blackadar and Gayno-Seaman, which have different vertical mixing mechanisms in the daytime convective PBL. The Blackadar scheme was modified in this study to improve its performance during the morning transition. Case study simulations for Philadelphia and Atlanta were performed to investigate the impacts of anthropogenic heating on Urban Heat Island (UHI) development. Results suggest that anthropogenic heating plays an important role in UHI formation, particularly at night and in winter. Anthropogenic heating was also found to affect nocturnal atmospheric PBL stability and the PBL structure during the morning transition.
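A simple way to see why nadir-view albedo overestimates reflected shortwave is a radiation-trapping argument: of the light reflected at each bounce inside a canyon, only the fraction escaping through the sky view is lost; the rest is re-intercepted and partially absorbed. The sketch below is a toy single-canyon model with an assumed uniform sky view factor, not the dissertation's time-dependent Monte Carlo scheme:

```python
def effective_canyon_albedo(surface_albedo, sky_view_factor):
    """Effective albedo of an urban canyon allowing an infinite series of
    inter-reflections. At each bounce a fraction `sky_view_factor` of the
    reflected light escapes to the sky; the remainder is re-intercepted
    by canyon surfaces. Toy geometry, illustrative only."""
    a, f = surface_albedo, sky_view_factor
    # Geometric series: a*f + a*(1-f)*a*f + ... = a*f / (1 - a*(1-f))
    return a * f / (1.0 - a * (1.0 - f))
```

With a surface albedo of 0.2 and a sky view factor of 0.5, the effective albedo drops to about 0.11, illustrating the direction (though not the magnitude) of the nadir-view bias the simulations quantify.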
525

Applications of tree-structured regression for regional precipitation prediction

January 2000 (has links)
This thesis presents a Tree-Structured Regression (TSR) method to relate daily precipitation to a variety of free-atmosphere variables. Historical data were used to identify distinct weather patterns associated with differing types of precipitation events. Models were developed using 67% of the data for training and the remaining data for validation. Seasonal models were built for each of four U.S. sites: New Orleans, Louisiana; San Antonio and Amarillo, Texas; and San Francisco, California. The average correlation by site between observed and simulated daily precipitation series ranged from 0.69 to 0.79 for the training set and from 0.64 to 0.79 for the validation set. Relative-humidity-related variables were found to be the dominant variables in these TSR models. Output from an NCAR Climate System Model (CSM) transient simulation of climate change was then used to drive the TSR models for predicting precipitation characteristics under climate change. A preliminary screening of the GCM output variables for the current climate, however, revealed significant problems for the New Orleans, San Antonio and Amarillo sites. Specifically, the CSM missed the annual trends in humidity for the grid cells containing these sites. CSM output for the San Francisco site was found to be much more reliable; therefore, we present future precipitation estimates only for the San Francisco site. While both the GCM and the TSR models predict very small changes in overall annual precipitation, they differ significantly from season to season.
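The core operation of tree-structured regression is finding, for each predictor, the cut point that best separates the response into two homogeneous groups. A minimal single-predictor version can be sketched as follows (illustrative; CART-style implementations add recursion over the resulting subsets, multiple predictors, and pruning):

```python
def _sse(vals):
    """Sum of squared errors about the mean."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

def best_split(xs, ys):
    """Exhaustively search one predictor for the cut point that minimizes
    the summed squared error of the two resulting leaf means -- the basic
    step a tree-structured regression repeats recursively."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best_cut, best_sse = None, float("inf")
    for k in range(1, len(xs)):
        left = [ys[order[i]] for i in range(k)]
        right = [ys[order[i]] for i in range(k, len(xs))]
        sse = _sse(left) + _sse(right)
        if sse < best_sse:
            best_cut = 0.5 * (xs[order[k - 1]] + xs[order[k]])
            best_sse = sse
    return best_cut, best_sse
```

Applied recursively, splits of this kind on humidity and other free-atmosphere variables yield the distinct weather-pattern partitions the abstract describes.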
526

A study of El Niño events along the British Columbia coast

Robert, Marie January 1994 (has links)
No description available.
527

El Niño-Southern Oscillation (ENSO) effects on hydro-ecological parameters in central Mexico

Peralta-Hernandez, Ana Rosa January 2001 (has links)
The impacts of the El Niño-Southern Oscillation (ENSO) on precipitation, reference evapotranspiration, and vegetation in a three-state region of central Mexico were investigated using daily weather data from 20 weather stations for the years 1970 through 1990, which included 5 El Niño years, 5 La Niña years, and 11 Neutral years. In addition, two years, 1997 (El Niño) and 1998 (La Niña), of 10-day NDVI composites were analyzed during the growing season (May-Oct) along with precipitation and reference evapotranspiration (ETo) over central Mexico. Regional precipitation trends were analyzed using normalized rainfall departures. The interannual variation of vegetation cover was analyzed using the NDVI on 10-day and monthly bases. The Food and Agriculture Organization (FAO) Penman-Monteith method was used to calculate ETo. The dynamics of the soil water balance in central Mexico were evaluated according to the method proposed by Thornthwaite and Mather. Analyses indicate that the driest conditions occurred within the northern part of the region and during Neutral ENSO years. Rainfall amounts during El Niño and Neutral years were not statistically different; however, La Niña years were about 30% wetter than Neutral and El Niño years (0.05 level). The correlation coefficient between NDVI and precipitation was 0.79 in 1997 and 0.52 in 1998, in June and July, respectively. A negative correlation was found between NDVI and reference evapotranspiration during the rainy months of July and August. The spatio-temporal variability of NDVI showed a statistically significant difference in NDVI between regions, but not between years. Regional soil water balance determinations indicated that conditions for crop growth were most favorable in the southern part of the region during La Niña years. In general, soil water deficits were reduced by about 50% during the growing season compared to the annual soil water deficits.
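Two of the diagnostics named above are straightforward to state precisely: normalized rainfall departures standardize each station's totals by its long-term mean and standard deviation, and the NDVI-precipitation relationship is an ordinary Pearson correlation. A minimal sketch with generic formulas (not the study's code):

```python
def normalized_departures(series):
    """Normalized departures: (value - long-term mean) / standard deviation."""
    n = len(series)
    mean = sum(series) / n
    std = (sum((x - mean) ** 2 for x in series) / (n - 1)) ** 0.5
    return [(x - mean) / std for x in series]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```

Averaging each station's departures over the region gives a regional anomaly series in which wet La Niña years stand out as positive departures.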
528

Spatial and seasonal variations along the US-Mexico border: An analysis with Landsat Thematic Mapper imagery

De Lira-Reyes, Gerardo, 1960- January 1997 (has links)
Research in global ecology has been concerned with the effect of vegetation removal in semi-arid regions, including aspects such as plant succession and desertification and their impact on global change, specifically global warming. In addition, conditions along international borders often present as discontinuities in vegetation and soil status. To better document these discontinuities in a semi-arid region, a multi-temporal study along the U.S.-Mexico border was conducted with a series of six Landsat Thematic Mapper (TM) images acquired over the 1992 growing season. Spatial and temporal variations across the border were analyzed with reflectance data. Spatial data were obtained from three sampling areas of different sizes: the Parker Canyon grassland; the San Rafael Valley, a grassland combined with riparian areas and croplands; and the regional area along the Arizona-Sonora border, including valleys and mountains and diverse vegetation communities and soil conditions. These areas covered about 106 ha, 5,800 ha, and 738,000 ha, respectively, on each side of the border. Temporal data were obtained from the six TM images, which were acquired on days of year 162, 178, 194, 274, 306, and 322. Four remote sensing applications were considered for comparison studies on both sides of the border: (a) band comparisons, (b) albedo, derived from the discrete sensor band information, (c) vegetation indices, and (d) application of a linear mixing model. When comparing both sides of the border, significant differences were observed with most of the remote sensing techniques applied at the Parker Canyon area. Larger differences were found during the wet season with all of the applied techniques except albedo. The red band and albedo were the most important discriminants during the dry season.
At the intermediate size, the San Rafael Valley area, U.S.-Mexico differences followed the same pattern as Parker Canyon, but these differences were statistically insignificant. At the regional scale, no differences were observed between the U.S. and Mexican sides. The effect of pixel aggregation using the different remote sensing techniques and ground data from 1995 field campaigns was also analyzed.
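The linear mixing model in item (d) treats each pixel's reflectance spectrum as a weighted sum of endmember spectra. For two endmembers with fractions summing to one, the least-squares vegetation fraction has a closed form; the sketch below uses hypothetical two-band endmember reflectances, not values from the study:

```python
def unmix_two_endmembers(pixel, veg, soil):
    """Least-squares fraction of the vegetation endmember in a pixel under a
    two-endmember linear mixing model (fractions constrained to sum to 1).
    Each argument is a reflectance vector over the same sensor bands."""
    d = [v - s for v, s in zip(veg, soil)]             # veg - soil direction
    num = sum((p - s) * dv for p, s, dv in zip(pixel, soil, d))
    den = sum(dv * dv for dv in d)
    return num / den
```

A pixel lying exactly halfway between the soil and vegetation spectra unmixes to a fraction of 0.5; full-scene unmixing solves the same least-squares problem with more endmembers and a non-negativity constraint.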
529

Advanced signal processing techniques for the analysis of solar radiometer data in the presence of temporally varying aerosol optical depths

Erxleben, Wayne Henry, 1963- January 1998 (has links)
Solar radiometers, which are used for remote sensing of atmospheric aerosols and absorbing gases, have traditionally been calibrated by the Langley method. Temporally variable conditions, however, can significantly bias the zero-airmass intercepts obtained by this method. In this dissertation, a number of new signal processing techniques are developed to better characterize aerosol variability and use it to obtain improved intercepts under a broad range of conditions. The techniques include (1) an extension of Forgan's method, using correlation between optical depths at different wavelengths to model temporal variations; (2) spectral/fractal analysis and filtering to identify systematic atmospheric variations and distinguish them from noise; and (3) error correction using correlation between results from different data sets. These techniques, along with some preliminary adjustments and an algorithm for estimating ozone content, are incorporated into an iterative processing scheme that both calibrates the instrument and provides improved estimates of each optically significant atmospheric constituent. Finally, the characterization of aerosol variability is further enhanced by analyzing data taken with a customized radiometer that measures diffuse skylight as well as direct sunlight.
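The Langley method referenced above exploits the Beer-Lambert law: if the optical depth tau is constant over a morning, ln V is linear in airmass m, and extrapolating the regression to m = 0 gives the zero-airmass intercept V0. A minimal least-squares version follows; the dissertation's techniques exist precisely because this constancy assumption often fails, which biases the intercept.

```python
import math

def langley_intercept(airmass, voltage):
    """Ordinary-least-squares Langley regression: ln V = ln V0 - tau * m.
    Returns (V0, tau). Assumes tau is constant over the observation series."""
    y = [math.log(v) for v in voltage]
    n = len(airmass)
    mx = sum(airmass) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(airmass, y))
    sxx = sum((a - mx) ** 2 for a in airmass)
    slope = sxy / sxx
    return math.exp(my - slope * mx), -slope
```

On synthetic data generated with a fixed tau the fit recovers V0 and tau exactly; on real mornings with drifting aerosol loading, the recovered intercept absorbs the drift, which is the bias the new signal processing techniques are designed to remove.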
530

Sodium laser guide star projection for adaptive optics

Jacobsen, Bruce Paul, 1964- January 1997 (has links)
In order to increase sky coverage, adaptive optics (AO) systems for large telescopes will require laser systems to provide artificial reference beacons. The most prominent method for creating an artificial beacon is to project laser light tuned to the 589 nm D2 line of sodium onto the mesospheric sodium atoms at an altitude of 90 km. When correcting with AO, the best wavefront measurements are obtained when the image of the sodium beacon is as bright and sharp as possible. Blurring occurs due to spot elongation, as a result of sub-aperture displacement from the projector axis, and from diffraction and seeing effects on the projected beam. Mounting the projector in the center of the telescope minimizes the effect of elongation. Simulations show that matching the beam waist to ∼2 times the atmospheric turbulence parameter r₀ minimizes the beacon size. For r₀ = 15 cm and a 48 cm projector, calculations show the optimum projected waist is 29 cm. A prototype projector has been built and operated. Recent experiments have shown that this projector is capable of producing 0.75 arcsec beacons under good seeing. In addition, spot elongation of 0.5 arcsec was observed, corresponding to a sodium layer thickness of 10 km. The first experimental evidence for optical pumping in the mesospheric layer was obtained. It shows a non-thermal profile for the sodium hyperfine structure (a 3.5:1 line ratio as opposed to 5:3) when projecting circularly polarized light. This profile indicates that the maximum return per watt is obtained by pumping the F = 2 level with a narrow bandwidth rather than pumping both F = 2 and F = 1 with a broad bandwidth. In addition, the evidence shows a 30% increase in beacon brightness when pumping the sodium layer with circularly polarized light rather than linear. A projector for the 6.5 m MMT conversion has been designed based on experience gained with the prototype.
Analysis of the Strehl reduction due to wavefront reconstruction error shows a reduction in Strehl of < 1% for the optimal operating parameters at the MMT. This is less than the fundamental limit of 0.79 for focus anisoplanatism.
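The spot elongation mentioned above follows from simple geometry: a sub-aperture displaced from the projector axis views the near and far edges of the finite-thickness sodium layer at slightly different angles. A small-angle sketch (illustrative offset value; not the dissertation's analysis):

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def spot_elongation_arcsec(subap_offset_m, layer_altitude_m, layer_thickness_m):
    """Apparent elongation of a sodium beacon seen from a sub-aperture offset
    from a center-mounted projector: the near and far edges of the layer
    subtend slightly different angles from the displaced viewpoint."""
    near = layer_altitude_m - layer_thickness_m / 2.0
    far = layer_altitude_m + layer_thickness_m / 2.0
    elongation_rad = subap_offset_m * (1.0 / near - 1.0 / far)
    return elongation_rad * RAD_TO_ARCSEC
```

For a hypothetical 3.25 m offset (the edge of a 6.5 m aperture with a center-mounted projector), a 90 km layer that is 10 km thick elongates the spot by roughly 0.8 arcsec, which is why center mounting, minimizing the maximum offset, keeps elongation small.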
