About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

MONITORING AND ANALYSIS OF EROSION AND DEPOSITION IN THE DESERT KNOLLS WASH

Lamech, Samson Rajan 01 December 2015
The goal of this project was to monitor and measure ongoing changes in the geomorphology of one reach of the Desert Knolls Wash (DKW), an unstable ephemeral stream channel in Apple Valley, California. The DKW flows into the Mojave River just upstream of the Upper Mojave Narrows, a historic site that has been the focus of recorded human activity in the region since 1776. Two surveyed cross-sections were established and measured at three points in time between November 2012 and November 2014, with the intention of re-measuring them after significant flows. However, owing to the persistent drought in the area, no significant changes were observed. Aerial photos from 1938 to 2005 and historic photos from 1919 covering the DKW were studied to document the increase in urban density. The project has established baseline field measurements to document the magnitude and timing of the ongoing channel changes and to predict what will happen over the next two decades if measures are not taken to stabilize the channel permanently.
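The underlying field method, repeat surveys of monumented cross-sections, reduces to a simple computation: integrate the area between a fixed datum and the surveyed bed profile, then difference two surveys. A minimal Python sketch with hypothetical station and elevation data (none of it from the thesis):

def cross_section_area(stations, elevations, datum):
    """Area between a horizontal datum and the surveyed bed profile (trapezoidal rule)."""
    area = 0.0
    for i in range(len(stations) - 1):
        dx = stations[i + 1] - stations[i]
        d_left = datum - elevations[i]
        d_right = datum - elevations[i + 1]
        area += 0.5 * (d_left + d_right) * dx
    return area

# Hypothetical surveys of one monumented cross-section (stations and elevations in m).
stations = [0.0, 2.0, 4.0, 6.0, 8.0]
nov_2012 = [100.0, 99.2, 98.9, 99.3, 100.0]
nov_2014 = [100.0, 99.1, 98.7, 99.2, 100.0]

change = (cross_section_area(stations, nov_2014, datum=100.0)
          - cross_section_area(stations, nov_2012, datum=100.0))
print(f"net change: {change:+.2f} m^2 (positive = net erosion)")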
52

A multiscale investigation of the role of variability in cross-sectional properties and side tributaries on flood routing

Barr, Jared Wendell 01 July 2012
A multi-scale Monte Carlo simulation was performed on nine streams of increasing Horton order to investigate the role that variability in hydraulic geometry and resistance plays in modifying a flood hydrograph. This study attempts to determine whether the actual cross-sections along a stream reach can be replaced with a prismatic channel that has the mean cross-sectional properties. The primary finding of this work is that the flood routing model becomes less sensitive to variability in the channel geometry as the Horton order of the stream increases. It was also established that even though smaller streams are more sensitive to variability in hydraulic geometry and resistance, replacing the cross-sections along the channel with a characteristic reach-averaged cross-section is still a suitable approximation. Finally, a case study applying this methodology to a natural river is performed, with promising results.
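The substitution the study evaluates can be illustrated with a toy Monte Carlo (a sketch of the idea only, not the author's routing model): compare Manning conveyance computed from randomly varying rectangular sections against a single prismatic section built from the reach-mean properties.

import random

def conveyance(width, depth, n):
    # Manning conveyance K = (1/n) * A * R^(2/3); discharge follows as Q = K * sqrt(S).
    area = width * depth
    hydraulic_radius = area / (width + 2.0 * depth)
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0)

random.seed(1)
depth = 1.5  # flow depth held fixed for the comparison
# 500 hypothetical cross-sections: (width, Manning roughness n)
sections = [(random.gauss(20.0, 3.0), random.gauss(0.035, 0.004))
            for _ in range(500)]

mean_width = sum(w for w, _ in sections) / len(sections)
mean_n = sum(n for _, n in sections) / len(sections)

k_varying = sum(conveyance(w, depth, n) for w, n in sections) / len(sections)
k_prismatic = conveyance(mean_width, depth, mean_n)
print(f"mean conveyance over varying sections: {k_varying:.0f}")
print(f"conveyance of prismatic mean section:  {k_prismatic:.0f}")

Because conveyance is nonlinear in width and roughness, the two numbers differ; the size of that gap, as a function of stream order, is the kind of error the thesis quantifies.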
53

Resistance of Polygonal Cross Sections of Lattice Wind Tower

Jia, Bicen January 2017
Wind energy is one of the most efficient renewable energies. The most common wind towers are tubular and lattice wind towers. The components of a lattice tower are easier to transport, especially to inland areas, and it is easier to build a taller lattice tower there in order to achieve more efficient energy conversion. However, most of the cross-sections used in lattice towers are tubular. This thesis presents a parametric study of polygonal cross-sections for lattice towers. It consists of numerical analysis based on the finite element method (ABAQUS) and analysis based on EN 1993-1-3. The objective of this thesis is to find regular patterns in the parametric influences on a polygonal cross-section and to compare them against calculations based on EN 1993-1-3, and also to find regular patterns in the parametric influences on the stiffness of the bolts on the lips.
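For the flat faces of a polygonal section, the EN 1993-1-3 route leads to the effective-width rules of EN 1993-1-5. A sketch of that check for an internal element in uniform compression (the plate dimensions and steel grade below are illustrative assumptions, not values from the thesis):

import math

def effective_width(b, t, fy, k_sigma=4.0, psi=1.0):
    # Internal compression element per EN 1993-1-5, clause 4.4.
    eps = math.sqrt(235.0 / fy)
    lam_p = (b / t) / (28.4 * eps * math.sqrt(k_sigma))  # plate slenderness
    if lam_p <= 0.673:
        rho = 1.0  # fully effective
    else:
        rho = (lam_p - 0.055 * (3.0 + psi)) / lam_p ** 2  # reduction factor
    return rho * b

# One 200 mm x 4 mm face of a hypothetical polygonal leg in S355 steel.
print(f"effective width: {effective_width(200.0, 4.0, 355.0):.1f} mm of 200.0 mm")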
54

Tube bending with axial pull and internal pressure

Agarwal, Rohit 30 September 2004
Tube bending is a widely used manufacturing process in the aerospace, automotive, and other industries. During tube bending, considerable in-plane distortion and thickness variation occur. The thickness increases at the intrados (the surface of the tube in contact with the die) and decreases at the extrados (the outer surface of the tube). In some cases, when the bend die radius is small, wrinkling occurs at the intrados. In industry, a mandrel is used to eliminate wrinkling and reduce distortion. However, in the case of a tight bend die radius, use of a mandrel should be avoided, as bending with the mandrel increases the thinning of the wall at the extrados, which is undesirable in the manufacturing operation. The present research focuses on additional loadings, such as axial force and internal pressure, which can be used to achieve better shape control and thickness distribution in the tube. Based on plasticity theories, an analytical model is developed to predict the cross-section distortion and thickness change of tubes under various loading conditions. Results from both the finite element analysis (FEA) and the analytical model indicated that the increase in thickness at the intrados was nearly the same for bending with internal pressure as for bending with combined axial pull and internal pressure. In the case of bending with the combination of axial pull and internal pressure, however, there was a significant reduction of thickness at the extrados. A parametric study was conducted for the case of bending with combined internal pressure and axial pull, and it showed that, with proper selection of the pressure and axial pull, wrinkling can be eliminated, the thickness distribution around the tube can be optimized, and the cross-section distortion of the tube can be reduced. Predictions of the analytical model are in good agreement with finite element simulations and published experimental results. The model can be used to evaluate tooling and process design in tube bending.
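As a rough orientation (a first-order kinematic estimate, not the plasticity model developed in the thesis), the thickness change can be gauged from the bending strain with an assumed even split of the incompressibility condition between the thickness and circumferential directions:

def thickness_change(t0, tube_radius, bend_radius):
    e = tube_radius / bend_radius  # longitudinal bending strain at the outer fiber
    # Assumption: volume constancy with the transverse strain split evenly,
    # so the thickness strain is roughly -e/2 (extrados) and +e/2 (intrados).
    return t0 * (1.0 - 0.5 * e), t0 * (1.0 + 0.5 * e)

# Illustrative numbers: 50 mm tube (25 mm radius), 2 mm wall, 100 mm bend radius.
t_ext, t_int = thickness_change(t0=2.0, tube_radius=25.0, bend_radius=100.0)
print(f"extrados: {t_ext:.2f} mm, intrados: {t_int:.2f} mm")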
55

Essays in Dynamic Macroeconometrics

Bańbura, Marta 26 June 2009
The thesis contains four essays covering topics in the field of macroeconomic forecasting.

The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying these models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and non-synchronous data releases (sometimes referred to as a "ragged edge"). This is relevant because, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter of the thesis, entitled "A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP", is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the euro area. In particular, we are interested in the role of "soft" and "hard" data in the GDP forecast and how it is related to their timeliness. The soft data include surveys and financial indicators and reflect market expectations; they are usually promptly available. In contrast, the hard indicators of real activity measure certain components of GDP directly (e.g. industrial production) and are published with a significant delay. We propose several measures to assess the role of individual series or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information for the GDP forecasts beyond the monthly real activity measures, once their timeliness is properly accounted for.

The second chapter, entitled "Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data", is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a "ragged edge" but can also include, for example, mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows restrictions to be imposed on the parameters. Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) and Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of a large cross-section, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to adapt the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision, and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect on the forecast of quarterly variables and of short-history monthly series like the Purchasing Managers' surveys.

The third chapter is entitled "Large Bayesian VARs" and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecasting considerations argued above, the size of the information set can also be relevant for structural analysis; see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing model size one should shrink more to avoid overfitting, but that when data are collinear one is still able to extract the relevant sample information. We apply this principle to VARs. We compare the large model with smaller systems in terms of forecasting performance and in a structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.

The fourth chapter, entitled "Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales", proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong; see e.g. Lenza (2006) or Fischer, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) and Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. In this chapter, it is investigated empirically which frequency bands, and for which variables, are the most relevant for the out-of-sample forecast of inflation when information from prices, money and real activity is considered. To extract the different frequency components of a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of a series into different frequency bands. Its application to multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
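The shrinkage principle of the third chapter can be miniaturized in a few lines (a sketch of the idea on simulated data; the chapter's actual Litterman priors and VAR setup are richer): here the prior is mimicked by a ridge penalty that grows with the number of variables in the model.

import numpy as np

rng = np.random.default_rng(0)

def shrinkage_fit(X, y):
    # Tightness increases with model size, following the De Mol, Giannone
    # and Reichlin (2008) recommendation: shrink more as the panel grows.
    lam = 0.1 * X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for n_vars in (5, 20, 100):
    X = rng.standard_normal((200, n_vars))
    y = 0.5 * X[:, 0] + rng.standard_normal(200)  # one truly relevant predictor
    beta = shrinkage_fit(X, y)
    print(f"{n_vars:3d} vars: coefficient on the relevant predictor = {beta[0]:.3f}")

The relevant signal survives even as the penalty grows with the panel, which is the behaviour the chapter exploits for 100-variable systems.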
56

On a mathematical model of a bar with variable rectangular cross-section

Jaiani, George January 1998
Generalizing an idea of I. Vekua [1], who, in order to construct a theory of plates and shells, expands the fields of displacements, strains and stresses of the three-dimensional theory of linear elasticity into orthogonal Fourier series in Legendre polynomials with respect to the thickness variable and then retains only the first N + 1 (N = 0, 1, ...) terms, in the bar model under consideration all of the above quantities are expanded into orthogonal double Fourier series in Legendre polynomials with respect to the variables along the thickness and the width of the bar, and the first (N_3 + 1)(N_2 + 1) (N_3, N_2 = 0, 1, ...) terms are retained. This case is called the (N_3, N_2) approximation. Both in the general (N_3, N_2) case and in the particular (0,0) and (1,0) cases of approximation, the well-posedness of the initial and boundary value problems and the existence and uniqueness of their solutions have been investigated. The cases in which the variable cross-section degenerates into segments of a straight line, or into points, have also been considered. Such bars are called cusped bars (see also [2]).
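In schematic form (a sketch of the notation only; the exact normalizing weights of the Fourier-Legendre coefficients are omitted), the (N_3, N_2) approximation replaces each three-dimensional field u by

\[
u(x_1, x_2, x_3) \;\approx\; \sum_{m=0}^{N_3} \sum_{n=0}^{N_2} u_{mn}(x_1)\, P_m\!\left(\frac{2x_3}{h(x_1)}\right) P_n\!\left(\frac{2x_2}{b(x_1)}\right),
\]

where the P_k are Legendre polynomials, h and b are the variable thickness and width of the bar, and the (N_3 + 1)(N_2 + 1) coefficient functions u_{mn} depend only on the axial variable x_1; cusped bars correspond to h or b vanishing.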
57

Automated Enrichment of Single-Walled Carbon Nanotubes with Optical Studies of Enriched Samples

Canning, Griffin 13 May 2013
The design and performance of an instrument are presented whose purpose is the extraction of samples highly enriched in one species of single-walled carbon nanotubes from density gradient ultracentrifugation. This instrument extracts high-purity samples, which are characterized by various optical studies. The samples are found to be enriched in just a few species of nanotubes, with the major limitation on enrichment being the separation rather than the extraction. The samples are then used in optical and microscopic studies which attempt to determine the absorption coefficient of the first (S1) transition of the (6,5) species of nanotube. Initial experiments give a value of 9.2 ± 2.6 cm^2 (C atom)^-1. Future work is proposed to improve upon the experiment in an attempt to reduce possible errors.
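Extracting a per-atom absorption coefficient from such optical studies follows Beer-Lambert bookkeeping. A sketch with placeholder numbers (assumed values, not the thesis's data):

import math

def sigma_per_atom(absorbance, atoms_per_cm3, path_cm):
    # A = sigma * N * L / ln(10)  =>  sigma = A * ln(10) / (N * L)
    return absorbance * math.log(10.0) / (atoms_per_cm3 * path_cm)

# Placeholders: measured absorbance, estimated carbon-atom density, 1 cm cell.
print(f"sigma ~ {sigma_per_atom(0.15, 2.0e16, 1.0):.2e} cm^2 per C atom")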
58

Measurement of the Top Quark Pair Production Cross-section in the Dilepton Channel using Lepton plus Track Selection and Identified b-jets

Spreitzer, Teresa 01 April 2010
Using 1.0 fb^{-1} of data collected by the Collider Detector at Fermilab (CDF) during Run II of the Fermilab Tevatron, we measure the top-antitop production cross-section in events with two leptons, significant missing transverse energy, and at least two jets, at least one of which is identified as a b-jet. As the Run II dataset grows, more stringent tests of Standard Model predictions for the top quark sector are becoming possible. The dilepton channel, where both top quarks decay via t -> W b -> l nu b, is of particular interest due to its high purity. Use of an isolated track as the second lepton significantly increases the dilepton acceptance, at the price of some increase in background, particularly from W + jets events where one of the jets is identified as a lepton. To control the increase in background, we add to the event selection the requirement that at least one of the jets be identified as a b-jet, reducing the background contribution from all sources. Assuming a branching ratio of BR(W -> l nu) = 10.8% and a top mass of m_top = 175 GeV/c^{2}, the measured cross-section is sigma = (10.5 +/- 1.8 stat. +/- 0.8 syst. +/- 0.6 lumi.) pb.
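Behind the quoted result sits the standard counting-experiment formula, sigma = (N_observed - N_background) / (acceptance * integrated luminosity). A sketch with illustrative inputs (not the analysis's actual event counts), chosen so the arithmetic lands near the quoted central value:

def cross_section_pb(n_obs, n_bkg, acceptance, lumi_pb):
    # sigma = (N_obs - N_bkg) / (A * L); acceptance folds in efficiencies and BR.
    return (n_obs - n_bkg) / (acceptance * lumi_pb)

# 1.0 fb^-1 = 1000 pb^-1; the counts and acceptance below are placeholders.
print(f"sigma ~ {cross_section_pb(150, 45, 0.01, 1000.0):.1f} pb")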
60

Measurement of the exclusive (nu_mu p -> mu^- p pi^+) and inclusive (nu_mu N -> mu^- N' pi^+) single pion nu interaction cross section in a carbon target using the SciBar detector at the K2K experiment

Rodríguez Marrero, Ana Yaiza 18 May 2007
No description available.
