171

Formgivningsförslag av ungdomsavdelningen på Eskilstuna stadsbibliotek [Design proposal for the youth department at Eskilstuna City Library]

Gustafsson, Karin January 2008 (has links)
No description available.
172

Experimental Evaluation of Full Scale I-Section Reinforced Concrete Beams with CFRP-Shear Reinforcement

Aquino, Christian 01 January 2008 (has links)
Fiber reinforced polymer (FRP) systems have shown great promise in strengthening reinforced concrete structures. These systems are a viable option for use as external reinforcement because of their light weight, resistance to corrosion, and high strength. Externally bonded in the form of sheets or laminates, they have been shown to increase the flexural and, more recently, the shear capacity of members. Major concerns with these systems are bond strength and premature peeling, especially when re-entrant corners are present. The objectives of this study were to verify the effectiveness of carbon FRP (CFRP) laminates on an I-section beam with no anchorage and to determine the feasibility of using an anchorage system to prevent premature debonding. The two types of anchorage systems used were a horizontal CFRP laminate and glass FRP (GFRP) spikes. The tests verified that the use of anchorage on I-shaped beams can prevent premature debonding of the laminate and allows the specimens to achieve a higher shear capacity. Recommendations for future research on such systems are also presented.
173

Tube bending with axial pull and internal pressure

Agarwal, Rohit 30 September 2004 (has links)
Tube bending is a widely used manufacturing process in the aerospace, automotive, and other industries. During tube bending, considerable in-plane distortion and thickness variation occur: the thickness increases at the intrados (the surface of the tube in contact with the bend die) and decreases at the extrados (the outer surface of the tube). In some cases, when the bend die radius is small, wrinkling occurs at the intrados. In industry a mandrel is used to eliminate wrinkling and reduce distortion. However, in the case of a tight bend die radius, use of a mandrel should be avoided, as bending with the mandrel increases the thinning of the wall at the extrados, which is undesirable in the manufacturing operation. The present research focuses on additional loadings, such as axial force and internal pressure, which can be used to achieve better shape control and thickness distribution of the tube. Based on plasticity theories, an analytical model is developed to predict cross-section distortion and thickness change of tubes under various loading conditions. Results from both the finite element analysis (FEA) and the analytical model indicated that at the intrados the increase in thickness for bending with internal pressure and for bending with combined axial pull and internal pressure was nearly the same, but in the case of bending with the combination of axial pull and internal pressure there was a significant reduction of thickness at the extrados. A parametric study was conducted for the case of bending with combined internal pressure and axial pull; it showed that with proper selection of the pressure and axial pull, wrinkling can be eliminated, the thickness distribution around the tube can be optimized, and cross-section distortion of the tube can be reduced. Predictions of the analytical model are in good agreement with finite element simulations and published experimental results. The model can be used to evaluate tooling and process design in tube bending.
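As a rough first-order illustration of why the wall thins at the extrados and thickens at the intrados, the sketch below uses a simple kinematic estimate under an incompressibility assumption; it ignores the axial pull and internal pressure and is not the plasticity-based model developed in the thesis (all names and numbers are illustrative).

```python
import numpy as np

def thickness_estimate(t0, r, R, theta_deg):
    """Rough estimate of wall thickness around a bent tube.

    t0        : initial wall thickness
    r         : tube mean radius
    R         : centreline bend radius
    theta_deg : angular position on the cross-section
                (0 = extrados, 180 = intrados)

    The longitudinal strain at a material point is roughly y/R,
    where y = r*cos(theta) is its distance from the bend axis.
    Assuming incompressibility and negligible circumferential
    strain, the thickness strain is approximately -y/R, so the
    wall thins at the extrados and thickens at the intrados.
    """
    theta = np.radians(theta_deg)
    y = r * np.cos(theta)          # distance from the neutral axis
    return t0 * (1.0 - y / R)

# Example: 2 mm wall, 25 mm tube radius, 75 mm bend radius
print(thickness_estimate(2.0, 25.0, 75.0, 0))    # extrados: ~1.33 mm (thinning)
print(thickness_estimate(2.0, 25.0, 75.0, 180))  # intrados: ~2.67 mm (thickening)
```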
174

Are Government Websites Achieving Universal Accessibility?: An Analysis of State Department of Health and Human Services’ Websites

Toshiba L Burns-Johnson 25 April 1907 (has links)
Research reports that searching for health information is the fourth most popular activity on the web (Pew Internet & American Life Project, 2004). However, for disabled persons, barriers experienced when interfacing with the Internet may make healthcare websites inaccessible. This study explores the level of accessibility of healthcare websites and the relationship between accessibility and usability by determining how compliant state department of health and human services websites are with accessibility and usability guidelines. A content analysis of each state’s department of health and human services website was conducted. Results revealed that these websites are not very compliant with accessibility guidelines, are somewhat compliant with usability guidelines, and overall are not very accessible. The findings also indicate a significant, moderate relationship between accessibility and usability, which suggests that the two concepts are interconnected.
175

Essays in Dynamic Macroeconometrics

Bańbura, Marta 26 June 2009 (has links)
The thesis contains four essays covering topics in the field of macroeconomic forecasting. The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying these models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied to forecasting, structural analysis and the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and nonsynchronous data releases (sometimes referred to as a “ragged edge”). This is relevant because, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter, entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP”, is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of the euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how it is related to their timeliness. The soft data include surveys and financial indicators and reflect market expectations; they are usually promptly available. In contrast, the hard indicators on real activity measure certain components of GDP directly (e.g. industrial production) and are published with a significant delay. We propose several measures to assess the role of individual series or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for.

The second chapter, entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data”, is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge” but can also include, for example, mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied to small factor models by, e.g., Geweke (1977), Sargent and Sims (1977) and Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of a large cross-section, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to adapt the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision, and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect on the forecast of quarterly variables and short-history monthly series such as the Purchasing Managers' surveys.

The third chapter, entitled “Large Bayesian VARs”, is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations, as argued above, the size of the information set can also be relevant for structural analysis; see, e.g., Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing model size one should shrink more to avoid overfitting, but when data are collinear one is still able to extract the relevant sample information. We apply this principle to VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.

The fourth chapter, entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales”, proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong; see, e.g., Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) and Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. In this chapter, it is investigated empirically which frequency bands, and for which variables, are the most relevant for the out-of-sample forecast of inflation when the information from prices, money and real activity is considered. To extract the different frequency components from a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of a series into different frequency bands. Its application in multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
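The third chapter's shrinkage principle, tightening the prior as the number of variables grows, can be conveyed with a minimal ridge-style sketch in Python. The function name and the simple 1/n tightness rule below are illustrative assumptions, not the exact Litterman prior or the calibration used in the thesis.

```python
import numpy as np

def bvar_posterior_mean(Y, p=2, lam=0.2):
    """Minimal sketch of a Bayesian VAR(p) posterior mean under a
    Gaussian (ridge-style) shrinkage prior centred on zero.

    Y   : (T, n) array of observations
    p   : lag order
    lam : overall prior tightness; smaller lam = stronger shrinkage
    """
    T, n = Y.shape
    # Lagged regressors plus an intercept column.
    X = np.hstack([np.ones((T - p, 1))] +
                  [Y[p - k:T - k] for k in range(1, p + 1)])
    # Ridge / Gaussian-prior posterior mean of the coefficients.
    penalty = (1.0 / lam**2) * np.eye(X.shape[1])
    penalty[0, 0] = 0.0                  # leave the intercept unpenalised
    return np.linalg.solve(X.T @ X + penalty, X.T @ Y[p:])

# Illustrative tightness rule: shrink more as the system grows
# (the thesis sets the tightness in relation to the model size;
#  the 1/n rule here only conveys the idea).
T, n = 200, 20
Y = np.cumsum(np.random.randn(T, n), axis=0)   # toy persistent data
B = bvar_posterior_mean(Y, p=2, lam=1.0 / n)
print(B.shape)                                 # (1 + n*p, n) = (41, 20)
```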
176

On a mathematical model of a bar with variable rectangular cross-section

Jaiani, George January 1998 (has links)
Generalizing an idea of I. Vekua [1], who, in order to construct a theory of plates and shells, expands the fields of displacements, strains and stresses of the three-dimensional theory of linear elasticity into orthogonal Fourier series of Legendre polynomials with respect to the thickness variable and then retains only the first N + 1 terms, N = 0, 1, ..., in the bar model under consideration all of the above quantities are expanded into orthogonal double Fourier series of Legendre polynomials with respect to the variables along the thickness and the width of the bar, and the first (N₃ + 1)(N₂ + 1) terms, N₃, N₂ = 0, 1, ..., are retained. This case is called the (N₃, N₂) approximation. Both in the general (N₃, N₂) case and in the particular (0,0) and (1,0) cases of approximation, the well-posedness of initial and boundary value problems and the existence and uniqueness of their solutions are investigated. The cases when the variable cross-section degenerates into segments of a straight line, or into points, are also considered; such bars are called cusped bars (see also [2]).
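As an illustration of the (N₃, N₂) approximation, a generic field f over a cross-section bounded in width by a(x₁) ≤ x₂ ≤ b(x₁) and in thickness by c(x₁) ≤ x₃ ≤ d(x₁) might be expanded as follows; the bounds and normalization here are assumptions for illustration, and the thesis's exact notation may differ:

f(x_1,x_2,x_3) \approx \sum_{r=0}^{N_3}\sum_{s=0}^{N_2} f_{rs}(x_1)\,
  P_r\!\left(\frac{2x_3-\bigl(d(x_1)+c(x_1)\bigr)}{d(x_1)-c(x_1)}\right)
  P_s\!\left(\frac{2x_2-\bigl(b(x_1)+a(x_1)\bigr)}{b(x_1)-a(x_1)}\right),

where P_k is the Legendre polynomial of degree k and the coefficients f_{rs} depend only on the axial coordinate x₁; the cusped-bar cases correspond to the width b − a or the thickness d − c degenerating to zero.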
177

Essays on nonlinear time series modelling and hypothesis testing

Strikholm, Birgit January 2004 (has links)
There seems to be a common understanding nowadays that the economy is nonlinear. Economic theory suggests features that cannot be incorporated into linear frameworks, and over the decades a solid body of empirical evidence of nonlinearities in economic time series has been gathered. This thesis consists of four essays that have to do with various forms of nonlinear statistical inference. In the first chapter the problem of determining the number of regimes in a threshold autoregressive (TAR) model is considered. Typically, the number of regimes (or thresholds) is assumed unknown and has to be determined from the data. The solution provided in the chapter first uses the smooth transition autoregressive (STAR) model with a fixed and rapid transition to approximate the TAR model. The number of thresholds is then determined using sequential misspecification tests developed for the STAR model. The main characteristic of the proposed method is that only standard statistical inference is used, as opposed to non-standard inference or computation-intensive bootstrap-based methods. In the second chapter a similar idea is employed and the structural break model is approximated with a smoothly time-varying autoregressive model. By making the smooth changes in parameters rapid, the model is able to closely approximate the corresponding model with breaks in the parameter structure. This approximation makes the misspecification tests developed for the STR modelling framework available, and they can be used for sequentially determining the number of breaks. Again, the method is computationally simple as all tests rely on standard statistical inference. There exists literature suggesting that business cycle fluctuations affect the pattern of seasonality in macroeconomic series. A question asked in the third chapter is whether other factors, such as changes in institutions or technological change, may have this effect as well. The time-varying smooth transition autoregressive (TV-STAR) models that can incorporate both types of change are used to model the (possible) changes in seasonal patterns and shed light on the hypothesis that institutional and technological changes (proxied by time) may have a stronger effect on seasonal patterns than the business cycle. The TV-STAR testing framework is applied to nine quarterly industrial production series from the G7 countries, Finland and Sweden. These series display strong seasonal patterns and also contain business cycle fluctuations. The empirical results of the chapter suggest that seasonal patterns in these series have been changing over time and, furthermore, that business cycle fluctuations do not seem to be the main cause of this change. The last chapter of the thesis considers the possibility of testing for Granger causality in bivariate nonlinear systems when the exact form of the nonlinear relationship between variables is not known. The idea is to linearize the testing problem by approximating the nonlinear system by its Taylor expansion. The expansion is linear in parameters, and one gets around the difficulty caused by the unknown functional form of the relationship under investigation. / Diss. Stockholm: Handelshögskolan, 2004
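The first chapter's device of approximating a threshold (TAR) switch by a smooth transition with a fixed, rapid slope can be sketched as follows; the function and parameter values are illustrative, not taken from the thesis.

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Logistic STAR transition function G(s; gamma, c) in [0, 1].

    As gamma grows large, G approaches a step function at the
    threshold c, so the smooth transition AR model mimics a
    two-regime threshold AR model.
    """
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def star_two_regime(y_lag, s, phi_low, phi_high, gamma=100.0, c=0.0):
    """One-step value of a two-regime STAR(1): the AR coefficient
    blends from phi_low to phi_high as the transition variable s
    crosses the threshold c."""
    G = logistic_transition(s, gamma, c)
    return (phi_low * (1 - G) + phi_high * G) * y_lag

# With gamma = 100 the transition is nearly instantaneous,
# approximating a TAR model with threshold c = 0.
print(star_two_regime(1.0, s=-0.1, phi_low=0.2, phi_high=0.9))  # ~0.2
print(star_two_regime(1.0, s=+0.1, phi_low=0.2, phi_high=0.9))  # ~0.9
```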
178

On the smooth linear section of the Grassmannian Gr(2, n)

January 2012 (has links)
In this thesis, we study smooth linear sections of the Grassmannian Gr(2, n). Explicitly, we give a criterion for the rationality of such a linear section in terms of its codimension in the Plücker embedding in projective space. Moreover, to obtain a better understanding of the birational parametrization of these linear sections, we analyze their Hodge structures in the cases of even and odd codimension. To be more precise, we provide numerous examples which suggest certain patterns of Hodge diamonds corresponding to the even and odd cases, and derive a proof of the general pattern for codimension-3 smooth linear sections of Gr(2, n) for odd and even n. / page 29 is missing from hardcopy
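For reference, the dimension count behind the codimension statements is standard; these are general facts about Gr(2, n), not results specific to the thesis:

\dim \mathrm{Gr}(2,n) = 2(n-2), \qquad
\mathrm{Gr}(2,n) \hookrightarrow \mathbb{P}\Bigl(\textstyle\bigwedge^{2}\mathbb{C}^{n}\Bigr) \cong \mathbb{P}^{\binom{n}{2}-1},

so a linear section cut out by c general hyperplanes in the Plücker space is smooth of dimension 2(n − 2) − c.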
179

Automated Enrichment of Single-Walled Carbon Nanotubes with Optical Studies of Enriched Samples

Canning, Griffin 13 May 2013 (has links)
The design and performance of an instrument are presented whose purpose is to extract, following density gradient ultracentrifugation, samples highly enriched in one species of single-walled carbon nanotube. The instrument extracts high-purity samples, which are characterized by various optical studies. The samples are found to be enriched in just a few species of nanotubes, with the major limitation to enrichment being the separation rather than the extraction. The samples are then used in optical and microscopic studies that attempt to determine the absorption coefficient of the first optical transition (S1) of the (6,5) nanotube species. Initial experiments give a value of 9.2 ± 2.6 cm² C atom⁻¹. Future work is proposed to improve upon the experiment in an attempt to reduce possible errors.
180

Sentencing Aboriginal Offenders: A Study of Court of Appeal Decisions in Light of Section 718.2 (e) of the Canadian Criminal Code

Dugas, Andrée 14 February 2013 (has links)
Section 718.2 (e)’s directive to canvass all available sanctions other than imprisonment that are reasonable in the circumstances, with particular attention to the circumstances of Aboriginal offenders, was intended to be given real force. The goal of this study was therefore to identify, through a constructivist discourse analysis of 33 court of appeal cases, what considerations may be impeding or encouraging the application of section 718.2 (e)’s directive. The study mapped trends and influences that weigh strongly on sentencing judges in the decision-making process, as well as considerations affecting the application of this provision. Prohibitive and permissive dimensions of the Gladue case related to the application of section 718.2 (e) were identified, creating competing ideals in the application of the provision. Modern Penal Rationality (MPR) underpinned many of the judges’ justifications; however, unforeseen considerations were also noted. Ultimately, MPR dominates the sentencing calculus and diminishes section 718.2 (e)’s application and its alternative/restorative potential.
