131

Using experimental loads with finite element analysis for durability predictions

Dakin, John D. January 1995 (has links)
This research work involved the prediction of the fatigue life of an automotive rear suspension twistbeam assembly fitted to a vehicle travelling over a customer-correlated durability route. This was achieved by making use of the integrated concepts of scaling and superposition of linear static finite element analysis driven by experimental load data - the so-called 'quasi-static time domain' approach. A study of the free body diagram of the twistbeam resulted in an indeterminate load set of some 24 components, with experimental data indicating that a state of static unbalance existed. Subsequent to developing a matrix-based generalised method of load cell calibration to confirm the foregoing, a modal technique was developed to partition the experimental data into a static load set, causing elastic deformations, and a rigid load set, imparting rigid body accelerations. The semi-independent characteristics of the twistbeam necessitated the coupling of large structural displacements with inertia relief. This required extensive modifications to the current techniques and led to the development and use of a three-dimensional functional response matrix in place of the conventional two-dimensional one. Recommendations concerning appropriate finite element boundary conditions were also formulated to handle these effects. Finally, the limitations of the uniaxial fatigue model were revealed under the application of a set of tools for analysing the biaxiality and mobility of the maximum absolute principal stress.
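The 'quasi-static time domain' approach scales unit-load linear static FE stress solutions by the measured load histories and superposes the results. A minimal sketch of that superposition (array shapes and values are illustrative, not from the thesis):

    import numpy as np

    # Quasi-static superposition: stress(t) = sum_i P_i(t) * sigma_i, where
    # sigma_i is the stress at a recovery point per unit load on channel i.
    def quasi_static_stress(load_history, unit_stresses):
        # load_history:  (n_steps, n_channels) measured load time histories
        # unit_stresses: (n_channels, n_points) stress per unit load, one
        #                linear static FE solution per load channel
        return load_history @ unit_stresses

    loads = np.random.randn(1000, 3)           # stand-in for measured road loads
    unit_s = np.array([[120.0, -35.0],
                       [ 48.0,  90.0],
                       [-15.0,  60.0]])        # MPa per unit load (illustrative)
    stress_hist = quasi_static_stress(loads, unit_s)  # (1000, 2) stress histories

The resulting stress time histories at each recovery point can then feed a fatigue damage calculation.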
132

Quantitative impurities effects on temperatures of tin and aluminium fixed-point cells

Petchpong, P. January 2009 (has links)
The International Temperature Scale of 1990 (ITS-90) defines the present S.I. ("System International") means of measuring temperature. The ITS-90 uses the freezing points of metals to define temperature fixed points. It also uses long-stem platinum resistance thermometers to interpolate between the fixed points from 660 °C down to 84 K (if one includes the argon triple point). Impurities are a major source of uncertainty in the fixed-point temperature (of the order of 1 mK), and a better understanding of the impurity effect is required to improve top-level metrological thermometry. Most historical experiments with impurities have worked at much higher levels of impurities - say of the order of 100 ppm - and in arrangements that are not used on a day-to-day basis in a metrology laboratory. This thesis describes the deliberate doping of tin and aluminium, each with three different impurities, and the effects of these on the temperature of the tin and aluminium liquid-solid phase transitions. The impurities, of the order of 1-30 ppm, were Co, Pb and Sb in the tin and Cu, Si and Ti in the aluminium. The tin and aluminium samples were in the form of ~0.3 kg ingots that would normally be used to realise an ITS-90 fixed point. Measurements were made using equipment normally available in a metrological thermometry laboratory, rather than using specially prepared samples. The samples were chemically analysed (by Glow Discharge Mass Spectrometry (GD-MS)) before and after the doping. Using the amount of dopants introduced, and/or the chemical analysis data, the measured temperature changes were compared with those interpolated from the standard text. The experimental undoped liquid-solid transition curves were also compared against theoretical curves (calculated from a theoretical model, MTDATA). The results obtained did not disagree with the Hansen interpolated values, though there was considerable uncertainty in some of the measurements (e.g. a factor of 2 or more) due to the measurement of small changes. Within these uncertainties, the results indicate that the Sum of Individual Estimates (SIE) method of correcting for, at least, metal impurities in otherwise high-purity metals remains valid. However, the results also showed considerable discrepancies between the initial measured and calculated temperature shifts (based on the pre-existing impurities prior to doping), suggesting that there may be impurities that are not (separately) detected by the GD-MS method. There was evidence that the thermal history of the metal phase transitions can cause considerable segregation of some impurities, particularly those likely to increase the phase transition temperature through a peritectic ("positive" impurities), and the effects of this segregation can be clearly seen in the shape of the melting curves of the tin doped with Sb. Some of the freezing curves of the aluminium doped with Ti may also show evidence of a "concave up" shape at the start of the freezing curve, as previously calculated by MTDATA, though the effect is not as pronounced. All individual phase transition measurements, made over tens of hours, were repeated at least three times and found to be reproducible, hence providing a real dataset that can be used for comparison with theoretical models still under development.
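The Sum of Individual Estimates (SIE) method corrects the observed fixed-point temperature by summing, over all detected impurities, each impurity's concentration multiplied by its liquidus slope. A sketch of that sum, with placeholder concentrations and slopes rather than measured values:

    # SIE correction: dT = sum_i c_i * m_i, where c_i is the mole fraction of
    # impurity i and m_i is its liquidus slope (K per unit mole fraction).
    impurities = {
        # name: (mole fraction, liquidus slope in K per mole fraction)
        "Pb": (2.0e-6, -220.0),   # illustrative numbers only
        "Sb": (5.0e-6, +310.0),
        "Co": (1.0e-6, -180.0),
    }

    dT = sum(c * m for c, m in impurities.values())
    print(f"SIE correction to the fixed-point temperature: {dT * 1e3:.3f} mK")

A positive slope corresponds to the "positive" (peritectic-forming) impurities mentioned above, which raise the transition temperature.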
133

In situ and modelled soil moisture determination and upscaling from point-based to field scale

Ojo, Emmanuel Rotimi January 2015 (has links)
The relevance, value and multi-dimensional application of soil moisture in many areas such as the hydrological, meteorological and agricultural sciences have increased the focus on this important part of the ecosystem. However, due to its spatial and temporal variability, accurate soil moisture determination is an ongoing challenge. In the fall of 2013 and spring of 2014, the accuracy of five soil moisture instruments was tested in heavy clay soils, and the Root Mean Squared Error (RMSE) values of the default calibration ranged from 0.027 to 0.129 m3 m-3. After calibration, the range was improved to 0.014 - 0.040 m3 m-3. The need for calibration has led to the development of generic calibration procedures such as soil texture-based calibrations. As a result of differences in soil mineralogy, especially in clay soils, the texture-based calibrations often yield very high RMSE. A novel approach that uses Cation Exchange Capacity (CEC) grouping was independently tested at three sites; of the seven calibration equations tested, the CEC-based calibration was the second best, behind the in situ derived calibration. The high cost of installing and maintaining a network of soil moisture instruments to obtain measurements at limited points has motivated the development of models that can estimate soil moisture. The Versatile Soil Moisture Budget (VSMB) is one such model and was used in this study. The comparison of the VSMB modelled output to the observed soil moisture data from a single, temporally continuous, in-field calibrated Hydra probe gave mean RMSE values of 0.052 m3 m-3 at the eight site-years in coarse-textured soils and 0.059 m3 m-3 at the six site-years in fine-textured soils. At the field scale, the representativeness of an arbitrarily placed soil moisture station was compared to the mean of 48 data samples collected across the field. The single location underestimated soil moisture at three of the four coarse-textured fields, with an average RMSE of 0.038 m3 m-3, and at only one of the four fine-textured sites monitored, with an average RMSE of 0.059 m3 m-3. / February 2017
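The instrument assessment reduces to fitting a site-specific transfer function between probe output and reference water content and comparing RMSE before and after. A minimal sketch with invented readings (the linear form is an assumption; the thesis's calibrations may differ):

    import numpy as np

    def rmse(pred, obs):
        return float(np.sqrt(np.mean((pred - obs) ** 2)))

    # Hypothetical probe readings vs. reference volumetric water content (m3 m-3)
    probe = np.array([0.30, 0.35, 0.42, 0.48, 0.55])   # default-calibration output
    ref   = np.array([0.28, 0.34, 0.40, 0.45, 0.50])   # gravimetric reference

    print("RMSE, default calibration:", rmse(probe, ref))

    # Site-specific linear recalibration: ref ~ a*probe + b, by least squares
    a, b = np.polyfit(probe, ref, 1)
    print("RMSE, after calibration:  ", rmse(a * probe + b, ref))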
134

Lessons Learned from Operating C/A-Code COTS GPS Receivers on Low-Earth Orbiting Satellites for Navigation

Wiest, Terry, Nowitzky, Thomas E., Grippando, Steven A. 11 1900 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / Since June of 1993, an experimental GPS receiver system has been orbiting the earth aboard a small, low-altitude, polar-orbiting satellite called RADCAL. The purpose of the experiment was to prove the concept of using GPS for satellite navigation. If successful, the system would also provide a backup to the satellite's primary navigation beacon. The goal: provide position and velocity data to an accuracy of three to five meters, and provide attitude data to within a degree. The configuration of the RADCAL GPS experiment precluded real-time feedback loops for navigation; the data was stored and downloaded after a designated collection period. On the ground, a lengthy process was used to yield the position and attitude data days after the collection event. The GPS receivers and ground equipment were configured in several modes; they ultimately yielded a position accuracy of five meters and an attitude accuracy of two degrees. This met the original goal, and the experiment was considered successful. However, one of the receivers failed in November 1993, and the other failed in January 1995. The GPS receivers were commercially available and not spaceflight proven; they were suspected of being vulnerable to single-event upsets and latchups, which turned out to be the cause of both failures. The interface between the GPS receivers and RADCAL's other subsystems proved to be the area which could not tolerate corrupt data, and the single-event latchup problems ultimately led to the failure of the receivers. These difficulties, as well as other lesser obstacles, provide a host of lessons learned for future satellite navigation systems.
135

Preaging techniques as a means of stabilising thermoelectric drift in nickel-chromium/nickel-aluminium thermocouples for use in an aluminium heat treating furnace

Hart, Roderick William Wenham January 1991 (has links)
Submitted in compliance with the requirements for the Master's Diploma in Technology: Electronic Engineering, Technikon Natal, 1991. / This dissertation is primarily concerned with investigating and improving the degree of accuracy and precision that may be achieved from temperature measurements made utilising nickel-chromium/nickel-aluminium (Type K) thermocouples. The practice of heat treating extruded aluminium section creates specific metallurgical properties within the section. Development of specialised aluminium alloys has necessitated the use of treatment temperatures close to the limit beyond which the alloy experiences undesirable, permanent metallurgical change. This situation has demanded urgent attention to the 'fitness for purpose', in quality assurance terms, of primary temperature sensors. The most established of these sensors, the Type K thermocouple, has known problems relating to calibration stability and drift. The substantial amount of furnace control instrumentation and cabling dedicated to measurement from Type K sensors precludes a simple conversion to an alternative sensor type. The more practical option of applying calibration correction factors to existing measuring systems is only feasible if sensor stability characteristics permit measurement traceability to be established within required uncertainty limits. / M
136

Numerical simulation of backward erosion piping in heterogeneous fields

Liang, Yue, Yeh, Tian-Chyi Jim, Wang, Yu-Li, Liu, Mingwei, Wang, Junjie, Hao, Yonghong 04 1900 (has links)
Backward erosion piping (BEP) is one of the major causes of seepage failures in levees. Seepage fields dictate BEP behaviour and are influenced by the heterogeneity of soil properties. To investigate the effects of heterogeneity on seepage failures, we develop a numerical algorithm and conduct simulations to study BEP progression in geologic media with spatially stochastic parameters. Specifically, the void ratio e, the hydraulic conductivity k, and the ratio of the particle contents r of the media are represented as stochastic variables, characterized by their means and variances, their spatial correlation structures, and the cross correlation between variables. Results of the simulations reveal that heterogeneity accelerates the development of preferential flow paths, which profoundly increases the likelihood of seepage failures. To account for unknown heterogeneity, we define the probability of seepage instability (PI) to evaluate the failure potential of a given site. Using Monte Carlo simulation (MCS), we demonstrate that the PI value is significantly influenced by the mean and variance of ln k and its spatial correlation scales, while the other parameters, such as the means and variances of e and r and their cross correlation, have minor impacts. Based on the PI analyses, we introduce a risk rating system to classify the field into different regions according to risk level. This rating system is useful for seepage failure prevention and assists decision making when BEP occurs.
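The probability of instability (PI) is estimated by generating many random realizations of the heterogeneous field and counting the fraction that fail. A bare-bones sketch of that Monte Carlo loop, with a placeholder failure criterion standing in for the paper's seepage/BEP simulation and spatial correlation omitted:

    import numpy as np

    rng = np.random.default_rng(0)

    def piping_fails(ln_k_field):
        # Placeholder stability check: in the real workflow this would run a
        # seepage/BEP simulation on the generated conductivity field.
        return ln_k_field.max() - ln_k_field.min() > 4.0

    def probability_of_instability(n_real=1000, mean_lnk=-10.0, var_lnk=1.5,
                                   n_cells=500):
        # PI = fraction of random realizations that are unstable.
        failures = 0
        for _ in range(n_real):
            ln_k = rng.normal(mean_lnk, np.sqrt(var_lnk), n_cells)
            failures += piping_fails(ln_k)
        return failures / n_real

    print("PI =", probability_of_instability())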
137

The LIBOR Market Model

Selic, Nevena 01 November 2006 (has links)
Student Number: 0003819T - MSc dissertation - School of Computational and Applied Mathematics - Faculty of Science / The over-the-counter (OTC) interest rate derivative market is large and rapidly developing. In March 2005, the Bank for International Settlements published its “Triennial Central Bank Survey”, which examined derivative market activity in 2004 (http://www.bis.org/publ/rpfx05.htm). The reported total gross market value of OTC derivatives stood at $6.4 trillion at the end of June 2004. The gross market value of interest rate derivatives comprised a massive 71.7% of the total, followed by foreign exchange derivatives (17.5%) and equity derivatives (5%). Further, the daily turnover in interest rate option trading increased from 5.9% (of the total daily turnover in the interest rate derivative market) in April 2001 to 16.7% in April 2004. This growth and success of the interest rate derivative market has resulted in the introduction of exotic interest rate products and the ongoing search for accurate and efficient pricing and hedging techniques for them. Interest rate caps and (European) swaptions form the largest and most liquid part of the interest rate option market. These vanilla instruments depend only on the level of the yield curve. The market standard for pricing them is the Black (1976) model. Caps and swaptions are typically used by traders of interest rate derivatives to gamma and vega hedge complex products. Thus an important feature of an interest rate model is not only its ability to recover an arbitrary input yield curve, but also its ability to calibrate to the implied at-the-money cap and swaption volatilities. The LIBOR market model developed out of the market's need to price and hedge exotic interest rate derivatives consistently with the Black (1976) caplet formula. The focus of this dissertation is this popular class of interest rate models. The fundamental traded assets in an interest rate model are zero-coupon bonds. The evolution of their values, assuming that the underlying movements are continuous, is driven by a finite number of Brownian motions. The traditional approach to modelling the term structure of interest rates is to postulate the evolution of the instantaneous short or forward rates. By contrast, in the LIBOR market model, the discrete forward rates are modelled directly. The additional assumption imposed is that the volatility function of the discrete forward rates is a deterministic function of time. In Chapter 2 we provide a brief overview of the history of interest rate modelling which led to the LIBOR market model. The general theory of derivative pricing is presented, followed by an exposition and derivation of the stochastic differential equations governing the forward LIBOR rates. The LIBOR market model framework only truly becomes a model once the volatility functions of the discrete forward rates are specified. In Chapter 3, we examine various specifications of the LIBOR market model. Once the model is specified, it is calibrated to the above-mentioned market data. An advantage of the LIBOR market model is the ability to calibrate to a large set of liquid market instruments while generating a realistic evolution of the forward rate volatility structure (Piterbarg 2004).
We examine some of the practical problems that arise when calibrating the market model and present an example calibration in the UK market. The necessity, in general, of pricing derivatives in the LIBOR market model using Monte Carlo simulation is explained in Chapter 4. Both the Monte Carlo and quasi-Monte Carlo simulation approaches are presented, together with an examination of the various discretizations of the forward rate stochastic differential equations. The chapter concludes with some numerical results comparing the performance of Monte Carlo estimates with quasi-Monte Carlo estimates, and the performance of the discretization approaches. In the final chapter we discuss numerical techniques based on Monte Carlo simulation for pricing American derivatives. We present the primal and dual American option pricing problem formulations, followed by an overview of the two main numerical techniques for pricing American options using Monte Carlo simulation. Callable LIBOR exotics is a name given to a class of interest rate derivatives that have early exercise provisions (Bermudan style) to exercise into various underlying interest rate products. A popular approach for valuing these instruments in the LIBOR market model is to estimate the continuation value of the option using parametric regression and, subsequently, to estimate the option value using backward induction. This approach relies on the choice of relevant (i.e. problem-specific) predictor variables and also on the functional form of the regression function. It is certainly not a “black-box” type of approach. Instead of choosing the relevant predictor variables, we present the sliced inverse regression technique. Sliced inverse regression is a statistical technique that aims to capture the main features of the data with a few low-dimensional projections. In particular, we use the sliced inverse regression technique to identify the low-dimensional projections of the forward LIBOR rates and then estimate the continuation value of the option using nonparametric regression techniques. The results for a Bermudan swaption in a two-factor LIBOR market model are compared to those in Andersen (2000).
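The Black (1976) caplet formula that the LIBOR market model is built to reproduce prices a caplet on the forward rate F with strike K as df * delta * [F*Phi(d1) - K*Phi(d2)], with d1 = [ln(F/K) + sigma^2 T/2] / (sigma sqrt(T)) and d2 = d1 - sigma sqrt(T). A minimal sketch of that formula (the inputs below are illustrative, not market data):

    from math import log, sqrt
    from statistics import NormalDist

    Phi = NormalDist().cdf  # standard normal CDF

    def black76_caplet(F, K, sigma, T, delta, df):
        # F:     forward LIBOR rate for the period [T, T + delta]
        # K:     caplet strike
        # sigma: Black (lognormal) volatility of the forward rate
        # T:     rate fixing time in years
        # delta: accrual fraction of the period
        # df:    discount factor to the payment date T + delta
        d1 = (log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return df * delta * (F * Phi(d1) - K * Phi(d2))

    # Illustrative at-the-money caplet, per unit notional
    print(black76_caplet(F=0.05, K=0.05, sigma=0.20, T=1.0, delta=0.5, df=0.95))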
138

Calibração de um modelo de qualidade da água em trecho crítico qualitativo do Rio Lambari, Poços de Caldas/MG /

Nakamura, Carolina Harue. January 2017 (has links)
Advisor: Gustavo Henrique Ribeiro da Silva / Co-advisor: Marcio Ricardo Salla / Committee: Marcos Von Sperling / Committee: Rodrigo Braga Moruzzi / Abstract: The use of a water quality mathematical model at a hydrographic basin scale helps to better understand the current situation of water bodies and to simulate future scenarios of water quality to aid decisions related to their use and preservation. In order to assist the management of water resources, the present study aimed to calibrate a water quality model, in a lotic environment, applying the Ottocoded Critical Basins Analysis (ABaCO) application as a support tool. The calibration was performed on a 14-kilometer section of the River Lambari, which is located in the city of Poços de Caldas, state of Minas Gerais. It belongs to the Pardo River Basin and is considered qualitatively critical by the National Water Agency (ANA), through Joint Technical Note number 002/2012/SPR/SRE-ANA, ratified by ANA Ordinance number 62/2013. The calibration results, obtained automatically by the Microsoft Excel® Solver tool and manually for parameters that needed to be adjusted after automatic calibration, presented good adjustments between simulated and observed concentrations for the parameters total nitrogen, organic phosphorus, inorganic phosphorus and total phosphorus, considering the behaviour analysis of the parameters. The other calibrated parameters (BOD, DO, organic nitrogen, ammoniacal nitrogen and nitrate) obtained satisfactory adjustments. When evaluating the results by a statistical method (coefficient of determination), it was observed that six parameters (organic nitrogen, ammoniacal nitrogen, total nitrogen, organic phosphorus, inorganic phosphorus and total phosphorus) presented negative values for all campaigns, caused by the low variability of concentrations between one campaign and another, making it difficult to obtain high values of the coefficient. A first view of the river's qualitative situation can be obtained... (Complete abstract: electronic access below) / Master's
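Negative coefficients of determination are a real possibility of the statistic: R^2 = 1 - SS_res/SS_tot falls below zero whenever the model's residuals exceed the spread of the observations about their own mean, which is exactly what low between-campaign variability produces. A small illustration with invented concentrations:

    import numpy as np

    def coefficient_of_determination(obs, sim):
        # R^2 = 1 - SS_res/SS_tot; negative when the model predicts worse
        # than simply using the mean of the observations.
        ss_res = np.sum((obs - sim) ** 2)
        ss_tot = np.sum((obs - np.mean(obs)) ** 2)
        return 1.0 - ss_res / ss_tot

    # Low variability between campaigns: observations cluster near their mean,
    # so even small simulation errors push R^2 below zero. Values are made up.
    obs = np.array([2.10, 2.12, 2.09, 2.11])   # e.g. total nitrogen, mg/L
    sim = np.array([2.20, 2.02, 2.18, 2.04])
    print(coefficient_of_determination(obs, sim))  # strongly negative here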
139

The identification of geometric errors in five-axis machine tools using the telescoping magnetic ballbar

Flynn, Joseph January 2016 (has links)
To maximise productivity and reduce scrap in high-value, low-volume production, five-axis machine tool (5A-MT) motion accuracy must be verified quickly and reliably. Numerous metrology instruments have been developed to measure errors arising from geometric imperfections within and between machine tool axes (amongst other sources). One example is the telescoping magnetic ballbar (TMBB), which is becoming an increasingly popular instrument for measuring both linear and rotary axis errors. This research proposes a new TMBB measurement technique to rapidly, accurately and reliably measure all position-independent rotary axis errors in a 5A-MT. In this research two literature reviews were conducted. The findings informed the subsequent development of a virtual machine tool (VMT). This VMT was used to capture the effects of rotary and linear axis position-independent geometric errors, and of apparatus set-up errors, on a variety of candidate measurement routines. This new knowledge then informed the design of an experimental methodology to capture, on a commercial 5A-MT, specific phenomena that were observed within the VMT. Finally, statistical analysis of the experimental measurements facilitated a quantification of the repeatability, strengths and limitations of the final testing method concept. The major contribution of this research is the development of a single set-up testing procedure to identify all 5A-MT rotary axis location errors whilst remaining robust in the presence of set-up and linear axis location errors. Additionally, a novel variance-based sensitivity analysis approach was used to design the testing procedures. By considering the effects of extraneous error sources (set-up and linear location) in the design and validation phases, an added robustness was introduced. Furthermore, this research marks the first use of Monte Carlo uncertainty analysis in conjunction with rotary axis TMBB testing. Experimental evidence has shown that the proposed corrections for set-up and linear axis errors are highly effective and completely indispensable in rotary axis testing of this kind. However, further development of the single set-up method is necessary, as geometric errors cannot always be measured identically at different testing locations. This has highlighted the importance of considering the influence of 5A-MT component errors on testing results, as the machine tool axes cannot necessarily be modelled as straight lines.
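Monte Carlo uncertainty analysis of the kind described propagates assumed probability distributions for set-up and geometric errors through the measurement model and reports an empirical interval for the identified error. A toy sketch under stated assumptions (the response function and spreads below are placeholders, not the thesis's model):

    import numpy as np

    rng = np.random.default_rng(42)

    def identified_error(setup_offset, squareness, backlash):
        # Placeholder response: in the real procedure this would be the TMBB
        # length-change model evaluated over the measurement trajectory.
        return 0.8 * squareness + 0.3 * setup_offset ** 2 + 0.1 * backlash

    n = 100_000
    setup  = rng.normal(0.0, 5e-6, n)   # set-up offset, m (assumed spread)
    square = rng.normal(0.0, 2e-6, n)   # squareness error proxy (assumed)
    back   = rng.normal(0.0, 1e-6, n)   # backlash proxy (assumed)

    samples = identified_error(setup, square, back)
    print("mean:", samples.mean(),
          "95% interval:", np.percentile(samples, [2.5, 97.5]))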
140

Split Cyclic Analog to Digital Converter Using A Nonlinear Gain Stage

Spetla, Hattie 02 September 2009 (has links)
"Previous implementations of digital background calibration for cyclic ADCs have required linear amplifier behavior in the gain stage for accurate correction. Correction is digital decoding of ADC outputs to determine the original ADC input. Permitting nonlinearity in the gain stage of the ADC allows for less demanding amplifier design requirements, reducing power and size. However this requires a method of determining the value of this variable gain during digital correction. Look up tables (LUTs,) are an effective and efficient method of compensating for analog circuit imperfections. The LUT correction and calibration method discussed in this work has been simulated using Cadence integrated circuit simulation ADC specifications and MATLAB."
