421

Structural reliability of offshore wind turbines

Agarwal, Puneet, 1977- 31 August 2012 (has links)
Statistical extrapolation is required to predict extreme loads, associated with a target return period, for offshore wind turbines. In statistical extrapolation, "short-term" distributions of the load random variable(s) conditional on the environment are integrated with the joint probability distribution of the environmental random variables (wind, waves, currents, etc.) to obtain the so-called "long-term" distribution, from which long-term loads may be obtained for any return period. The accurate prediction of long-term extreme loads for offshore wind turbines, using efficient extrapolation procedures, is our main goal. While the loads data needed for extrapolation are obtained by simulation in a design scenario, field data can be valuable for understanding the offshore environment and the resulting turbine response. We use limited field data from a 2MW turbine at the Blyth site in the United Kingdom, and study the influence of contrasting environmental (wind) regimes and associated waves at this site on long-term loads derived using extrapolation. This study also highlights the need for efficient extrapolation procedures and for modeling nonlinear waves at sites with shallow water depths. An important first step in extrapolation is to establish robust short-term distributions of load extremes. Using data from simulations of a 5MW onshore turbine model, we compare empirical short-term load distributions when two alternative models for extremes--global and block maxima--are used. We develop a convergence criterion, based on controlling the uncertainty in rare load fractiles, which serves to assess whether or not an adequate number of simulations has been performed. To establish long-term loads for a 5MW offshore wind turbine, we employ an inverse reliability approach, which is shown to predict reasonably accurate long-term loads compared to a more expensive direct integration approach. We show that blade pitching control actions can be a major source of response variability, due to which a large number of simulations may be required to obtain stable tails of short-term load distributions and to predict accurate ultimate loads. We address model uncertainty as it pertains to wave models. We investigate the effect of using irregular nonlinear (second-order) waves, compared to irregular linear waves, on loads for an offshore wind turbine. We incorporate this nonlinear irregular wave model into a procedure for integrated wind-wave-response analysis of offshore wind turbines. We show that computed loads are generally somewhat larger with nonlinear waves and, hence, that modeling nonlinear waves is important in response simulations of offshore wind turbines and in the prediction of long-term loads.
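In the usual formulation of this kind of load extrapolation (notation here is illustrative, not the author's), the long-term exceedance probability is obtained by integrating the short-term conditional distribution over the environmental variables:

```latex
P[L > l] \;=\; \int_{\mathbf{x}} P[L > l \mid \mathbf{X} = \mathbf{x}]\, f_{\mathbf{X}}(\mathbf{x})\, d\mathbf{x}
```

where X collects the environmental random variables (e.g., mean wind speed, significant wave height) and f_X is their joint density; the long-term load for a target return period is the fractile of this distribution whose exceedance probability corresponds to that period.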
422

The use of fractal dimension for texture-based enhancement of aeromagnetic data.

Dhu, Trevor January 2008 (has links)
This thesis investigates the potential of fractal dimension (FD) as a tool for enhancing airborne magnetic data. More specifically, this thesis investigates the potential of FD-based texture transform images as tools for aiding in the interpretation of airborne magnetic data. A series of different methods of estimating FD are investigated, specifically: • geometric methods (1D and 2D variation methods and 1D line divider method); • stochastic methods (1D and 2D Hurst methods and 1D and 2D semi-variogram methods); and • spectral methods (1D and 2D wavelet methods and 1D and 2D Gabor methods). All of these methods are able to differentiate between varying theoretical FD in synthetic profiles. Moreover, these methods are able to differentiate between theoretical FDs when applied to entire profiles or in a moving window along the profile. Generally, the accuracy of the estimated FD improves when window size is increased. Similarly, the standard deviation of estimated FD decreases as window size increases. This result implies that the use of moving-window FD estimates requires a trade-off between the quality of the FD estimates and the need to use small windows to allow better spatial resolution. Application of the FD estimation methods to synthetic datasets containing simple ramps, ridges and point anomalies demonstrates that all of the 2D methods and most of the 1D methods are able to detect and enhance these features in the presence of up to 20% Gaussian noise. In contrast, the 1D Hurst and line divider methods cannot clearly detect these features in as little as 10% Gaussian noise. Consequently, it is concluded that the 1D Hurst and line divider methods are inappropriate for enhancing airborne magnetic data. The application of these methods to simple synthetic airborne magnetic datasets highlights the methods' sensitivity to very small variations in the data. All of the methods responded strongly to field lines some distance from the causative magnetic bodies. This effect was eliminated through the use of a variety of tolerances that essentially required a minimum level of difference between data points in order for FD to be calculated. Whilst this use of tolerances was required for synthetic datasets, its use was not required for noise-corrupted versions of the synthetic magnetic data. The results from applying the FD estimation techniques to the synthetic airborne magnetic data suggested that these methods are more effective when applied to data from the pole. Whilst all of the methods were able to enhance the magnetic anomalies both at the pole and in the Southern hemisphere, the responses of the FD estimation techniques were notably simpler for the polar data. With the exception of the 1D Hurst and line divider methods, all of the methods were also able to enhance the synthetic magnetic data in the presence of 10% Gaussian noise. Application of the FD estimation methods to an airborne magnetic dataset from the Merlinleigh Sub-basin in Western Australia demonstrated their ability to enhance subtle structural features in relatively smooth airborne magnetic data. Moreover, the FD-based enhancements were able to enhance some features of this dataset better than any of the conventional enhancements considered (i.e., the analytic signal, vertical and total horizontal derivatives, and automatic gain control). Most of the FD estimation techniques enhanced similar features to each other. However, the 2D methods generally produced clearer results than their associated 1D methods.
In contrast to this result, application of the FD-based enhancements to more variable airborne magnetic data from the Tanami region in the Northern Territory demonstrated that these methods are not as well suited to this style of data. The main conclusion from this work is that FD-based enhancement of relatively smooth airborne magnetic data can provide valuable input into an interpretation process. This suggests that these methods are particularly useful for aiding in the interpretation of airborne magnetic data from regions such as sedimentary basins where the distribution of magnetic sources is relatively smooth and simple. / http://proxy.library.adelaide.edu.au/login?url=http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1339560 / Thesis (Ph.D.) - University of Adelaide, Australian School of Petroleum, 2008
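As a rough illustration of one of the 1D stochastic estimators listed above, the Python sketch below estimates FD by the semi-variogram method (for a fractional-Brownian-like profile the semi-variogram scales as gamma(h) ~ h^(2H), so FD = 2 - H) and applies it in a moving window along a profile. The window size, lag range and absence of any tolerance handling are illustrative assumptions, not the settings used in the thesis.

```python
import numpy as np

def fd_semivariogram_1d(profile, max_lag=10):
    """Estimate the fractal dimension of a 1D profile by the semi-variogram method."""
    profile = np.asarray(profile, dtype=float)
    lags = np.arange(1, max_lag + 1)
    # empirical semi-variogram at each lag
    gamma = np.array([0.5 * np.mean((profile[h:] - profile[:-h]) ** 2) for h in lags])
    # slope of log(gamma) against log(lag) estimates 2H; FD = 2 - H
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    return 2.0 - slope / 2.0

def moving_window_fd(profile, window=64, step=8, max_lag=10):
    """Apply the estimator in a sliding window along a flight-line profile."""
    centres, fds = [], []
    for start in range(0, len(profile) - window + 1, step):
        centres.append(start + window // 2)
        fds.append(fd_semivariogram_1d(profile[start:start + window], max_lag))
    return np.array(centres), np.array(fds)
```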
423

Joint models for longitudinal and survival data

Yang, Lili 11 July 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Epidemiologic and clinical studies routinely collect longitudinal measures of multiple outcomes. These longitudinal outcomes can be used to establish the temporal order of relevant biological processes and their association with the onset of clinical symptoms. In the first part of this thesis, we proposed to use bivariate change point models for two longitudinal outcomes with a focus on estimating the correlation between the two change points. We adopted a Bayesian approach for parameter estimation and inference. In the second part, we considered the situation when a time-to-event outcome is also collected along with multiple longitudinal biomarkers measured until the occurrence of the event or censoring. Joint models for longitudinal and time-to-event data can be used to estimate the association between the characteristics of the longitudinal measures over time and survival time. We developed a maximum-likelihood method to jointly model multiple longitudinal biomarkers and a time-to-event outcome. In addition, we focused on predicting conditional survival probabilities and evaluating the predictive accuracy of multiple longitudinal biomarkers in the joint modeling framework. We assessed the performance of the proposed methods in simulation studies and applied the new methods to data sets from two cohort studies. / National Institutes of Health (NIH) Grants R01 AG019181, R24 MH080827, P30 AG10133, R01 AG09956.
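A common shared-random-effects formulation of such a joint model (written here for a single biomarker, with illustrative notation rather than the thesis's exact specification) links a linear mixed model for the longitudinal trajectory to a proportional-hazards model for the event time:

```latex
\begin{aligned}
y_i(t) &= m_i(t) + \varepsilon_i(t), \qquad
m_i(t) = \mathbf{x}_i^{\top}(t)\,\boldsymbol{\beta} + \mathbf{z}_i^{\top}(t)\,\mathbf{b}_i, \qquad
\mathbf{b}_i \sim N(\mathbf{0}, \mathbf{D}),\\
h_i(t) &= h_0(t)\,\exp\{\boldsymbol{\gamma}^{\top}\mathbf{w}_i + \alpha\, m_i(t)\},
\end{aligned}
```

where the association parameter alpha ties the current value of the longitudinal trajectory to the hazard; with several biomarkers, each modeled trajectory typically enters the hazard through its own association parameter, and conditional survival probabilities follow from the fitted hazard.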
424

A matched study to determine a conditional logistic model for prediction of business failure in South Africa

Mota, Stephen Kopano 12 1900 (has links)
Thesis (MBA)--Stellenbosch University, 2001 / ENGLISH ABSTRACT: The subject of business failure prediction, from an academic point of view, dates back to the turn of the century with the development of a single ratio, the current ratio, as an evaluation of creditworthiness. Subsequent studies have become more complex, using different statistical techniques and more than one variable to predict failure. The challenge in these studies has been to establish a reliable model to predict failure. The aim of this report was to find out which financial factors best predict failure in the South African environment, using a matched study that refines some elements of the study conducted by Court (1993). The data used were similar to those of Court (1993) and were obtained independently from the Bureau of Financial Analysis of the University of Pretoria. The variables used in the study were then computed from this raw data and entered into the Stata™ statistical software package to fit a conditional logistic regression model. As a result of a small sample size and a substantial number of missing values in the sample, the study did not yield an accurate indication of the important variables. It was also found that, given the instability and general complexity of conditional logistic regression, the study need not have been a matched study. The recommendation is that future research be done with a larger sample size using the same methodology. It is also recommended that the data include non-financial variables. / AFRIKAANSE OPSOMMING: The prediction of business failure as an academic subject dates from the beginning of the previous century with the development of a single ratio, the current ratio, as a measure of creditworthiness. The application of statistical techniques and the incorporation of multiple variables lent a high degree of complexity to later studies. The resulting challenge was to develop a reliable model to predict business failure accurately. The aim of this report is to indicate which financial factors would be most suitable for predicting business failure in the South African environment. The report presents the findings of a matched study based on a refinement of certain elements taken from the Court study of 1993. The data used are much like those underlying the Court study and were obtained independently from the Bureau of Financial Analysis (University of Pretoria). The variables used in the study are based on these raw data and were entered into and processed by the Stata™ statistical software package to fit a conditional logistic regression model. As a result of a small sample and a substantial number of missing values in this sample, the study could not identify an important variable with accuracy. It was also found that the instability and general complexity of the conditional logistic regression model made the use of a matched study unnecessary. The recommendation is that further research apply the same methodology to a larger sample. It is also recommended that non-financial variables be included in the data.
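For a 1:1 matched design of this kind, the conditional logistic likelihood takes the standard form below (illustrative notation: x_i1 and x_i0 are the covariate vectors, e.g. financial ratios, of the failed and the matched non-failed firm in pair i). The stratum-specific intercepts cancel, which is what makes matching attractive but also leaves the intercept and the effects of the matching variables inestimable:

```latex
L(\boldsymbol{\beta}) \;=\; \prod_{i=1}^{n}
\frac{\exp(\boldsymbol{\beta}^{\top}\mathbf{x}_{i1})}
     {\exp(\boldsymbol{\beta}^{\top}\mathbf{x}_{i1}) + \exp(\boldsymbol{\beta}^{\top}\mathbf{x}_{i0})}
```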
425

Modelling market risk with SAS Risk Dimensions : a step by step implementation

Du Toit, Carl 03 1900 (has links)
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2005. / Financial institutions invest in financial securities like equities, options and government bonds. Two measures, namely return and risk, are associated with each investment position. Return is a measure of the profit or loss of the investment, whilst risk is defined as the uncertainty about return. A financial institution that holds a portfolio of securities is exposed to different types of risk. The most well-known types are market, credit, liquidity, operational and legal risk. An institution needs to quantify, for each type of risk, the extent of its exposure. Currently, standard risk measures that aim to quantify risk exist only for market and credit risk. Extensive calculations are usually required to obtain values for risk measures. The investment positions that form the portfolio, as well as the market information used in the risk measure calculations, change during each trading day. Hence, the financial institution needs a business tool that has the ability to calculate various standard risk measures for dynamic market and position data at the end of each trading day. SAS Risk Dimensions is a software package that provides a solution to the calculation problem. A risk management system is created with this package and is used to calculate all the relevant risk measures on a daily basis. The purpose of this document is to explain and illustrate all the steps that should be followed to create a suitable risk management system with SAS Risk Dimensions.
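SAS Risk Dimensions is proprietary and its interface is not reproduced here; as a neutral illustration of the kind of standard market-risk measure such a system recalculates each trading day, the Python sketch below computes one-day Value-at-Risk by historical simulation. The positions and scenario returns are made-up numbers for illustration only.

```python
import numpy as np

def historical_var(pnl_scenarios, confidence=0.99):
    """One-day Value-at-Risk by historical simulation.

    pnl_scenarios: simulated portfolio profit/loss values, one per historical
    scenario (e.g., the portfolio revalued under each of the last 250 days'
    market moves). Returns VaR as a positive loss amount.
    """
    losses = -np.asarray(pnl_scenarios, dtype=float)
    return np.quantile(losses, confidence)

# Illustrative use: two hypothetical equity positions revalued under
# randomly generated daily return scenarios.
rng = np.random.default_rng(0)
position_values = np.array([1_000_000.0, 500_000.0])
scenario_returns = rng.normal(0.0, 0.01, size=(250, 2))
pnl = scenario_returns @ position_values
print(f"99% one-day VaR: {historical_var(pnl):,.0f}")
```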
426

An analysis of the technical efficiency in Hong Kong's construction industry

Wang, You-song, 王幼松. January 1998 (has links)
published_or_final_version / Real Estate and Construction / Doctoral / Doctor of Philosophy
427

The Fixed v. Variable Sampling Interval Shewhart X-Bar Control Chart in the Presence of Positively Autocorrelated Data

Harvey, Martha M. (Martha Mattern) 05 1900 (has links)
This study uses simulation to examine differences between fixed sampling interval (FSI) and variable sampling interval (VSI) Shewhart X-bar control charts for processes that produce positively autocorrelated data. The influence of sample size (1 and 5), autocorrelation parameter, shift in process mean, and length of time between samples is investigated by comparing average time (ATS) and average number of samples (ANSS) to produce an out of control signal for FSI and VSI Shewhart X-bar charts. These comparisons are conducted in two ways: control chart limits pre-set at ±3σ_x / √n and limits computed from the sampling process. Proper interpretation of the Shewhart X-bar chart requires the assumption that observations are statistically independent; however, process data are often autocorrelated over time. Results of this study indicate that increasing the time between samples decreases the effect of positive autocorrelation between samples. Thus, with sufficient time between samples the assumption of independence is essentially not violated. Samples of size 5 produce a faster signal than samples of size 1 with both the FSI and VSI Shewhart X-bar chart when positive autocorrelation is present. However, samples of size 5 require the same time when the data are independent, indicating that this effect is a result of autocorrelation. This research determined that the VSI Shewhart X-bar chart signals increasingly faster than the corresponding FSI chart as the shift in the process mean increases. If the process is likely to exhibit a large shift in the mean, then the VSI technique is recommended. But the faster signaling time of the VSI chart is undesirable when the process is operating on target. However, if the control limits are estimated from process samples, results show that when the process is in control the ARL for the FSI and the ANSS for the VSI are approximately the same, and exceed the expected value when the limits are fixed.
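A minimal Python sketch of the kind of simulation described - AR(1) observations monitored by an X-bar chart with limits pre-set at ±3σ_x/√n - is given below. The parameter values, and the simplification of treating the sampling interval as a count of skipped observations, are illustrative assumptions rather than the study's design.

```python
import numpy as np

def ar1_series(n, phi, sigma=1.0, rng=None):
    """Positively autocorrelated AR(1) process: x_t = phi * x_(t-1) + e_t."""
    rng = rng or np.random.default_rng()
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def time_to_signal(phi, shift, n_sub=5, interval=1, max_samples=10_000, rng=None):
    """Return the time until a subgroup mean exceeds fixed +/-3*sigma_x/sqrt(n) limits.

    `interval` is the number of observations skipped between subgroups, a crude
    stand-in for a longer fixed sampling interval.
    """
    rng = rng or np.random.default_rng()
    sigma_x = 1.0 / np.sqrt(1.0 - phi ** 2)      # stationary sd of the AR(1) process
    ucl = 3.0 * sigma_x / np.sqrt(n_sub)
    series = ar1_series(max_samples * (n_sub + interval), phi, rng=rng) + shift
    t = 0
    for k in range(max_samples):
        start = k * (n_sub + interval)
        t += n_sub + interval
        if abs(series[start:start + n_sub].mean()) > ucl:
            break
    return t

# Averaging time_to_signal over many replications approximates the ATS
# compared in the study for a given autocorrelation level and mean shift.
```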
428

Conditioning of unobserved period-specific abundances to improve estimation of dynamic populations

Dail, David (David Andrew) 28 February 2012 (has links)
Obtaining accurate estimates of animal abundance is made difficult by the fact that most animal species are detected imperfectly. Early attempts at building likelihood models that account for unknown detection probability impose a simplifying assumption that is unrealistic for many populations, however: no births, deaths, migration or emigration can occur in the population throughout the study (i.e., population closure). In this dissertation, I develop likelihood models that account for unknown detection and do not require assuming population closure. In fact, the proposed models yield a statistical test for population closure. The basic idea utilizes a procedure in three steps: (1) condition the probability of the observed data on the (unobserved) period-specific abundances; (2) multiply this conditional probability by the (prior) likelihood for the period abundances; and (3) remove (via summation) the period-specific abundances from the joint likelihood, leaving the marginal likelihood of the observed data. The utility of this procedure is two-fold: step (1) allows detection probability to be more accurately estimated, and step (2) allows population dynamics such as entering migration rate and survival probability to be modeled. The main difficulty of this procedure arises in the summation in step (3), although it is greatly simplified by assuming abundances in one period depend only on the most recent previous period (i.e., abundances have the Markov property). I apply this procedure to form abundance and site occupancy rate estimators for both the setting where observed point counts are available and the setting where only the presence or absence of an animal species is observed. Although the two settings yield very different likelihood models and estimators, the basic procedure forming these estimators is constant in both. / Graduation date: 2012
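Written out for repeated point counts at a single site (illustrative notation, not the dissertation's exact model), the three steps above amount to marginalising the period-specific abundances N_1, ..., N_T out of the joint likelihood:

```latex
L(\boldsymbol{\theta}; \mathbf{y}) \;=\;
\sum_{N_1} \cdots \sum_{N_T}
\left[\prod_{t=1}^{T} \Pr(\mathbf{y}_t \mid N_t, p)\right]
\Pr(N_1) \prod_{t=2}^{T} \Pr(N_t \mid N_{t-1})
```

with, for example, the counts y_t binomial given N_t and detection probability p, and N_1 Poisson; the Markov assumption on N_t given N_{t-1} is what allows the nested sums to be evaluated one period at a time.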
429

Multivariate Quality Control Using Loss-Scaled Principal Components

Murphy, Terrence Edward 24 November 2004 (has links)
We consider a principal-components-based decomposition of the expected value of the multivariate quadratic loss function (MQL). The principal components are formed by scaling the original data by the contents of the loss constant matrix, which defines the economic penalty associated with specific variables being off their desired target values. We demonstrate the extent to which a subset of these "loss-scaled principal components" (LSPC) accounts for the two components of expected MQL, namely the trace-covariance term and the off-target vector product. We employ the LSPC to solve a robust design problem of full and reduced dimensionality with deterministic models that approximate the true solution, and demonstrate comparable results in less computational time. We also employ the LSPC to construct a test statistic called loss-scaled T^2 for multivariate statistical process control. We show for one case that the proposed test statistic detects shifts in location faster than Hotelling's T^2 for variables with high weighting in the MQL. In addition, we introduce a principal-component-based decomposition of Hotelling's T^2 to diagnose the variables responsible for driving the location and/or dispersion of a subgroup of multivariate observations out of statistical control. We demonstrate the accuracy of this diagnostic technique on a data set from the literature and show its potential for diagnosing the loss-scaled T^2 statistic as well.
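The two components of expected MQL referred to above follow from the standard decomposition of the expectation of a quadratic form (illustrative notation): for an observation vector Y with mean μ and covariance Σ, target vector τ and loss constant matrix C,

```latex
\mathbb{E}\!\left[(\mathbf{Y}-\boldsymbol{\tau})^{\top}\mathbf{C}\,(\mathbf{Y}-\boldsymbol{\tau})\right]
\;=\; \operatorname{tr}(\mathbf{C}\boldsymbol{\Sigma})
\;+\; (\boldsymbol{\mu}-\boldsymbol{\tau})^{\top}\mathbf{C}\,(\boldsymbol{\mu}-\boldsymbol{\tau})
```

the first term being the trace-covariance component and the second the off-target vector product.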
430

Prediction of consumer liking from trained sensory panel information: evaluation of artificial neural networks (ANN)

Krishnamurthy, Raju, Chemical Sciences & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
This study set out to establish artificial neural networks (ANN) as an alternative to regression methods (multiple linear, principal components and partial least squares regression) to predict consumer liking from trained sensory panel data. The study has two parts, viz.: 1) a flavour study - evaluation of ANNs to predict consumer flavour preferences from trained sensory panel data, and 2) a fragrance study - evaluation of different ANN architectures to predict consumer fragrance liking from trained sensory panel data. In this study, a multi-layer feedforward neural network architecture with input, hidden and output layer(s) was designed. The back-propagation algorithm was utilised in training the neural networks. The network learning parameters, such as learning rate and momentum rate, were optimised by grid experiments for a fixed number of learning cycles. In the flavour study, ANNs were trained using the trained sensory panel raw data as well as transformed data. The networks trained with sensory panel raw data achieved 98% correct learning, whereas the testing was within the range of 28-35%. Suitable transformation methods were applied to reduce the variations in trained sensory panel raw data. The networks trained with transformed sensory panel data achieved 80-90% correct learning and 80-95% correct testing. In the fragrance study, ANNs were trained using the trained sensory panel raw data as well as principal component data. The networks trained with sensory panel raw data achieved 100% correct learning, and testing was in a range of 70-94%. Principal component analysis was applied to reduce redundancy in the trained sensory panel data. The networks trained with principal component data achieved about 100% correct learning and 90% correct testing. It was shown that due to its excellent noise tolerance property and ability to predict more than one type of consumer liking using a single model, the ANN approach promises to be an effective modelling tool.
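A minimal Python sketch of the kind of network described - one hidden layer, sigmoid units, and back-propagation with a learning rate and a momentum term - appears below. The layer size, hyperparameters and mean-squared-error loss are illustrative assumptions, not the values optimised by the grid experiments in the study.

```python
import numpy as np

def train_mlp(X, y, n_hidden=5, lr=0.1, momentum=0.9, epochs=500, seed=0):
    """Train a one-hidden-layer feedforward network by back-propagation.

    X: (n_samples, n_inputs) trained-sensory-panel scores;
    y: (n_samples, 1) consumer liking scores scaled to [0, 1].
    """
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], y.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)
    vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(epochs):
        # forward pass
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        # backward pass (mean-squared-error loss, sigmoid derivatives)
        d_out = (out - y) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        # momentum-smoothed weight updates; plain gradient steps for the biases
        vW2 = momentum * vW2 - lr * (h.T @ d_out) / len(X)
        vW1 = momentum * vW1 - lr * (X.T @ d_h) / len(X)
        W2 += vW2; b2 -= lr * d_out.mean(axis=0)
        W1 += vW1; b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2
```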
