641

Contextualized risk mitigation based on geological proxies in alluvial diamond mining using geostatistical techniques

Jacob, Jana January 2016 (has links)
A thesis submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfillment of the requirements for the degree of Doctor of Philosophy. Johannesburg 2016 / Quantifying risk in the absence of hard data presents a significant challenge. Onshore mining of the diamondiferous linear beach deposit along the south-western coast of Namibia has been ongoing for more than 80 years. A historical delineation campaign from the 1930s to the 1960s used coast-perpendicular trenches spaced 500 m apart, comprising a total of 26 000 individual samples, to identify 6 onshore raised beaches. These linear beaches extend offshore and are successfully mined in water depths greater than 30 m. There is, however, a roughly 4 km wide submerged coast-parallel strip adjacent to the mostly mined-out onshore beaches for which no hard data is available at present. The submerged beaches within this 4 km coast-parallel strip hold great potential for being highly diamondiferous, but to date hard data is not available to quantify or validate this potential. The question is how to obtain sufficient hard data, within techno-economic constraints, to enable a resource with an acceptable level of confidence to be developed. The work presented in this thesis illustrates how virtual orebodies (VOBs) are created based on geological proxies in order to have a basis to assess and rank different sampling and drilling strategies. Overview of four papers: Paper I demonstrates the challenge of obtaining a realistic variogram that can be used in variogram-based geostatistical simulations. Simulated annealing is used to unfold the coastline and improve the detectable variography for a number of the beaches. Paper II shows how expert-opinion interpretation is used to supplement sparse data that is utilised to create an indicator simulation to study the presence and absence of diamondiferous gravel. When only the sparse data is used, the resultant simulation is unsuitable as a VOB upon which drilling strategies can be assessed. Paper III outlines how expert-opinion hand sketches are used to create a VOB. The composite probability map based on geological proxies is adjusted using a grade profile based on adjacent onshore data before it is seeded with stones and used as a VOB for strategy testing. Paper IV illustrates how the Nachman model, based on a Negative Binomial Distribution (NBD), is used to predict a minimum background grade by considering only the zero proportions (Zp) of the grade data. Conclusions and future work: In the realm of creating spatial simulations that can serve as VOBs, it is very difficult to quantify uncertainty when no hard data is available. In the absence of hard data, geological proxies and expert opinion are the only inputs that can be used to create VOBs. These VOBs are subsequently used as a basis for evaluating and ranking different sampling and drilling strategies under techno-economic constraints. VOBs must be updated and reviewed as hard data becomes available, after which sampling strategies should be reassessed. During early-stage exploration projects, the Zp of sample results can be used to predict a minimum background grade and rank different targets for further sampling and valuation. The research highlights the possibility that multi-point statistics (MPS) can be used; higher-order MPS should be further investigated as an additional method for creating VOBs upon which sampling strategies can be assessed. / MT2017
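The zero-proportion approach of Paper IV follows directly from the negative binomial zero-probability relation: for an NBD with mean m and dispersion k, P(0) = (1 + m/k)^(-k), so an observed zero proportion Zp inverts to m = k(Zp^(-1/k) - 1). The Python sketch below illustrates this inversion on hypothetical early-stage sample counts; the function name, the assumed dispersion k, and the data are illustrative and not taken from the thesis.

```python
import numpy as np

def min_background_grade(zero_proportion: float, k: float) -> float:
    """Invert the negative binomial zero-probability relation
    P(0) = (1 + m/k)^(-k) to estimate a minimum background mean grade m
    (e.g. stones per sample) from the observed proportion of barren
    samples, assuming dispersion parameter k."""
    if not 0.0 < zero_proportion < 1.0:
        raise ValueError("zero proportion must lie strictly between 0 and 1")
    return k * (zero_proportion ** (-1.0 / k) - 1.0)

# Illustrative (hypothetical) early-stage sampling results:
# counts of stones per sample, most of them zero.
counts = np.array([0, 0, 0, 1, 0, 0, 2, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 3, 0, 0])
zp = np.mean(counts == 0)   # observed zero proportion
k_assumed = 0.3             # assumed NBD dispersion (highly clustered stones)

print(f"Zero proportion Zp = {zp:.2f}")
print(f"Minimum background grade ~ {min_background_grade(zp, k_assumed):.3f} stones/sample")
```

Targets can then be ranked by this minimum background grade before committing to denser sampling or drilling.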
642

Statistical problems in measuring convective rainfall

Seed, Alan William January 1989 (has links)
No description available.
643

Algorithm Design and Localization Analysis in Sequential and Statistical Learning

Xu, Yunbei January 2023 (has links)
Learning theory is a dynamic and rapidly evolving field that aims to provide mathematical foundations for designing and understanding the behavior of algorithms and procedures that can learn from data automatically. At the heart of this field lies the interplay between algorithm design and statistical complexity analysis, with sharp statistical complexity characterizations often requiring localization analysis. This dissertation aims to advance the fields of machine learning and decision making by contributing to two key directions: principled algorithm design and localized statistical complexity. Our research develops novel algorithmic techniques and analytical frameworks to build more effective and robust learning systems. Specifically, we focus on studying uniform convergence and localization in statistical learning theory, developing efficient algorithms using the optimism principle for contextual bandits, and creating Bayesian design principles for bandit and reinforcement learning problems.
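As a concrete illustration of the optimism principle mentioned above, the sketch below implements the generic (textbook) disjoint LinUCB algorithm for linear contextual bandits, where each arm's score is an optimistic upper confidence bound on its expected reward. This is not one of the dissertation's algorithms; all parameters and data are illustrative.

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB: per arm, keep A = lam*I + sum(x x^T) and b = sum(r x);
    score an arm by the optimistic estimate theta^T x + alpha*sqrt(x^T A^{-1} x)."""
    def __init__(self, n_arms: int, dim: int, alpha: float = 1.0, lam: float = 1.0):
        self.alpha = alpha
        self.A = [lam * np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def choose(self, x: np.ndarray) -> int:
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm: int, x: np.ndarray, reward: float) -> None:
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Tiny simulation: 2 arms with linear rewards in a 3-dimensional context.
rng = np.random.default_rng(0)
true_theta = [np.array([1.0, 0.0, -0.5]), np.array([0.2, 0.8, 0.3])]
bandit = LinUCB(n_arms=2, dim=3, alpha=1.0)
total = 0.0
for t in range(500):
    x = rng.normal(size=3)
    a = bandit.choose(x)
    r = true_theta[a] @ x + 0.1 * rng.normal()
    bandit.update(a, x, r)
    total += r
print(f"cumulative reward after 500 rounds: {total:.1f}")
```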
644

Data Quality Assessment for the Secondary Use of Person-Generated Wearable Device Data: Assessing Self-Tracking Data for Research Purposes

Cho, Sylvia January 2021 (has links)
The Quantified Self movement has led to increased routine use of consumer wearables, generating large amounts of person-generated wearable device data. This has become an opportunity for researchers to conduct research with large-scale person-generated wearable device data without having to collect data in a costly and time-consuming way. However, wearable device data have known challenges, such as missing or inaccurate data, which raise the need to assess data quality before conducting research. Currently, there is a lack of in-depth understanding of the data quality challenges of using person-generated wearable device data for research purposes, and of how data quality assessment should be conducted. Data quality assessment can be an especially heavy burden for those without domain knowledge of a specific data type, which is often the case for emerging biomedical data sources. The goal of this dissertation is to advance knowledge of the data quality challenges and assessment of person-generated wearable device data and to facilitate data quality assessment for those without domain knowledge of this emerging data type. The dissertation consists of two aims: (1) identifying data quality dimensions important for assessing the quality of person-generated wearable device data for research purposes, and (2) designing and evaluating an interactive data quality characterization tool that supports researchers in assessing the fitness-for-use of fitness tracker data. In the first aim, a multi-method approach was taken, comprising a literature review, a survey, and focus group discussions. We found that intrinsic data quality dimensions applicable to electronic health record data, such as conformance, completeness, and plausibility, are also applicable to person-generated wearable device data. In addition, contextual/fitness-for-use dimensions such as breadth and density completeness and temporal data granularity were identified, given that our focus was on assessing data quality for research purposes. In the second aim, we followed an iterative design process from understanding informational needs to designing a prototype and evaluating the usability of the final version of the tool. The tool allows users to customize the definition of data completeness (fitness-for-use measures) and provides a data summary of the cohort that meets that definition. We found that an interactive tool that incorporates fitness-for-use measures and allows customization of data completeness can support fitness-for-use assessment more accurately and in less time than a tool that only presents information on intrinsic data quality measures.
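To make the fitness-for-use idea concrete, here is a minimal sketch of a customizable completeness check for fitness-tracker data: a researcher defines completeness as at least a given number of days with at least a given number of wear hours, and the cohort meeting that definition is summarized. The column names, thresholds, and toy data are hypothetical and do not reflect the actual tool described in the dissertation.

```python
import pandas as pd

def cohort_meeting_completeness(df: pd.DataFrame,
                                min_hours_per_day: float = 10.0,
                                min_days: int = 20) -> pd.Series:
    """Return, per participant, whether their fitness-tracker data meet a
    user-defined (fitness-for-use) completeness definition: at least
    `min_days` days with at least `min_hours_per_day` hours of wear.
    Expects columns: participant_id, date, wear_minutes (hypothetical schema)."""
    daily_ok = df.assign(ok=df["wear_minutes"] >= min_hours_per_day * 60)
    days_ok = daily_ok.groupby("participant_id")["ok"].sum()
    return days_ok >= min_days

# Toy example: one participant with full wear days, one with sparse wear.
data = pd.DataFrame({
    "participant_id": ["p1"] * 25 + ["p2"] * 25,
    "date": pd.date_range("2021-01-01", periods=25).tolist() * 2,
    "wear_minutes": [700] * 25 + [300] * 25,
})
eligible = cohort_meeting_completeness(data)
print(eligible)                                   # p1 True, p2 False
print(f"{eligible.mean():.0%} of the cohort meet the completeness definition")
```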
645

Statistical methods applied to acousto-ultrasonic technique

Madhav, Arun 17 November 2012 (has links)
The growth in applications of composite materials, particularly in commercial products, has been dramatic and carries an implied mandate for effective methods of material quality evaluation. The cost of composite materials dictates that non-destructive test methods be used. At the same time, the nature of composites limits the use of conventional techniques such as radiography, eddy-current testing, or ultrasonics. Recently, a new technique known as the Acousto-Ultrasonic (AU) technique has been developed and appears to hold promise as a method for evaluating composite material quality. Implementation of the AU method is examined using the zeroth-moment method developed by Henneke et al. A new parameter, termed the Acousto-Ultrasonic Factor (AUF), has been defined for this purpose. The behavior of the AUF response for specimens of known quality is investigated statistically. It is found that the transformed/actual readings follow a Beta distribution and that specimens of different quality are readily distinguishable using statistical analysis of the AUF response. Reasonable future steps for translating these findings into efficient quality evaluation methods are suggested. / Master of Science
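A minimal sketch of the kind of statistical treatment described here: hypothetical AUF readings from two specimen groups are rescaled into (0, 1), a Beta distribution is fitted to each group, and a simple two-sample test checks that the groups are distinguishable. The data, the rescaling choice, and the use of a Welch t-test are illustrative assumptions rather than the thesis's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical AUF readings for two specimen groups of differing quality.
auf_good = rng.normal(loc=0.72, scale=0.05, size=30)
auf_poor = rng.normal(loc=0.55, scale=0.08, size=30)

def fit_beta(auf: np.ndarray):
    """Rescale readings into (0, 1) and fit a Beta distribution
    with location fixed at 0 and scale fixed at 1."""
    lo, hi = auf.min() - 1e-3, auf.max() + 1e-3
    x = (auf - lo) / (hi - lo)
    a, b, _, _ = stats.beta.fit(x, floc=0, fscale=1)
    return a, b

print("good specimens: alpha, beta =", fit_beta(auf_good))
print("poor specimens: alpha, beta =", fit_beta(auf_poor))

# A simple two-sample test on the raw AUF values to check that the
# quality groups are statistically distinguishable.
t, p = stats.ttest_ind(auf_good, auf_poor, equal_var=False)
print(f"Welch t-test: t = {t:.2f}, p = {p:.3g}")
```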
646

On Some Problems In Transfer Learning

Galbraith, Nicholas R. January 2024 (has links)
This thesis consists of studies of two important problems in transfer learning: binary classification under covariate-shift transfer, and off-policy evaluation in reinforcement learning. First, the problem of binary classification under covariate shift is considered, for which the first efficient procedure for optimal pruning of a dyadic classification tree is presented, where optimality is derived with respect to a notion of average discrepancy between the shifted marginal distributions of source and target. Further, it is demonstrated that the procedure is adaptive to the discrepancy between marginal distributions in a neighbourhood of the decision boundary. It is shown how this notion of average discrepancy can be viewed as a measure of relative dimension between distributions, as it relates to existing notions of information such as the Minkowski and Rényi dimensions. Experiments are carried out on real data to verify the efficacy of the pruning procedure as compared to other baseline methods for pruning under transfer. The problem of off-policy evaluation (OPE) for reinforcement learning is then considered, where two minimax lower bounds for the mean-square error of off-policy evaluation under Markov decision processes are derived. The first gives a non-asymptotic lower bound for OPE in finite state and action spaces, over a model in which the mean reward is perturbed arbitrarily (up to a given magnitude), that depends on an average weighted chi-square divergence between the behaviour and target policies. The second provides an asymptotic lower bound for OPE in continuous state spaces when the mean reward and policy ratio functions lie in a certain smoothness class. Finally, the results of a study that purported to have derived a policy for sepsis treatment in ICUs are replicated and shown to suffer from excessive variance and therefore to be unreliable; our lower bound is computed and used as evidence that reliable off-policy estimation from this data would have required a great deal more samples than were available.
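The average weighted chi-square divergence appearing in the first lower bound can be sketched for finite state and action spaces as follows; the choice of state weights (taken here as a behaviour-policy visitation distribution) and all numbers are assumptions for illustration, not the exact quantity derived in the thesis.

```python
import numpy as np

def avg_weighted_chi2(pi_target: np.ndarray,
                      pi_behaviour: np.ndarray,
                      state_weights: np.ndarray) -> float:
    """Average over states, weighted by `state_weights`, of the chi-square
    divergence chi^2(pi_target(.|s) || pi_behaviour(.|s))
    = sum_a (pi_t(a|s) - pi_b(a|s))^2 / pi_b(a|s).
    Shapes: (S, A), (S, A), (S,)."""
    chi2_per_state = np.sum((pi_target - pi_behaviour) ** 2 / pi_behaviour, axis=1)
    return float(np.dot(state_weights, chi2_per_state))

# Toy 3-state, 2-action example (all values hypothetical).
pi_b = np.array([[0.5, 0.5], [0.8, 0.2], [0.6, 0.4]])
pi_t = np.array([[0.9, 0.1], [0.5, 0.5], [0.6, 0.4]])
d = np.array([0.5, 0.3, 0.2])   # behaviour state-visitation weights
print(f"average weighted chi-square divergence = {avg_weighted_chi2(pi_t, pi_b, d):.3f}")
```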
647

Size effect on shear strength of FRP reinforced concrete beams

Ashour, Ashraf, Kara, Ilker F. 07 December 2013 (has links)
This paper presents test results of six concrete beams reinforced with longitudinal carbon fiber reinforced polymer (CFRP) bars and without vertical shear reinforcement. All beams were tested under a two-point loading system to investigate the shear behavior of CFRP reinforced concrete beams. Beam depth and amount of CFRP reinforcement were the main parameters investigated. All beams failed due to a sudden diagonal shear crack at almost 45°. A simplified, empirical expression for the shear capacity of FRP reinforced concrete members accounting for the most influential parameters is developed based on the design-by-testing approach, using a large database of 134 specimens collected from the literature, including the beams tested in this study. The equations of six existing design standards for the shear capacity of FRP reinforced concrete beams have also been evaluated using the large database collected. The existing shear design methods for FRP reinforced concrete beams give either conservative or unsafe predictions for many specimens in the database, and their accuracy is mostly dependent on the effective depth and type of FRP reinforcement. On the other hand, the proposed equation provides reasonably accurate shear capacity predictions for a wide range of FRP reinforced concrete beams.
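A small sketch of the design-by-testing style of evaluation described here: predicted capacities from a candidate equation are compared with experimental values via the ratio V_exp/V_pred over a database, and the mean ratio, scatter, and fraction of unsafe predictions are summarized. The numbers are hypothetical, and the predicted values stand in for any of the equations evaluated in the paper.

```python
import numpy as np

def evaluate_shear_model(v_exp: np.ndarray, v_pred: np.ndarray) -> dict:
    """Summarize a shear-capacity model over a test database using the ratio
    of experimental to predicted capacity (design-by-testing style).
    Ratios below 1 indicate unsafe predictions, above 1 conservative ones."""
    ratio = v_exp / v_pred
    return {
        "mean ratio": float(ratio.mean()),
        "coefficient of variation": float(ratio.std(ddof=1) / ratio.mean()),
        "fraction unsafe (ratio < 1)": float(np.mean(ratio < 1.0)),
    }

# Hypothetical experimental and predicted shear capacities (kN) for a few beams.
v_exp = np.array([61.0, 74.5, 48.2, 92.1, 55.3])
v_pred = np.array([55.0, 80.1, 45.0, 85.0, 60.2])
print(evaluate_shear_model(v_exp, v_pred))
```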
648

A General Approach to Buhlmann Credibility Theory

Yan, Yujie 08 1900 (has links)
Credibility theory is widely used in insurance. It is included in the examinations of the Society of Actuaries and in the construction and evaluation of actuarial models. In particular, the Buhlmann credibility model has played a fundamental role in both actuarial theory and practice. It provides a mathematically rigorous procedure for deciding how much credibility should be given to the actual experience rating of an individual risk relative to the manual rating common to a particular class of risks. However, for any selected risk, the Buhlmann model assumes that the outcome random variables in both experience periods and future periods are independent and identically distributed. In addition, the Buhlmann method uses sample-mean-based estimators to insure the selected risk, which may be poor estimators of future costs if only a few observations of past events (costs) are available. We present an extension of the Buhlmann model and propose a general method based on a linear combination of both robust and efficient estimators in a dependence framework. The performance of the proposed procedure is demonstrated by Monte Carlo simulations.
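For reference, the classical Buhlmann premium that the thesis generalizes weights the risk's own sample mean against the collective (manual) mean: premium = Z * sample mean + (1 - Z) * collective mean, with Z = n/(n + k) and k = (expected process variance)/(variance of hypothetical means). The sketch below implements this standard formula with hypothetical inputs; the thesis's extension, which replaces the sample mean with a robust/efficient combination under dependence, is not reproduced here.

```python
import numpy as np

def buhlmann_premium(experience: np.ndarray, mu: float, epv: float, vhm: float) -> float:
    """Classical Buhlmann credibility premium for one risk.

    experience : observed losses X_1..X_n for the selected risk
    mu         : collective (manual) mean across the class of risks
    epv        : expected process variance  E[Var(X | theta)]
    vhm        : variance of hypothetical means  Var(E[X | theta])
    """
    n = len(experience)
    k = epv / vhm                    # Buhlmann credibility coefficient
    z = n / (n + k)                  # credibility factor in [0, 1)
    return z * experience.mean() + (1.0 - z) * mu

# Hypothetical numbers: 5 years of experience, manual rate 100.
x = np.array([120.0, 95.0, 130.0, 110.0, 105.0])
print(f"credibility premium = {buhlmann_premium(x, mu=100.0, epv=2000.0, vhm=500.0):.2f}")
```

Here k = 2000/500 = 4, so Z = 5/9 and the premium lies between the risk's own mean (112) and the manual rate (100).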
649

Correcting for Measurement Error and Misclassification using General Location Models

Kwizera, Muhire Honorine January 2023 (has links)
Measurement error is common in epidemiologic studies and can lead to biased statistical inference. It is well known, for example, that regression analyses involving measurement error in predictors often produce biased model coefficient estimates. The work in this dissertation adds to the vast existing literature on measurement error by proposing a missing-data treatment of measurement error through general location models. The focus is on the case in which information about the measurement error model is obtained not from a subsample of the main study data but from separate, external information, namely external calibration. Methods for handling measurement error in the external calibration setting are needed, given the increasing availability of external data sources and the popularity of data integration in epidemiologic studies. General location models are well suited for the joint analysis of continuous and discrete variables. They offer direct relationships with the linear and logistic regression models and can be readily implemented using frequentist and Bayesian approaches. We use general location models to correct for measurement error and misclassification in the context of three practical problems. The first problem concerns measurement error in a continuous variable from a dataset containing both continuous and categorical variables. In the second problem, measurement error in the continuous variable is further complicated by the limit of detection (LOD) of the measurement instrument, so that some measures of the error-prone continuous variable are undetectable when they fall below the LOD. The third problem deals with misclassification in a binary treatment variable. We implement the proposed methods using Bayesian approaches for the first two problems and the expectation-maximization algorithm for the third problem. For the first problem, we propose a Bayesian approach, based on the general location model, to correct measurement error in a continuous variable in a data set with both continuous and categorical variables. We consider the external calibration setting in which, in addition to the main study data of interest, calibration data are available that provide information on the measurement error but not on the error-free variables. The proposed method uses observed data from both the calibration and main study samples and incorporates relationships among all variables in the measurement error adjustment, unlike existing methods that use only the calibration data for model estimation. We assume strong nondifferential measurement error (sNDME), that is, the measurement error is independent of all error-free variables given the true value of the error-prone variable. The sNDME assumption allows us to identify our model parameters. We show through simulations that the proposed method yields reduced bias, smaller mean squared error, and interval coverage closer to the nominal level compared with existing methods in regression settings. Furthermore, this improvement is more pronounced with increased measurement error, higher correlation between covariates, and stronger covariate effects. We apply the new method to the New York City Neighborhood Asthma and Allergy Study to examine the association between indoor allergen concentrations and asthma morbidity among urban asthmatic children.
The simultaneous occurrence of measurement error and LOD is common, particularly for environmental exposures such as the indoor allergen concentrations mentioned in the first problem. Statistical analyses that do not address these two problems simultaneously could lead to wrong scientific conclusions. To address this second problem, we extend the Bayesian general location models for measurement error adjustment to handle both measurement error and values below the LOD in a continuous environmental exposure, in a regression setting with mixed continuous and discrete variables. We treat values below the LOD as censored. Simulations show that our method yields smaller bias and root mean squared error, and that its posterior credible interval has coverage closer to the nominal level compared with alternative methods, even when the proportion of data below the LOD is moderate. We revisit data from the New York City Neighborhood Asthma and Allergy Study and quantify the effect of indoor allergen concentrations on childhood asthma when over 50% of the measured concentrations are below the LOD. We finally look at the third problem, group mean comparison when treatment groups are misclassified. Our motivation comes from the Frequent User Services Engagement (FUSE) study. Researchers wish to compare quantitative health and social outcome measures for frequent jail-and-shelter users who were assigned housing and those who were not housed, and misclassification occurs as a result of noncompliance. The recommended intent-to-treat analysis, which is based on initial group assignment, is known to underestimate group mean differences. We use the general location model to estimate differences in group means after adjusting for misclassification in the binary grouping variable. Information on the misclassification is available through the sensitivity and specificity. We assume nondifferential misclassification, so that misclassification does not depend on the outcome. We use the expectation-maximization algorithm to obtain estimates of the general location model parameters and the group mean difference. Simulations show the reduction in bias of the estimated group mean difference.
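A simplified sketch of the third problem's adjustment: when a binary group label is misclassified nondifferentially with known sensitivity and specificity, an EM algorithm over the latent true group can recover the group means. The model below uses a single Gaussian outcome and no covariates, so it is a stripped-down stand-in for the general location model rather than the dissertation's method, and all data are simulated.

```python
import numpy as np
from scipy.stats import norm

def em_group_means(y, w, sens, spec, n_iter=200):
    """EM estimate of group means when the binary group label w is
    misclassified with known sensitivity/specificity (nondifferential,
    i.e. misclassification independent of the outcome y given the true group).
    Latent model: true group A ~ Bernoulli(pi), y | A=a ~ N(mu_a, sd_a^2),
    P(w=1 | A=1) = sens, P(w=0 | A=0) = spec."""
    pi, mu0, mu1 = 0.5, np.quantile(y, 0.25), np.quantile(y, 0.75)
    sd0 = sd1 = y.std()
    for _ in range(n_iter):
        # E-step: posterior probability that each unit truly belongs to group 1
        p_w_given_1 = np.where(w == 1, sens, 1.0 - sens)
        p_w_given_0 = np.where(w == 1, 1.0 - spec, spec)
        num = pi * p_w_given_1 * norm.pdf(y, mu1, sd1)
        den = num + (1.0 - pi) * p_w_given_0 * norm.pdf(y, mu0, sd0)
        r = num / den
        # M-step: update prevalence, means, and standard deviations
        pi = r.mean()
        mu1 = np.sum(r * y) / np.sum(r)
        mu0 = np.sum((1.0 - r) * y) / np.sum(1.0 - r)
        sd1 = np.sqrt(np.sum(r * (y - mu1) ** 2) / np.sum(r))
        sd0 = np.sqrt(np.sum((1.0 - r) * (y - mu0) ** 2) / np.sum(1.0 - r))
    return mu0, mu1, mu1 - mu0

# Simulated example: true group-mean difference is 2.0, 15% of labels are flipped.
rng = np.random.default_rng(1)
a = rng.binomial(1, 0.5, 2000)
y = rng.normal(1.0 + 2.0 * a, 1.0)
flip = rng.random(2000) < 0.15
w = np.where(flip, 1 - a, a)
print("naive difference:", y[w == 1].mean() - y[w == 0].mean())
print("EM-adjusted difference:", em_group_means(y, w, sens=0.85, spec=0.85)[2])
```

With symmetric 15% label flips the naive contrast is attenuated (to about 1.4 in expectation), while the EM estimate should recover a value close to the true difference of 2.0.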
650

The Functional Mechanism of the Bacterial Ribosome, an Archetypal Biomolecular Machine

Ray, Korak Kumar January 2023 (has links)
Biomolecular machines are responsible for carrying out a host of essential cellular processes. In accordance with the wide range of functions they execute, the architectures of these machines also vary greatly. Yet, despite this diversity in both structure and function, they have some common characteristics. They are all large macromolecular complexes that enact multiple steps during the course of their functions. They are also 'Brownian' in nature, i.e., they rectify the thermal motions of their surroundings into work. Yet how these machines can utilise their surrounding thermal energy in a directional manner, and do so in a cycle over and over again, is still not well understood. The work I present in this thesis spans the development, evaluation, and use of biophysical, in particular single-molecule, tools in the study of the functional mechanisms of biomolecular machines. In Chapter 2, I describe a mathematical framework which utilises Bayesian inference to relate experimental data to an ideal template irrespective of the scale, background, and noise in the data. This framework may be used for the analysis of data generated by multiple experimental techniques in an accurate, fast, and human-independent manner. One such application is described in Chapter 3, where this framework is used to evaluate the extent of spatial information present in experimental data generated using cryogenic electron microscopy (cryoEM). This application will not only aid the study of biomolecular structure using cryoEM by structural biologists, but will also enable biophysicists and biochemists who use structural models to interpret and design their experiments to evaluate the cryoEM data they need for their investigations. In Chapter 4, I describe an investigation into the use of one class of analytical models, hidden Markov models (HMMs), to accurately extract kinetic information from single-molecule experimental data, such as the data generated by single-molecule fluorescence resonance energy transfer (smFRET) experiments. Finally, in Chapter 5, I describe how single-molecule experiments have led to the discovery of a mechanism by which ligands can modulate and drive the conformational dynamics of the ribosome in a manner that facilitates ribosome-catalysed protein synthesis. This mechanism has implications for our understanding of the functional mechanisms of the ribosome in particular, and of biomolecular machines in general.
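A minimal sketch of the HMM machinery referred to in Chapter 4: a two-state Gaussian-emission hidden Markov model evaluated on a simulated smFRET efficiency trace with the scaled forward algorithm. In practice, dwell times and transition rates would be extracted by fitting (e.g. Baum-Welch) and decoding (e.g. Viterbi); all parameters here are illustrative, not values from the thesis.

```python
import numpy as np
from scipy.stats import norm

def forward_loglik(obs, pi, A, means, sds):
    """Log-likelihood of an observation sequence under a Gaussian-emission HMM,
    computed with the (scaled) forward algorithm.
    obs: (T,) observations; pi: (K,) initial probabilities;
    A: (K, K) transition matrix; means, sds: (K,) emission parameters."""
    T = len(obs)
    loglik = 0.0
    alpha = pi * norm.pdf(obs[0], means, sds)
    for t in range(T):
        if t > 0:
            alpha = (alpha @ A) * norm.pdf(obs[t], means, sds)
        c = alpha.sum()            # scaling factor to avoid underflow
        loglik += np.log(c)
        alpha /= c
    return loglik

# Simulate a toy two-state smFRET efficiency trace (values illustrative).
rng = np.random.default_rng(0)
A = np.array([[0.95, 0.05], [0.10, 0.90]])   # per-frame transition probabilities
means, sds = np.array([0.25, 0.75]), np.array([0.07, 0.07])
states = [0]
for _ in range(499):
    states.append(rng.choice(2, p=A[states[-1]]))
efret = rng.normal(means[np.array(states)], sds[0])

print("log-likelihood under the true model:",
      forward_loglik(efret, np.array([0.5, 0.5]), A, means, sds))
```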
