481

Application Of Statistical Methods In Risk And Reliability

Heard, Astrid 01 January 2005 (has links)
The dissertation considers construction of confidence intervals for a cumulative distribution function F(z) and its inverse at fixed points z and u on the basis of a relatively small i.i.d. sample. The sample is modeled as having the flexible Generalized Gamma distribution with all three parameters unknown. This approach can be viewed as an alternative to nonparametric techniques, which do not specify the distribution of the underlying variable X and lead to less efficient procedures. The confidence intervals are constructed by objective Bayesian methods using the Jeffreys noninformative prior. Performance of the resulting confidence intervals is studied via Monte Carlo simulations and compared to that of nonparametric confidence intervals based on the binomial proportion. In addition, techniques for change-point detection are analyzed and further evaluated via Monte Carlo simulations. The effect of a change point on the interval estimators is studied both analytically and via Monte Carlo simulations.
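The nonparametric benchmark mentioned in the abstract treats the empirical CDF at a point as a binomial proportion. Below is a minimal sketch of that baseline using the Clopper-Pearson interval; the sample size, Gamma shape, and evaluation point z are illustrative assumptions, not values from the dissertation.

```python
import numpy as np
from scipy import stats

def ecdf_ci(sample, z, alpha=0.05):
    """Clopper-Pearson interval for F(z), treating the count
    k = #{X_i <= z} as a binomial success count out of n."""
    x = np.asarray(sample)
    n, k = x.size, int(np.sum(x <= z))
    lo = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return k / n, (lo, hi)

# Toy check on a small sample from a Gamma law (a special case of the
# generalized Gamma); all numbers here are illustrative only.
x = stats.gamma.rvs(a=2.0, size=30, random_state=np.random.default_rng(1))
print(ecdf_ci(x, z=2.0))
```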
482

An Analysis Of Misclassification Rates For Decision Trees

Zhong, Mingyu 01 January 2007 (has links)
The decision tree is a well-known methodology for classification and regression. In this dissertation, we focus on minimizing the misclassification rate of decision tree classifiers. We derive the necessary equations that provide the optimal tree prediction, the estimated risk of the tree's prediction, and the reliability of the tree's risk estimation. We carry out an extensive analysis of the application of Lidstone's law of succession to the estimation of the class probabilities. In contrast to existing research, we not only compute the expected values of the risks but also calculate the corresponding reliability of the risk (measured by standard deviation). We also provide an explicit expression of the k-norm estimation of the tree's misclassification rate that combines both the expected value and the reliability. Furthermore, our proven theorem on k-norm estimation suggests an efficient pruning algorithm that has a clear theoretical interpretation, is easily implemented, and does not require a validation set. Our experiments show that the proposed pruning algorithm quickly produces accurate trees that compare very favorably with those of two other well-known pruning algorithms, CCP of CART and EBP of C4.5. Finally, our work provides a deeper understanding of decision trees.
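Lidstone's law of succession, which the dissertation analyzes for class-probability estimation, admits a one-line implementation. The sketch below is a generic illustration (the leaf counts and λ are invented), not the dissertation's k-norm machinery.

```python
import numpy as np

def lidstone_probs(class_counts, lam=1.0):
    """Lidstone's law of succession at a tree leaf:
    p_i = (n_i + lam) / (N + lam * K), where K is the number of
    classes. lam=1 is Laplace's rule; lam -> 0 gives raw frequencies."""
    counts = np.asarray(class_counts, dtype=float)
    return (counts + lam) / (counts.sum() + lam * counts.size)

# A hypothetical leaf holding 8, 1, and 1 training samples of 3 classes.
p = lidstone_probs([8, 1, 1], lam=1.0)
print(p, "predicted class:", int(np.argmax(p)),
      "estimated misclassification rate:", round(1 - p.max(), 3))
```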
483

Analyzing the Safety Effects of Edge Lane Roads for All Road Users

Lamera, Marcial F 01 September 2020 (has links) (PDF)
This thesis is one of the first studies to analyze the safety effects of Edge Lane Roads (ELRs) for all road users. This is important since ELRs can be a solution to many issues, such as alleviating congestion, increasing multimodality along roadways, and reducing maintenance costs. ELRs in both North America and Australia were observed. For the North American ELRs, the following study designs were employed to estimate the safety of ELRs: (a) a yoked comparison in which each ELR installation was matched with at least two comparable 2-lane roads to serve as comparison sites, and (b) an Empirical Bayes (EB) before/after analysis for ELR sites where requisite data on AADT and other relevant characteristics were available. Crash data were collected and compiled into four groups: ELR before implementation, ELR after implementation, comparison site before ELR implementation, and comparison site after ELR implementation. The yoked comparison showed that 9 of the 13 sites had lower crash counts than their respective comparison sites. The EB analysis showed that all 11 observed ELRs demonstrated a reduction in crashes. For the Australian ELRs, the following study designs were employed: (c) an analysis of general crash counts/trends, and (d) a reverse EB analysis. The analysis of general crash counts and trends showed that each of the Australian ELRs exhibited very low crash counts over 5 years, further evidence of how safe these facilities are. In the reverse EB analysis, 5 of the 8 ELR sites demonstrated a reduction in crashes. Overall, the results were generally favorable and indicated that ELRs provide a safer experience for cyclists, drivers, and pedestrians. More analysis is recommended as more data become available on these ELRs, for example using pedestrian and bicycle data to better understand the safety effects vulnerable road users (VRUs) experience on North American facilities, or gathering enough crash data to conduct 3-year reverse EB analyses for ELRs that were expanded to 2-lane roads. Hence, it is recommended that a few experimental ELRs be implemented in rural locations throughout the State of California to help it meet its SB-1 objectives.
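For readers unfamiliar with the EB before/after design, the core idea is to blend a site's observed crash count with a safety-performance-function (SPF) prediction. The sketch below shows one common parameterization of that blend; the counts, prediction, and overdispersion value are invented, and a full study would also adjust for study-period lengths and comparison-group trends.

```python
def eb_expected_crashes(observed, predicted, overdispersion):
    """Empirical Bayes estimate of a site's expected crashes: a weighted
    blend of the SPF prediction and the observed count. The weight w
    shrinks noisy counts toward the prediction (negative binomial SPF)."""
    w = 1.0 / (1.0 + predicted / overdispersion)
    return w * predicted + (1.0 - w) * observed

# Hypothetical before/after comparison at one ELR site.
eb_before = eb_expected_crashes(observed=12, predicted=8.0, overdispersion=2.5)
observed_after = 6
print(f"EB-expected before: {eb_before:.1f}, observed after: {observed_after},"
      f" change: {(observed_after - eb_before) / eb_before:+.0%}")
```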
484

E-banking operational risk assessment. A soft computing approach in the context of the Nigerian banking industry.

Ochuko, Rita E. January 2012 (has links)
This study investigates E-banking Operational Risk Assessment (ORA) to enable the development of a new ORA framework and methodology. The general view is that E-banking systems have modified some of the traditional banking risks, particularly Operational Risk (OR), as suggested by the Basel Committee on Banking Supervision in 2003. In addition, recent E-banking financial losses, together with risk management principles and standards, raise the need for an effective ORA methodology and framework in the context of E-banking. Moreover, evaluation tools and methods for ORA are highly subjective, still in their infancy, and have not yet reached consensus. Therefore, it is essential to develop valid and reliable methods for effective ORA and evaluation. The main contribution of this thesis is to apply a Fuzzy Inference System (FIS) and a Tree Augmented Naïve Bayes (TAN) classifier as standard tools for identifying OR and measuring OR exposure level. In addition, a new ORA methodology is proposed, consisting of four major steps: a risk model, an assessment approach, an analysis approach, and a risk assessment process. Further, a new ORA framework and measurement metrics are proposed with six factors: frequency of the triggering event, effectiveness of avoidance barriers, frequency of the undesirable operational state, effectiveness of recovery barriers before the risk outcome, approximate cost of an Undesirable Operational State (UOS) occurrence, and severity of the risk outcome. The study results are reported based on surveys conducted with senior Nigerian banking officers and banking customers. The study revealed that the framework and assessment tools gave good predictions for risk learning and inference in such systems. The results obtained can thus be considered promising and useful for both E-banking system adopters and future researchers in this area.
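The abstract does not give the thesis's actual rule base, but a Mamdani-style FIS of the kind named above can be sketched in a few lines. The two rules, membership functions, and 0-10 scales below are invented purely for illustration, using two of the six proposed factors.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def risk_level(freq, barrier_eff):
    """Toy Mamdani inference on 0-10 scales: frequency of the triggering
    event and effectiveness of avoidance barriers -> risk score."""
    # Rule 1: IF frequency high AND barriers weak THEN risk high.
    r1 = min(tri(freq, 5, 10, 15), tri(barrier_eff, -5, 0, 5))
    # Rule 2: IF frequency low OR barriers strong THEN risk low.
    r2 = max(tri(freq, -5, 0, 5), tri(barrier_eff, 5, 10, 15))
    z = np.linspace(0, 10, 101)
    agg = np.maximum(np.minimum(tri(z, 5, 10, 15), r1),   # clipped "high"
                     np.minimum(tri(z, -5, 0, 5), r2))    # clipped "low"
    return float((z * agg).sum() / (agg.sum() + 1e-12))   # centroid

print(risk_level(freq=8, barrier_eff=2))  # frequent events, weak barriers
```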
485

Understanding Sales Performance Using Natural Language Processing - An experimental study evaluating rule-based algorithms in a B2B setting

Smedberg, Angelica January 2023 (has links)
Natural Language Processing (NLP) is a branch of data science that marries artificial intelligence with linguistics. Essentially, it tries to program computers to understand human language, both spoken and written. Over the past decade, researchers have applied novel algorithms to gain a better understanding of human sentiment. While no easy feat, these improvements have allowed organizations, politicians, governments, and other institutions to capture the attitudes and opinions of the public. They have been particularly constructive for companies that want to check the pulse of a new product or gauge the positive and negative sentiment toward their services. NLP has even become useful in boosting sales performance and improving training. Over the years, there have been countless studies on sales performance, both from a psychological perspective, where the characteristics of salespersons are explored, and from a data science/AI (Artificial Intelligence) perspective, where text is analyzed for sales forecasting (Pai & Liu, 2018) and sales agents are coached by AI trainers (Luo et al., 2021). However, few studies have discussed how NLP models can help characterize sales performance using actual sales transcripts. Thus, there is a need to explore to what extent NLP models can inform B2B businesses of the characteristics embodied within their salesforce. This study aims to fill that literature gap. Through a partnership with a medium-sized tech company based out of California, USA, this study conducted an experiment to answer two questions: to what extent can sales performance be characterized from real-life sales communication, and in what ways can conversational data inform the sales team at a California-based mid-sized tech company about how top performers communicate with customers? In total, over 5,000 sentences containing over 110,000 words were collected and analyzed using two separate rule-based sentiment analysis techniques: TextBlob, developed by Steven Loria (2013), and the Valence Aware Dictionary and sEntiment Reasoner (VADER), developed by C.J. Hutto and Eric Gilbert (2014). A Naïve Bayes classifier was then trained and tested on the sentiment output of each rule-based technique. While both models obtained high accuracy, above 90%, it was concluded that an oversampled VADER approach yields the best results. Additionally, on manual review of the output, VADER tends to classify positive and negative sentences more correctly than TextBlob, making it the better model for this dataset.
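Both rule-based scorers used in the study have small, well-documented APIs. The sketch below shows how a sentence receives a VADER compound score and a TextBlob polarity score; the example sentences are invented, and the ±0.05 cutoffs follow VADER's commonly cited labeling convention.

```python
# pip install vaderSentiment textblob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from textblob import TextBlob

sentences = [
    "This product completely transformed our workflow, thank you!",
    "I'm not happy with the delays on this contract.",
]

analyzer = SentimentIntensityAnalyzer()
for s in sentences:
    compound = analyzer.polarity_scores(s)["compound"]  # in [-1, 1]
    polarity = TextBlob(s).sentiment.polarity           # in [-1, 1]
    label = ("pos" if compound >= 0.05 else
             "neg" if compound <= -0.05 else "neu")
    print(f"{label}  VADER={compound:+.2f}  TextBlob={polarity:+.2f}  {s}")
```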
486

Testing for Differentially Expressed Genes and Key Biological Categories in DNA Microarray Analysis

Sartor, Maureen A. January 2007 (has links)
No description available.
487

A Bayesian approach to the estimation of adult skeletal age: assessing the facility of multifactorial and three-dimensional methods to improve accuracy of age estimation

Barette, Tammy S. 07 June 2007 (has links)
No description available.
488

Evaluation of Uncertainty in Hydrodynamic Modeling

Camacho Rincon, Rene Alexander 17 August 2013 (has links)
Uncertainty analysis in hydrodynamic modeling is useful to identify and report the limitations of a model caused by different sources of error. In practice, the main sources of error are model structure errors, input data errors due to measurement imprecision, among others, and parametric errors resulting from the difficulty of identifying physically representative parameter values valid at the temporal and spatial scales of the models. This investigation identifies, implements, evaluates, and recommends a set of methods for the evaluation of model structure uncertainty, parametric uncertainty, and input data uncertainty in hydrodynamic modeling studies. A comprehensive review of uncertainty analysis methods is provided, and a set of widely applied methods is selected and implemented in real case studies, identifying the main limitations and benefits of their use in hydrodynamic studies. In particular, the following methods are investigated: the First Order Variance Analysis (FOVA) method, the Monte Carlo Uncertainty Analysis (MCUA) method, the Bayesian Monte Carlo (BMC) method, the Markov Chain Monte Carlo (MCMC) method and the Generalized Likelihood Uncertainty Estimation (GLUE) method. The results of this investigation indicate that the uncertainty estimates computed with FOVA are consistent with the results obtained by MCUA. In addition, the comparison of BMC, MCMC and GLUE indicates that BMC and MCMC provide similar estimations of the posterior parameter probability distributions, single-point parameter values, and uncertainty bounds, mainly due to the use of the same likelihood function and the low number of parameters involved in the inference process. However, the implementation of MCMC is substantially more complex than that of BMC, given that its sampling algorithm requires careful definition of auxiliary proposal probability distributions, along with their variances, to obtain parameter samples that effectively belong to the posterior parameter distribution. The analysis also suggests that the results of GLUE are inconsistent with those of BMC and MCMC. It is concluded that BMC is a powerful and parsimonious strategy for evaluating all sources of uncertainty in hydrodynamic modeling. Despite its computational requirements, BMC can be easily implemented in most practical applications.
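To make the BMC strategy concrete, here is a minimal sketch of prior sampling followed by likelihood weighting on a toy exponential-decay "model"; the model, Gaussian error assumption, prior range, and sample counts are all invented stand-ins for an actual hydrodynamic code.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(theta, t):
    """Toy stand-in for a hydrodynamic model run: exponential decay."""
    return np.exp(-theta * t)

t = np.linspace(0.0, 5.0, 40)
obs = model(0.8, t) + rng.normal(0.0, 0.05, t.size)  # synthetic observations
sigma = 0.05                                          # assumed error std

# Bayesian Monte Carlo: draw parameters from the prior, weight each
# draw by its Gaussian likelihood, then resample to get posterior bands.
theta_prior = rng.uniform(0.1, 2.0, 5000)
sims = np.array([model(th, t) for th in theta_prior])
log_lik = -0.5 * np.sum((sims - obs) ** 2, axis=1) / sigma**2
w = np.exp(log_lik - log_lik.max())
w /= w.sum()

idx = rng.choice(theta_prior.size, 2000, p=w)         # posterior resample
lo, hi = np.percentile(sims[idx], [2.5, 97.5], axis=0)
print(f"posterior mean theta = {np.sum(w * theta_prior):.3f};"
      f" mid-series 95% band width: {(hi - lo)[20]:.3f}")
```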
489

Deep Learning for Ordinary Differential Equations and Predictive Uncertainty

Yijia Liu (17984911) 19 April 2024 (has links)
Deep neural networks (DNNs) have demonstrated outstanding performance in numerous tasks such as image recognition and natural language processing. However, in dynamic systems modeling, the tasks of estimating and uncovering the potentially nonlinear structure of systems represented by ordinary differential equations (ODEs) pose a significant challenge. In this dissertation, we employ DNNs to enable precise and efficient parameter estimation of dynamic systems. In addition, we introduce a highly flexible neural ODE model to capture both nonlinear and sparse dependent relations among multiple functional processes. Nonetheless, DNNs are susceptible to overfitting and often struggle to accurately assess predictive uncertainty despite their widespread success across various AI domains. The challenge of defining meaningful priors for DNN weights and characterizing predictive uncertainty persists. In this dissertation, we present a novel neural adaptive empirical Bayes framework with a new class of prior distributions to address weight uncertainty.

In the first part, we propose a precise and efficient approach utilizing DNNs for estimation and inference of ODEs given noisy data. The DNNs are employed directly as a nonparametric proxy for the true solution of the ODEs, eliminating the need for numerical integration and resulting in significant computational time savings. We develop a gradient descent algorithm to estimate both the DNN solution and the parameters of the ODEs by optimizing a fidelity-penalized likelihood loss function. This ensures that the derivatives of the DNN estimator conform to the system of ODEs. Our method is particularly effective in scenarios where only a set of variables transformed from the system components by a given function are observed. We establish the convergence rate of the DNN estimator and demonstrate that the derivatives of the DNN solution asymptotically satisfy the ODEs determined by the inferred parameters. Simulations and real data analysis of COVID-19 daily cases are conducted to show the superior performance of our method in terms of accuracy of parameter estimates, system recovery, and computational speed.

In the second part, we present a novel sparse neural ODE model to characterize flexible relations among multiple functional processes. This model represents the latent states of the functions using a set of ODEs and models the dynamic changes of these states utilizing a DNN with a specially designed architecture and sparsity-inducing regularization. Our new model is able to capture both nonlinear and sparse dependent relations among multivariate functions. We develop an efficient optimization algorithm to estimate the unknown weights for the DNN under the sparsity constraint. Furthermore, we establish both algorithmic convergence and selection consistency, providing theoretical guarantees for the proposed method. We illustrate the efficacy of the method through simulation studies and a gene regulatory network example.

In the third part, we introduce a class of implicit generative priors to facilitate Bayesian modeling and inference. These priors are derived through a nonlinear transformation of a known low-dimensional distribution, allowing us to handle complex data distributions and capture the underlying manifold structure effectively. Our framework combines variational inference with a gradient ascent algorithm, which serves to select the hyperparameters and approximate the posterior distribution. Theoretical justification is established through both posterior and classification consistency. We demonstrate the practical applications of our framework through extensive simulation examples and real-world datasets. Our experimental results highlight the superiority of our proposed framework over existing methods, such as sparse variational Bayes and generative models, in terms of prediction accuracy and uncertainty quantification.
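The fidelity-penalized loss described in the first part can be illustrated compactly. The PyTorch sketch below fits a network proxy u(t) to noisy data from a toy scalar ODE du/dt = θu while penalizing the autograd derivative's deviation from the ODE; the architecture, collocation grid, penalty weight, and optimizer settings are illustrative assumptions, not the dissertation's actual configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy system du/dt = theta * u with unknown theta; data from theta = 1.3.
t_obs = torch.linspace(0, 2, 50).unsqueeze(1)
u_obs = torch.exp(1.3 * t_obs) + 0.05 * torch.randn_like(t_obs)
t_col = torch.linspace(0, 2, 200).unsqueeze(1).requires_grad_(True)

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
theta = torch.tensor(0.5, requires_grad=True)          # initial guess
opt = torch.optim.Adam(list(net.parameters()) + [theta], lr=1e-3)
lam = 1.0                                              # assumed penalty weight

for step in range(3000):
    opt.zero_grad()
    data_loss = ((net(t_obs) - u_obs) ** 2).mean()     # fit to observations
    u_col = net(t_col)                                 # collocation values
    du_dt = torch.autograd.grad(u_col.sum(), t_col, create_graph=True)[0]
    ode_loss = ((du_dt - theta * u_col) ** 2).mean()   # fidelity penalty
    (data_loss + lam * ode_loss).backward()
    opt.step()

print(f"estimated theta: {theta.item():.3f}")          # should approach 1.3
```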
490

Asset-liability modelling and pension schemes: the application of robust optimization to USS

Platanakis, Emmanouil, Sutcliffe, C. 08 May 2015 (has links)
This paper uses a novel numerical optimization technique – robust optimization – that is well suited to solving the asset–liability management (ALM) problem for pension schemes. It requires the estimation of fewer stochastic parameters, reduces estimation risk and adopts a prudent approach to asset allocation. This study is the first to apply it to a real-world pension scheme, and the first ALM model of a pension scheme to maximize the Sharpe ratio. We disaggregate pension liabilities into three components – active members, deferred members and pensioners – and transform the optimal asset allocation into the scheme's projected contribution rate. The robust optimization model is extended to include liabilities and used to derive optimal investment policies for the Universities Superannuation Scheme (USS), benchmarked against the Sharpe and Tint, Bayes–Stein and Black–Litterman models as well as the actual USS investment decisions. Over a 144-month out-of-sample period, robust optimization is superior to the four benchmarks across 20 performance criteria and has a remarkably stable asset allocation – essentially fix-mix. These conclusions are supported by six robustness checks.
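The paper's full model is not reproduced in the abstract, but the flavor of robust portfolio optimization can be sketched: protect the mean-return estimate with an ellipsoidal uncertainty set, which adds a norm penalty to a standard mean-variance program. Everything below (asset count, κ, λ, the toy data, the cvxpy formulation) is an invented illustration, not the authors' ALM model.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 4
mu = rng.normal(0.06, 0.02, n)                 # estimated mean returns (toy)
S = rng.normal(size=(n, n))
Sigma = S @ S.T / n + 0.01 * np.eye(n)         # toy covariance, PSD
Sigma = (Sigma + Sigma.T) / 2                  # enforce exact symmetry
kappa, lam = 0.5, 2.0                          # assumed robustness/risk weights

w = cp.Variable(n)
# Worst-case mean return over an ellipsoidal uncertainty set around mu:
# mu'w - kappa * ||Sigma^{1/2} w||, then penalize variance as usual.
robust_ret = mu @ w - kappa * cp.norm(np.linalg.cholesky(Sigma).T @ w)
prob = cp.Problem(cp.Maximize(robust_ret - lam * cp.quad_form(w, Sigma)),
                  [cp.sum(w) == 1, w >= 0])
prob.solve()
print(np.round(w.value, 3))
```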
