71

Klasifikační metody analýzy vrstvy nervových vláken na sítnici / A Classification Methods for Retinal Nerve Fibre Layer Analysis

Zapletal, Petr January 2010 (has links)
This thesis deals with classification of the retinal nerve fibre layer. Texture features from six texture-analysis methods are used for the classification. Every method computes a feature vector from the input images, and this feature vector characterises each cluster (class). Classification is carried out by three supervised learning algorithms and one unsupervised learning algorithm. The first algorithm tested is Ho-Kashyap; the second is the Bayes classifier NDDF (Normal Density Discriminant Function); the third is the k-nearest-neighbour algorithm (k-NN); and the last classifier tested is the K-means algorithm, which belongs to clustering. For completeness, three methods for selecting training patterns for the supervised learning algorithms are implemented, based on the Repeated Random Subsampling Cross-Validation, K-Fold Cross-Validation and Leave-One-Out Cross-Validation algorithms. All algorithms are compared quantitatively in terms of classification error.
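The combination described in this abstract, a k-NN classifier evaluated with K-fold cross-validation, is easy to sketch. The following pure-Python example uses toy 2-D feature vectors as a stand-in for the thesis's texture features; the data, dimensions and parameter choices are illustrative assumptions, not taken from the thesis:

```python
import math
import random
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (point, label) pairs, where point is a coordinate tuple.
    """
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

def k_fold_error(data, k_folds=5, k=3):
    """Estimate classification error with K-fold cross-validation.

    Shuffles `data` in place, splits it into k_folds folds, and averages
    the misclassification rate over the held-out folds.
    """
    random.shuffle(data)
    folds = [data[i::k_folds] for i in range(k_folds)]
    errors = 0
    for i in range(k_folds):
        test = folds[i]
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        errors += sum(knn_predict(train, x, k) != y for x, y in test)
    return errors / len(data)
```

The same harness would accept the thesis's other classifiers in place of `knn_predict`, since cross-validation only needs a train/predict interface.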
72

A risk-transaction cost trade-off model for index tracking

Singh, Alex January 2014 (has links)
This master's thesis considers and evaluates a few different risk models for stock portfolios, including an ordinary sample covariance matrix, factor models and an approach inspired by random matrix theory. The risk models are evaluated by simulating minimum-variance portfolios and employing cross-validation. The Bloomberg+ transaction cost model is investigated and used to optimize portfolios of stocks with respect to a trade-off between the active risk of the portfolio and transaction costs. Further, a few different simulations are performed while using the optimizer to rebalance long-only portfolios. The optimization problem is solved using an active-set algorithm. A couple of approaches are shown that may be used to decide visually on a value for the risk-aversion parameter λ in the objective function of the optimization problem. The thesis concludes that there is a practical difference between the risk models evaluated: the ordinary sample covariance matrix is shown not to perform as well as the other models. It also shows that more frequent rebalancing is preferable to less frequent rebalancing. Finally, the thesis shows a peculiar behavior of the optimization problem: the optimizer does not rebalance all the way to 0 in simulations, even if enough time is provided, unless this is explicitly required by the constraints.
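To see how a sample covariance matrix feeds a minimum-variance portfolio, the two-asset case has a closed form: w1 = (σ2² − σ12) / (σ1² + σ2² − 2σ12). A minimal sketch, with invented return series (the thesis itself works with full stock universes, transaction costs and an active-set solver):

```python
def sample_cov(x, y):
    """Unbiased sample covariance of two return series of equal length."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

def min_variance_weights(r1, r2):
    """Closed-form minimum-variance weights for a two-asset portfolio.

    Minimizes w1^2*v1 + w2^2*v2 + 2*w1*w2*c subject to w1 + w2 = 1.
    """
    v1, v2 = sample_cov(r1, r1), sample_cov(r2, r2)
    c = sample_cov(r1, r2)
    w1 = (v2 - c) / (v1 + v2 - 2 * c)
    return w1, 1.0 - w1
```

As expected, the lower-variance asset receives most of the weight; with more assets the same objective is solved numerically, which is where the thesis's risk-aversion parameter λ and transaction-cost term enter.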
73

Assessing the Absolute and Relative Performance of IRTrees Using Cross-Validation and the RORME Index

DiTrapani, John B. 03 September 2019 (has links)
No description available.
74

Regression and time estimation in the manufacturing industry

Bjernulf, Walter January 2023 (has links)
In this thesis an analysis is performed of operation times for different-sized products at a manufacturing company. The thesis introduces and summarises most of the theory needed to perform regression, and covers a worked example in which three different regression models are learned, evaluated and analysed. Conformal prediction, currently a hot topic in machine learning, is also introduced and used in the worked example.
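Conformal prediction, mentioned in the abstract above, can be sketched in a few lines. This is a minimal split-conformal example in pure Python; the OLS base model and the toy data are illustrative assumptions, not the thesis's models:

```python
import math
import random

def fit_line(data):
    """Ordinary least squares fit of y = a + b*x on (x, y) pairs."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return lambda x: a + b * x

def split_conformal(data, alpha=0.1, seed=0):
    """Split-conformal regression: fit on one half of the data, take the
    (1 - alpha) quantile of absolute residuals on the other half, and
    return a function x -> (lo, hi) giving prediction intervals."""
    random.Random(seed).shuffle(data)
    half = len(data) // 2
    model = fit_line(data[:half])
    scores = sorted(abs(y - model(x)) for x, y in data[half:])
    k = math.ceil((len(scores) + 1) * (1 - alpha)) - 1
    q = scores[min(k, len(scores) - 1)]
    return lambda x: (model(x) - q, model(x) + q)
```

Under exchangeability, intervals built this way cover the true response with probability at least 1 − alpha, regardless of whether the base regression model is correct.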
75

Integration of Genome Scale Data for Identifying New Biomarkers in Colon Cancer: Integrated Analysis of Transcriptomics and Epigenomics Data from High Throughput Technologies in Order to Identifying New Biomarkers Genes for Personalised Targeted Therapies for Patients Suffering from Colon Cancer

Hassan, Aamir Ul January 2017 (has links)
Colorectal cancer is the third most common cancer and a leading cause of cancer deaths in Western industrialised countries. Despite recent advances in the screening, diagnosis and treatment of colorectal cancer, an estimated 608,000 people die every year from colon cancer. Our current knowledge of colorectal carcinogenesis indicates a multifactorial, multi-step process involving various genetic alterations and several biological pathways. The identification of molecular markers for early diagnosis and precise clinical outcome prediction in colon cancer is a challenging task because of tumour heterogeneity. This PhD thesis presents the molecular and cellular mechanisms leading to colorectal cancer. A systematic review of the literature is conducted on microarray gene-expression profiling, gene-ontology enrichment analysis, microRNA, systems biology and various bioinformatics tools. We aimed in this study to stratify colon tumours into molecularly distinct subtypes, to identify novel diagnostic targets and to predict reliable prognostic signatures for clinical practice using microarray expression datasets. We performed an integrated analysis of gene-expression data based on genetic, epigenetic and extensive clinical information using unsupervised learning, correlation and functional network analysis. As a result, we identified 267-gene and 124-gene signatures that can distinguish normal, primary and metastatic tissues, and that are involved in important regulatory functions such as immune response, lipid metabolism and peroxisome proliferator-activated receptor (PPAR) signalling pathways. For the first time, we also identify miRNAs that can differentiate primary colon tumours from metastatic ones, and a prognostic signature of grade and stage levels, which can be a major contributor to complex transcriptional phenotypes in a colon tumour.
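The correlation and network analysis mentioned above can be illustrated with a minimal co-expression sketch: genes whose expression profiles correlate strongly across samples get an edge in the network. The gene names and profiles below are invented for illustration, not data from the study:

```python
import math

def pearson(x, y):
    """Pearson correlation between two expression profiles of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def correlation_edges(profiles, threshold=0.9):
    """Return co-expression network edges between genes whose profiles
    correlate above `threshold` in absolute value.

    `profiles` maps gene name -> list of expression values per sample.
    """
    genes = list(profiles)
    return [(g, h)
            for i, g in enumerate(genes)
            for h in genes[i + 1:]
            if abs(pearson(profiles[g], profiles[h])) >= threshold]
```

In a real analysis the thresholded network would then be mined for functional modules, which is the role the functional network analysis plays in the abstract above.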
76

Sequential Adaptive Designs In Computer Experiments For Response Surface Model Fit

LAM, CHEN QUIN 29 July 2008 (has links)
No description available.
77

Three Essays in Inference and Computational Problems in Econometrics

Todorov, Zvezdomir January 2020 (has links)
This dissertation is organized into three independent chapters. In Chapter 1, I consider the selection of weights for averaging a set of threshold models. The existing model-averaging literature primarily focuses on averaging linear models; I consider threshold regression models. The theory developed in that chapter demonstrates that the proposed jackknife model averaging estimator achieves asymptotic optimality when the candidate models are all misspecified threshold models. The simulation study demonstrates that the jackknife model averaging estimator achieves the lowest mean squared error when contrasted against other model selection and model averaging methods. In Chapter 2, I propose a model averaging framework for the synthetic control method of Abadie and Gardeazabal (2003) and Abadie et al. (2010). The proposed estimator serves a twofold purpose. First, it reduces the bias in estimating the weights each member of the donor pool receives. Second, it accounts for model uncertainty in the program evaluation estimation. I study two variations of the model, one where model weights are derived by solving a cross-validation quadratic program and another where each candidate model receives equal weight. Next, I show how to apply the placebo study and the conformal inference procedure for both versions of my estimator. With a simulation study, I reveal the superior performance of the proposed procedure. In Chapter 3, which is co-authored with my advisor Professor Youngki Shin, we provide an exact computation algorithm for the maximum rank correlation estimator using the mixed integer programming (MIP) approach. We construct a new constrained optimization problem by transforming all indicator functions into binary parameters to be estimated and show that the transformation is equivalent to the original problem. Using a modern MIP solver, we apply the proposed method to an empirical example and Monte Carlo simulations. The results show that the proposed algorithm performs better than the existing alternatives. / Dissertation / Doctor of Philosophy (PhD)
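The jackknife model-averaging idea in Chapter 1 — choose averaging weights by minimizing leave-one-out prediction error — can be sketched for two candidate models. A grid search stands in here for the quadratic program, and the toy candidate models (a constant and an OLS line) are illustrative assumptions, not the chapter's threshold models:

```python
def fit_mean(data):
    """Candidate model 1: always predict the sample mean of y."""
    m = sum(y for _, y in data) / len(data)
    return lambda x: m

def fit_line(data):
    """Candidate model 2: ordinary least squares y = a + b*x."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return lambda x: a + b * x

def jackknife_weight(data, grid=101):
    """Weight w on fit_line (and 1 - w on fit_mean) that minimizes the
    leave-one-out squared error of the averaged prediction."""
    def loo(fitter):
        # Prediction for each point by a model fitted on all other points.
        return [fitter(data[:i] + data[i + 1:])(x)
                for i, (x, _) in enumerate(data)]
    p_line, p_mean = loo(fit_line), loo(fit_mean)
    best_w, best_err = 0.0, float('inf')
    for step in range(grid):
        w = step / (grid - 1)
        err = sum((w * p_line[i] + (1 - w) * p_mean[i] - y) ** 2
                  for i, (_, y) in enumerate(data))
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

When the data are genuinely linear, the criterion pushes all the weight onto the line model; with misspecified candidates it mixes them, which is the situation the chapter's optimality theory addresses.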
78

Improving computational predictions of Cis-regulatory binding sites in genomic data

Rezwan, Faisal Ibne January 2011 (has links)
Cis-regulatory elements are the short regions of DNA to which specific regulatory proteins bind, and these interactions subsequently influence the level of transcription of associated genes by inhibiting or enhancing the transcription process. It is known that much of the genetic change underlying morphological evolution takes place in these regions, rather than in the coding regions of genes. Identifying these sites in a genome is a non-trivial problem. Experimental (wet-lab) methods for finding binding sites exist, but all have some limitations regarding their applicability, accuracy, availability or cost. Computational methods for predicting the position of binding sites, on the other hand, are less expensive and faster. Unfortunately, these algorithms perform rather poorly, some missing most binding sites and others over-predicting their presence. The aim of this thesis is to develop and improve computational approaches to the prediction of transcription factor binding sites (TFBSs) by integrating the results of computational algorithms and other sources of complementary biological evidence. Previous related work involved the use of machine learning algorithms for integrating predictions of TFBSs, with particular emphasis on the Support Vector Machine (SVM). This thesis has built upon, extended and considerably improved this earlier work. Data from two organisms were used. First, the relatively simple genome of yeast was used: in yeast the binding sites are fairly well characterised and are normally located near the genes that they regulate. The techniques used on the yeast genome were also tested on the more complex genome of the mouse. The regulatory mechanisms of the eukaryotic species mouse are known to be considerably more complex, and it was therefore interesting to investigate the techniques described here on such an organism.
The initial results were, however, not particularly encouraging: although a small improvement on the base algorithms could be obtained, the predictions were still of low quality. This was the case for both the yeast and mouse genomes. However, when the negatively labeled vectors in the training set were changed, a substantial improvement in performance was observed. The first change was to choose regions of the mouse genome a long way (distal) from any gene, more than 4000 base pairs away, as regions not containing binding sites. This produced a major improvement in performance. The second change was simply to use randomised training vectors, which contained no meaningful biological information, as the negative class. This gave some improvement on the yeast genome, but had a very substantial benefit for the mouse data, considerably improving on the aforementioned distal negative training data. In fact the resulting classifier was finding over 80% of the binding sites in the test set and, moreover, 80% of its predictions were correct. The final experiment used an updated version of the yeast dataset, with more state-of-the-art algorithms and more recent TFBS annotation data. Here it was found that using randomised or distal negative examples once again gave very good results, comparable to those obtained on the mouse genome. Another source of negative data was also tried for this yeast dataset, namely vectors taken from intronic regions. Interestingly, this gave the best results.
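The negative-class construction described above — pairing annotated positive vectors with randomised vectors — is easy to prototype. In this sketch a simple perceptron stands in for the SVM so the example stays dependency-free, and all data are synthetic assumptions rather than genomic features:

```python
import random

def make_training_set(positives, n_negative, dim, seed=0):
    """Pair annotated positive vectors (label +1) with randomised vectors
    (label -1), the negative-class strategy the abstract found most
    effective for the mouse data."""
    rng = random.Random(seed)
    negatives = [[rng.random() for _ in range(dim)] for _ in range(n_negative)]
    return [(v, 1) for v in positives] + [(v, -1) for v in negatives]

def train_perceptron(data, epochs=20, lr=0.1):
    """Train a linear classifier (a stand-in for the thesis's SVM) and
    return a function mapping a feature vector to +1 or -1."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            # Update weights on every misclassified (or zero-margin) example.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

The point of the sketch is the training-set construction, not the classifier: swapping the random negatives for distal or intronic vectors changes only `make_training_set`, mirroring the comparisons in the abstract.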
79

Infrared face recognition

Lee, Colin K. 06 1900 (has links)
Approved for public release, distribution is unlimited / This study continues a previous face recognition investigation using uncooled infrared technology. The database developed in an earlier study is further expanded to include 50 volunteers with 30 facial images from each subject. The automatic image reduction method reduces the pixel size of each image from 160 × 120 to 60 × 45. The study reexamines two linear classification methods: Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (LDA). Both PCA and LDA apply eigenvector and eigenvalue concepts. In addition, the Singular Value Decomposition based Snapshot method is applied to decrease the computational load. K-fold Cross-Validation is applied to estimate classification performance. Results indicate that the best PCA-based method (using all eigenvectors) produces an average classification performance of 79.22%. Incorporating PCA for dimension reduction, the LDA-based method achieves an average classification performance of 94.58%. Additional testing on unfocused images produces no significant impact on the overall classification performance. Overall, the results again confirm that uncooled IR imaging can be used to identify individual subjects in a constrained indoor environment. / Lieutenant, United States Navy
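PCA as used in this study amounts to an eigen-decomposition of the image covariance. A dependency-free sketch can recover the leading eigenvector (the first principal component) by power iteration on tiny toy "images"; the data and sizes are illustrative, not the study's 60 × 45 infrared images:

```python
import math

def leading_component(images, iters=200):
    """First principal component of a set of flattened images, found by
    power iteration on the mean-centred covariance. A small stand-in for
    the full eigen-decomposition used by a PCA-based face recogniser."""
    n, d = len(images), len(images[0])
    mean = [sum(img[j] for img in images) / n for j in range(d)]
    centred = [[img[j] - mean[j] for j in range(d)] for img in images]
    v = [1.0] * d
    for _ in range(iters):
        # Apply C = (1/n) X^T X to v without forming C explicitly,
        # the same trick that motivates the snapshot method for n << d.
        proj = [sum(row[j] * v[j] for j in range(d)) for row in centred]
        w = [sum(proj[i] * centred[i][j] for i in range(n)) / n
             for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v
```

Projecting each centred image onto the top few such components gives the low-dimensional features that the PCA and PCA+LDA classifiers in the abstract then compare.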
80

Assessment of Single Crystal X-ray Diffraction Data Quality

Krause, Lennard 02 March 2017 (has links)
No description available.
