71

Klasifikační metody analýzy vrstvy nervových vláken na sítnici / Classification Methods for Retinal Nerve Fibre Layer Analysis

Zapletal, Petr January 2010 (has links)
This thesis deals with classification of the retinal nerve fibre layer. Texture features from six texture analysis methods are used for classification. Each method computes a feature vector from the input images, and this feature vector characterizes each cluster (class). Classification is performed with three supervised learning algorithms and one unsupervised learning algorithm. The first tested algorithm is Ho-Kashyap. The second is the Bayes classifier NDDF (Normal Density Discriminant Function). The third is the k-nearest-neighbour algorithm (k-NN), and the last tested classifier is the K-means algorithm, which belongs to clustering. For completeness, three methods for selecting training patterns for the supervised learning algorithms are implemented, based on Repeated Random Subsampling Cross-Validation, K-Fold Cross-Validation and Leave-One-Out Cross-Validation. All algorithms are compared quantitatively in terms of classification error.
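A minimal sketch of the kind of evaluation the abstract describes: k-NN classification assessed with K-fold cross-validation. The feature vectors and labels below are synthetic stand-ins; the thesis's actual features come from six texture-analysis methods and are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # stand-in texture feature vectors
y = rng.integers(0, 3, size=200)  # stand-in class labels

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0))
print("mean classification error: %.3f" % (1.0 - scores.mean()))
```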
72

A risk-transaction cost trade-off model for index tracking

Singh, Alex January 2014 (has links)
This master's thesis considers and evaluates a few different risk models for stock portfolios, including the ordinary sample covariance matrix, factor models and an approach inspired by random matrix theory. The risk models are evaluated by simulating minimum variance portfolios and employing cross-validation. The Bloomberg+ transaction cost model is investigated and used to optimize portfolios of stocks with respect to a trade-off between the active risk of the portfolio and transaction costs. Further, a few different simulations are performed while using the optimizer to rebalance long-only portfolios. The optimization problem is solved using an active-set algorithm. A couple of approaches are shown that may be used to visually choose a value for the risk aversion parameter λ in the objective function of the optimization problem. The thesis concludes that there is a practical difference between the risk models evaluated: the ordinary sample covariance matrix does not perform as well as the other models. It also shows that more frequent rebalancing is preferable to less frequent rebalancing. Finally, the thesis demonstrates a peculiar behaviour of the optimization problem: the optimizer does not rebalance all the way to zero in simulations, even if enough time is provided, unless this is explicitly required by the constraints.
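A hedged sketch of the trade-off the abstract describes: active risk against transaction costs, weighted by a risk aversion parameter lam. The thesis uses the Bloomberg+ cost model and an active-set solver; here a generic convex solver and a simple linear cost proxy stand in for both, and all data are synthetic.

```python
import cvxpy as cp
import numpy as np

n = 5
rng = np.random.default_rng(1)
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n
Sigma = (Sigma + Sigma.T) / 2          # stand-in covariance, symmetrized
b = np.full(n, 1.0 / n)                # benchmark (index) weights
w0 = rng.dirichlet(np.ones(n))         # current portfolio weights
cost = np.full(n, 0.002)               # stand-in linear cost per unit traded
lam = 0.5                              # risk aversion parameter

w = cp.Variable(n)
active_risk = cp.quad_form(w - b, Sigma)
tcost = cost @ cp.abs(w - w0)
prob = cp.Problem(cp.Minimize(lam * active_risk + tcost),
                  [cp.sum(w) == 1, w >= 0])  # long-only constraint
prob.solve()
print(np.round(w.value, 4))
```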
73

Assessing the Absolute and Relative Performance of IRTrees Using Cross-Validation and the RORME Index

DiTrapani, John B. 03 September 2019 (has links)
No description available.
74

Regression and time estimation in the manufacturing industry

Bjernulf, Walter January 2023 (has links)
In this thesis, an analysis is performed on operation times for different-sized products at a manufacturing company. The thesis introduces and summarises most of the theory needed to perform regression, and covers a worked example in which three different regression models are learned, evaluated and analysed. Conformal prediction, currently a hot topic in machine learning, is also introduced and used in the worked example.
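A minimal sketch of split conformal prediction, the technique named above. This is a generic illustration under assumed synthetic data, not the thesis's actual model of operation times.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(500, 1))      # e.g. product size (stand-in)
y = 3.0 * X[:, 0] + rng.normal(0, 1, 500)  # e.g. operation time (stand-in)

# Split into a proper training set and a calibration set.
X_tr, y_tr = X[:300], y[:300]
X_cal, y_cal = X[300:], y[300:]

model = LinearRegression().fit(X_tr, y_tr)
resid = np.abs(y_cal - model.predict(X_cal))  # calibration residuals

# The ceil((1 - alpha)(n + 1))/n quantile of the residuals
# gives a distribution-free 90% prediction interval.
alpha = 0.1
n_cal = len(resid)
q = np.quantile(resid, np.ceil((1 - alpha) * (n_cal + 1)) / n_cal)
pred = model.predict(np.array([[5.0]]))[0]
print(f"90% prediction interval: [{pred - q:.2f}, {pred + q:.2f}]")
```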
75

Integration of Genome Scale Data for Identifying New Biomarkers in Colon Cancer: Integrated Analysis of Transcriptomics and Epigenomics Data from High Throughput Technologies in Order to Identify New Biomarker Genes for Personalised Targeted Therapies for Patients Suffering from Colon Cancer

Hassan, Aamir Ul January 2017 (has links)
Colorectal cancer is the third most common cancer and the leading cause of cancer deaths in Western industrialised countries. Despite recent advances in the screening, diagnosis and treatment of colorectal cancer, an estimated 608,000 people die every year from colon cancer. Our current knowledge of colorectal carcinogenesis indicates a multifactorial, multi-step process involving various genetic alterations and several biological pathways. Identifying molecular markers with early diagnostic and precise clinical outcome value in colon cancer is challenging because of tumour heterogeneity. This PhD thesis presents the molecular and cellular mechanisms leading to colorectal cancer. A systematic review of the literature is conducted on microarray gene expression profiling, gene ontology enrichment analysis, microRNA, systems biology and various bioinformatics tools. The study aims to stratify colon tumours into molecularly distinct subtypes, to identify novel diagnostic targets and to predict reliable prognostic signatures for clinical practice using microarray expression datasets. We performed an integrated analysis of gene expression data based on genetic, epigenetic and extensive clinical information using unsupervised learning, correlation and functional network analysis. As a result, we identified 267-gene and 124-gene signatures that can distinguish normal, primary and metastatic tissues, and that are involved in important regulatory functions such as immune response, lipid metabolism and peroxisome proliferator-activated receptor (PPAR) signalling pathways. For the first time, we also identify miRNAs that can differentiate primary colon tumours from metastatic ones, together with a prognostic signature for grade and stage, which can be a major contributor to complex transcriptional phenotypes in a colon tumour.
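A generic sketch of the unsupervised step described above: clustering expression profiles to stratify samples into putative subtypes. Everything here (data shape, linkage choice, number of clusters) is an assumption for illustration, not the thesis's pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)
# 60 samples x 267-gene signature (stand-in expression matrix)
expr = rng.normal(size=(60, 267))

Z = linkage(expr, method="ward")                 # hierarchical clustering on samples
subtype = fcluster(Z, t=3, criterion="maxclust") # cut tree into 3 clusters
print(np.bincount(subtype)[1:])                  # samples per putative subtype
```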
76

Sequential Adaptive Designs In Computer Experiments For Response Surface Model Fit

LAM, CHEN QUIN 29 July 2008 (has links)
No description available.
77

Three Essays in Inference and Computational Problems in Econometrics

Todorov, Zvezdomir January 2020 (has links)
This dissertation is organized into three independent chapters. In Chapter 1, I consider the selection of weights for averaging a set of threshold models. The existing model averaging literature primarily focuses on averaging linear models, whereas I consider threshold regression models. The theory developed in that chapter demonstrates that the proposed jackknife model averaging estimator achieves asymptotic optimality when the candidate models are all misspecified threshold models. A simulation study demonstrates that the jackknife model averaging estimator achieves the lowest mean squared error when contrasted against other model selection and model averaging methods. In Chapter 2, I propose a model averaging framework for the synthetic control method of Abadie and Gardeazabal (2003) and Abadie et al. (2010). The proposed estimator serves a twofold purpose. First, it reduces the bias in estimating the weights each member of the donor pool receives. Second, it accounts for model uncertainty in the program evaluation estimation. I study two variations of the model: one where model weights are derived by solving a cross-validation quadratic program, and another where each candidate model receives equal weight. Next, I show how to apply the placebo study and the conformal inference procedure for both versions of the estimator. A simulation study reveals the superior performance of the proposed procedure. In Chapter 3, which is co-authored with my advisor Professor Youngki Shin, we provide an exact computation algorithm for the maximum rank correlation estimator using a mixed integer programming (MIP) approach. We construct a new constrained optimization problem by transforming all indicator functions into binary parameters to be estimated, and show that the transformation is equivalent to the original problem. Using a modern MIP solver, we apply the proposed method to an empirical example and Monte Carlo simulations. The results show that the proposed algorithm performs better than the existing alternatives. / Dissertation / Doctor of Philosophy (PhD)
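A hedged sketch of jackknife (leave-one-out) model averaging in the spirit of Chapter 1: candidate-model weights are chosen on the simplex to minimize the leave-one-out cross-validation criterion. Two simple linear candidate models stand in for the dissertation's threshold models; all data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 100
x = rng.normal(size=n)
y = np.where(x > 0, 1.5 * x, 0.5 * x) + rng.normal(0, 0.5, n)

def loo_preds(design):
    """Leave-one-out predictions from an OLS fit of y on `design`."""
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(design[mask], y[mask], rcond=None)
        preds[i] = design[i] @ beta
    return preds

designs = [np.column_stack([np.ones(n), x]),                    # linear candidate
           np.column_stack([np.ones(n), x, np.maximum(x, 0)])]  # kinked candidate
P = np.column_stack([loo_preds(d) for d in designs])

# Weights on the simplex minimizing the jackknife criterion.
cv_crit = lambda w: np.mean((y - P @ w) ** 2)
res = minimize(cv_crit, x0=np.full(2, 0.5), bounds=[(0, 1)] * 2,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print("JMA weights:", np.round(res.x, 3))
```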
78

The Design, Prototyping, and Validation of a New Wearable Sensor System for Monitoring Lumbar Spinal Motion in Daily Activities

Bischoff, Brianna 11 June 2024 (has links) (PDF)
Lower back pain is a widespread problem affecting millions worldwide, yet understanding its development and treating it effectively remain challenging. Treatment success is currently evaluated using patient-reported outcomes, which tend to be qualitative and subjective in nature, making objective measurement of success difficult. Wearable sensors can provide quantitative measurements, helping physicians improve care for countless individuals around the world. These sensors also have the potential to provide longitudinal data on daily motion patterns, aiding in monitoring the progress of treatment plans for lower back pain. In this work it was hypothesized that a new wearable sensor garment using high-deflection strain gauge technology, called the Z-SPINE System, would be capable of collecting biomechanical information that can detect characteristics of motion associated with chronic lower back pain, as compared to skin-adhered wearable sensor systems. The initial prototyping of the Z-SPINE System focused on optimizing the device's conformity to the skin, as well as the ease of use and comfort of the design. Preliminary motion capture tests concluded that a waist belt made of an elastic four-way stretch material with silicone patches and no ribbing had the highest skin conformity of the garment types tested, and further design decisions were made using this knowledge. A human subject study was conducted with 30 subjects who performed 14 functional movements with both the Z-SPINE System and the SPINE Sense System, a pre-existing wearable sensor system that uses the same high-deflection strain gauge technology and is adhered directly to the back. Multiple features were extracted from the strain sensor datasets for use in machine learning modeling, where a model was trained to distinguish the different movements from each other. The accuracy of the model was assessed using four category-number variations: two 4-category, one 7-category, and one 13-category variation. Four different machine learning models were used, with the random forest classifier generally performing best, yielding prediction accuracies of 85.95% for the SPINE Sense System data and 71.23% for the Z-SPINE System data in the 4-category tests. As an additional part of the human subject study, the usability of the Z-SPINE System was assessed. Each participant filled out a system usability scale questionnaire regarding their opinion of and experience with the system after using it; the average score was 83.4, with general feedback consisting of positive remarks about the comfort and ease of use of the current design, and suggestions for improving the battery placement and fit of the Z-SPINE System. It is concluded that a machine learning model trained on data from the Z-SPINE System can identify biomechanical motion with reasonable accuracy compared to a skin-adhered wearable sensor system when the number of categories is limited, and that the system is simple and intuitive to use.
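A minimal sketch of the classification step described above: a random forest trained to distinguish movement categories from features extracted from strain-sensor signals. The feature matrix and labels are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(420, 20))  # 30 subjects x 14 movements, 20 features (stand-in)
y = np.tile(np.arange(4), 105)  # 4 movement categories (stand-in labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy: %.2f%%" % (100 * clf.score(X_te, y_te)))
```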
79

Improving computational predictions of Cis-regulatory binding sites in genomic data

Rezwan, Faisal Ibne January 2011 (has links)
Cis-regulatory elements are short regions of DNA to which specific regulatory proteins bind; these interactions subsequently influence the level of transcription of associated genes by inhibiting or enhancing the transcription process. Much of the genetic change underlying morphological evolution is known to take place in these regions, rather than in the coding regions of genes. Identifying these sites in a genome is a non-trivial problem. Experimental (wet-lab) methods for finding binding sites exist, but all have limitations regarding their applicability, accuracy, availability or cost. Computational methods for predicting the position of binding sites, on the other hand, are less expensive and faster. Unfortunately, these algorithms perform rather poorly, some missing most binding sites and others over-predicting their presence. The aim of this thesis is to develop and improve computational approaches for the prediction of transcription factor binding sites (TFBSs) by integrating the results of computational algorithms with other sources of complementary biological evidence. Previous related work used machine learning algorithms to integrate predictions of TFBSs, with particular emphasis on the Support Vector Machine (SVM); this thesis has built upon, extended and considerably improved that earlier work. Data from two organisms were used. First, the relatively simple genome of yeast was used: in yeast, the binding sites are fairly well characterised and are normally located near the genes they regulate. The techniques used on the yeast genome were then tested on the more complex genome of the mouse, whose regulatory mechanisms are known to be considerably more complex, making it an interesting organism on which to investigate the techniques described here. The initial results were, however, not particularly encouraging: although a small improvement on the base algorithms could be obtained, the predictions were still of low quality. This was the case for both the yeast and mouse genomes. However, when the negatively labeled vectors in the training set were changed, a substantial improvement in performance was observed. The first change was to choose regions of the mouse genome distal to any gene (more than 4,000 base pairs away) as regions not containing binding sites; this produced a major improvement in performance. The second change was simply to use randomised training vectors, containing no meaningful biological information, as the negative class. This gave some improvement on the yeast genome, but had a very substantial benefit for the mouse data, considerably improving on the aforementioned distal negative training data: the resulting classifier found over 80% of the binding sites in the test set, and moreover 80% of its predictions were correct. The final experiment used an updated version of the yeast dataset, with more state-of-the-art algorithms and more recent TFBS annotation data. Here, using randomised or distal negative examples once again gave very good results, comparable to those obtained on the mouse genome. One further source of negative data was tried for the yeast data, namely vectors taken from intronic regions; interestingly, this gave the best results.
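A hedged sketch of the integration approach described above: an SVM trained on vectors of base-algorithm prediction scores, with randomised vectors as the negative class. All data here are synthetic assumptions; the real feature vectors are per-position scores from several TFBS prediction tools.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)
pos = rng.beta(5, 2, size=(300, 8))     # scores near annotated sites (stand-in)
neg = rng.uniform(0, 1, size=(300, 8))  # randomised negative-class vectors
X = np.vstack([pos, neg])
y = np.r_[np.ones(300), np.zeros(300)]

svm = SVC(kernel="rbf", C=1.0)
print("CV accuracy: %.3f" % cross_val_score(svm, X, y, cv=5).mean())
```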
80

Infrared face recognition

Lee, Colin K. 06 1900 (has links)
Approved for public release, distribution is unlimited / This study continues a previous face recognition investigation using uncooled infrared technology. The database developed in an earlier study is further expanded to include 50 volunteers with 30 facial images from each subject. An automatic image reduction method reduces the size of each image from 160 × 120 to 60 × 45 pixels. The study reexamines two linear classification methods, Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (LDA), both of which rely on eigenvector and eigenvalue concepts. In addition, the Singular Value Decomposition-based Snapshot method is applied to decrease the computational load, and K-fold cross-validation is applied to estimate classification performance. Results indicate that the best PCA-based method (using all eigenvectors) produces an average classification performance of 79.22%. Combined with PCA for dimension reduction, the LDA-based method achieves an average classification performance of 94.58%. Additional testing on unfocused images shows no significant impact on overall classification performance. Overall, the results again confirm that uncooled IR imaging can be used to identify individual subjects in a constrained indoor environment. / Lieutenant, United States Navy
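A minimal sketch of the pipeline described above: PCA for dimension reduction followed by LDA classification, evaluated with K-fold cross-validation. Random arrays stand in for the 60 × 45 infrared face images, so the accuracy printed here is meaningless chance-level output; only the structure mirrors the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(150, 60 * 45))  # stand-in: 50 subjects x 3 images, flattened
y = np.repeat(np.arange(50), 3)      # subject identity labels

pipe = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
scores = cross_val_score(pipe, X, y, cv=3)  # K-fold CV as in the study
print("mean accuracy: %.2f%%" % (100 * scores.mean()))
```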
