About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Computational Methodology for Automatic Detection, Diagnosis, and Surgical Planning of Strabismus

ALMEIDA, João Dallyson Sousa de 05 July 2013 (has links)
Strabismus is a condition that affects approximately 4% of the population, causing aesthetic problems that are reversible at any age, as well as irreversible sensory changes that modify the mechanism of vision. The Hirschberg test is one of the existing tests for detecting the condition. Computer-aided detection and diagnosis systems have been used with some success to support health professionals; however, despite the increasing routine use of high-tech resources for diagnosis and therapy in ophthalmology, they are not yet a reality within the strabismus subspecialty. This thesis therefore presents a methodology to automatically detect and diagnose strabismus, and to propose its surgical plan, from digital images. The study is organized in seven steps: (1) face segmentation; (2) eye region detection; (3) eye location; (4) limbus and corneal light reflex location; (5) detection, (6) diagnosis, and (7) surgical planning of strabismus. The effectiveness of the methodology in indicating the diagnosis and the surgical plan was evaluated by the mean difference between its results and the expert's original indication. Patients were evaluated in the gaze positions PPO, INFRA, SUPRA, DEXTRO, and LEVO. The method was 88% accurate in identifying esotropias (ET), 100% in exotropias (XT), 80.33% in hypertropias (HT), and 83.33% in hypotropias (HoT). The overall mean diagnostic error was 5.6 and 3.83 for horizontal and vertical deviations, respectively. In planning surgeries of the medial rectus muscles, the mean error was 0.6 mm for recession and 0.9 mm for resection; for the lateral rectus muscles, it was 0.8 mm for recession and 1 mm for resection.
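The diagnostic step of such a pipeline rests on the Hirschberg geometry: the displacement of the corneal light reflex from the limbus center grows roughly linearly with the ocular deviation. Below is a minimal sketch of that relationship only, not the thesis's implementation; the 15 prism-diopters-per-millimetre ratio, the sign conventions, and the helper names are assumptions for illustration.

```python
# Minimal sketch of the Hirschberg geometry behind steps (5)-(6); not the
# thesis's code. Inputs are the limbus centers and corneal light-reflex
# positions that steps (1)-(4) are assumed to have located, in millimetres.
HIRSCHBERG_PD_PER_MM = 15.0  # classical approximation (assumed here)

def reflex_offset(limbus_center, reflex):
    """Reflex decentration (dx, dy) relative to the limbus center, in mm."""
    return (reflex[0] - limbus_center[0], reflex[1] - limbus_center[1])

def classify_deviation(fixing_offset, deviating_offset):
    """Deviation of the non-fixing eye in prism diopters, plus a coarse label.

    Subtracting the fixing eye's own offset corrects for angle kappa.
    Sign conventions are illustrative; in practice they depend on eye
    laterality and image orientation.
    """
    dx = (deviating_offset[0] - fixing_offset[0]) * HIRSCHBERG_PD_PER_MM
    dy = (deviating_offset[1] - fixing_offset[1]) * HIRSCHBERG_PD_PER_MM
    labels = []
    if dx > 0:
        labels.append("ET (esotropia)")
    elif dx < 0:
        labels.append("XT (exotropia)")
    if dy > 0:
        labels.append("HoT (hypotropia)")
    elif dy < 0:
        labels.append("HT (hypertropia)")
    return dx, dy, labels or ["orthotropic"]

# Example: the deviating eye's reflex sits 0.5 mm off-center horizontally
print(classify_deviation(reflex_offset((10.0, 8.0), (10.0, 8.0)),
                         reflex_offset((42.0, 8.0), (42.5, 8.0))))
# -> (7.5, 0.0, ['ET (esotropia)'])
```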
2

Support vector classification analysis of resting state functional connectivity fMRI

Craddock, Richard Cameron 17 November 2009 (has links)
Since its discovery in 1995, resting-state functional connectivity derived from functional MRI data has become a popular neuroimaging method for studying psychiatric disorders. Current methods for analyzing resting-state functional connectivity in disease involve thousands of univariate tests and the specification of regions of interest to employ in the analysis. These methods have several drawbacks. First, the mass univariate tests are insensitive to the information present in distributed networks of functional connectivity. Second, the null hypothesis testing employed to select functional connectivity differences between groups does not evaluate the predictive power of the identified functional connectivities. Third, the specification of regions of interest is confounded by experimenter bias in terms of which regions should be modeled, and by experimental error in the size and location of those regions. The objective of this dissertation is to improve the methods for functional connectivity analysis using multivariate predictive modeling, feature selection, and whole-brain parcellation. A method for applying support vector classification (SVC) to resting-state functional connectivity data was developed in the context of a neuroimaging study of depression. The interpretability of the obtained classifier was optimized using feature selection techniques that incorporate reliability information. The problem of selecting regions of interest for whole-brain functional connectivity analysis was addressed by clustering whole-brain functional connectivity data to parcellate the brain into contiguous, functionally homogeneous regions. This newly developed framework was applied to derive a classifier capable of correctly separating the functional connectivity patterns of patients with depression from those of healthy controls 90% of the time. The features most relevant to the classifier match those identified in previous studies, but also include several regions not previously implicated in the functional networks underlying depression.
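To make the shape of such an analysis concrete, here is a minimal sketch (not the dissertation's code): ROI time series are correlated, the upper triangle of the correlation matrix is vectorized into a feature vector, and a linear SVC with univariate feature selection is cross-validated. The data are synthetic and every dimension and hyperparameter is an assumption.

```python
# Hedged sketch of SVC on resting-state functional connectivity:
# correlate ROI time series, vectorize the upper triangle of the
# correlation matrix, select edges, and cross-validate a linear SVC.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_rois = 40, 150, 90

def connectivity_features(ts):
    """Upper triangle of the ROI x ROI correlation matrix as a 1D vector."""
    corr = np.corrcoef(ts.T)              # ts: (timepoints, ROIs)
    iu = np.triu_indices_from(corr, k=1)  # exclude the diagonal
    return corr[iu]

# Synthetic "resting-state" time series for each subject
X = np.stack([connectivity_features(rng.standard_normal((n_timepoints, n_rois)))
              for _ in range(n_subjects)])
y = np.arange(n_subjects) % 2             # balanced toy patient/control labels

clf = Pipeline([
    ("select", SelectKBest(f_classif, k=200)),  # keep the 200 most separable edges
    ("svc", SVC(kernel="linear", C=1.0)),
])
# Near chance (~0.5) on random data; the dissertation reports ~90% on
# real depression/control connectivity data.
print(cross_val_score(clf, X, y, cv=5).mean())
```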
3

Sparse Discrete Wavelet Decomposition and Filter Bank Techniques for Speech Recognition

Jingzhao Dai 11 June 2019 (has links)
Speech recognition is widely applied to speech-to-text translation, voice-driven commands, human-machine interfaces, and so on [1]-[8], and has become increasingly pervasive in modern life. To improve the accuracy of speech recognition, various algorithms such as artificial neural networks and hidden Markov models have been developed [1], [2].

In this thesis, speech recognition with various classifiers is investigated. The classifiers employed include the support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), and convolutional neural network (CNN). Two novel feature extraction methods, sparse discrete wavelet decomposition (SDWD) and bandpass filtering (BPF) based on the Mel filter banks [9], are developed and proposed. To match the diversity of the classification algorithms, both one-dimensional (1D) and two-dimensional (2D) features are required. The 1D features are arrays of power coefficients in frequency bands, dedicated to training the SVM, KNN, and RF classifiers, while the 2D features capture both frequency content and temporal variation: each 2D feature consists of the power values in the decomposed bands across consecutive speech frames. Most importantly, the 2D features, with geometric transformations applied, are adopted to train the CNN.

The experiments cover male and female speakers, drawn from a recorded data set as well as a standard data set. First, recordings with little noise and clear pronunciation are processed with the proposed feature extraction methods; after many trials and experiments on this data set, a high recognition accuracy is achieved. The feature extraction methods are then applied to the standard recordings, which have random characteristics with ambient noise and unclear pronunciation. The experimental results validate the effectiveness of the proposed feature extraction techniques.
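As a rough illustration of the 1D feature path described above (not the thesis's implementation), the sketch below computes per-subband log-energies from a discrete wavelet decomposition, with "sparsity" reduced to a naive keep-the-largest-coefficients cutoff, and feeds them to an SVM. The wavelet family, depth, threshold, and classifier settings are all assumptions.

```python
# Hedged sketch: energies of discrete-wavelet subbands (with a naive
# sparsification step) as 1D features for an SVM classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_band_energies(signal, wavelet="db4", level=5, keep_ratio=0.2):
    """Log-energy per subband after zeroing all but the largest coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:
        k = max(1, int(keep_ratio * band.size))     # crude sparsification
        thresh = np.sort(np.abs(band))[-k]
        sparse = np.where(np.abs(band) >= thresh, band, 0.0)
        feats.append(np.log1p(np.sum(sparse ** 2)))  # subband log-energy
    return np.array(feats)

rng = np.random.default_rng(1)
# Toy "utterances": two classes with different dominant frequencies
t = np.linspace(0.0, 1.0, 4000)
X = np.array([dwt_band_energies(np.sin(2 * np.pi * (200 + 300 * (i % 2)) * t)
                                + 0.3 * rng.standard_normal(t.size))
              for i in range(60)])
y = np.arange(60) % 2

clf = SVC(kernel="rbf").fit(X[:40], y[:40])
print((clf.predict(X[40:]) == y[40:]).mean())  # the two tones should separate
```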
4

Application of the Duality Theory: New Possibilities within the Theory of Risk Measures, Portfolio Optimization and Machine Learning

Lorenz, Nicole 28 June 2012 (has links)
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization, and machine learning. First we fix the notation and preliminaries needed within the thesis. After that we recall how the well-known Lagrange dual problem can be derived via the general perturbation theory, and give some generalized interior-point regularity conditions used in the literature. Using these facts we consider special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals and give strong duality results and optimality conditions under certain regularity conditions; we thereby complete and/or extend results in the literature, especially through the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem with single chance constraints and a convex objective function, derive its dual, give a strong duality result, and study a special case, showing how conjugate duality theory can be used for stochastic programming problems and extending results in the literature.

In the third chapter we consider convex risk and deviation measures. We present measures more general than those in the literature and derive formulas for their conjugate functions. Using these, we calculate dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally, we prove subdifferential formulas for measures and risk functions using the facts above.

The generalized deviation measures introduced in the third chapter are used to formulate the portfolio optimization problems considered in the fourth chapter. Their duals, strong duality results, and optimality conditions are derived using the general theory and the conjugate functions given in the second and third chapters, respectively. Analogous calculations are done for a portfolio optimization problem with single chance constraints using the general theory of the second chapter. We thus give an application of duality theory in the well-developed field of portfolio optimization.

We close the thesis by considering a general Support Vector Machines problem and deriving its dual via conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we obtain problems for Support Vector Regression and Support Vector Classification, extending results in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes the well-known Vapnik ε-insensitive loss and consider the optimization problems arising from it. Finally, we show how the general theory can be applied to a real data set, predicting concrete compressive strength with a special Support Vector Regression problem.
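For readers unfamiliar with the perturbation approach the abstract refers to, the following is the standard construction of the conjugate dual, stated here as background rather than as the thesis's exact formulation:

```latex
% Standard conjugate-dual construction via a perturbation function;
% not a verbatim restatement of the thesis's formulation.
% Embed the primal problem into a family of perturbed problems via
% $\Phi : X \times P \to \overline{\mathbb{R}}$ with $\Phi(x,0) = f(x)$:
\begin{align*}
  (P)\quad & \inf_{x \in X} \Phi(x, 0), \\
  (D)\quad & \sup_{p^* \in P^*} \bigl\{ -\Phi^*(0, p^*) \bigr\},
  \qquad
  \Phi^*(x^*, p^*) = \sup_{x \in X,\, p \in P}
    \bigl\{ \langle x^*, x \rangle + \langle p^*, p \rangle - \Phi(x, p) \bigr\}.
\end{align*}
% Weak duality $v(D) \le v(P)$ always holds; strong duality with dual
% attainment follows under a (generalized interior-point) regularity
% condition. Choosing
% $\Phi(x,p) = f(x) + \delta_{-C}\bigl(g(x) + p\bigr)$
% (with $\delta$ the indicator function) recovers the Lagrange dual
% $\sup_{\lambda \in C^*} \inf_{x \in X}
%   \{ f(x) + \langle \lambda, g(x) \rangle \}$.
```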
