1

Multivariate Exploration and Processing of Sensor Data: applications with multidimensional sensor systems

Petersson, Henrik January 2008 (has links)
A sensor is a device that transforms a physical, chemical, or biological stimulus into a readable signal. Sensors are an integral part of modern technology, and many researchers are working to advance sensor technology further. Sensor systems are becoming more and more complex and may contain a wide range of different sensors, where each may deliver a multitude of signals. Although the data generated by modern sensor systems contain a great deal of information, that information may not be clearly visible. Appropriate handling of the data becomes crucial to reveal what is sought, but unfortunately, that process is not always straightforward and there are many aspects to consider. Analysis of multidimensional sensor data has therefore become a science of its own. The topic of this thesis is signal processing of multidimensional sensor data. Surveys are given of methods to explore data and to use the data to quantify or classify samples. It is also discussed how to avoid introducing artifacts and how to compensate for sensor deficiencies. Special attention is paid to methods that are practically applicable to chemical gas sensors. The merits and limitations of chemical sensors are discussed, and it is argued that multivariate data analysis plays an important role when using such sensors. The contribution this thesis makes is primarily in techniques for dealing with difficulties related to the operation of sensors in applications. In the second paper, a method is suggested that aims at suppressing the negative effects caused by unwanted sensor-to-sensor differences. If such differences are not suppressed sufficiently, systems in which sensors occasionally must be replaced may degrade and lose performance. The strength of the suggested method is its relative ease of use in large-scale production of sensor components and in the integration of sensors into mass-market products. The third paper presents a method that facilitates and speeds up the process of assembling an array of sensors that is optimal for a particular application. The method combines multivariate data analysis with the Scanning Light Pulse Technique. In the first and fourth papers, the problem of source separation is studied. In two separate applications, one using gas sensors for combustion control and one using acoustic sensors for ground surveillance, it was identified that the sensors output mixtures of both interesting and interfering signals. By different means, the two papers apply and evaluate methods to extract the relevant information under such circumstances. / A sensor is a component that converts a physical, chemical, or biological quantity or quality into a readable signal. Sensors today form an important part of most high-technology products, and sensor research is an active field. The complexity of sensor-based systems is increasing, and it is becoming possible to register ever more types of measurement signals. These signals are not always directly interpretable, so signal processing becomes an essential tool for extracting the important information being sought. Signal processing of sensor signals is unfortunately not an uncomplicated procedure, and there are many aspects to consider. For this reason, signal processing and analysis of sensor signals has developed into a research field of its own. This thesis treats methods for analyzing complex multidimensional sensor signals. An introduction is given to methods for classifying and quantifying properties of measured objects on the basis of measurements. An overview is given of the effects that can arise from sensor imperfections, and methods for avoiding or mitigating the problems these imperfections can give rise to are discussed. Special weight is placed on methods that are directly applicable and useful for systems of chemical sensors. The thesis includes four papers, each of which illustrates how the described methods can be used in practical situations.
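For illustration only, the following minimal sketch (not taken from the thesis) shows the kind of multivariate exploration the abstract describes: projecting autoscaled multidimensional sensor-array data onto its first principal components. The data, array size, and noise model are invented for the example.

```python
# A minimal sketch of multivariate exploration of sensor-array data with
# PCA: rows are samples, columns are individual sensor signals.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: 100 gas exposures measured by a 16-signal sensor
# array, driven by two latent analyte concentrations plus noise.
latent = rng.normal(size=(100, 2))                  # hidden analyte levels
mixing = rng.normal(size=(2, 16))                   # sensor response pattern
X = latent @ mixing + 0.1 * rng.normal(size=(100, 16))

# Autoscale each sensor signal, then project onto the first two
# principal components to visualize the dominant structure.
X_scaled = StandardScaler().fit_transform(X)
scores = PCA(n_components=2).fit_transform(X_scaled)
print(scores[:5])  # sample coordinates in the reduced space
```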
2

Techniky klasifikace proteinů / Protein Classification Techniques

Dekrét, Lukáš January 2020 (has links)
The main goal of classifying proteins into families is to understand the structural, functional, and evolutionary relationships between individual proteins, which are not easily deducible from the available data. Since the structure and function of proteins are closely related, determination of function is mainly based on structural properties, which can be obtained relatively easily with current resources. Protein classification is also used in the development of specialized drugs, in the diagnosis of clinical diseases, and in personalized healthcare, which attracts considerable investment in the field. I created a new hierarchical tool for protein classification that achieves better results than some existing solutions. The implementation of the tool was preceded by a study of the properties of proteins, an examination of existing classification approaches, the creation of an extensive data set, experimentation, and the selection of the final classifiers for the hierarchical tool.
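As a rough illustration of the hierarchical idea described above (not the thesis tool itself), the following sketch routes each sample through a coarse-level classifier and then a per-group classifier. The class structure, choice of random forests, and input conventions (NumPy arrays, precomputed features) are assumptions of the example.

```python
# A minimal sketch of hierarchical classification: a top-level model routes
# each sample to a group, and a per-group model assigns the family label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class HierarchicalClassifier:
    def fit(self, X, group_labels, family_labels):
        # Level 1: predict the coarse group (e.g., protein superfamily).
        self.top = RandomForestClassifier(n_estimators=100).fit(X, group_labels)
        # Level 2: one family classifier per group, trained on that
        # group's samples only.
        self.leaf = {}
        for g in np.unique(group_labels):
            mask = group_labels == g
            self.leaf[g] = RandomForestClassifier(n_estimators=100).fit(
                X[mask], family_labels[mask])
        return self

    def predict(self, X):
        # Route each sample through its predicted group's classifier.
        groups = self.top.predict(X)
        return np.array([self.leaf[g].predict(x.reshape(1, -1))[0]
                         for g, x in zip(groups, X)])
```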
3

Metodologia computacional para detecção e diagnóstico automáticos e planejamento cirúrgico do estrabismo / Computational methodology for automatic detection, diagnosis, and surgical planning of strabismus

ALMEIDA, João Dallyson Sousa de 05 July 2013 (has links)
Strabismus is a condition that affects approximately 4% of the population, causing aesthetic problems (reversible at any age) and irreversible sensory changes that modify the mechanism of vision. The Hirschberg test is one of the existing tests for detecting this condition. Computer-aided detection and diagnosis systems are being used with some success to assist health professionals. However, despite the increasingly routine use of high-tech resources for diagnostic and therapeutic support in ophthalmology, such tools are not yet a reality within the strabismus subspecialty. This thesis therefore presents a methodology to automatically detect and diagnose strabismus, and to propose a surgical plan, from digital images. To do this, the study is organized in seven steps: (1) face segmentation; (2) eye region detection; (3) eye location; (4) location of the limbus and the corneal light reflex; (5) detection, (6) diagnosis, and (7) surgical planning of strabismus. The effectiveness of the methodology in indicating the diagnosis and the surgical plan was evaluated by the mean difference between the results it provided and the expert's original indication. Patients were evaluated in the gaze positions PPO, INFRA, SUPRA, DEXTRO, and LEVO. The method was 88% accurate in identifying esotropias (ET), 100% in exotropias (XT), 80.33% in hypertropias (HT), and 83.33% in hypotropias (HoT). The overall average error in diagnosis was 5.6 for horizontal deviations and 3.83 for vertical deviations. In planning surgeries of the medial rectus muscles, the average error was 0.6 mm for recession and 0.9 mm for resection. For the lateral rectus muscles, the average error was 0.8 mm for recession and 1 mm for resection.
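The Hirschberg principle behind steps (4) and (5) can be sketched as follows. This is a hypothetical illustration using OpenCV, not the thesis implementation, and the parameter values are assumptions for the example.

```python
# A minimal sketch of the Hirschberg idea: find the limbus as a circle,
# find the corneal light reflex as the brightest spot, and use their
# offset as a cue for ocular deviation.
import cv2
import numpy as np

def hirschberg_offset(eye_gray):
    """eye_gray: cropped grayscale image of a single eye (uint8 array)."""
    blurred = cv2.GaussianBlur(eye_gray, (9, 9), 2)

    # Limbus: strongest circular edge within a plausible radius range.
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=eye_gray.shape[1],
                               param1=100, param2=30,
                               minRadius=10, maxRadius=60)
    if circles is None:
        return None
    cx, cy, r = circles[0, 0]

    # Corneal light reflex: brightest pixel (the flash glint).
    _, _, _, (gx, gy) = cv2.minMaxLoc(blurred)

    # Offset of the glint from the limbus center, normalized by the limbus
    # radius; roughly proportional to the angular deviation of the eye.
    return ((gx - cx) / r, (gy - cy) / r)
```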
4

Support vector classification analysis of resting state functional connectivity fMRI

Craddock, Richard Cameron 17 November 2009 (has links)
Since its discovery in 1995, resting state functional connectivity derived from functional MRI data has become a popular neuroimaging method for studying psychiatric disorders. Current methods for analyzing resting state functional connectivity in disease involve thousands of univariate tests and the specification of regions of interest to employ in the analysis. These methods have several drawbacks. First, the mass univariate tests employed are insensitive to the information present in distributed networks of functional connectivity. Second, the null hypothesis testing employed to select functional connectivity differences between groups does not evaluate the predictive power of the identified functional connectivities. Third, the specification of regions of interest is confounded by experimenter bias in terms of which regions should be modeled and by experimental error in the size and location of these regions. The objective of this dissertation is to improve the methods for functional connectivity analysis using multivariate predictive modeling, feature selection, and whole brain parcellation. A method of applying support vector classification (SVC) to resting state functional connectivity data was developed in the context of a neuroimaging study of depression. The interpretability of the obtained classifier was optimized using feature selection techniques that incorporate reliability information. The problem of selecting regions of interest for whole brain functional connectivity analysis was addressed by clustering whole brain functional connectivity data to parcellate the brain into contiguous, functionally homogeneous regions. This newly developed framework was applied to derive a classifier capable of correctly separating the functional connectivity patterns of patients with depression from those of healthy controls 90% of the time. The features most relevant to the obtained classifier match those identified in previous studies, but also include several regions not previously implicated in the functional networks underlying depression.
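A minimal sketch of the basic approach described above (vectorized connectivity matrices fed to a linear SVC) might look like the following. The simulated data, subject counts, and cross-validation setup are assumptions of the example, not details from the dissertation.

```python
# A minimal sketch of SVC on resting state connectivity: vectorize each
# subject's ROI-by-ROI correlation matrix and train a linear classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_rois, n_tp = 40, 30, 120

X, y = [], []
for s in range(n_subjects):
    ts = rng.normal(size=(n_tp, n_rois))       # simulated ROI time series
    corr = np.corrcoef(ts, rowvar=False)       # functional connectivity
    iu = np.triu_indices(n_rois, k=1)          # upper triangle = features
    X.append(corr[iu])
    y.append(s % 2)                            # dummy patient/control label

scores = cross_val_score(SVC(kernel="linear"), np.array(X), np.array(y), cv=5)
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```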
5

Sparse Discrete Wavelet Decomposition and Filter Bank Techniques for Speech Recognition

Jingzhao Dai (6642491) 11 June 2019 (has links)
Speech recognition is widely applied to speech-to-text transcription, voice-driven commands, human-machine interfaces, and so on [1]-[8]. It has become increasingly pervasive in modern life. To improve the accuracy of speech recognition, various algorithms such as artificial neural networks and hidden Markov models have been developed [1], [2].

In this thesis work, speech recognition with various classifiers is investigated. The classifiers employed include the support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), and convolutional neural network (CNN). Two novel feature extraction methods, sparse discrete wavelet decomposition (SDWD) and bandpass filtering (BPF) based on Mel filter banks [9], are developed and proposed. To accommodate the different classification algorithms, both one-dimensional (1D) and two-dimensional (2D) features are obtained. The 1D features are arrays of band power coefficients and are used for training the SVM, KNN, and RF classifiers, while the 2D features capture both frequency content and temporal variation: each 2D feature consists of the power values in the decomposed bands across consecutive speech frames. Most importantly, the 2D features, after a geometric transformation, are adopted to train the CNN.

Speech recordings of both male and female speakers are drawn from a recorded data set as well as a standard data set. First, the proposed feature extraction methods are applied to the recordings with little noise and clear pronunciation; after many trials and experiments on this data set, high recognition accuracy is achieved. These feature extraction methods are then applied to the standard recordings, which have more variable characteristics, with ambient noise and less clear pronunciation. Extensive experimental results validate the effectiveness of the proposed feature extraction techniques.
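To illustrate the wavelet band-power idea behind the 1D features (a sketch, not the thesis code), one can decompose a speech frame with the discrete wavelet transform and take the log power of each band. The wavelet choice, decomposition level, and frame length below are assumptions for the example.

```python
# A minimal sketch of wavelet band-power features: decompose a frame with
# the DWT and use log power per band as a 1D feature vector, suitable for
# classifiers such as SVM, KNN, or RF.
import numpy as np
import pywt

def dwt_band_powers(frame, wavelet="db4", level=5):
    """frame: 1D numpy array holding one speech frame."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    # One log-power value per band (approximation + detail bands).
    return np.array([np.log(np.mean(c ** 2) + 1e-12) for c in coeffs])

# Hypothetical usage on a synthetic 25 ms frame at 16 kHz.
rng = np.random.default_rng(0)
frame = rng.normal(size=400)
print(dwt_band_powers(frame))   # 6 values: a5, d5, d4, d3, d2, d1
```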
6

Application of the Duality Theory

Lorenz, Nicole 15 August 2012 (has links)
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization, and machine learning. First we give some notation and preliminaries needed within the thesis. After that we recall how the well-known Lagrange dual problem can be derived via the general perturbation theory, and we give some generalized interior-point regularity conditions used in the literature. Using these facts, we consider some special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals and give strong duality results and optimality conditions under certain regularity conditions. Thus we complete and/or extend some results in the literature, especially by using the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem having single chance constraints and a convex objective function. We also derive its dual, give a strong duality result, and consider a special case of this problem. Thus we show how the conjugate duality theory can be used for stochastic programming problems and extend some results given in the literature. In the third chapter of this thesis we consider convex risk and deviation measures. We present some more general measures than the ones given in the literature and derive formulas for their conjugate functions. Using these, we calculate some dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally, we prove some subdifferential formulas for measures and risk functions using the facts above. The generalized deviation measures introduced in the previous chapter can be used to formulate the portfolio optimization problems we consider in the fourth chapter. Their duals, strong duality results, and optimality conditions are derived by using the general theory and the conjugate functions given in the second and third chapters, respectively. Analogous calculations are done for a portfolio optimization problem having single chance constraints, using the general theory given in the second chapter. Thus we give an application of the duality theory in the well-developed field of portfolio optimization. We close this thesis by considering a general Support Vector Machines problem and deriving its dual using the conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we obtain problems for Support Vector Regression and Support Vector Classification. We extend the results given in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes Vapnik's well-known ε-insensitive loss and consider the optimization problems that arise from it. We show how the general theory can be applied to a real data set; in particular, we predict concrete compressive strength using a special Support Vector Regression problem.
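For context, the perturbation framework the abstract recalls can be stated compactly. The following is a textbook sketch of conjugate duality, not the thesis's specific results.

```latex
% Conjugate duality via perturbations (textbook sketch).
Let $\Phi : X \times Y \to \overline{\mathbb{R}}$ be a perturbation function
with $\Phi(x,0) = f(x)$ for all $x \in X$. The primal problem and its
conjugate dual are
\[
  (P)\;\; \inf_{x \in X} \Phi(x,0),
  \qquad
  (D)\;\; \sup_{y^* \in Y^*} \bigl\{ -\Phi^*(0, y^*) \bigr\},
\]
where
\[
  \Phi^*(x^*, y^*) = \sup_{x \in X,\, y \in Y}
    \bigl\{ \langle x^*, x\rangle + \langle y^*, y\rangle - \Phi(x,y) \bigr\}.
\]
Weak duality $v(D) \le v(P)$ always holds. Choosing
$\Phi(x,y) = f(x) + \delta_{-C}\bigl(g(x) + y\bigr)$ for the constrained
problem $\inf\{ f(x) : g(x) \in -C \}$ recovers the classical Lagrange dual,
and interior-point regularity conditions (such as
$0 \in \operatorname{int} \mathrm{pr}_Y(\operatorname{dom}\Phi)$) guarantee
strong duality $v(D) = v(P)$.
```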
7

Application of the Duality Theory: New Possibilities within the Theory of Risk Measures, Portfolio Optimization and Machine Learning

Lorenz, Nicole 28 June 2012 (has links)
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization, and machine learning. First we give some notation and preliminaries needed within the thesis. After that we recall how the well-known Lagrange dual problem can be derived via the general perturbation theory, and we give some generalized interior-point regularity conditions used in the literature. Using these facts, we consider some special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals and give strong duality results and optimality conditions under certain regularity conditions. Thus we complete and/or extend some results in the literature, especially by using the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem having single chance constraints and a convex objective function. We also derive its dual, give a strong duality result, and consider a special case of this problem. Thus we show how the conjugate duality theory can be used for stochastic programming problems and extend some results given in the literature. In the third chapter of this thesis we consider convex risk and deviation measures. We present some more general measures than the ones given in the literature and derive formulas for their conjugate functions. Using these, we calculate some dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally, we prove some subdifferential formulas for measures and risk functions using the facts above. The generalized deviation measures introduced in the previous chapter can be used to formulate the portfolio optimization problems we consider in the fourth chapter. Their duals, strong duality results, and optimality conditions are derived by using the general theory and the conjugate functions given in the second and third chapters, respectively. Analogous calculations are done for a portfolio optimization problem having single chance constraints, using the general theory given in the second chapter. Thus we give an application of the duality theory in the well-developed field of portfolio optimization. We close this thesis by considering a general Support Vector Machines problem and deriving its dual using the conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we obtain problems for Support Vector Regression and Support Vector Classification. We extend the results given in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes Vapnik's well-known ε-insensitive loss and consider the optimization problems that arise from it. We show how the general theory can be applied to a real data set; in particular, we predict concrete compressive strength using a special Support Vector Regression problem.
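As a sketch of the kind of Support Vector Regression application mentioned at the end of the abstract, the following uses the standard ε-insensitive loss (the thesis works with a generalization of it) on synthetic stand-in data, not the actual concrete compressive strength data set.

```python
# A minimal sketch of epsilon-insensitive Support Vector Regression on
# synthetic data standing in for a regression task such as predicting
# concrete compressive strength.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 8))            # e.g., mixture proportions
y = 30 * X[:, 0] - 10 * X[:, 1] + 5 * np.sin(6 * X[:, 2]) \
    + rng.normal(scale=1.0, size=200)           # synthetic target

# Standardize features, then fit an RBF-kernel SVR; C and epsilon control
# the regularization and the width of the insensitive tube, respectively.
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
```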
