181

Application of supervised and unsupervised learning to analysis of the arterial pressure pulse

Walsh, Andrew Michael, Graduate School of Biomedical Engineering, UNSW January 2006 (has links)
This thesis presents an investigation of statistical methods applied to the analysis of the shape of the arterial pressure waveform. The arterial pulse is analysed by a selection of both supervised and unsupervised learning methods. Supervised learning methods are generally better known as regression; unsupervised learning methods seek patterns in data without the specification of a target variable. The theoretical relationship between arterial pressure and wave shape is first investigated through a transmission line model of the arterial tree. A meta-database of pulse waveforms obtained by the SphygmoCor device is then analysed by the unsupervised learning technique of Self Organising Maps (SOM). The map patterns indicate that the observed arterial pressures affect the wave shape in a way similar to that predicted by the theoretical model. A database of continuous arterial pressure obtained by catheter line during sleep is used to derive supervised models that enable estimation of arterial pressures from the measured wave shapes. Independent component analysis (ICA) is also used in a supervised learning methodology to show the theoretical plausibility of separating the pressure signals from unwanted noise components. The accuracy and repeatability of the SphygmoCor device are measured and discussed. Alternative regression models are introduced that improve on existing models in the estimation of central cardiovascular parameters from peripheral arterial wave shapes. Results of this investigation show that, from the information in the wave shape, it is possible in theory to estimate the continuous underlying pressures within the artery to a degree of accuracy acceptable to the Association for the Advancement of Medical Instrumentation. This could facilitate a new role for non-invasive sphygmographic devices: not only feature estimation, but use as alternatives to invasive arterial pressure sensors in the measurement of continuous blood pressure.
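As an illustration of the unsupervised step, a Self Organising Map can be trained on resampled pulse waveforms in plain numpy. This is a minimal sketch under assumed data shapes and training settings, not the configuration used in the thesis; after training, each node's weight vector is a prototype waveform, and inspecting where recordings land on the grid reveals pressure-related shape patterns.

    import numpy as np

    def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        # Minimal Self Organising Map; data is (n_samples, n_features).
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.normal(size=(h, w, data.shape[1]))
        yy, xx = np.mgrid[0:h, 0:w]
        coords = np.stack([yy, xx], axis=-1).astype(float)  # node positions
        n_steps, t = epochs * len(data), 0
        for _ in range(epochs):
            for x in rng.permutation(data):
                lr = lr0 * np.exp(-t / n_steps)        # decaying learning rate
                sigma = sigma0 * np.exp(-t / n_steps)  # shrinking neighbourhood
                # Best-matching unit: node whose weights are closest to x.
                d = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(d), d.shape)
                # Pull the BMU and its grid neighbours towards the sample.
                g = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1) / (2 * sigma**2))
                weights += lr * g[..., None] * (x - weights)
                t += 1
        return weights

    # Hypothetical usage: 200 waveforms, each resampled to 64 points.
    som = train_som(np.random.default_rng(1).normal(size=(200, 64)))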
182

Human Promoter Recognition Based on Principal Component Analysis

Li, Xiaomeng January 2008 (has links)
Master of Engineering / This thesis presents an innovative human promoter recognition model, HPR-PCA. Principal component analysis (PCA) is applied to DNA sequence context features for feature selection, and the prediction network is built with an artificial neural network (ANN). A thorough literature review of the relevant topics in the promoter prediction field is also provided. As the main technique of HPR-PCA, the application of PCA to feature selection is developed first. In order to find informative and discriminative features for effective classification, PCA is applied to different n-mer promoter and exon combined frequency matrices, and the principal components (PCs) of each matrix are used to construct a new feature space. ANN-based classifiers are used to test the discriminability of each feature space. Finally, the 3- and 5-mer feature matrix is selected as the context feature in this model. Two proposed schemes of the HPR-PCA model are discussed and the implementations of the sub-modules in each scheme are introduced. The context features selected by PCA are used to build three promoter and non-promoter classifiers. CpG-island modules are embedded into the models in different ways. In the comparison, Scheme I obtains better prediction results on two test sets, so it is adopted as the HPR-PCA model for further evaluation. Three existing promoter prediction systems are compared with HPR-PCA on three test sets, including the chromosome 22 sequence. The performance of HPR-PCA is outstanding compared to the other systems.
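To make the feature-construction step concrete, the sketch below builds k-mer frequency vectors, reduces them with PCA and trains a small neural network classifier. The toy sequences, the number of retained components and the network size are assumptions for illustration, not the settings selected in the thesis.

    import numpy as np
    from itertools import product
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    def kmer_freqs(seq, k):
        # Frequency vector over all 4**k DNA k-mers for one sequence.
        index = {''.join(p): i for i, p in enumerate(product('ACGT', repeat=k))}
        v = np.zeros(4 ** k)
        for i in range(len(seq) - k + 1):
            km = seq[i:i + k]
            if km in index:
                v[index[km]] += 1
        return v / max(len(seq) - k + 1, 1)

    # Toy stand-ins for promoter / non-promoter sequences (assumption).
    rng = np.random.default_rng(0)
    seqs = [''.join(rng.choice(list('ACGT'), 300)) for _ in range(200)]
    labels = rng.integers(0, 2, 200)

    X = np.array([kmer_freqs(s, 3) for s in seqs])
    X_pc = PCA(n_components=10).fit_transform(X)   # leading PCs as features
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                        random_state=0).fit(X_pc, labels)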
183

Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference

Edelman, Alan, Rao, N. Raj 01 1900 (has links)
Random matrix theory is now a big subject with applications in many disciplines of science, engineering and finance. This talk is a survey specifically oriented towards the needs and interests of a computationally inclined audience. We include the important mathematics (free probability) that permits the characterization of a large class of random matrices. We discuss how computational software is transforming this theory into practice by highlighting its use in the context of a stochastic eigen-inference application. / Singapore-MIT Alliance (SMA)
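For a computational taste of the free-probability results surveyed here, the Marchenko-Pastur law can be checked numerically: the eigenvalues of a large sample covariance matrix of white data fall inside a bulk whose edges free probability predicts exactly. The dimensions below are arbitrary choices.

    import numpy as np

    n, p = 2000, 500                    # samples, dimensions; aspect ratio c = p/n
    c = p / n
    X = np.random.default_rng(0).normal(size=(n, p))
    S = X.T @ X / n                     # sample covariance (true covariance = I)
    eig = np.linalg.eigvalsh(S)

    lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2   # MP support edges
    print(f"empirical range: [{eig.min():.3f}, {eig.max():.3f}]")
    print(f"MP prediction:   [{lo:.3f}, {hi:.3f}]")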
184

Noise removal from the electroencephalogram using blind source separation

Μπερεδήμας, Νικόλαος 11 May 2010 (has links)
The electroencephalogram (EEG) is a recording of potential differences on the scalp arising from the bioelectrical activity of the brain. With a history of more than 70 years, the value of the EEG as a clinical examination is well established, with the significant advantage of being a non-invasive method. However, the layers of tissue between the brain and the scalp, combined with the small amplitude of the brain rhythms (on the order of μV), make EEG recordings susceptible to numerous artifacts of extra-cerebral origin. For clinical examinations the artifact problem is manageable to some degree: it is reasonable to ask the subject to remain still and relaxed (though this is not always possible) in an electromagnetically shielded room whose cost can be amortized over the long term, and the recording can be extended for as long as the clinician needs to reach a reliable diagnosis. Such constraints, however, are unrealistic for ambitious commercial applications in the field of Brain Computer Interfaces, where solutions must be cheap, work satisfactorily in ordinary home or office environments, and not restrict the user. The approach should therefore be not so much to prevent artifacts as to identify and remove them. In the present work, artifact removal is treated as a Blind Source Separation problem: Independent Component Analysis techniques are applied to isolate the artifacts in separate Independent Components, making their subsequent removal straightforward. Besides the aforementioned Brain Computer Interface applications, this approach also has clear clinical value; it could be applied, for example, to uncooperative patients (e.g. young children) or in electrically noisy environments, freeing the electroencephalogram from the requirement of a well-controlled, electromagnetically shielded room.
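A toy version of this pipeline can be sketched with scikit-learn's FastICA: mix a stand-in brain rhythm with a spiky blink-like artifact, unmix, suppress the artifact component, and project back to the electrodes. All signals and the mixing matrix are invented for illustration.

    import numpy as np
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 10, 5000)
    brain = np.sin(2 * np.pi * 10 * t)                    # stand-in alpha rhythm
    blink = (np.abs(((0.5 * t) % 1) - 0.5) < 0.02) * 5.0  # sparse blink spikes
    S = np.c_[brain, blink]
    A = np.array([[1.0, 0.8], [0.6, 1.0]])  # assumed mixing at two electrodes
    X = S @ A.T

    ica = FastICA(n_components=2, random_state=0)
    comps = ica.fit_transform(X)             # estimated independent components
    # Flag the artifact component by its high kurtosis and zero it out.
    kurt = ((comps - comps.mean(0)) ** 4).mean(0) / comps.var(0) ** 2
    comps[:, np.argmax(kurt)] = 0
    cleaned = ica.inverse_transform(comps)   # artifact-free electrode signals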
185

Implementation of the FAST-ICA algorithm on the ADuC7020 microcontroller

Γκούσκου, Μαρία 01 February 2013 (has links)
The aim of this thesis is the implementation of the FAST-ICA algorithm, which performs Independent Component Analysis (ICA), on the ADuC7020 microcontroller from Analog Devices. The thesis consists of four chapters. The first chapter sets out the theoretical background on which Independent Component Analysis methods are based and presents some simple applications. The second chapter explains in detail the particular ICA method implemented by the FAST-ICA algorithm. The third chapter introduces basic concepts such as microcontrollers and embedded systems, and presents the ADuC7020 microcontroller and the operation of its peripherals in detail. Finally, the fourth chapter describes the programming of the ADuC7020 and explains the final program in which the FAST-ICA algorithm was implemented.
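The numerical core that has to fit on the microcontroller is the FastICA fixed-point iteration. A one-unit reference version in numpy, using the tanh nonlinearity, is sketched below; it is a desktop illustration of the algorithm, not the embedded C implementation described in the thesis.

    import numpy as np

    def fastica_one_unit(X, iters=200, tol=1e-8, seed=0):
        # X is (n_signals, n_samples); returns one unmixing vector w and
        # the whitened data Z, so that w @ Z estimates one source.
        X = X - X.mean(axis=1, keepdims=True)
        d, E = np.linalg.eigh(np.cov(X))          # whitening transform
        Z = E @ np.diag(d ** -0.5) @ E.T @ X
        w = np.random.default_rng(seed).normal(size=X.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(iters):
            wx = w @ Z
            # Fixed point: w <- E{z g(w'z)} - E{g'(w'z)} w, then renormalise.
            w_new = (Z * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < tol
            w = w_new
            if converged:
                break
        return w, Z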
186

Data clustering and applications

Σαρρής, Γιώργος 06 December 2013 (has links)
This thesis presents a detailed account of data clustering methods and of Principal Component Analysis (PCA). The aim is to study the effectiveness of applying PCA to data sets prior to clustering. More specifically, the experimental results produced by clustering artificial and real data sets before and after the use of PCA are compared empirically across several clustering algorithms. The first chapter presents the main concepts of data clustering along with the best-known clustering techniques. The second chapter summarises Principal Component Analysis and reviews several criteria for choosing the number of principal components to retain. The thesis concludes with experimental clustering results on artificial and real data sets, before and after the use of PCA, using different clustering techniques, together with discussion and concluding remarks.
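The kind of comparison reported in the thesis can be reproduced in miniature with scikit-learn: cluster a synthetic data set before and after PCA and score both against the ground truth. The data set, dimensions and component count below are assumptions for illustration.

    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    # Synthetic stand-in: 3 clusters living in 50 dimensions.
    X, y = make_blobs(n_samples=600, centers=3, n_features=50, random_state=0)

    labels_raw = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    X_pc = PCA(n_components=2).fit_transform(X)
    labels_pca = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_pc)

    print("ARI without PCA:", adjusted_rand_score(y, labels_raw))
    print("ARI with PCA:   ", adjusted_rand_score(y, labels_pca))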
187

On the development of control systems technology for fermentation processes

Loftus, John January 2017 (has links)
Fermentation processes play an integral role in the manufacture of pharmaceutical products. The Quality by Design initiative, combined with Process Analytical Technologies, aims to facilitate the consistent production of high quality products in the most efficient and economical way. The ability to estimate and control product quality from these processes is essential in achieving this aim. Large historical datasets are commonplace in the pharmaceutical industry, and multivariate methods based on PCA and PLS have been successfully used in a wide range of applications to extract useful information from such datasets. This thesis has focused on the development and application of novel multivariate methods to the estimation and control of product quality from a number of processes. The document is divided into four main parts. Firstly, the related literature and inherent mathematical techniques are summarised. Following this, the three main technical areas of work are presented. The first of these relates to the development of a novel method for estimating the quality of products from a proprietary process using PCA. The ability to estimate product quality is useful for identifying production steps that are potentially problematic and also increases process efficiency by ensuring that any defective products are detected before they undergo further processing. The proposed method is simple and robust and has been applied to two separate case studies, the results of which demonstrate the efficacy of the technique. The second area of work concentrates on the development of a novel method, based on PCA and associated statistics, for identifying the operational phases of batch fermentation processes. Knowledge of the operational phases of a process is beneficial from a monitoring and control perspective and allows a process to be divided into phases that can be approximated by a linear model. The devised methodology is applied to two separate fermentation processes and the results show the capability of the proposed method. The third area of work focuses on a performance evaluation of two multivariate algorithms, PLS and EPLS, in controlling the end-point product yield of fermentation processes. Control of end-point product quality is of crucial importance in many manufacturing industries, such as the pharmaceutical industry. Developing a controller based on historical and identification process data is attractive due to the simplicity of modelling and the increasing availability of process data. The methodology is applied to two case studies and the performance evaluated. From both a prediction and a control perspective, EPLS is seen to outperform PLS, which is important if modelling data are limited.
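As a sketch of the third strand of work, end-point yield can be predicted from historical batch data with ordinary PLS regression; the EPLS variant evaluated in the thesis is not reproduced here, and all numbers below are synthetic stand-ins for real batch records.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical batch data: one row of process measurements per batch,
    # y is the end-point product yield.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 30))      # 100 batches, 30 process variables
    y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=100)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
    print("R^2 on held-out batches:", pls.score(X_te, y_te))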
188

Combination of forecasts: a proposal using principal component analysis

Martins, Vera Lúcia Milani January 2014 (has links)
Obtaining more accurate forecasts is a constant requirement in times of immense data availability and ever more efficient computational resources. These conditions have enabled the development of many individual forecasting techniques, and of combination methods considered effective in reducing forecast errors. The development of new techniques, in turn, raises the questions of how many, and which, individual forecasting techniques to combine. The literature is not unanimous in answering these questions, but it does indicate that the correlation between forecast errors matters for the accuracy of the combination. This thesis therefore presents an alternative to current methods of combining forecasts that takes the correlation between forecast errors into account, and proposes a way to identify forecasting techniques that differ in how they model the characteristics of the data series. To identify groups of similar individual forecasting techniques, cluster analysis was applied to the errors produced by 15 forecasting techniques fitted to the same real data series with trend and seasonality; the result indicated three clusters. As an alternative to current methods of combining forecasts and selecting the appropriate number of techniques, Principal Component Analysis was used. The proposed method proved viable when compared with other combination methods and when applied to series with greater variability.
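One simple way to realise a PCA-based combination is to weight the individual forecasts by the loadings of their first principal component. This is a sketch under our own assumptions about the weighting scheme, not necessarily the exact proposal of the thesis.

    import numpy as np

    def pca_combine(forecasts):
        # forecasts: (n_periods, n_techniques); returns one combined series.
        F = forecasts - forecasts.mean(axis=0)
        _, _, Vt = np.linalg.svd(F, full_matrices=False)
        w = np.abs(Vt[0])
        return forecasts @ (w / w.sum())   # convex weights from the first PC

    # Three hypothetical individual forecasts of the same series:
    rng = np.random.default_rng(0)
    truth = np.sin(np.linspace(0, 6, 120))
    F = np.c_[truth + 0.20 * rng.normal(size=120),
              truth + 0.30 * rng.normal(size=120),
              truth + 0.25 * rng.normal(size=120)]
    print("MSE of combination:", np.mean((pca_combine(F) - truth) ** 2))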
189

Simplified plasma models based on reduced kinetics

Bellemans, Aurélie 01 December 2017 (has links) (PDF)
Performing high-fidelity plasma simulations remains computationally expensive because of their large dimension and complex chemistry. Atmospheric re-entry plasmas, for instance, involve hundreds of species in thousands of reactions described by detailed physical models. These models are very complex, as they capture the non-equilibrium phenomena due to finite-rate processes in the flow. Chemical non-equilibrium arises from the many dissociation, ionization and excitation reactions occurring at various time-scales; vibrational, rotational, electronic and translational temperatures characterize the flow and exchange energy between species, which leads to thermal non-equilibrium. With current computational resources, detailed three-dimensional simulations are still out of reach, and calculations using the full dynamics are often restricted to a zero- or one-dimensional description. A trade-off has to be made between the level of accuracy of the model and its computational cost. This thesis presents various methods for developing accurate reduced kinetic models for plasma flows. Starting from detailed chemistry, high-fidelity reductions are achieved through either physics-based techniques, such as binning methods and time-scale based reductions, or empirical techniques based on principal component analysis. As an original contribution to the existing methods, the physics-based techniques are combined with principal component analysis, uniting both communities. The different techniques are trained on a 34-species collisional-radiative model for argon plasma by comparing shock relaxation simulations. The best-performing method is applied to the large N-N2 mechanism, containing 9391 species and 23 million reactions, calculated by the NASA Ames Research Center. As a preliminary step, the system dynamics is analyzed to improve our understanding of the various processes occurring in plasma flows: the reactions are analyzed and classified according to their importance, and a deep investigation of the kinetics identifies the main variables and parameters characterizing the plasma, which can thereafter be used to develop or improve existing reductions. As a result, a novel coarse-grain model has been developed for argon by binning the electronic excited levels and the ionized species into 2 Boltzmann-averaged energy bins; the ground state is solved individually together with the free electrons, reducing the species mass conservation equations from 34 to 4. Principal component analysis has been transferred from the combustion community to plasma flows by investigating the Manifold-Generated and Score-PCA techniques. PCA identifies low-dimensional manifolds empirically, projecting the full kinetics onto its basis of principal components. A novel approach combines the binning techniques with PCA, yielding an optimized model for reducing the N3 rovibrational collisional model. / Doctorate in Engineering Sciences and Technology / info:eu-repo/semantics/nonPublished
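The coarse-graining step, lumping excited levels into Boltzmann-weighted energy bins, can be illustrated in a few lines. The level energies, degeneracies and bin boundaries below are placeholders, not the argon collisional-radiative data used in the thesis.

    import numpy as np

    def boltzmann_bin_energies(E, g, T, bins):
        # E: level energies [J], g: degeneracies, bins: lists of level indices.
        # Returns the Boltzmann-averaged energy of each bin at temperature T.
        kB = 1.380649e-23
        out = []
        for idx in bins:
            wts = g[idx] * np.exp(-E[idx] / (kB * T))   # Boltzmann weights
            out.append(wts @ E[idx] / wts.sum())
        return np.array(out)

    # Hypothetical 34-level system: ground state kept apart, the rest
    # lumped into two bins, mirroring the 34 -> 4 equation reduction.
    E = np.linspace(0.0, 2.5e-18, 34)
    g = np.ones(34)
    bins = [np.arange(1, 17), np.arange(17, 34)]
    print(boltzmann_bin_energies(E, g, T=10000.0, bins=bins))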
190

Liquidity measurements and the return-liquidity relationship : empirical evidence from Germany, the UK, the US and China

Bo, Yibo January 2017 (has links)
With reference to the existing literature on liquidity, three key questions have emerged over the last several decades: (i) How can liquidity be measured most efficiently? (ii) What is the empirical pattern in the relation between market liquidity and stock returns? (iii) What are the determinants of changes in the return-liquidity relationship? This thesis takes these three questions as its principal focus and studies them in three separate empirical chapters, using a substantial dataset covering all listed firms in four major economies (Germany, the UK, the US and China) from 2001 to 2013. The empirical results imply the following. (i) Transaction-cost-based liquidity measures, particularly the Quoted Proportional Spread, should be regarded as the most representative liquidity measurements. (ii) There is no evidence of a single fixed empirical pattern in the return-liquidity relationship across the four countries: higher market liquidity leads to higher stock returns in Germany and the UK, where it facilitates capital movement towards more efficient investments, while the opposite holds for the Chinese stock market, where the huge number of individual investors generates higher market liquidity through speculative trading rather than value-related investment, heightening market risk and thus depressing stock prices. (iii) There is weak evidence that stock market returns have positive determinant effects on both MLIs (the market-level impact of liquidity on stock returns) and FLIs (the firm-level impact of liquidity on stock returns), while only FLIs are positively correlated with stock market volatility and the inflation rate and negatively affected by the short-term interest rate.
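The measure singled out as most representative, the Quoted Proportional Spread, is straightforward to compute from bid and ask quotes; lower values mean tighter relative spreads and hence higher liquidity. A small sketch with invented quotes:

    import numpy as np

    def quoted_proportional_spread(bid, ask):
        # (ask - bid) scaled by the quote midpoint, per observation.
        bid, ask = np.asarray(bid, float), np.asarray(ask, float)
        mid = (ask + bid) / 2.0
        return (ask - bid) / mid

    # Hypothetical quotes for one stock on two days:
    print(quoted_proportional_spread([99.5, 100.1], [100.5, 100.9]))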
