311

Analytický nástroj pro generování bicích triggerů z downmix záznamu / Analysing Tool for Generating of Drum Triggers from Downmix Record

Konzal, Jan January 2020 (has links)
This thesis deals with the design and implementation of a tool for generating drum triggers from a downmix record. The work describes the preprocessing of the input audio signal and methods for the classification of strokes. The drum classification is based on the similarity of the signals in the frequency domain. Principal component analysis (PCA) was used to reduce the number of dimensions and to find the characteristic properties of the input data. A support vector machine (SVM) was used to classify the data into individual classes representing parts of the drum kit. The software was programmed in Matlab. The classification model was trained on a set of 728 drum samples covering seven categories (kick, snare, hi-hat, crash, ride, kick + hi-hat, snare + hi-hat). The classification accuracy of the system is 75%.
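As a rough sketch of the dimensionality-reduction step this abstract describes (frequency-domain stroke features compressed by PCA before SVM classification), the following uses NumPy on randomly generated stand-in data; the 64-bin feature vectors are hypothetical, not the thesis's actual Matlab pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in: 728 drum strokes, each described by a
# 64-bin frequency-domain feature vector (values are random here).
X = rng.normal(size=(728, 64))

# PCA via SVD: center the data, decompose, keep the leading components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # each stroke projected onto the first 2 PCs

# These low-dimensional scores would then be fed to an SVM trained on
# the seven stroke classes (kick, snare, hi-hat, ...).
print(scores.shape)  # (728, 2)
```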
312

Advanced Algorithms for Classification and Anomaly Detection on Log File Data : Comparative study of different Machine Learning Approaches

Wessman, Filip January 2021 (has links)
Background: A problematic area in today's large-scale distributed systems is the exponentially growing amount of log data. Finding anomalies by observing and monitoring this data with manual human inspection becomes progressively more challenging, complex and time-consuming, yet it is vital for keeping these systems available around the clock. Aim: The main objective of this study is to determine which Machine Learning (ML) algorithms are most suitable, and whether they can live up to the needs and requirements regarding optimization and efficiency in the log-data monitoring area, including which specific steps of the overall problem can be improved by using these algorithms for anomaly detection and classification on different real, provided data logs. Approach: An initial pre-study is conducted; logs are collected and then preprocessed with the log parsing tool Drain and regular expressions. The approach consists of a combination of K-Means + XGBoost and, respectively, Principal Component Analysis (PCA) + K-Means + XGBoost. These were trained, tested and individually evaluated with different metrics against two datasets, one a server data log and the other an HTTP access log. Results: Both approaches performed very well on both datasets, able to classify, detect and make predictions on log data events with high accuracy, high precision and low calculation time. It was further shown that when applied without dimensionality reduction (PCA), the results of the prediction model are slightly better, by a few percent. As for prediction time, there was marginally small to no difference when comparing the prediction time with and without PCA. Conclusions: Overall there are very small differences between the results with and without PCA, but in essence it is better not to use PCA and instead apply the original data to the ML models. The models' performance is generally very dependent on the data being applied: the initial preprocessing steps, its size and its structure, which affect the calculation time the most.
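A minimal sketch of the K-Means + boosting combination on synthetic data, assuming scikit-learn is available; GradientBoostingClassifier stands in for XGBoost here, and the features are random placeholders rather than real Drain-parsed log events.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
# Hypothetical parsed log events (e.g. template counts per time window);
# real feature extraction from raw logs is more involved than this.
X = rng.normal(size=(300, 12))

# Step 1: K-Means provides cluster pseudo-labels for the unlabelled logs.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2 (PCA variant): reduce dimensionality before the classifier.
X_red = PCA(n_components=5, random_state=0).fit_transform(X)

# Step 3: a gradient-boosted classifier (stand-in for XGBoost) learns
# to reproduce the cluster assignment on unseen events.
clf = GradientBoostingClassifier(random_state=0).fit(X_red[:200], labels[:200])
acc = clf.score(X_red[200:], labels[200:])
print(f"holdout accuracy: {acc:.2f}")
```

Dropping the PCA step reproduces the "without PCA" variant the abstract compares against.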
313

Detekce ischemie v EKG záznamech / Detection of ischemia in ECG

Tichý, Pavel January 2014 (has links)
This paper describes the manifestations of ischemia in ECG signals and summarizes methods for automatic detection of ischemia. Morphological features were calculated from ECG signals available from UBMI and statistically evaluated to select features appropriate for further automatic classification. A multilayer feed-forward neural network, designed in Matlab, was used for classification of heartbeats. Classification performance of up to 99.9% was obtained on the available dataset.
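A comparable heartbeat classifier can be sketched with scikit-learn's MLPClassifier (the thesis itself used a Matlab network); the morphological features and ischemia labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
# Synthetic stand-ins for per-beat morphological features
# (e.g. ST-segment level, T-wave amplitude) and ischemia labels.
X = rng.normal(size=(400, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# A small multilayer feed-forward network, trained on 300 beats
# and evaluated on the remaining 100.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X[:300], y[:300])
print(f"test accuracy: {clf.score(X[300:], y[300:]):.2f}")
```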
314

Traitement électrocinétique des sédiments de dragage multi-contaminés et évolution de leur toxicité / Electro-remediation of dredged multi-contaminated sediments and the evolution of their toxicity

Tian, Yue 15 December 2017 (has links)
Les travaux de cette thèse sont consacrés principalement à l'optimisation d'une méthode de remédiation électrocinétique (EK) comme une technologie appropriée pour le traitement de sédiments de dragage de faible perméabilité hydraulique et multi-contaminés (en éléments traces (ET), hydrocarbures aromatiques polycycliques (HAP) et polychlorobiphényles (PCB)). Cette étude porte également sur l’effet du traitement EK sur l’évolution de la toxicité des sédiments. Après une revue bibliographique, une seconde partie a été dédiée aux méthodes d’analyse des contaminants, avec un focus sur leur extraction de la matrice sédimentaire ; ainsi, une nouvelle méthode d’extraction par dispersion de la matrice solide (MSPD) a été développée, pour une extraction rapide et simultanée des HAP et de PCB et une purification de l’échantillon, qui s’est avérée plus efficace que la méthode d’extraction assistée par micro-ondes (MAE). Plusieurs études expérimentales (à différentes échelles) de remédiation électrocinétique ont été décrites dans une troisième partie ; ces études ont été menées sur un sédiment reconstitué ou des sédiments de dragage portuaire. De nombreuses combinaisons de tensioactifs et d’agents chélatants ont été testées comme agents d’amélioration pour abaisser simultanément la concentration en métaux (Cd, Cr, Cu, Pb, Zn) et des HAP/PCB. Le choix a été effectué en raison notamment de leur faible toxicité potentielle, en vue de pouvoir les appliquer ultérieurement pour une restauration sur site : (bio)surfactants (Rhamnolipides, Saponine et Tween 20) combinés avec des agents chélatants (acide citrique (CA) et EDDS). Les résultats obtenus montrent que les métaux (à l'exception de Cr) sont difficiles à extraire de ces sédiments de dragage portuaire à caractère réducteur, qui présentent une capacité tampon élevée, une perméabilité hydraulique très faible et une teneur en matière organique élevée. 
En revanche, les HAP et les PCB fournissent de meilleurs taux d'abattement (29,2% et 50,2%, respectivement). Dans une quatrième partie, l'efficacité du procédé EK a également été évaluée à travers l'évolution de la toxicité aiguë des sédiments traités sur les copépodes E. affinis exposés aux élutriats de sédiments. Les résultats ont montré que l'utilisation de CA, des biosurfactants et du Tween 20 n'a pas eu d'impact significatif sur la toxicité des sédiments traités. Cependant, les copépodes E. affinis étaient sensibles aux faibles valeurs de pH et aux conditions très oxydantes, ainsi qu'à la présence de Cu et, dans une moindre mesure, de Pb, à condition toutefois qu'ils soient rendus plus mobiles et biodisponibles. En revanche, la toxicité a été peu et même négativement corrélée aux concentrations des HAP et des PCB après le traitement EK, probablement en raison de la production de métabolites oxydés des HAP et des PCB, plus toxiques que les composés natifs. / This thesis research is mainly devoted to the optimization of an electrokinetic (EK) remediation process as a promising technology for treating multi-contaminated (trace metals, polycyclic aromatic hydrocarbons (PAHs) and polychlorinated biphenyls (PCBs)) dredged harbor sediments of low permeability. This study also investigates the effect of the EK treatment on the evolution of sediment toxicity. After a bibliographic review, a second part of this study was dedicated to the analytical methods carried out for the characterization of the sediment and its contaminants, particularly to their extraction from the sediment matrix; thus a new extraction method, matrix solid phase dispersion (MSPD), was developed for a fast and simultaneous extraction of both PAHs and PCBs and a sample purification. MSPD appeared more efficient than the microwave-assisted extraction (MAE) method. Thereafter many EK experiments (at different scales) were described in a third part. EK remediation tests were performed using a spiked model sediment or natural harbor dredged sediments. Many combinations of surfactants and chelators were tested as EK enhancing agents for decreasing simultaneously metal (Cd, Cr, Cu, Pb, Zn) and PAH/PCB levels. They were chosen notably for their low potential toxicity, with a view to using them for future on-site restoration: (bio)surfactants (rhamnolipids, saponin and Tween 20) combined with chelators (citric acid (CA) and EDDS). The results showed that metals (except Cr) were difficult to remove from this kind of dredged sediment owing to its reductive character, its high buffering capacity, its very low hydraulic permeability and its high organic matter content. However, PAHs and PCBs showed better removal levels (29.2% and 50.2%, respectively). In a fourth part, the efficiency of the EK process was also assessed by measuring the evolution of the acute toxicity of the treated sediment on E. affinis copepods exposed to sediment elutriates. The results showed that using CA, biosurfactants or Tween 20 as enhancing agents did not significantly impact the toxicity of the treated sediment. However, E. affinis copepods were significantly sensitive to low pH values and oxidative conditions, to Cu, and to a lesser extent to Pb amounts, if they were transformed into more mobile and bioavailable forms. In contrast, acute toxicity was only slightly and even negatively correlated to PAH and PCB amounts after EK treatment, probably due to the production of oxidized metabolites of PAHs and PCBs, more toxic than the parent compounds.
315

Análise de componentes principais aplicada a avaliação de atributos de agregados na separação sólido líquido /

Almeida, Thaís de. January 2020 (has links)
Orientador: Rodrigo Braga Moruzzi / Resumo: Água de qualidade, livre de poluentes e patógenos é um recurso humano necessário e valioso. As contaminações por fontes naturais e antrópicas podem ameaçar a qualidade desses cursos d'água, fazendo-se necessário um tratamento prévio antes de ser disponibilizada para abastecimento público. Com objetivo de eliminação de contaminantes e impurezas diversos processos e operações de tratamento físico/químico são utilizados, como a coagulação, a floculação, e processos de separação sólido/líquido. Para a avaliação do padrão de qualidade final da água pós tratamento são necessários índices de monitoramento, que podem ser obtidos através de métodos diretos e/ou indiretos. Os métodos diretos de características físicas e morfológicas têm ganhado cada vez mais atenção entre os estudos da área. Seus parâmetros, como tamanho das partículas e estrutura de fractal têm sido um novo recurso para a temática floculação. Buscando maior entendimento sobre os principais fatores que contribuem para a separação dos agregados de fractal, e consequentemente melhor eficiência de remoção, o presente estudo teve como objetivo investigar o desempenho da Sedimentação Gravitacional e da Flotação por Ar Dissolvido, e suas relações com as características físicas das partículas floculentas a partir da análise das principais variáveis que interferiram nos processos. Para tal, foram investigadas em escala de laboratório quatro diferentes águas preparadas com ácido húmico, caulinita e coaguladas com Sulfato de Alumíni... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: Quality water, free of pollutants and pathogens, is a necessary and valuable human resource. Contamination from natural and anthropogenic sources can affect the quality of these watercourses, requiring primary treatment before being available for public supply. In order to eliminate contaminants and impurities, several processes and physical/chemical treatment operations, such as coagulation, flocculation, and solid/liquid separation, are used. For the evaluation of the final quality of the water, monitoring indices are necessary, which can be obtained through direct or indirect methods. Direct methods based on physical and morphological characteristics have gained increasing attention in studies in this area. Parameters such as particle size and fractal structure have become a new resource for the flocculation literature. The aim of this study was to investigate the performance of Gravitational Sedimentation and Dissolved Air Flotation and their relationship to the physical characteristics of the floc particles, based on the analysis of the main variables that interfered in the processes. For this purpose, four different types of water prepared with humic acid and kaolin solution and coagulated with Aluminum Sulphate and Ferric Chloride were investigated at laboratory scale. The flocculation process was monitored by digital image analysis in order to obtain variables that help to determine the particles' physical characteristics, such as the Particle Size Distribution (DTP) and its representative β parameter as wel... (Complete abstract click electronic access below) / Mestre
316

Mobile systems for monitoring Parkinson's disease

Memedi, Mevludin January 2011 (has links)
This thesis presents the development and evaluation of IT-based methods and systems for supporting assessment of symptoms and enabling remote monitoring of Parkinson's disease (PD) patients. PD is a common neurological disorder associated with impaired body movements. Its clinical management regarding treatment outcomes and follow-up of patients is complex. In order to reveal the full extent of a patient's condition, there is a need for repeated and time-stamped assessments related to both the patient's perception of common symptoms and motor function. In this thesis, data from a mobile device test battery, collected during a three-year clinical study, was used for the development and evaluation of methods. The data was gathered from a series of tests, consisting of self-assessments and motor tests (tapping and spiral drawing). These tests were carried out repeatedly in a telemedicine setting during week-long test periods. One objective was to develop a computer method that would process traced spiral drawings and generate a score representing PD-related drawing impairments. The data processing part consisted of using the discrete wavelet transform and principal component analysis. When this computer method was evaluated against human clinical ratings, the results showed that it could perform quantitative assessments of drawing impairment in spirals comparatively well. As a part of this objective, a review of systems and methods for detecting handwriting and drawing impairment using touch screens was performed. The review showed that measures concerning forces, accelerations, and radial displacements were the most important in detecting fine motor movement anomalies. Another objective of this thesis work was to design and evaluate an information system for delivering assessment support information to the treating clinical staff for monitoring PD symptoms in their patients.
The system consisted of a patient node for data collection based on the mobile device test battery, a service node for data storage and processing, and a web application for data presentation. A system module was designed for compiling the test battery time series into summary scores on a test period level. The web application allowed adequate graphic feedback of the summary scores to the treating clinical staff. The evaluation results for this integrated system indicate that it can be used as a tool for frequent PD symptom assessments in home environments.
317

Reduced-order Combustion Models for Innovative Energy Conversion Technologies

Malik, Mohammad Rafi 01 February 2021 (has links) (PDF)
The present research seeks to advance the understanding and application of Principal Component Analysis (PCA)-based combustion modelling for practical systems. This work is a consistent extension of the standard PC-transport model and integrates Gaussian Process Regression (GPR) in order to increase the accuracy and the potential for size reduction offered by PCA. This new model, labelled PC-GPR, is successively applied and validated in a priori and a posteriori studies. In the first part of this dissertation, the PC-GPR model is validated in an a priori study based on steady and unsteady perfectly stirred reactor (PSR) calculations. The model showed great accuracy in the predictions for methane and propane using large kinetic mechanisms. In particular, for methane, the use of GPR made it possible to model the system accurately with only 2 principal components (PCs) instead of the 34 variables in the original GRI-3.0 kinetic mechanism. For propane, the model was applied to two different mechanisms consisting of 50 and 162 species, respectively. The PC-GPR model was able to achieve a very significant reduction, and the thermo-chemical state-space was accurately predicted using only 2 PCs for both mechanisms. The second part of this work is dedicated to the application of the PC-GPR model in the framework of non-premixed turbulent combustion in fully three-dimensional Large Eddy Simulations (LES). To this end, an a posteriori validation is performed on the Sandia flames D, E and F. The PC-GPR model showed very good accuracy in the predictions of the three flames when compared with experimental data using only 2 PCs, instead of the 35 species originally present in the GRI 3.0 mechanism. Moreover, the PC-GPR model was also able to handle the extinction and re-ignition phenomena in flames E and F, thanks to the unsteady data in the training manifold.
A comparison with the FPV model showed that the combination of the unsteady data set and the best controlling variables for the system defined by PCA provides an alternative to the use of steady flamelets parameterized by user-defined variables and combined with a PDF approach. The last part of this research focuses on the application of the PC-GPR model in a more challenging case, a lifted methane/air flame. Several key features of the model are investigated: the sensitivity to the training data set, the influence of the scaling methods, the issue of data sampling and the potential of a subgrid scale (SGS) closure. In particular, it is shown that the training data set must contain the effects of diffusion in order to accurately predict the different properties of the lifted flame. Moreover, the kernel density weighting method, used to address the issue of non-homogeneous data density usually found in numerical data sets, improved the predictions of the PC-GPR model. Finally, the integration of a subgrid scale closure into the PC-GPR model significantly improved the simulation results using a presumed PDF closure. A qualitative comparison with the FPV model showed that the results provided by the PC-GPR model are overall very comparable to the FPV results, with a reduced numerical cost as PC-GPR requires a 4D lookup table, instead of a 5D one in the case of FPV. / Le double défi de l'énergie et du changement climatique mettent en avant la nécessité de développer de nouvelles technologies de combustion, étant donné que les projections les plus réalistes montrent que la plus grande augmentation de l'offre d'énergie pour les décennies à venir se fera à partir de combustibles fossiles. Ceci représente donc une forte motivation pour la recherche sur l'efficacité énergétique et les technologies propres. Parmi celles-ci, la combustion sans flamme est un concept nouvellement développé qui permet d'obtenir des rendements thermiques élevés avec des économies de carburant tout en maintenant les émissions polluantes à un niveau très bas. L'intérêt croissant pour cette technologie est également motivé par sa grande flexibilité de carburant, ce qui représente une précieuse opportunité pour les carburants à faible valeur calorifique, les déchets industriels à haute valeur calorifique et les combustibles à base d'hydrogène. Etant donné que cette technologie est plutôt récente, elle est de ce fait encore mal comprise. Les solutions d'une application industrielle sont très difficiles à transposer à d'autres. Pour améliorer les connaissances dans le domaine de la combustion sans flamme, il est nécessaire de mener des études fondamentales sur ce nouveau procédé de combustion afin de favoriser son développement. En particulier, il y a deux différences majeures par rapport aux flammes classiques : d'une part, les niveaux de turbulence rencontrés dans la combustion sans flamme sont rehaussés, en raison des gaz de recirculation, réduisant ainsi les échelles de mélange. D'autre part, les échelles chimiques sont augmentées, en raison de la dilution des réactifs. Par conséquent, les échelles turbulentes et chimiques sont du même ordre de grandeur, ce qui conduit à un couplage très fort. Après un examen approfondi de l'état de l'art sur la modélisation de la combustion sans flamme, le coeur du projet représentera le développement d'une nouvelle approche pour le traitement de l'interaction turbulence/chimie pour les systèmes sans flamme dans le contexte des simulations aux grandes échelles (Large Eddy Simulations, LES). Cette approche sera fondée sur la méthode PCA (Principal Component Analysis) afin d'identifier les échelles chimiques de premier plan du processus d'oxydation. Cette procédure permettra de ne suivre sur la grille LES qu'un nombre réduit de scalaires non conservés, ceux contrôlant l'évolution du système. Des techniques de régression non-linéaires seront couplées avec PCA afin d'augmenter la précision et la réductibilité du modèle. Après avoir été validé grâce à des données expérimentales de problèmes simplifiés, le modèle sera mis à l'échelle afin de gérer des applications plus grandes, pertinentes pour la combustion sans flamme. Les données expérimentales et numériques seront validées en utilisant des indicateurs de validation appropriés pour évaluer les incertitudes expérimentales et numériques. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
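The core PC-GPR idea described in this abstract — project the thermo-chemical state onto a few principal components, then regress state variables on those components with Gaussian process regression — can be sketched as follows. This is a toy illustration with random stand-in data using scikit-learn, not the solver coupling used in the dissertation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)
# Hypothetical thermo-chemical states: 200 samples of 20 species
# mass fractions, plus a temperature-like dependent variable.
X = rng.random((200, 20))
temperature = 300.0 + 1500.0 * X[:, :3].mean(axis=1)

# Reduce the state to 2 principal components (cf. the 2 PCs vs. 34
# variables reported for methane), then fit a GPR on the PC scores.
pcs = PCA(n_components=2).fit_transform(X)
gpr = GaussianProcessRegressor().fit(pcs, temperature)

print(gpr.predict(pcs[:5]).shape)  # (5,)
```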
318

Advances in the analysis of event-related potential data with factor analytic methods

Scharf, Florian 04 April 2019 (has links)
Researchers are often interested in comparing brain activity between experimental contexts. Event-related potentials (ERPs) are a common electrophysiological measure of brain activity that is time-locked to an event (e.g., a stimulus presented to the participant). A variety of decomposition methods has been used for ERP data, among them temporal exploratory factor analysis (EFA). Essentially, temporal EFA decomposes the ERP waveform into a set of latent factors where the factor loadings reflect the time courses of the latent factors, and the amplitudes are represented by the factor scores. An important methodological concern is to ensure that the estimates of the condition effects are unbiased; the term variance misallocation has been introduced for the case of biased estimates. The aim of the present thesis was to explore how exploratory factor analytic methods can be made less prone to variance misallocation. These efforts resulted in a series of three publications in which variance misallocation in EFA was described as a consequence of the properties of ERP data, ESEM was proposed as an extension of EFA that acknowledges the structure of ERP data sets, and regularized estimation was suggested as an alternative to simple structure rotation with desirable properties. The presence of multiple sources of (co-)variance, the factor scoring step, and high temporal overlap of the factors were identified as major causes of variance misallocation in EFA for ERP data. It was shown that ESEM is capable of separating the (co-)variance sources and that it avoids biases due to factor scoring. Further, regularized estimation was shown to be a suitable alternative to factor rotation that is able to recover factor loading patterns in which only a subset of the variables follows a simple structure.
Based on these results, regSEMs and ESEMs with ERP-specific rotation have been proposed as promising extensions of the EFA approach that might be less prone to variance misallocation. Future research should provide a direct comparison of regSEM and ESEM, and conduct simulation studies with more physiologically motivated data generation algorithms.
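A small illustration of the temporal-EFA decomposition described above, using scikit-learn's FactorAnalysis with varimax rotation on synthetic "ERP" data; the two Gaussian-shaped time courses and the trial count are invented for the example.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
# Synthetic ERP data: 200 trials x 50 time points, generated from two
# latent components with distinct time courses plus noise.
t = np.linspace(0.0, 1.0, 50)
loadings = np.stack([np.exp(-((t - 0.3) / 0.05) ** 2),
                     np.exp(-((t - 0.6) / 0.08) ** 2)])
amplitudes = rng.normal(size=(200, 2))
erp = amplitudes @ loadings + 0.1 * rng.normal(size=(200, 50))

# Temporal EFA: estimated loadings recover the factors' time courses,
# transformed scores estimate single-trial amplitudes.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(erp)
print(fa.components_.shape, fa.transform(erp).shape)  # (2, 50) (200, 2)
```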
319

Principal Component Modelling of Fuel Consumption of Seagoing Vessels and Optimising Fuel Consumption as a Mixed-Integer Problem

Ivan, Jean-Paul January 2020 (has links)
The fuel consumption of a seagoing vessel is, through a combination of Box-Cox transforms and principal component analysis, reduced to a univariate function of the primary principal component, with mean model error −3.2% and error standard deviation 10.3%. In the process, a Latin-hypercube-inspired space-partitioning sampling technique is developed and successfully used to produce a representative sample used in determining the regression coefficients. Finally, a formal optimisation problem for minimising the fuel use is described. The problem is derived from a parametrised expression for the fuel consumption, and has only 3 (or 2 if simplified) free variables at each timestep. Some information has been redacted in order to comply with NDA restrictions. Most redactions are names (of vessels or otherwise), units, and in some cases (especially in figures) quantities. / Presentation was performed remotely using Zoom.
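The modelling chain this abstract outlines (Box-Cox transforms, PCA, then a univariate regression on the first principal component) can be sketched with NumPy and SciPy; the vessel variables and the fuel model below are invented stand-ins, not the redacted thesis data.

```python
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(2)
# Invented stand-ins for positive-valued operational variables.
speed = rng.uniform(5.0, 20.0, 500)   # hypothetical units
draft = rng.uniform(4.0, 9.0, 500)    # hypothetical units
fuel = 0.03 * speed**3 + 2.0 * draft + rng.normal(0.0, 1.0, 500)

# Box-Cox each predictor toward normality, then PCA via SVD.
Xt = np.column_stack([boxcox(speed)[0], boxcox(draft)[0]])
Xc = Xt - Xt.mean(axis=0)
pc1 = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][0]

# Univariate polynomial regression of fuel use on the first PC.
coef = np.polyfit(pc1, fuel, deg=2)
pred = np.polyval(coef, pc1)
print(f"mean model error: {(pred - fuel).mean() / fuel.mean():+.4f}")
```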
320

Rozlišení různých druhů vajec pomocí hmotnostní spektrometrie / Differentiation of different types of eggs by mass spectrometry

Švárová, Markéta January 2021 (has links)
The aim of the diploma thesis was to distinguish bird species based on the expected differences in the protein composition of eggs and their individual components using mass spectrometry. The obtained results could possibly be used to identify the animal origin of egg materials used in works of art. For the research, components of the eggs (yolk, egg white and their mixture) of nine available bird species were collected and dried - pheasant (Phasianus colchicus), domestic goose (Anser anser domesticus), domestic duck (Anas platyrhynchos domesticus), muscovy duck (Cairina moschata), chicken (Gallus gallus f. domestica) - four different specimens, Japanese quail (Coturnix japonica), greater rhea (Rhea americana), red-legged partridge (Alectoris rufa), and helmeted guinea fowl (Numida meleagris). The samples were digested with the enzyme trypsin and prepared using ZipTip pipette tips for subsequent measurements by MALDI-TOF (Matrix-Assisted Laser Desorption/Ionization - Time of Flight) mass spectrometry. The obtained data were evaluated by Principal Component Analysis (PCA). The method showed that most bird species can be distinguished by yolks (with an approximately 95% success rate) and more than half of the species by egg white proteins (success rate 83%) and by the egg white and yolk mixtures (success rate 80%). Two samples of...
