51

Prediction of the transaction confirmation time in Ethereum Blockchain

Singh, Harsh Jot 08 1900 (has links)
Blockchain offers a decentralized, immutable, transparent system of records. It offers a peer-to-peer network of nodes with no centralized governing entity, making it 'unhackable' and therefore more secure than traditional paper-based or centralized systems of record such as banks. While there are certain advantages to the paper-based recording approach, it does not work well with digital relationships where the data is in constant flux. Unlike traditional channels, governed by centralized entities, blockchain offers its users a certain level of anonymity by providing capabilities to interact without disclosing their personal identities and allows them to build trust without a third-party governing entity. Due to these characteristics, more and more users around the globe are inclined towards making digital transactions via blockchain rather than via rudimentary channels. Therefore, there is a dire need to gain insight into how these transactions are processed by the blockchain and how much time it may take for a peer to confirm a transaction and add it to the blockchain network.
In this thesis, we aim to introduce a novel approach that would allow one to estimate the time (in block time or otherwise) it would take for the Ethereum blockchain to accept and confirm a transaction to a block, using machine learning. We explore two of the most fundamental machine learning approaches, classification and regression, in order to determine which of the two yields the more accurate confirmation-time prediction for the Ethereum blockchain. More specifically, we explore the Naïve Bayes, Random Forest, and Multilayer Perceptron classifiers for the classification approach. Since most transactions in the network are confirmed well within the average confirmation time of two block confirmations, or 15 seconds, we also discuss ways to tackle the skewed-dataset problem encountered with the classification approach. We also aim to compare the predictive accuracy of two machine learning regression models, the Random Forest Regressor and the Multilayer Perceptron, against previously proposed statistical regression models under a set evaluation criterion; the objective is to determine whether machine learning offers a more accurate predictive model than conventional statistical models.
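The skewed-dataset problem mentioned in this abstract (most transactions fall into the fastest confirmation class) is a standard class-imbalance issue. One common remedy is random oversampling of the minority classes; a minimal stdlib-only sketch follows. The toy confirmation times and class names are illustrative assumptions, not data from the thesis.

```python
import random

def oversample(samples, labels, seed=0):
    """Randomly duplicate minority-class samples until every class
    is as frequent as the largest one (naive random oversampling)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picks)
        out_y.extend([y] * target)
    return out_x, out_y

# Toy data: 'fast' confirmations (seconds) dominate, as in the thesis.
X = [[10], [12], [11], [9], [14], [300], [250]]
y = ['fast'] * 5 + ['slow'] * 2
Xb, yb = oversample(X, y)
print(yb.count('fast'), yb.count('slow'))  # 5 5
```

After balancing, a classifier trained on `(Xb, yb)` no longer minimizes its loss by always predicting the majority class.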
52

Dr. Polopoly - Intelligent System Monitoring: An Experimental and Comparative Study of Multilayer Perceptrons and Random Forests for Error Diagnosis in a Network of Servers

Djupfeldt, Petter January 2016 (has links)
This thesis explores the potential of using machine learning to supervise and diagnose a computer system by comparing how Multilayer Perceptron (MLP) and Random Forest (RF) perform at this task in a controlled environment. The base of comparison is primarily how accurate they are in their predictions, but some thought is given to how cost-effective they are regarding time. The specific system used is a content management system (CMS) called Polopoly. The thesis details how training samples were collected by inserting Java proxies into the Polopoly system in order to time the inter-server method calls. Errors in the system were simulated by limiting individual servers' bandwidth, and a normal use case was simulated through the use of a tool called Grinder. The thesis then delves into the setup of the two algorithms and how their parameters were decided upon, before comparing their final implementations based on their accuracy. The accuracy is noted to be poor, with both being correct roughly 20% of the time, but the thesis discusses whether there could still be a use case for the algorithms at this level of accuracy. Finally, the thesis concludes that there is no significant difference (p > 0.05) between the MLP and RF accuracies, and in the end suggests that future work should focus either on comparing the algorithms further or on trying to improve the diagnosing of errors in Polopoly.
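The "no significant difference (p > 0.05)" conclusion above is the kind of result a two-proportion z-test produces when two classifiers score similarly. A stdlib-only sketch follows; the sample counts are hypothetical, since the thesis reports only the roughly 20% accuracies and the significance level.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value).
    Uses the pooled-proportion normal approximation."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: MLP right 21% of 500 trials, RF right 19% of 500 trials.
z, p = two_proportion_z(0.21, 500, 0.19, 500)
print(p > 0.05)  # True
```

With accuracies this close and samples this small, the test cannot reject the null hypothesis of equal accuracy, matching the thesis's conclusion.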
53

Forecasting Codeword Errors in Networks with Machine Learning / Prognostisering av kodordsfel i nätverk med maskininlärning

Hansson Svan, Angus January 2023 (has links)
With an increasing demand for rapid high-capacity internet, the telecommunication industry is constantly driven to explore and develop new technologies to ensure stable and reliable networks. To provide a competitive internet service in this growing market, proactive detection and prevention of disturbances are key for an operator, and analyzing network traffic to forecast disturbances is therefore a well-researched area. This study explores the advantages and drawbacks of implementing a long short-term memory (LSTM) model for forecasting codeword errors in a hybrid fiber-coaxial network, as well as the impact of training the model on multivariate versus univariate data. The performance of the LSTM model is compared with a multilayer perceptron (MLP) model. Analysis of the results shows that the LSTM model, in the vast majority of the tests, performs better than the MLP model. This aligns with the hypothesis that the LSTM model's ability to handle sequential data would make it superior to the MLP. However, the difference in performance between the models varies significantly with the characteristics of the data set used. On the set with heavy fluctuations in the sequential data, the LSTM model performs on average 44% better. When training the models on data sets with longer sequences of similar values and less volatile fluctuations, the results are much more alike: the LSTM model still achieves a lower error on most tests, but the difference is never larger than 7%. If a low error is the sole criterion, the LSTM model is the overall superior model; in a production environment, however, factors such as data storage capacity and model complexity should also be taken into consideration. When training the models on multivariate versus univariate datasets, the results are unambiguous: training on all three features (the ratios of uncorrectable and correctable codewords, and the signal-to-noise ratio) always performs better than using uncorrectable codewords as the only training data. This aligns with the hypothesis, based on the know-how of hybrid fiber-coaxial experts, that correctable codewords and signal-to-noise ratio have an impact on the occurrence of uncorrectable codewords.
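The "44% better" comparison above is a relative reduction in forecast error. A minimal sketch of that metric using mean absolute error (MAE); all forecast values below are made up for illustration, not taken from the thesis.

```python
def mae(y_true, y_pred):
    """Mean absolute error between actual and forecast values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def relative_improvement(err_ref, err_new):
    """How much lower err_new is than err_ref, as a fraction."""
    return (err_ref - err_new) / err_ref

# Hypothetical forecasts of uncorrectable-codeword counts per interval.
actual    = [5, 9, 2, 14, 7]
mlp_pred  = [8, 4, 6, 9, 12]
lstm_pred = [6, 8, 3, 12, 6]

e_mlp, e_lstm = mae(actual, mlp_pred), mae(actual, lstm_pred)
print(e_mlp, e_lstm, round(relative_improvement(e_mlp, e_lstm), 2))  # 4.4 1.2 0.73
```

The same arithmetic applied to the thesis's per-dataset errors yields the 44% and 7% figures it reports.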
54

An Investigation and Comparison of Machine Learning Methods for Selecting Stressed Value-at-Risk Scenarios

Tennberg, Moa January 2023 (has links)
Stressed Value-at-Risk (VaR) is a statistic used to measure an entity's exposure to market risk by evaluating possible extreme portfolio losses. Stressed VaR scenarios can be used as a metric to describe the state of the financial market and can be used to detect and counter procyclicality by allowing central clearing counterparties (CCPs) to increase margin requirements. This thesis aims to implement and evaluate machine learning methods (e.g., neural networks) for selecting stressed VaR scenarios in price-return stock datasets where one liquidity day is assumed. The models are implemented to counter the procyclical effects present in NASDAQ's dual lambda method, such that the selection maximises the total margin metric. Three machine learning models are implemented together with a labelling algorithm: a supervised and an unsupervised multilayer perceptron, and a random forest model. The labelling algorithm employs a deviation metric to differentiate between stressed VaR and standard scenarios. The models are trained and tested using 5000 scenarios of price-return values from historical stock datasets, and are evaluated using visual results, the confusion matrix, Cohen's kappa statistic, the adjusted Rand index, and the total margin metric. The total margin metric is computed using normalised profit-and-loss values from artificially generated portfolios. The implemented machine learning models and the labelling algorithm manage to counter the procyclical effects evident in the dual lambda method and select stressed VaR scenarios such that the selection maximises the total margin metric. The random forest model shows the most promise in classifying stressed VaR scenarios, since it manages to maximise the total margin overall.
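The labelling step described in this abstract can be sketched with a simple deviation rule: a scenario whose return lies more than k standard deviations from the sample mean is labelled stressed. The threshold k and the toy returns below are assumptions for illustration; the thesis's actual deviation metric may differ.

```python
import math

def label_scenarios(returns, k=2.0):
    """Label each scenario 'stressed' if it deviates from the mean
    by more than k population standard deviations, else 'standard'."""
    n = len(returns)
    mean = sum(returns) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in returns) / n)
    return ['stressed' if abs(r - mean) > k * std else 'standard'
            for r in returns]

# Mostly small daily returns plus two crash-like scenarios.
rets = [0.01, -0.02, 0.00, 0.01, -0.01, 0.02, -0.25, 0.01, -0.30, 0.00]
labels = label_scenarios(rets, k=1.5)
print(labels.count('stressed'))  # 2
```

The resulting labels can then serve as training targets for the supervised classifiers the thesis compares.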
55

Utvärdering av Multilayer Perceptron modeller för underlagsdetektering / Evaluation of Multilayer Perceptron models for surface detection

Midhall, Ruben, Parmbäck, Amir January 2021 (has links)
The number of devices connected to the internet, the Internet of Things (IoT), is constantly increasing; by 2035 there are estimated to be 1,000 billion Internet of Things devices in the world. As the number of devices increases, so does the load on the internet networks to which the devices are connected. The Internet of Things devices in our environment collect data that describes our physical surroundings, which is sent to the cloud for computation. To reduce the load on the networks, the calculations are instead done on the IoT devices themselves, so that no data needs to be sent over the internet; this is called edge computing. In edge computing, however, other challenges arise. IoT devices are often resource-constrained devices with limited computing capacity, which means that machine learning models intended to run with edge computing must be designed around the resources available on the device. In this work, we have evaluated different multilayer perceptron models for microcontrollers through a number of experiments. The machine learning models have been designed to detect road surfaces. The goal has been to identify how different parameters affect the machine learning systems; we have tried to maximize the performance and minimize the memory allocation of the models. The models have been designed to run on a microcontroller on the edge. The study evaluates two machine learning systems that were developed in a previous thesis: one that is a combination of binary classification models, and one multiclass classification system. The main focus of the work has been to train models for classifying road surfaces and then evaluate them.
The data collection was done with a microcontroller equipped with an accelerometer mounted on a bicycle. One of the systems achieves an accuracy of 93.1% for the classification of 3 road surfaces. The work also evaluates how much physical memory is required by the various machine learning systems; they required between 1.78 kB and 5.71 kB of physical memory.
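The kB figures above are dominated by the model parameters. A rough sizing sketch follows; the layer sizes are hypothetical, chosen only to land near the reported 1.78-5.71 kB range, and 32-bit floats are assumed.

```python
def mlp_param_count(layer_sizes):
    """Weights + biases of a fully connected MLP."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

def mlp_bytes(layer_sizes, bytes_per_param=4):
    """Parameter storage in bytes, assuming float32 weights."""
    return mlp_param_count(layer_sizes) * bytes_per_param

# e.g. 3 accelerometer features -> 64 hidden units -> 3 surface classes
sizes = [3, 64, 3]
print(mlp_param_count(sizes), mlp_bytes(sizes))  # 451 1804
```

About 1.8 kB for this hypothetical network, which shows why hidden-layer width is the main memory knob on a microcontroller.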
56

Mobile Machine Learning for Real-time Predictive Monitoring of Cardiovascular Disease

Boursalie, Omar January 2016 (has links)
Chronic cardiovascular disease (CVD) is increasingly becoming a burden for global healthcare systems. This burden can be attributed in part to traditional methods of managing CVD in an aging population, which involve periodic meetings between the patient and their healthcare provider. There is growing interest in developing continuous monitoring systems to assist in the management of CVD. Monitoring systems can utilize advances in wearable devices and health records, which provide minimally invasive methods to monitor a patient's health. Despite these advances, the algorithms deployed to automatically analyze the wearable sensor and health data are considered too computationally expensive to run on the mobile device; instead, current mobile devices continuously transmit the collected data to a server for analysis, at great computational and data-transmission expense. In this thesis a novel mobile system designed for monitoring CVD is presented. Unlike existing systems, the proposed system allows for the continuous monitoring of physiological sensors and data from a patient's health record, with analysis of the data directly on the mobile device using machine learning algorithms (MLAs) to predict an individual's CVD severity level. The system successfully demonstrated that a mobile device can act as a complete monitoring system without requiring constant communication with a server. A comparative analysis between the support vector machine (SVM) and multilayer perceptron (MLP), exploring the effectiveness of each algorithm for monitoring CVD, is also discussed. Both models were able to classify CVD risk, with the SVM achieving the highest accuracy (63%) and specificity (76%). Finally, unlike current systems, the resource requirements for each component of the system were evaluated. The MLP was found to be more efficient when running on the mobile device compared to the SVM.
The results of the thesis also show that the MLAs' complexity was not a barrier to deployment on a mobile device. / Thesis / Master of Applied Science (MASc)
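The accuracy (63%) and specificity (76%) reported above are both read directly off a confusion matrix. A minimal sketch of the two definitions; the counts below are invented solely to reproduce the reported rates and are not the thesis's data.

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all cases classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def specificity(tn, fp):
    """True-negative rate: how often non-severe cases are not flagged."""
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts for a binary severity split.
tp, tn, fp, fn = 44, 82, 26, 48
print(round(accuracy(tp, tn, fp, fn), 2), round(specificity(tn, fp), 2))  # 0.63 0.76
```

Reporting specificity alongside accuracy matters here because a monitor that rarely false-alarms (high specificity) can still miss many severe cases (the 48 false negatives above).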
57

PREDICTING GENERAL VAGAL NERVE ACTIVITY VIA THE DEVELOPMENT OF BIOPHYSICAL ARTIFICIAL INTELLIGENCE

LeRayah Michelle Neely-Brown (17593539) 11 December 2023 (has links)
The vagus nerve (VN) is the tenth cranial nerve that mediates most of the parasympathetic functions of the autonomic nervous system. The axons of the human VN comprise a mix of unmyelinated and myelinated axons, where ~80% of the axons are unmyelinated C fibers (Havton et al., 2021). Given that most VN axons are unmyelinated, there is a need to map the pathways of these axons to and from organs to understand their function(s) and whether C-fiber morphology or signaling characteristics yield insights into their functions. Developing a machine learning model that detects and predicts the morphology of VN single-fiber action potentials (SFAPs) based on select fiber characteristics, e.g., diameter, myelination, and position within the VN, allows us to more readily categorize the nerve fibers with respect to their function(s). Additionally, the features of this machine learning model could help inform peripheral neuromodulation devices that aim to restore, replace, or augment one or more specific functions of the VN that have been lost due to injury, disease, or developmental abnormalities.

We designed and trained four types of Multilayer Perceptron Artificial Deep Neural Networks (MLP-ANN) with 10,000 rat abdominal vagal C-fibers simulated via the peripheral neural interface model ViNERS. We analyzed the accuracy of each MLP-ANN's SFAP predictions by conducting normalized cross-correlation and morphology analyses against the ViNERS C-fiber SFAP counterparts. Our results showed that our best MLP predicted over 94% of the C-fiber SFAPs with strong normalized cross-correlation coefficients of 0.7 through 1 with the ViNERS SFAPs. Overall, this novel tool can use a C-fiber's biophysical characteristics (i.e., fiber diameter, fiber position on the x/y axis, etc.) to predict C-fiber SFAP morphology.
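The cross-correlation analysis mentioned above reduces, at zero lag, to the normalized cross-correlation coefficient between a predicted and a simulated waveform. A stdlib-only sketch on toy waveforms (not ViNERS output); a coefficient of 1 means the shapes match up to amplitude scaling and offset.

```python
import math

def ncc(a, b):
    """Zero-lag normalized cross-correlation of two equal-length signals.
    Returns a value in [-1, 1]; 1 means identical shape."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

wave = [0.0, 0.2, 1.0, -0.6, -0.1, 0.0]
scaled = [2 * v + 0.3 for v in wave]   # same shape, different gain/offset
print(round(ncc(wave, wave), 3), round(ncc(wave, scaled), 3))  # 1.0 1.0
```

This invariance to gain and offset is what makes NCC a morphology metric rather than an amplitude metric, which fits the thesis's 0.7-1 coefficient criterion.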
58

Predicting Location-Dependent Structural Dynamics Using Machine Learning

Zink, Markus January 2022 (has links)
Machining chatter is an undesirable phenomenon of material-removal processes and is hard to control or avoid. Its occurrence and extent essentially depend on the kinematics of the machine tool, which alter with the position of the Tool Centre Point. Research on chatter has been done widely, but rarely with respect to structural dynamics that change during manufacturing. This thesis applies intelligent methods to learn the underlying functions of the modal parameters (natural frequency, damping ratio, and mode shape) and is the first to define the dynamic properties of a system at this extent. To do so, it embraces three steps: first, the elaboration of the necessary dynamic parameters; second, the acquisition of the data via a simulation; and third, the prediction of the modal parameters with two kinds of machine learning techniques, Gradient Boosting Machine and Multilayer Perceptron. In total, it investigates three types of kinematics: cross bed, gantry, and overhead gantry. It becomes apparent that Light Gradient Boosting Machine outperforms Multilayer Perceptron throughout all studies: it achieves a prediction error of at most 1.7% for natural frequency and damping ratio for all kinematics. However, it cannot yet reliably predict the participation factor, which might originate in the complexity of the data and the data size. As expected, the error rises with noisy data and fewer measurement points, but to a tenable extent for both natural frequency and damping ratio.
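The "prediction error of at most 1.7%" above is a relative error; one common form is the mean absolute percentage error (MAPE). A sketch on hypothetical natural-frequency predictions in Hz (the values are illustrative, not from the thesis):

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(t - p) / abs(t)
                     for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical predicted vs. simulated natural frequencies (Hz).
true_hz = [120.0, 85.0, 240.0, 60.0]
pred_hz = [121.2, 84.0, 242.4, 59.4]
print(round(mape(true_hz, pred_hz), 2))  # 1.04
```

Using a percentage error lets the thesis compare accuracy across modes whose absolute frequencies differ by orders of magnitude.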
59

[en] MULTILAYER PERCEPTRON FOR CLASSIFYING POLYMERS FROM TENSILE TEST DATA / [pt] PERCEPTRON DE MÚLTIPLAS CAMADAS PARA A CLASSIFICAÇÃO DE POLÍMEROS A PARTIR DE DADOS DE ENSAIOS DE TRAÇÃO

HENRIQUE MONTEIRO DE ABREU 03 September 2024 (has links)
[en] The tensile test is the mechanical test most widely applied to obtain the mechanical properties of polymers, which can be used in the classification of polymeric materials. A tensile test yields the stress-strain curve, from which mechanical properties such as the modulus of elasticity, toughness, and resilience of the material are obtained. These properties can be used to identify equivalent mechanical behaviors in polymeric materials, whether for distinguishing plastic waste for recycling or for classifying a recycled plastic material according to the content of a given polymer in its composition. However, obtaining the mechanical properties from the stress-strain curve involves calculations and adjustments over the intervals of the curve in which these properties are determined, making it a complex process without specialized software. By learning the behavior pattern of a material's stress-strain curve, machine learning (ML) algorithms can be efficient tools to automate the classification of different types of polymeric materials. To verify the accuracy of an ML algorithm in classifying three types of polymers, tensile tests were performed on specimens made of high-density polyethylene (HDPE), polypropylene (PP), and polyvinyl chloride (PVC). The dataset obtained from the stress-strain curves was used to train a multilayer perceptron (MLP) neural network. With an accuracy of 0.9261 on the test set, the model obtained from the MLP network was able to classify the polymers based on the stress-strain curve data, indicating the possibility of using ML models to automate the classification of polymeric materials from tensile test data.
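Two of the properties mentioned above can be read off a sampled stress-strain curve directly: the elastic modulus as the slope of the initial linear region, and the area under the curve via the trapezoidal rule (a measure of energy absorbed). A sketch on made-up sample points (strain dimensionless, stress in MPa); the choice of 3 points for the linear region is an assumption.

```python
def elastic_modulus(strain, stress, n_linear=3):
    """Slope of a least-squares line through the first n_linear points."""
    xs, ys = strain[:n_linear], stress[:n_linear]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def curve_area(strain, stress):
    """Area under the stress-strain curve (trapezoidal rule)."""
    return sum((s1 + s2) / 2 * (e2 - e1)
               for e1, e2, s1, s2 in
               zip(strain, strain[1:], stress, stress[1:]))

strain = [0.000, 0.010, 0.020, 0.050, 0.100]
stress = [0.0,   20.0,  40.0,  55.0,  60.0]   # MPa
print(round(elastic_modulus(strain, stress), 1),
      round(curve_area(strain, stress), 3))  # 2000.0 4.7
```

The "adjustments over the intervals of the curve" the abstract mentions correspond to choices like `n_linear` here, which is exactly the fragility that motivates feeding the raw curve to an MLP instead.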
60

Finding the QRS Complex in a Sampled ECG Signal Using AI Methods / Hitta QRS komplex in en samplad EKG signal med AI metoder

Skeppland Hole, Jeanette Marie Victoria January 2023 (has links)
This study aimed to explore the application of artificial intelligence (AI) and machine learning (ML) techniques in implementing a QRS detector for ambulatory electrocardiography (ECG) monitoring devices. Three ML models, namely long short-term memory (LSTM), convolutional neural network (CNN), and multilayer perceptron (MLP), were compared and evaluated using the MIT-BIH arrhythmia database (MITDB) and the MIT-BIH noise stress test database (NSTDB). The MLP model consistently outperformed the other models, achieving high accuracy in R-peak detection. However, when tested on noisy data, all models faced challenges in accurately predicting R-peaks, indicating the need for further improvement. To address this, the study emphasized the importance of iteratively refining the input data configurations for achieving accurate R-peak detection. By incorporating both the MITDB and NSTDB during training, the models demonstrated improved generalization to noisy signals. This iterative refinement process allowed for the identification of the best models and configurations, consistently surpassing existing ML-based implementations and outperforming the current ECG analysis system. The MLP model, without shifted segments and utilizing both datasets, achieved an outstanding accuracy of 99.73% in R-peak detection, exceeding values reported in the literature and demonstrating the superior performance of this approach. Furthermore, the shifted MLP model, which considered temporal dependencies by incorporating shifted segments, showed promising results with an accuracy of 99.75%; it exhibited enhanced accuracy, precision, and F1-score compared to the other models, highlighting the effectiveness of incorporating shifted segments. For future research, it is important to address challenges such as overfitting and to validate the models on independent datasets.
Additionally, continuous refinement and optimization of the input data configurations will contribute to further advancements in ECG signal analysis and improve the accuracy of R-peak detection. This study underscores the potential of ML techniques in enhancing ECG analysis, ultimately leading to improved cardiac diagnostics and better patient care.
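For context on what the ML detectors above are learning, a classical non-ML baseline is an amplitude-threshold peak picker with a refractory period. A minimal sketch on a synthetic trace (not MIT-BIH data; the threshold and refractory values are arbitrary assumptions):

```python
def detect_r_peaks(signal, threshold=0.5, refractory=5):
    """Return indices of local maxima above `threshold`, skipping any
    candidate closer than `refractory` samples to the previous peak."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]
                and (not peaks or i - peaks[-1] > refractory)):
            peaks.append(i)
    return peaks

# Synthetic trace with two R-like spikes at indices 3 and 12.
ecg = [0.0, 0.1, 0.3, 1.0, 0.2, 0.0, -0.1, 0.0, 0.1, 0.0, 0.2, 0.4,
       1.1, 0.3, 0.0]
print(detect_r_peaks(ecg))  # [3, 12]
```

Fixed thresholds like this degrade badly on the noisy NSTDB-style signals discussed above, which is the gap the learned detectors are meant to close.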
