141

Using Machine Learning techniques to understand glucose fluctuation in response to breathing signals

Karamichalis, Nikolaos January 2021 (has links)
Blood glucose (BG) prediction and classification play a big role in diabetic patients' daily lives. According to the International Diabetes Federation (IDF), 463 million people worldwide were diabetic in 2019, and that number is projected to rise to 700 million by 2045. Continuous glucose monitoring (CGM) systems assist diabetic patients daily by continuously alerting them to fluctuations in their BG levels. The history of CGM systems began in 1999, when the Food and Drug Administration (FDA) approved the first CGM system, and their reading accuracy and reporting delay have been improving ever since. CGM systems are key elements in closed-loop systems, which use BG monitoring to calculate and automatically deliver, under the patient's supervision, the insulin the patient needs. Data quality and feature variation are essential for CGM systems, so many studies are being conducted to support the development and improvement of CGM systems and diabetics' daily lives. This thesis aims to show that physiological signals retrieved from various sensors can assist the classification and prediction of BG levels, and more specifically that breathing rate can enhance the accuracy of CGM systems for diabetic patients as well as healthy individuals. The results showed that physiological data can improve the accuracy of BG-level prediction and classification and the performance of CGM systems on both tasks. Finally, future improvements could include the use of a prediction horizon (PH) for the data as well as the selection and use of different models.
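The BG classification and prediction-horizon ideas from the abstract can be illustrated with a minimal sketch. The thresholds (70 and 180 mg/dL) are the commonly used clinical cut-offs, and the linear extrapolation is a naive stand-in for the thesis's machine-learning models; the breathing-rate features themselves are not shown.

```python
# Hypothetical sketch: label CGM readings by standard glycemic range and
# linearly extrapolate blood glucose over a prediction horizon (PH).

def classify_bg(mg_dl):
    """Label a blood-glucose reading (mg/dL) as hypo-, normo-, or hyperglycemic."""
    if mg_dl < 70:
        return "hypoglycemia"
    if mg_dl > 180:
        return "hyperglycemia"
    return "normal"

def extrapolate_bg(readings, minutes_ahead, sample_interval=5):
    """Naive linear extrapolation of the last two CGM samples over a PH."""
    slope = (readings[-1] - readings[-2]) / sample_interval  # mg/dL per minute
    return readings[-1] + slope * minutes_ahead

history = [110, 118, 127, 135]           # CGM samples, 5 minutes apart
predicted = extrapolate_bg(history, 30)  # 30-minute prediction horizon
print(classify_bg(predicted))            # → hyperglycemia
```

A real CGM pipeline would replace the extrapolation with a trained regressor fed with glucose history plus physiological features such as breathing rate.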
142

Parallel Coordinates Diagram Implementation in 3D Geometry

Suma, Christopher G. January 2018 (has links)
No description available.
143

Distribution-based Summarization for Large Scale Simulation Data Visualization and Analysis

Wang, Ko-Chih 11 July 2019 (has links)
No description available.
144

Text simplification in Swedish using transformer-based neural networks / Textförenkling på Svenska med transformer-baserade neurala nätverk

Söderberg, Samuel January 2023 (has links)
Text simplification involves modifying text to make it easier to read by replacing complex words, altering sentence structure, and/or removing unnecessary information. It can be used to make text accessible to a wider audience. While research on text simplification exists for Swedish, the use of neural networks in the field is limited. Neural networks require large-scale, high-quality datasets, but such datasets are scarce for text simplification in Swedish. This study investigates the acquisition of datasets through paraphrase mining from web snapshots and through translation of existing English text-simplification datasets into Swedish, and it assesses the performance of neural network models trained on the acquired datasets. Three datasets of complex-to-simple sequence pairs were created: one by mining paraphrases from web data, another by translating a dataset from English to Swedish, and a third by combining the mined and translated datasets into one. These datasets were then used to fine-tune a BART neural network model pre-trained on large amounts of Swedish data.
The trained models were evaluated through manual examination and categorization of output and through automated assessment with the SARI and LIX metrics. Two different test sets were used, one translated from English and one manually constructed from Swedish texts. The automatic evaluation produced SARI scores close to, but not as high as, those of similar research on text simplification in English. In terms of LIX scores, the models perform on par with or better than existing research on automatic text simplification in Swedish. The manual evaluation revealed that the model trained on the mined paraphrases generally produced short sequences with many alterations compared to the original, while the model trained on the translated dataset often produced unchanged sequences or sequences with few alterations. However, the model trained on the mined dataset produced many more unusable sequences, with corrupted Swedish or altered meaning, than the model trained on the translated dataset. The model trained on the combined dataset reached a middle ground in these two respects, producing fewer unusable sequences than the model trained on the mined dataset and fewer unchanged sequences than the model trained on the translated dataset. Many sequences were successfully simplified by the three models, but the manual evaluation revealed that a significant portion of the generated sequences remains unchanged or unusable, highlighting the need for further research, exploration of methods, and tool refinement.
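The LIX readability index used in the automatic evaluation is a published formula: average sentence length plus the percentage of long words (more than six letters). A minimal implementation (not the thesis's actual tooling) looks like this:

```python
import re

def lix(text):
    """LIX readability index: average sentence length plus the
    percentage of words longer than six letters."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-zÅÄÖåäö]+", text)
    long_words = [w for w in words if len(w) > 6]
    return len(words) / len(sentences) + 100 * len(long_words) / len(words)

print(lix("Texten är enkel. Den är kort."))  # → 3.0
```

Lower scores indicate easier text, which is why LIX can serve as a language-specific complement to SARI when evaluating Swedish simplification output.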
145

Context-aware Swedish Lexical Simplification : Using pre-trained language models to propose contextually fitting synonyms / Kontextmedveten lexikal förenkling på svenska : Användningen av förtränade språkmodeller för att föreslå kontextuellt passande synonymer.

Graichen, Emil January 2023 (has links)
This thesis presents the development and evaluation of context-aware Lexical Simplification (LS) systems for the Swedish language. In total, three versions of LS models, LäsBERT, LäsBERT-baseline, and LäsGPT, were created and evaluated on a newly constructed Swedish LS evaluation dataset. The LS systems demonstrated promising potential in aiding audiences with reading difficulties by providing context-aware word replacements. While there were areas for improvement, particularly in complex word identification, the systems showed agreement with human annotators on word replacements. The effects of fine-tuning a BERT model for substitution generation on easy-to-read texts were explored, indicating no significant difference in the number of replacements between the fine-tuned and non-fine-tuned versions. Both versions performed similarly in terms of synonymous and simplifying replacements, although the fine-tuned version exhibited slightly reduced performance compared to the baseline model. An important contribution of this thesis is the creation of an evaluation dataset for Lexical Simplification in Swedish. The dataset was automatically collected and manually annotated. Evaluators assessed the quality, coverage, and complexity of the dataset. Results showed that the dataset had high quality and perceived good coverage. Although the complexity of the complex words was perceived to be low, the dataset provides a valuable resource for evaluating LS systems and advancing research in Swedish Lexical Simplification. Finally, a more transparent and reader-empowering approach to Lexical Simplification is proposed. This new approach embraces the challenges of contextual synonymy and reduces the number of failure points in the conventional LS pipeline, increasing the chances of developing a fully meaning-preserving LS system.
Links to different parts of the project: the Lexical Simplification dataset at https://github.com/emilgraichen/SwedishLSdataset and the lexical simplification algorithm at https://github.com/emilgraichen/SwedishLexicalSimplifier
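One step of the conventional LS pipeline the abstract refers to, substitution ranking, can be sketched with a common frequency-based baseline: among candidate synonyms for a complex word, prefer the most frequent (assumed simplest) alternative. The real systems score candidates in context with pre-trained BERT/GPT models; the toy frequency lexicon here is an illustrative stand-in.

```python
# Toy corpus frequencies standing in for a real frequency lexicon.
WORD_FREQ = {"use": 9500, "employ": 800, "utilize": 120}

def rank_substitutes(complex_word, candidates, freq=WORD_FREQ):
    """Order candidate replacements from most to least frequent,
    dropping the original complex word itself."""
    pool = [c for c in candidates if c != complex_word]
    return sorted(pool, key=lambda w: freq.get(w, 0), reverse=True)

print(rank_substitutes("utilize", ["use", "employ", "utilize"]))  # → ['use', 'employ']
```

A context-aware system would additionally filter this ranking by how well each candidate fits the surrounding sentence, which is exactly where the masked language model comes in.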
146

Polarimetric Imagery for Object Pose Estimation

Siefring, Matthew D. 15 May 2023 (has links)
No description available.
147

Enhancing Efficiency and Trustworthiness of Deep Learning Algorithms

Isha Garg (15341896) 24 April 2023 (has links)
This dissertation explores two major goals in Deep Learning algorithm design: efficiency and trustworthiness. We motivate these concerns in Chapter 1 and give relevant background in Chapter 2. We then discuss six works targeting these two goals.
The first discusses how to make model compression more efficient, so it can be done in a single shot. This lets us create models with reduced size and fewer layers for faster, more efficient inference, and is covered in Chapter 3. We then extend this to target efficiency in continual learning in Chapter 4, while mitigating the problem of catastrophic forgetting. The method discussed also circumvents the potential for data leakage by avoiding the need to store any data from past tasks. Next, we consider brain-inspired computing as an alternative to traditional neural networks to improve the compute efficiency of networks. The spiking neural networks discussed, however, have large inference latency due to the need to accumulate spikes over many timesteps. We tackle this in Chapter 5 by introducing a new scheme that distributes an image over time by breaking it down into a sum of its ranked sinusoidal bases, resulting in networks that are faster and more efficient to deploy. Chapter 6 targets both the communication expense and the potential for data leakage in federated learning by distilling the gradients to be communicated into a small number of images that resemble noise; communicating these images is more efficient and, because they resemble noise, circumvents the potential for data leakage. In the last two chapters we explore applications of studying the curvature of the loss with respect to input data points. In Chapter 7 we use curvature to create performant coresets that reduce dataset size and make training more efficient. In Chapter 8, we use curvature as a metric for overfitting and use it to expose dataset integrity issues arising from memorization.
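The curvature-of-loss idea in the last two chapters can be illustrated on a 1-D toy loss: estimate the second derivative of the loss with respect to an input via a central finite difference, and flag high-curvature inputs as atypical or potentially memorized. This is an illustrative sketch, not the dissertation's implementation, which works with neural-network losses over high-dimensional inputs.

```python
def loss(x):
    """Toy loss with a flat region near its minimum at x = 2."""
    return (x - 2.0) ** 4

def curvature(f, x, eps=1e-3):
    """Second derivative of f at x via a central finite difference."""
    return (f(x + eps) - 2.0 * f(x) + f(x - eps)) / eps**2

flat = curvature(loss, 2.0)   # near the minimum: curvature ≈ 0
sharp = curvature(loss, 5.0)  # far from it: analytically 12*(5-2)^2 = 108
print(flat < sharp)           # → True
```

The same comparison, applied per training example to a network's loss, is what lets curvature rank examples for coreset selection or memorization analysis.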
148

Development of Artificial Intelligence-based In-Silico Toxicity Models. Data Quality Analysis and Model Performance Enhancement through Data Generation.

Malazizi, Ladan January 2008 (has links)
Toxic compounds, such as pesticides, are routinely tested against a range of aquatic, avian and mammalian species as part of the registration process. The need to reduce dependence on animal testing has led to increasing interest in alternative methods such as in silico modelling. QSAR (Quantitative Structure Activity Relationship)-based models are already in use for predicting physicochemical properties, environmental fate, eco-toxicological effects, and specific biological endpoints for a wide range of chemicals. Data plays an important role in modelling QSARs and in result analysis for toxicity testing processes. This research addresses a number of issues in predictive toxicology. One is the problem of data quality. Although a large amount of toxicity data is available from online sources, this data may contain unreliable samples and may be of low quality. Its presentation may also be inconsistent across sources, which makes accessing, interpreting and comparing the information difficult. To address this issue we started with a detailed investigation and experimental work on the DEMETRA data. The DEMETRA datasets have been produced by the EC-funded project DEMETRA. Based on the investigation, experiments and the results obtained, the author identified a number of data quality criteria in order to provide a solution for data evaluation in the toxicology domain. An algorithm has also been proposed to assess data quality before modelling. Another issue considered in the thesis was missing values in toxicology datasets. The Least Squares Method for a paired dataset and Serial Correlation for a single-version dataset provided solutions in two different situations, and a procedural algorithm using these two methods has been proposed to overcome the problem of missing values.
Another issue we paid attention to in this thesis was the modelling of multi-class datasets with a severely imbalanced distribution of class samples. Imbalanced data affects the performance of classifiers during the classification process. We have shown that as long as we understand how class members are constructed in the dimensional space of each cluster, we can reform the distribution and provide more domain knowledge for the classifier.
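The class-imbalance problem described above is commonly countered by weighting each class inversely to its frequency, so that minority-class errors cost more during training. This is a standard baseline, shown here as a sketch; the thesis's own approach of analyzing cluster structure goes further than simple reweighting.

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class inversely to its frequency:
    n_samples / (n_classes * class_count)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * v) for c, v in counts.items()}

# Toy imbalanced toxicity labels: 2 toxic vs. 8 non-toxic samples.
labels = ["toxic"] * 2 + ["non-toxic"] * 8
print(balanced_class_weights(labels))  # → {'toxic': 2.5, 'non-toxic': 0.625}
```

These weights can be passed to most classifiers (e.g. as per-class loss weights) so the minority "toxic" class is not drowned out by the majority class.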
149

Comparison and performance analysis of deep learning techniques for pedestrian detection in self-driving vehicles

Botta, Raahitya, Aditya, Aditya January 2023 (has links)
Background: Self-driving cars, also known as automated cars, are a form of vehicle that can move without a driver or human involvement to control it. They employ numerous pieces of equipment to forecast the car's navigation, and the car's path is determined depending on the output of these devices. Numerous methods are available to anticipate the path of self-driving cars. Pedestrian detection is critical for autonomous cars to avoid fatalities and accidents caused by self-driving cars. Objectives: In this research, we focus on machine learning and deep learning algorithms for detecting pedestrians on the roads, and on determining the most accurate algorithm for pedestrian detection in automated cars, selecting the algorithms through a literature review. Methods: The methodologies we use are literature review and experimentation. The literature review helps us find efficient algorithms for pedestrian detection in terms of accuracy, computational complexity, etc.; after performing it, we selected the most efficient algorithms for evaluation and comparison. The second methodology, experimentation, evaluates these algorithms under different conditions and scenarios. Through experimentation, we can monitor the different factors that affect the algorithms, and evaluate them using metrics such as accuracy and loss, which provide a quantitative measure of performance. Results: Based on the literature study, we focused on the pedestrian detection deep learning models CNN, SSD, and RPN for this thesis project. After evaluating and comparing the algorithms using performance metrics, the experiments demonstrated that RPN was the best-performing algorithm with 95.63% accuracy and a loss of 0.0068, followed by SSD with 95.29% accuracy and a loss of 0.0142, and CNN with 70.84% accuracy and a loss of 0.0622.
Conclusions: Of the three deep learning models evaluated for pedestrian detection (CNN, RPN, and SSD), RPN is the most efficient model with the best performance on the metrics assessed.
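The accuracy metric used to compare the three models is simply the fraction of predictions that match the ground truth. A minimal sketch (the labels below are toy values, not the thesis's data):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy per-frame labels: 1 = pedestrian present, 0 = absent.
truth = [1, 0, 1, 1, 0, 1, 0, 1]
preds = [1, 0, 0, 1, 0, 1, 1, 1]
print(accuracy(truth, preds))  # → 0.75
```

Reported alongside a loss value, this gives the kind of quantitative comparison (95.63% vs. 95.29% vs. 70.84%) used to rank RPN, SSD, and CNN above.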
150

Support Vector Machine Classifiers Show High Generalizability in Automatic Fall Detection in Older Adults

Alizadeh, Jalal, Bogdan, Martin, Classen, Joseph, Fricke, Christopher 08 May 2023 (has links)
Falls are a major cause of morbidity and mortality in neurological disorders. Technical means of detecting falls are of high interest as they enable rapid notification of caregivers and emergency services. Such approaches must reliably differentiate between normal daily activities and fall events. A promising technique might be based on the classification of movements based on accelerometer signals by machine-learning algorithms, but the generalizability of classifiers trained on laboratory data to real-world datasets is a common issue. Here, three machine-learning algorithms including Support Vector Machine (SVM), k-Nearest Neighbors (kNN), and Random Forest (RF) were trained to detect fall events. We used a dataset containing intentional falls (SisFall) to train the classifier and validated the approach on a different dataset which included real-world accidental fall events of elderly people (FARSEEING). The results suggested that the linear SVM was the most suitable classifier in this cross-dataset validation approach and reliably distinguished a fall event from normal everyday activity at an accuracy of 93% and similarly high sensitivity and specificity. Thus, classifiers based on linear SVM might be useful for automatic fall detection in real-world applications.
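A common starting point that accelerometer-based fall classifiers like the SVM above improve upon is a simple impact-threshold detector on the signal vector magnitude of the tri-axial signal. The sketch below is that baseline, not the paper's trained classifier; the 2.5 g threshold and the sample values are illustrative assumptions.

```python
import math

def signal_magnitude(ax, ay, az):
    """Signal vector magnitude of one tri-axial accelerometer sample (in g)."""
    return math.sqrt(ax**2 + ay**2 + az**2)

def detect_fall(samples, threshold=2.5):
    """Flag a fall if any sample's magnitude exceeds the impact threshold (g)."""
    return any(signal_magnitude(*s) > threshold for s in samples)

walking = [(0.1, 0.9, 0.3), (0.2, 1.0, 0.2)]   # magnitudes near 1 g
fall = [(0.1, 0.9, 0.3), (1.8, 2.4, 1.5)]      # impact spike ≈ 3.35 g
print(detect_fall(walking), detect_fall(fall))  # → False True
```

Such fixed thresholds generalize poorly across datasets, which is precisely the cross-dataset problem (training on SisFall, validating on FARSEEING) that motivates the learned SVM/kNN/RF classifiers in the study.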