  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Low-Power Edge-Enabled Sensor Platforms

De Oliveira Filho, José Ilton 10 August 2023 (has links)
On-site sensing systems provide fast and timely information about a myriad of applications ranging from chemical and biological to physical phenomena in the environment or the human body. Such systems are embedded in our daily life for detecting pollutants, monitoring health, and diagnosing diseases. Especially in the field of health care, the development of portable and affordable diagnosing systems, also known as point-of-care (PoC) devices, is a major challenge. Moreover, to this day, systems for therapeutic drug monitoring (TDM) have remained bulky and highly expensive, mostly due to the need for exceptionally precise, rapid, and highly accurate real-time on-site measurements. This dissertation focuses on the design, development, and implementation of miniaturized PoC devices for achieving high sensitivity, selectivity, and reliability through a combination of hardware and software strategies at the edge. The first part of the dissertation introduces the design of single and multi-channel electrochemical readout platforms with a high voltage range, fast scan rates, and with nano-ampere resolution, covering a broad range of electrochemical excitation techniques. These platforms were paired with electrochemical-based sensors to detect SARS‑CoV‑2, bisphenol A, and ascorbic acid. The low power feature of the proposed platforms is demonstrated by powering the complete detection system with energy harvested from natural and artificial ambient light. The second part of the dissertation introduces the design and development of a miniaturized wearable device with a pico-ampere resolution, high-speed electrochemical frequency interface, and highly stable sensing circuitry. A complete in-vivo system is demonstrated for long-term (>4 hours) measurement, wherein molecules are detected and monitored directly from a probe inserted in the subcutaneous abdomen region of a Sprague-Dawley rat. 
A solution for sensor drift due to biofouling and interference is demonstrated through integration with real-time processing software. Furthermore, integrating the aforementioned platforms with highly reduced dense neural network models is demonstrated to increase the robustness of the sensors, allowing the detection of contaminants in complex samples, improving sensor selectivity, and providing timely in-situ diagnoses.

Predicting Cryptocurrency Prices with Machine Learning Algorithms: A Comparative Analysis

Gudavalli, Harsha Nanda, Kancherla, Khetan Venkata Ratnam January 2023 (has links)
Background: Due to its decentralized nature and opportunity for substantial gains, cryptocurrency has become a popular investment opportunity. However, the highly unpredictable and volatile nature of the cryptocurrency market poses a challenge for investors looking to predict price movements and make profitable investments. Time series analysis, which recognizes trends and patterns in previous price data to create forecasts about future price movements, is one of the prominent and effective techniques for price prediction. Integrating machine learning (ML) techniques and technical indicators along with time series analysis can enhance the prediction accuracy significantly. Objectives: The objective of this thesis is to identify an effective ML algorithm for making long-term predictions of Bitcoin prices, by developing prediction models using the ML algorithms and making predictions using the technical indicators (Relative Strength Index (RSI), Exponential Moving Average (EMA), Simple Moving Average (SMA)) as input for these models. Method: A Systematic Literature Review (SLR) has been employed to identify effective ML algorithms for making long-term predictions of cryptocurrency prices and to conduct an experiment on these identified algorithms. The selected algorithms are trained and tested using the technical indicators RSI, EMA, and SMA, calculated from historic price data over the period May 2017 to May 2023 taken from the CoinGecko API. The models are then evaluated using various metrics, and the effect of the indicators on the performance of the prediction models is found using permutation feature importance and correlation analysis. Results: After conducting the SLR, the ML algorithms Random Forest (RF), Gradient Boosting (GB), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) have been identified as effective algorithms to conduct our experiment on.
Out of these algorithms, LSTM was found to be the most accurate of the four selected algorithms based on its Root Mean Square Error (RMSE) score (0.01083), Mean Square Error (MSE) score (0.00011), Coefficient of Determination (R2) score (0.80618), Time-Weighted Average (TWAP) score (0.40507), and Volume-Weighted Average (VWAP) score (0.35660). Also, by performing permutation feature importance and correlation analysis, it was found that the moving averages EMA and SMA had a greater impact on the performance of all the prediction models than RSI. Conclusion: Prediction models were built using the ML algorithms identified through the literature review. Based on the dataset built from the data collected through the CoinGecko database, and taking technical indicators as the input features, models were trained and tested using the chosen ML algorithms. The LSTM prediction algorithm was found to be the most accurate of the chosen algorithms based on the RMSE, R2, TWAP, and VWAP scores obtained.
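For readers unfamiliar with the three indicators used as model inputs, a minimal pure-Python sketch of how they are typically computed from a price series is shown below; the window lengths and the textbook formulas are illustrative assumptions, not necessarily the thesis's exact configuration.

```python
def sma(prices, window):
    """Simple Moving Average over the last `window` prices."""
    return sum(prices[-window:]) / window

def ema(prices, window):
    """Exponential Moving Average with smoothing factor 2 / (window + 1)."""
    k = 2 / (window + 1)
    value = prices[0]
    for price in prices[1:]:
        value = price * k + value * (1 - k)
    return value

def rsi(prices, window=14):
    """Relative Strength Index over the last `window` price changes."""
    deltas = [b - a for a, b in zip(prices, prices[1:])][-window:]
    avg_gain = sum(d for d in deltas if d > 0) / window
    avg_loss = -sum(d for d in deltas if d < 0) / window
    if avg_loss == 0:
        return 100.0
    return 100 - 100 / (1 + avg_gain / avg_loss)

# A short synthetic price series; each model input row would combine
# the indicator values computed at a given point in time.
prices = [100, 102, 101, 105, 107, 106, 110, 108, 112, 115,
          114, 118, 117, 121, 120]
features = [sma(prices, 5), ema(prices, 5), rsi(prices, 14)]
```

Feature vectors of this form, computed over a sliding window of historical prices, would then be fed to the RF, GB, LSTM, or GRU models as inputs.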

Automated Rat Grimace Scale for the Assessment of Pain

Arnold, Brendan Elliot 21 June 2023 (has links)
Pain is a complex neuro-psychosocial experience that is internal and private, making it difficult to assess in both humans and animals. In research, approximately 95% of animal models use rodents, with rats being among the most common for pain studies [3]. However, traditional assessments of the pain response struggle to demonstrate that the behaviors are a direct measurement of pain. The rat grimace scale (RGS) was developed based on facial action coding systems (FACS), which have known utility in non-verbal humans [6, 9]. The RGS measures the facial action units of orbital tightening, ear changes, nose flattening, and whisker changes in an attempt to quantify the pain behaviors of the rat. These action units are scored on frontal images of rats with their face in clear view on a scale of 0-2, then summed together. The total score is then averaged to find a final RGS value between 0-2. Currently, the software program Rodent Face Finder® can extract frontal face images. However, RGS scores are still manually recorded, which is a labor-intensive process requiring hours of training. Furthermore, the scoring can be subjective, with differences existing between researchers and lab groups. The primary aim of this study is to develop an automated system that can detect action unit regions and generate an RGS score for each image. To accomplish this objective, a YOLOv5 object detector and Vision Transformers (ViT) for classification were trained on a dataset of frontal-facing images extracted using Rodent Face Finder®. The model was then validated using an RGS test for blast traumatic brain injury (bTBI). The validation dataset consisted of 40 control images of uninjured rats, 40 images from the bTBI study on the day of injury, and 40 images taken 1 month post-injury. All 120 images in the validation set were then manually graded for RGS and tested using the automated RGS system.
The results indicated that the automated RGS system accurately and efficiently graded the images, with minimal variation compared to human graders, in just 1/14th of the time. This system provides a fast and reliable method to extract meaningful information about rats' internal pain state. Furthermore, the study presents an avenue for future research into real-time pain monitoring. / Master of Science / Pain is a difficult experience to measure, both in humans and animals. It can be a subjective experience that is largely based on individual perception and interpretation. Furthermore, in animals, pain is even more challenging to assess because they cannot communicate their experience through language. Nonetheless, animal research plays an important role in understanding and treating the underlying mechanisms of pain. In animal research, rats are commonly used to study pain. However, traditional methods of assessing pain behaviors are not meant to observe the pain experience, but instead analyze a response to an external stimulus. The rat grimace scale (RGS) was developed as a direct measurement of the pain experience by analyzing facial features. Currently, RGS scores are manually recorded by trained researchers, which is time-consuming and can be subjective. This study aimed to develop an automated system to identify pain-related facial expressions and generate an RGS score for frontal images of rats. The system was trained using a dataset of frontal-facing rat images with varying RGS scores and validated using images of rats from a traumatic brain injury study. The results showed that the automated RGS system accurately identified RGS pain-level differences between recently injured rats, uninjured rats, and rats that were allowed to recover for 1 month. Furthermore, the system provided a fast and reliable method for measuring rat pain behavior when compared to manual grading.
With this system, researchers will be able to perform RGS tests efficiently. Additionally, this study presents an opportunity for future automation of other grimace scales, as well as research into real-time pain monitoring.
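The scoring rule described above (each action unit rated 0-2, summed, then averaged back to a 0-2 scale) can be captured in a small helper; the dictionary keys below are illustrative names, not identifiers from the study's software.

```python
ACTION_UNITS = ("orbital_tightening", "ear_changes",
                "nose_flattening", "whisker_changes")

def rgs_score(unit_scores):
    """Combine per-action-unit grades (each 0-2) into a final RGS value.

    Grades are summed, then the sum is averaged over the number of
    scored units, giving a final score back on the 0-2 scale.
    """
    grades = [unit_scores[u] for u in ACTION_UNITS if u in unit_scores]
    if not grades:
        raise ValueError("no action units were scored")
    if any(not 0 <= g <= 2 for g in grades):
        raise ValueError("action unit grades must lie in [0, 2]")
    return sum(grades) / len(grades)

# A moderately painful expression: one strong and two mild action units.
example = {"orbital_tightening": 2, "ear_changes": 1,
           "nose_flattening": 1, "whisker_changes": 0}
final_score = rgs_score(example)  # (2 + 1 + 1 + 0) / 4 = 1.0
```

In the automated pipeline, the per-unit grades would come from the ViT classifier applied to the regions detected by YOLOv5.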

Digital Twin Disease Diagnosis Using Machine Learning

Ferdousi, Rahatara 30 September 2021 (has links)
COVID-19 has led to a surge in the adoption of digital transformation in almost every sector. Digital health and well-being are no exception. For instance, people now get checkups via apps or websites instead of visiting a physician. The pandemic has pushed the health-care sector worldwide to advance the adoption of artificial intelligence (AI) capabilities. Considering the demand for AI in supporting the well-being of an individual, we present real-life diagnosis as a digital twin (DT) diagnosis using machine learning. Machine Learning (ML) technology enables the DT to offer a prediction. Although several attempts exist for predicting disease using ML, and a few attempts through ML of DT frameworks, those do not deal with disease risk prediction. In addition, most of them deal with single-disease prediction after the occurrence and rely only on clinical test data, like ECG reports, MRI scans, etc. To predict multiple diseases/disease risks, we propose a dynamic machine learning algorithm (MLA) selection framework and a dynamic testing method. The proposed framework accepts heterogeneous electronic health records (EHRs) or digital health status as datasets and selects a suitable MLA based on the highest similarity. It then trains specific classifiers for predicting a specific disease/disease risk. The dynamic testing method is used for predicting several diseases. We describe three use cases: non-communicable disease (NCD) risk prediction, mental well-being prediction, and COVID-19 prediction. We selected diabetes, risk of diabetes, liver disease, thyroid disease, and risk of stroke as NCDs; mental stress as a mental health issue; and COVID-19. We employed seven datasets, including public and private datasets, with a diverse range of attributes, sizes, types, and formats to evaluate whether the proposed framework is suited to data heterogeneity.
Our experiments found that the proposed dynamic MLA selection method could select an MLA for each dataset at cosine similarity scores ranging between 0.82 and 0.89. In addition, we predicted target diseases/disease risks at accuracies ranging from 94.5% to 98%. To verify the performance of the framework-selected predictors, we compared the accuracy measures individually for each of the three cases against traditional ML disease-prediction work in the literature, and found that the framework-selected algorithms performed with good accuracy compared to the existing literature.
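A minimal sketch of the cosine-similarity selection step follows, under the assumption that both datasets and candidate algorithms are represented as numeric profile vectors; the profile features and values here are hypothetical, chosen only to illustrate the mechanism.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def select_algorithm(dataset_profile, algorithm_profiles):
    """Return the algorithm whose reference profile is most similar to
    the dataset profile, i.e. the one with the highest cosine similarity."""
    return max(algorithm_profiles,
               key=lambda name: cosine_similarity(dataset_profile,
                                                  algorithm_profiles[name]))

# Hypothetical profiles: (normalized sample count, normalized feature
# count, fraction of numeric attributes).
profiles = {
    "random_forest": [0.4, 0.7, 0.9],
    "logistic_regression": [0.9, 0.2, 1.0],
    "gradient_boosting": [0.5, 0.8, 0.8],
}
best = select_algorithm([0.5, 0.8, 0.8], profiles)
```

The selected name would then index into a registry of trainable classifiers, one per disease or disease risk.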

Robot Navigation in Cluttered Environments with Deep Reinforcement Learning

Weideman, Ryan 01 June 2019 (has links) (PDF)
The application of robotics in cluttered and dynamic environments provides a wealth of challenges. This thesis proposes a deep reinforcement learning based system that determines collision-free robot navigation velocities directly from a sequence of depth images and a desired direction of travel. The system is designed such that a real robot could be placed in an unmapped, cluttered environment and be able to navigate in a desired direction with no prior knowledge. Deep Q-learning, coupled with the innovations of double Q-learning and dueling Q-networks, is applied. Two modifications of this architecture are presented to incorporate direction heading information that the reinforcement learning agent can utilize to learn how to navigate to target locations while avoiding obstacles. The performance of these two extensions of the D3QN architecture is evaluated in simulation in simple and complex environments with a variety of common obstacles. Results show that both modifications enable the agent to successfully navigate to target locations, reaching 88% and 67% of goals in a cluttered environment, respectively.
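The double Q-learning component mentioned above can be illustrated with a tabular toy: the online table selects the best next action, while the target table evaluates it, reducing the overestimation bias of vanilla Q-learning. This is a generic sketch of the update target, not the thesis's depth-image network.

```python
def double_q_target(q_online, q_target, reward, next_state, gamma=0.99):
    """Double Q-learning target: the online table picks the best next
    action; the target table supplies that action's value."""
    row = q_online[next_state]
    best_action = row.index(max(row))
    return reward + gamma * q_target[next_state][best_action]

# Tabular toy with 2 states and 3 actions per state.
q_online = [[1.0, 5.0, 2.0], [0.5, 0.1, 0.9]]
q_target = [[0.8, 3.0, 2.5], [0.4, 0.2, 1.0]]
target = double_q_target(q_online, q_target, reward=1.0, next_state=0)
```

Here the online table prefers action 1 in state 0, but the update uses the target table's more conservative estimate for that action (3.0 rather than 5.0).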

Evaluating Projections and Developing Projection Models for Daily Fantasy Basketball

Evangelista, Eric C 01 June 2019 (has links) (PDF)
Daily fantasy sports (DFS) has grown in popularity with millions of participants throughout the world. However, studies have shown that most profits from DFS contests are won by only a small percentage of players. This thesis addresses the challenges faced by DFS participants by evaluating sources that provide player projections for NBA DFS contests and by developing machine learning models that produce competitive player projections. External sources are evaluated by constructing daily lineups based on the projections offered and evaluating those lineups in the context of all potential lineups, as well as those submitted by participants in competitive FanDuel DFS tournaments. Lineups produced by the machine learning models are also evaluated in the same manner. This work experiments with several machine learning techniques, including automated machine learning, and notes that the top model developed was successful in 48% of all FanDuel NBA DFS tournaments and 51% of single-entry tournaments over a two-month period, surpassing the top external source evaluated by 9 and 10 percentage points, respectively.

Clinical Insights into Complex Intimate Partner Violence Treatment Outcomes through Machine Learning

MacKenzie, Kameron 23 May 2022 (has links)
No description available.

Real-Time Evaluation in Online Continual Learning: A New Hope

Ghunaim, Yasir 02 1900 (has links)
Current evaluations of Continual Learning (CL) methods typically assume that there is no constraint on training time and computation. This is an unrealistic assumption for any real-world setting, which motivates us to propose a practical real-time evaluation of continual learning, in which the stream does not wait for the model to complete training before revealing the next data for predictions. To do this, we evaluate current CL methods with respect to their computational costs. We conduct extensive experiments on CLOC, a large-scale dataset containing 39 million time-stamped images with geolocation labels. We show that a simple baseline outperforms state-of-the-art CL methods under this evaluation, questioning the applicability of existing methods in realistic settings. In addition, we explore various CL components commonly used in the literature, including memory sampling strategies and regularization approaches. We find that all considered methods fail to be competitive against our simple baseline. This surprisingly suggests that the majority of existing CL literature is tailored to a specific class of streams that is not practical. We hope that the evaluation we provide will be a first step towards a paradigm shift that considers computational cost in the development of online continual learning methods.
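The evaluation protocol can be sketched with a toy learner: while an update is in flight, predictions come from the last deployed (stale) snapshot, so a method whose update costs more stream steps pays for its computation in accuracy. The learner and stream below are illustrative stand-ins, not the paper's baseline or the CLOC data.

```python
class LastLabelLearner:
    """Toy learner that simply predicts the label it last trained on."""
    def __init__(self):
        self.label = None
    def update(self, y):
        self.label = y
    def snapshot(self):
        return self.label

def realtime_accuracy(stream, learner, train_cost):
    """Online accuracy when the stream does not wait for training.

    An update takes `train_cost` stream steps to finish; until then,
    predictions come from the previously deployed (stale) snapshot.
    """
    correct, deployed = 0, None
    busy_until, pending = 0, None   # update in flight, ready at busy_until
    for t, y in enumerate(stream):
        if pending is not None and t >= busy_until:
            deployed, pending = pending, None   # finished update goes live
        correct += int(deployed == y)
        if pending is None:                     # learner is free again
            learner.update(y)
            pending = learner.snapshot()
            busy_until = t + train_cost
    return correct / len(stream)

# A label shift halfway through punishes the slower (costlier) method.
stream = [0] * 5 + [1] * 5
fast = realtime_accuracy(stream, LastLabelLearner(), train_cost=1)
slow = realtime_accuracy(stream, LastLabelLearner(), train_cost=3)
```

Under the standard offline protocol both learners would look identical; under real-time evaluation the cheaper learner scores strictly higher on this stream.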

Heuristic Weighted Voting

Monteith, Kristine Perry 25 October 2007 (has links) (PDF)
Selecting an effective method for combining the votes of classifiers in an ensemble can have a significant impact on the overall classification accuracy an ensemble is able to achieve. With some methods, the ensemble cannot even achieve as high a classification accuracy as its most accurate individual classifying component. To address this issue, we present the strategy of Heuristic Weighted Voting, a technique that uses heuristics to determine the confidence that a classifier has in its predictions on an instance-by-instance basis. Using these heuristics to weight the votes in an ensemble results in an overall average increase in classification accuracy compared to the most accurate classifier in the ensemble. When considering performance over 18 data sets, Heuristic Weighted Voting compares favorably both in terms of average classification accuracy and in algorithm-by-algorithm comparisons of accuracy when evaluated against three baseline ensemble creation strategies as well as the methods of stacking and arbitration.
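The core idea, votes weighted by an instance-specific confidence heuristic rather than a fixed global weight, can be sketched as follows; the (label, confidence) representation is an assumption for illustration, not the thesis's exact interface or heuristics.

```python
from collections import defaultdict

def heuristic_weighted_vote(predictions):
    """Combine ensemble votes weighted per instance.

    `predictions` holds one (predicted_label, confidence) pair per
    classifier, where confidence is an instance-specific heuristic
    estimate (e.g. the classifier's posterior for its prediction),
    not a fixed global weight.
    """
    totals = defaultdict(float)
    for label, confidence in predictions:
        totals[label] += confidence
    return max(totals, key=totals.get)

# Two moderately confident votes for "spam" outweigh one very
# confident vote for "ham" (total 1.15 vs 0.90).
votes = [("spam", 0.55), ("spam", 0.60), ("ham", 0.90)]
winner = heuristic_weighted_vote(votes)
```

Because the confidences are recomputed per instance, the same ensemble can defer to different members on different examples, which a single global weighting cannot do.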

Automatic Readability Detection for Modern Standard Arabic

Forsyth, Jonathan Neil 19 March 2014 (has links) (PDF)
Research on automatic readability prediction of text has increased in the last decade and has shown that various machine learning methods can effectively address this problem. Many researchers have applied machine learning to readability prediction for English, while Modern Standard Arabic (MSA) has received little attention. Here I describe a system which leverages machine learning to automatically predict the readability of MSA. I gathered a corpus comprising 179 documents that were annotated with Interagency Language Roundtable (ILR) levels. Then, I extracted lexical and discourse features from each document. Finally, I applied the Tilburg Memory-Based Learning (TiMBL) machine learning system to these features to predict the ILR level of each document, using 10-fold cross-validation for both 3-level and 5-level classification tasks and an 80/20 division for a 5-level classification task. I measured performance using the F-score. For the 3-level and 5-level classifications, my system achieved F-scores of 0.719 and 0.519, respectively. I discuss the implications of these results and the possibility of future development.
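As an illustration of the kind of lexical features such a system might extract, here is a minimal sketch with two classic measures; these are generic examples of readability features, not the thesis's actual feature set.

```python
def lexical_features(text):
    """Two simple lexical readability features: average word length
    and type-token ratio (vocabulary diversity)."""
    words = text.split()
    return {
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "type_token_ratio": len(set(words)) / len(words),
    }

feats = lexical_features("the cat sat on the mat")
```

Feature dictionaries like this, one per document, would be serialized into the instance format a memory-based learner such as TiMBL consumes.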
