Computer Vision for Camera Trap Footage: Comparing classification with object detection
Örn, Fredrik, January 2021
Monitoring wildlife is of great interest to ecologists, and arguably even more so in the Arctic, the region in focus for the research network INTERACT, where the effects of climate change are more pronounced than in the rest of the world. This master's thesis studies how artificial intelligence (AI) and computer vision can be combined with camera traps to monitor populations effectively. The study uses an image data set, containing both humans and animals, taken by camera traps from ECN Cairngorms, a station in the INTERACT network. The goal of the project is to classify these images into one of three categories: "Empty", "Animal" and "Human". Three methods are compared: a DenseNet201 classifier, a YOLOv3 object detector, and the pre-trained MegaDetector developed by Microsoft. The classifier did not achieve sufficient results, but YOLOv3 performed well on human detection, with an average precision (AP) of 0.8 on both training and validation data. YOLOv3's animal detections did not reach as high an AP, likely because of the smaller number of training examples. The best results were achieved by MegaDetector combined with an added method to determine whether the detected animals were dogs, reaching an average precision of 0.85 for animals and 0.99 for humans. This is the method recommended for future use, but there is potential to improve all the models and reach even better results.
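The three-category labeling described above can be sketched as a post-processing step on detector output. This is a hypothetical illustration, not the thesis's actual code: the detection fields ("category", "conf"), the 0.5 confidence threshold, and the rule that a human detection takes precedence over an animal detection are all assumptions made for the sketch.

```python
# Hypothetical sketch: map per-image detections to "Empty"/"Animal"/"Human".
# Field names, threshold, and human-over-animal precedence are assumptions.
def categorize(detections, threshold=0.5):
    """Return one category for a camera-trap image from its detections.

    `detections` is a list of dicts like {"category": "animal", "conf": 0.92}.
    Detections below `threshold` are ignored.
    """
    labels = {d["category"] for d in detections if d["conf"] >= threshold}
    if "person" in labels:   # assumed precedence when both appear in one image
        return "Human"
    if "animal" in labels:
        return "Animal"
    return "Empty"
```

For example, `categorize([{"category": "animal", "conf": 0.92}])` yields `"Animal"`, while an image with no detections above the threshold falls back to `"Empty"`.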
From Pixels to Predators: Wildlife Monitoring with Machine Learning / Från Pixlar till Rovdjur: Viltövervakning med Maskininlärning
Eriksson, Max, January 2024
This master's thesis investigates the application of advanced machine learning models to the identification and classification of Swedish predators in camera trap images. With growing threats to biodiversity, there is an urgent need for innovative and non-intrusive monitoring techniques. This study focuses on the development and evaluation of object detection models, including YOLOv5, YOLOv8, YOLOv9, and Faster R-CNN, aiming to improve the monitoring of Swedish predator species such as bears, wolves, lynxes, foxes, and wolverines. The research leverages a dataset from the NINA database, applying data preprocessing and augmentation techniques to ensure robust model training. The models were trained and evaluated using various dataset sizes and conditions, including day and night images. Notably, YOLOv8 and YOLOv9 underwent extended training for 300 epochs, leading to significant improvements in performance metrics. Performance was evaluated using metrics such as mean Average Precision (mAP), precision, recall, and F1-score. YOLOv9, with its Programmable Gradient Information (PGI) and GELAN architecture, demonstrated the best accuracy and reliability, achieving an F1-score of 0.98 on the expanded dataset. Training models on day and night images jointly versus separately resulted in only minor differences in performance, although models trained exclusively on daytime images performed slightly better owing to more consistent and favorable lighting conditions. The study also revealed a positive correlation between training dataset size and model performance, with larger datasets yielding better results across all metrics; however, the marginal gains decreased as the dataset size increased, suggesting diminishing returns.
Among the species studied, foxes were the least challenging for the models to detect and identify, while wolves presented more significant challenges, likely due to their complex fur patterns and coloration blending with the background.
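The precision, recall, and F1 metrics reported in these abstracts follow the standard definitions. A minimal sketch, computing them from true-positive, false-positive, and false-negative counts (the example counts are illustrative, not taken from either thesis):

```python
# Standard detection metrics from confusion counts.
def precision_recall_f1(tp, fp, fn):
    """Return (precision, recall, F1) for one class.

    precision = TP / (TP + FP): fraction of detections that are correct.
    recall    = TP / (TP + FN): fraction of ground-truth objects found.
    F1 is the harmonic mean of precision and recall.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

With illustrative counts of 98 true positives, 2 false positives, and 2 false negatives, precision and recall are both 0.98 and so is the F1-score, matching the scale of the figures reported above.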