61

Kameraövervakningssystem för fåglar med Raspberry Pi / Camera surveillance system for birds with Raspberry Pi

Moza Orellana, Alfonso de Jesus January 2022 (has links)
The motive behind the thesis is to create a camera surveillance system intended for filming and photographing the osprey parents when they are with their offspring in the osprey nest. The recordings and pictures will be used to study the osprey parents' behavior near their offspring. The literature study shows that ospreys build their nests in tall, flat-crowned pines, and that the presence of humans near osprey nests has a negative impact, as the number of osprey chicks decreases. Researchers have previously created camera surveillance intended for filming bird nests. The camera surveillance system will minimize human presence at the osprey nest. Nowadays the Raspberry Pi is widely used; it is a computer built on a small circuit board. The Raspberry Pi can be connected to a camera and store recordings on an SD memory card. In addition, it is possible to connect various sensors, install various software, and program it. The method of the study is to write code to create programs that make the Raspberry Pi start recording and take photos. These programs are written in Python; libraries such as OpenCV and the COCO dataset are used. The study covers the construction of a camera surveillance system with a Raspberry Pi, a high-quality camera, an infrared sensor, a sound sensor, and programs for taking photos and starting the filming. The recordings and photos are saved on the SD memory card. The camera surveillance system is powered by a power bank.
The result was a program that takes a photo when the sensors detect either infrared radiation, sound, or changes in pixels in the camera image. The photo is analyzed by the program to see whether there are any birds in it. In the program it is possible to set how many birds must be in the picture for it to start recording and save the photo. A second program was then made that continuously checks whether there are birds in the camera image; when the required number of birds is in the camera image, the camera starts recording and then takes a photo.
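A minimal sketch of the kind of trigger-then-detect loop described above, assuming a COCO-pretrained SSD loaded through OpenCV's dnn module; the GPIO pin numbers, model files, class ID, and thresholds are placeholders, not the thesis's actual code.

```python
# Sketch only: trigger on PIR/sound/pixel change, count birds with a
# COCO-pretrained detector, and save a photo when enough birds are seen.
import cv2
import RPi.GPIO as GPIO            # available on Raspberry Pi OS

PIR_PIN, SOUND_PIN = 17, 27        # placeholder GPIO pins
MIN_BIRDS = 2                      # required number of birds in frame
BIRD_CLASS_ID = 16                 # "bird" in the 91-class COCO label map

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)
GPIO.setup(SOUND_PIN, GPIO.IN)

# Placeholder model files: any COCO-trained SSD exported for OpenCV's dnn module.
net = cv2.dnn_DetectionModel("frozen_inference_graph.pb", "ssd_mobilenet.pbtxt")
net.setInputSize(320, 320)
net.setInputScale(1.0 / 127.5)
net.setInputMean((127.5, 127.5, 127.5))

def count_birds(frame, conf_thresh=0.5):
    class_ids, confs, boxes = net.detect(frame, confThreshold=conf_thresh)
    ids = class_ids.flatten() if len(class_ids) else []
    return sum(1 for c in ids if c == BIRD_CLASS_ID)

cap = cv2.VideoCapture(0)
_, prev = cap.read()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pixel-change trigger: mean absolute difference against the previous frame.
    motion = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                         cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)).mean() > 10
    prev = frame
    if GPIO.input(PIR_PIN) or GPIO.input(SOUND_PIN) or motion:
        if count_birds(frame) >= MIN_BIRDS:
            cv2.imwrite("trigger.jpg", frame)   # save the photo
            # ...start video recording here (e.g. cv2.VideoWriter or picamera2)
```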
62

Training Images

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
500 of 690 training images used in optimized training runs.
63

Annotations

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
Annotations for 500 of the 690 images used for training.
64

Object Detection for Aerial View Images: Dataset and Learning Rate

Qi, Yunlong 05 1900 (has links)
In recent years, deep learning based computer vision technology has developed rapidly. This is due not only to the improvement in computing power but also to the emergence of high-quality datasets. The combination of object detectors and drones has great potential in the field of rescue and disaster relief. We created an image dataset specifically for vision applications on drone platforms. The dataset contains 5000 images, and each image is carefully labeled according to the PASCAL VOC standard. This dataset will be very important for developing deep learning algorithms for drone applications. In object detection models, the loss function plays a vital role. Considering the uneven distribution of large and small objects in the dataset, we propose adjustment coefficients, based on the frequencies of objects of different sizes, that adjust the loss function and ultimately improve the accuracy of the model.
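The abstract does not give the exact form of the adjustment coefficients, so the sketch below only illustrates one plausible reading: per-size-bin weights inversely proportional to each bin's frequency, normalised and applied to per-object losses. The COCO-style area thresholds and the normalisation are assumptions.

```python
# Hedged sketch: inverse-frequency weights per object-size bin (small/medium/large),
# applied to per-object losses. The thesis may define the coefficients differently.
import torch

def size_bin(box):
    # box = (x1, y1, x2, y2); COCO-style area thresholds (assumed here)
    area = (box[2] - box[0]) * (box[3] - box[1])
    if area < 32 ** 2:
        return 0          # small
    if area < 96 ** 2:
        return 1          # medium
    return 2              # large

def adjustment_coefficients(all_boxes):
    """One coefficient per size bin, inversely proportional to its frequency."""
    counts = torch.zeros(3)
    for box in all_boxes:                 # one pass over the training annotations
        counts[size_bin(box)] += 1
    freq = counts / counts.sum()
    coeff = 1.0 / (freq + 1e-6)
    return coeff / coeff.mean()           # keep the average weight near 1

def weighted_loss(per_object_losses, boxes, coeff):
    weights = torch.stack([coeff[size_bin(b)] for b in boxes])
    return (weights * per_object_losses).mean()
```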
65

ZERO-SHOT OBJECT DETECTION METHOD COMPARISON AND ANALYSIS

Che, Peining 30 August 2019 (has links)
No description available.
66

Advanced Feature Learning and Representation in Image Processing for Anomaly Detection

Price, Stanton Robert 09 May 2015 (has links)
Techniques for improving the information quality present in imagery for feature extraction are proposed in this thesis. Specifically, two methods are presented: soft feature extraction and improved Evolution-COnstructed (iECO) features. Soft features comprise the extraction of image-space knowledge by performing a per-pixel weighting based on an importance map. Through soft features, one is able to extract features relevant to identifying a given object versus its background. Next, the iECO features framework is presented. The iECO features framework uses evolutionary computation algorithms to learn an optimal series of image transforms, specific to a given feature descriptor, that best extracts discriminative information. That is, a composition of image transforms is learned from training data to give a given feature descriptor the best opportunity to extract its information for the application at hand. The proposed techniques are applied to an automatic explosive hazard detection application, and significant results are achieved.
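A hedged sketch of the soft-feature idea: each pixel is weighted by an importance map before a standard descriptor (here HOG) is computed, so background pixels contribute less. How the importance map is obtained in the thesis is not reproduced; the Gaussian map below is a placeholder.

```python
# Sketch of "soft" feature extraction: per-pixel weighting by an importance map
# before computing a descriptor. The importance map here is a dummy placeholder.
import numpy as np
from skimage.feature import hog

def soft_hog(image, importance):
    """image, importance: 2-D float arrays of the same shape, importance in [0, 1]."""
    weighted = image * importance                   # per-pixel soft weighting
    return hog(weighted, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Usage with a dummy importance map peaked at the image centre (assumption):
img = np.random.rand(64, 64)
yy, xx = np.mgrid[0:64, 0:64]
imp = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 16.0 ** 2))
descriptor = soft_hog(img, imp)
```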
67

Fusion for Object Detection

Wei, Pan 10 August 2018 (has links)
In a three-dimensional world, to perceive the objects around us we not only wish to classify them but also to know where they are. The task of object detection combines both classification and localization: in addition to predicting the object category, we also predict where the object is from sensor data. Since it is not known ahead of time how many objects of interest are in the sensor data, or where they are, the output size of object detection may change, which makes the problem difficult. In this dissertation, I focus on the task of object detection and use fusion to improve detection accuracy and robustness. More specifically, I propose a method to calculate a measure of conflict. This method does not need external knowledge about the credibility of each source; instead, it uses the information from the sources themselves to help assess each source's credibility. I apply the proposed measure of conflict to fuse independent sources of tracking information from various stereo cameras. I also propose a computational intelligence system for more accurate object detection in real time. The proposed system uses online image augmentation before the detection stage during testing and fuses the detection results afterwards. The fusion method is computationally intelligent, based on a dynamic analysis of agreement among the inputs. Compared with other fusion operations such as average, median, and non-maxima suppression, the proposed method produces more accurate results in real time. Finally, I propose a multi-sensor fusion system that incorporates the advantages and mitigates the disadvantages of each type of sensor (LiDAR and camera). Generally, a camera can provide more texture and color information, but it cannot work in low visibility; LiDAR, on the other hand, can provide accurate point positions and works at night or in moderate fog or rain. The proposed system uses the advantages of both camera and LiDAR and mitigates their disadvantages. The results show that, compared with LiDAR or camera detection alone, the fused result can extend the detection range up to 40 meters with increased detection accuracy and robustness.
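As one illustration of agreement-based fusion (not the dissertation's exact operator), the sketch below fuses bounding boxes predicted on several augmented versions of the same image by weighting each box with its mean IoU agreement with the others, so boxes that the other sources disagree with are down-weighted rather than discarded.

```python
# Hedged sketch: fuse boxes for the same object from different augmentations by
# agreement weighting (mean IoU with the other boxes). Assumes at least two boxes.
import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse_boxes(boxes):
    """boxes: list of (x1, y1, x2, y2) for one object from different augmentations."""
    boxes = np.asarray(boxes, dtype=float)
    # Agreement of each box = average IoU with every other box.
    agree = np.array([np.mean([iou(b, o) for j, o in enumerate(boxes) if j != i])
                      for i, b in enumerate(boxes)])
    w = agree / (agree.sum() + 1e-9)
    return (w[:, None] * boxes).sum(axis=0)

# Example: two consistent predictions and one outlier; the outlier gets little weight.
print(fuse_boxes([(10, 10, 50, 50), (12, 11, 52, 49), (30, 30, 80, 80)]))
```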
68

Whistler Waves Detection - Investigation of modern machine learning techniques to detect and characterise whistler waves

Konan, Othniel Jean Ebenezer Yao 17 February 2022 (has links)
Lightning strokes create powerful electromagnetic pulses that routinely cause very low frequency (VLF) waves to propagate across hemispheres along geomagnetic field lines. VLF antenna receivers can be used to detect the whistler waves generated by these lightning strokes. The particular time/frequency dependence of the received whistler wave enables the estimation of electron density in the plasmasphere region of the magnetosphere. The identification and characterisation of whistlers are therefore important tasks for monitoring the plasmasphere in real time and for building large databases of events to be used in statistical studies. The current state of the art in detecting whistlers is the Automatic Whistler Detection (AWD) method developed by Lichtenberger (2009) [1]. This method is based on image correlation in two dimensions and requires significant computing hardware situated at the VLF receiver antennas (e.g. in Antarctica). The aim of this work is to develop a machine learning based model capable of automatically detecting whistlers in the data provided by the VLF receivers. The approach is to use a combination of image classification and localisation on the spectrogram data generated by the VLF receivers to identify and localise each whistler. The data at hand, around 2300 events identified by AWD at SANAE and Marion, are used as training, validation, and testing data. Three detector designs have been proposed: the first uses a method similar to AWD, the second uses image classification on regions of interest extracted from a spectrogram, and the last uses YOLO, the current state of the art in object detection. It has been shown that these detectors can achieve misdetection and false alarm rates of less than 15% on Marion's dataset. It is important to note that the ground truth (initial whistler labels) for the data used in this study was generated using AWD. Moreover, the SANAE IV dataset was small and did not contribute much content to the study.
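A small sketch of the spectrogram step that precedes any of the three detectors, assuming scipy for the short-time Fourier transform; the sampling rate, window parameters, and the detector call are placeholders rather than the values used in the study.

```python
# Hedged sketch: turn a VLF time series into a normalised spectrogram image that a
# YOLO-style detector could then run on. All parameters below are placeholders.
import numpy as np
from scipy.signal import spectrogram

def vlf_spectrogram(samples, fs=40_000, nperseg=1024, noverlap=512):
    f, t, sxx = spectrogram(samples, fs=fs, nperseg=nperseg, noverlap=noverlap)
    db = 10 * np.log10(sxx + 1e-12)                   # log-power in dB
    img = (db - db.min()) / (db.max() - db.min())     # normalise to [0, 1]
    return f, t, img

# Example on 5 s of synthetic noise; a detector trained on whistler spectrograms
# would then predict boxes on `img`:
f, t, img = vlf_spectrogram(np.random.randn(40_000 * 5))
# boxes = whistler_detector(img)   # hypothetical detector call
```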
69

Detection and tracking of unknown objects on the road based on sparse LiDAR data for heavy duty vehicles / Upptäckt och spårning av okända objekt på vägen baserat på glesa LiDAR-data för tunga fordon

Shilo, Albina January 2018 (has links)
Environment perception within autonomous driving aims to provide a comprehensive and accurate model of the surrounding environment based on information from sensors. For the model to be comprehensive, it must provide the kinematic state of surrounding objects. Existing approaches to object detection and tracking (estimation of the kinematic state) are developed for dense 3D LiDAR data from a sensor mounted on a car. However, it is a challenge to design a robust detection and tracking algorithm for sparse 3D LiDAR data. Therefore, in this thesis we propose a framework for detection and tracking of unknown objects using sparse data from a VLP-16 LiDAR mounted on a heavy-duty vehicle. Experiments reveal that the proposed framework performs well, detecting trucks, buses, cars, pedestrians and even smaller objects larger than 61x41x40 cm. The detection range depends on the size of the object: large objects (trucks and buses) are detected within 25 m, while cars and pedestrians are detected within 18 m and 15 m, respectively. The overall multiple object tracking accuracy of the framework is 79%.
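A generic sketch of the detection step on a sparse point cloud, under a common ground-removal-plus-Euclidean-clustering assumption (a height threshold followed by DBSCAN); the thesis framework is more elaborate, and every threshold below is illustrative only.

```python
# Hedged sketch: crude ground removal, Euclidean clustering with DBSCAN, and an
# axis-aligned box per cluster, as one generic way to detect unknown objects.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_objects(points, ground_z=-1.5, eps=0.7, min_points=5):
    """points: (N, 3) array of x, y, z in the vehicle frame (assumed)."""
    above_ground = points[points[:, 2] > ground_z]           # drop (roughly) the ground
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(above_ground)
    boxes = []
    for lbl in set(labels) - {-1}:                            # -1 marks DBSCAN noise
        cluster = above_ground[labels == lbl]
        boxes.append((cluster.min(axis=0), cluster.max(axis=0)))
    return boxes   # each box = (min corner, max corner); these would feed a tracker
```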
70

Detecting Curved Objects Against Cluttered Backgrounds

Prokaj, Jan 01 January 2008 (has links)
Detecting curved objects against cluttered backgrounds is a hard problem in computer vision. We present new low-level and mid-level features to function in these environments. The low-level features are fast to compute, because they employ an integral image approach, which makes them especially useful in real-time applications. The mid-level features are built from low-level features, and are optimized for curved object detection. The usefulness of these features is tested by designing an object detection algorithm using these features. Object detection is accomplished by transforming the mid-level features into weak classifiers, which then produce a strong classifier using AdaBoost. The resulting strong classifier is then tested on the problem of detecting heads with shoulders. On a database of over 500 images of people, cropped to contain head and shoulders, and with a diverse set of backgrounds, the detection rate is 90% while the false positive rate on a database of 500 negative images is less than 2%.
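The speed of the low-level features comes from the integral-image trick: after one cumulative pass over the image, the sum over any axis-aligned box costs four lookups. A short sketch of that building block:

```python
# Integral image: one cumulative-sum pass, then constant-time box sums.
import numpy as np

def integral_image(img):
    # Pad with a zero row/column so box_sum needs no boundary checks.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the padded integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
```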
