291

Fusion of Evolution Constructed Features for Computer Vision

Price, Stanton Robert 04 May 2018 (has links)
In this dissertation, image feature extraction quality is enhanced through the introduction of two feature learning techniques and, subsequently, feature-level fusion strategies are presented that improve classification performance. Two image/signal processing techniques are defined for pre-conditioning image data such that the discriminatory information is highlighted for improved feature extraction. The first approach, improved Evolution-COnstructed features, employs a modified genetic algorithm to learn a series of image transforms, specific to a given feature descriptor, for enhanced feature extraction. The second method, Genetic prOgramming Optimal Feature Descriptor (GOOFeD), is a genetic programming-based approach to learning the transformations of the data for feature extraction. GOOFeD offers a very rich and expressive solution space due to its ability to represent highly complex compositions of image transforms through binary operators, unary operators, and combinations of the two. With either technique, the goal is to learn a composition of image transforms from training data that gives a given feature descriptor the best opportunity to extract its information for the application at hand. Next, feature-level fusion via multiple kernel learning (MKL) is utilized to better combine the extracted features and, ultimately, improve classification accuracy. MKL is advanced through the introduction of six new indices for kernel weight assignment. Five of the indices are measured directly from the kernel matrix proximity values, making them highly efficient to compute. The sixth index is computed explicitly on distributions in the reproducing kernel Hilbert space. The proposed techniques are applied to an automatic buried explosive hazard detection application and significant results are achieved.
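As a rough illustration of the kernel-weighting idea, the sketch below fuses two feature descriptors by computing one Gram matrix per descriptor and weighting each by an index derived from the kernel matrix itself. Kernel-target alignment is used here only as a stand-in index; the dissertation's six indices, the RBF bandwidth, and the toy data are all assumptions of this sketch, not the author's actual formulation.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Pairwise squared distances -> RBF Gram matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def alignment_weight(K, y):
    # Kernel-target alignment: similarity between K and the ideal kernel y y^T
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

def fuse_kernels(feature_sets, y):
    # One Gram matrix per feature descriptor, weighted by its alignment index
    kernels = [rbf_kernel(X) for X in feature_sets]
    weights = np.array([alignment_weight(K, y) for K in kernels])
    weights = np.clip(weights, 0, None)
    weights /= weights.sum() + 1e-12
    return sum(w * K for w, K in zip(weights, kernels))

# Toy usage: two descriptors extracted from the same 20 samples
rng = np.random.default_rng(0)
y = np.sign(rng.standard_normal(20))
feats = [rng.standard_normal((20, 8)), rng.standard_normal((20, 16))]
K_fused = fuse_kernels(feats, y)  # feed to a kernel classifier, e.g. an SVM
```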
292

Evaluating use of Domain Adaptation for Data Augmentation Applications : Implementing a state-of-the-art Domain Adaptation module and testing it on object detection in the landscape domain. / Utvärdering av användningen av domänanpassning för en djupinlärningstillämpning : Implementering av en toppmodern domänanpassningsmodul och testning av den på objektdetektion i en landskapsdomän.

Jamal, Majd January 2022 (has links)
Machine learning models are becoming popular in industry since the technology has matured to solve numerous problems, such as classification [1], detection [2], and segmentation [3]. These algorithms require training on a large dataset with correct class labels in order to perform well on unseen data. One way to get access to large sets of annotated data is to use data from simulation engines. However, such data is often not as complex and rich as real data, and for images, for example, there can be a need to make them look more photorealistic. One approach to doing this is called domain adaptation. In collaboration with SAAB Aeronautics, which funds this research, this study aims to explore available domain adaptation frameworks, implement one, and use it to transform images from simulation toward real life. The state-of-the-art framework CyCADA was re-implemented from scratch using Python and TensorFlow as the deep learning package. The CyCADA implementation was verified by reproducing the digit adaptation results demonstrated in the original paper, performing domain adaptation between MNIST, USPS, and SVHN. CyCADA was then used to domain-adapt landscape images from simulation to real life. The domain-adapted images were used to train an object detector to evaluate whether CyCADA allows a detector to perform more accurately on real-life data. Statistical measurements, unfortunately, showed that the domain-adapted images became less photorealistic with CyCADA (88.68 FID on domain-adapted images compared to 80.43 FID on simulations), and object detection performed better on real-life data without CyCADA (0.131 mAP with a detector trained on domain-adapted images compared to 0.681 mAP with simulations). Since CyCADA produced effective domain adaptation results between digits, there remains the possibility that other hyperparameter settings and network architectures could produce effective results with landscape images. / This study was carried out in collaboration with SAAB Aeronautics and concerns developing a domain adaptation module that improves the performance of an object detection network. When an object detection network is trained on data from one domain, it is not guaranteed that the same network performs well on another domain, for example drawings versus photographs of fruit. Researchers solve this by collecting data from each domain and training several machine learning algorithms, a solution that costs time and energy. This is known as the domain shift problem. Solving domain shift is a hot topic in deep learning, and a range of algorithms fall into the category of domain adaptation. This study implements CyCADA as a way to evaluate a state-of-the-art domain adaptation algorithm. The re-implementation of CyCADA was successful, as several results from the original paper were reproduced. CyCADA produced effective domain shifts on images of digits. CyCADA was then applied to landscape images from a simulator to increase the realism of the images. The domain-shifted landscape images became blurry with CyCADA, and the FID score of the domain-adapted images, a metric that evaluates the photorealism of images, was worse than that of the purely simulated images. The object detection network performed better without CyCADA. Given that CyCADA performed well at transforming images of digits from one domain to another, there is hope that the framework can perform well on landscape images with further attempts at tuning its hyperparameters.
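Since FID is the photorealism metric quoted above, a minimal sketch of how such a score is computed may be helpful. It follows the standard Fréchet Inception Distance definition and assumes the activation arrays have already been extracted from an Inception-v3 pooling layer; it is not the thesis's evaluation code.

```python
import numpy as np
from scipy import linalg

def frechet_distance(act_real, act_fake):
    # act_*: (N, D) arrays of Inception-v3 pool features for each image set
    mu_r, mu_f = act_real.mean(axis=0), act_fake.mean(axis=0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_f = np.cov(act_fake, rowvar=False)
    # Matrix square root of the covariance product; discard tiny imaginary parts
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean)

# Toy usage with random "activations" standing in for Inception features
rng = np.random.default_rng(0)
print(frechet_distance(rng.standard_normal((256, 64)),
                       rng.standard_normal((256, 64)) + 0.5))
```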
293

Objektföljning med roterbar kamera / Object tracking with rotatable camera

Zetterlund, Joel January 2021 (has links)
Today it is common for events to be filmed without a professional video photographer: the little-league football match, conference meetings, teaching, or YouTube clips. To avoid needing a camera operator, one can use what are called object-tracking cameras, cameras that can follow an object's position over time without an operator steering them. This thesis describes how object tracking works and compares object-tracking cameras based on computer vision with a human camera operator. To compare them, a prototype was built. The prototype consists of a Raspberry Pi 4B running MOSSE, an object-tracking algorithm, and SSD300, a computer vision detection algorithm. The steering consists of a gimbal with three brushless motors that control the camera via a regulator. The result was a prototype that can follow a person walking at at most 100 pixels per second, or 1 meter per second, in full frame, with a maximum outdoor distance of 11.4 meters, whereas a camera operator can follow a person at 300-800 pixels per second, or 3 meters per second. The prototype is not as good as a camera operator, but it can be used to follow a person who is teaching and walking slowly, provided that the prototype is robust, which is not yet the case. To get better results, a stronger processor and better algorithms than those used in the prototype are needed, since a major problem was the low update rate of the detection algorithm. / Today, it is common for events to be filmed without the use of a professional video photographer. It can be the little league football game, conference meetings, teaching, or YouTube clips. To film without a cameraman, you can use something called object tracking cameras: a camera that can follow an object's position without a cameraman. This thesis describes how object tracking works and compares object tracking cameras based on computer vision with a cameraman. In order to compare them against each other, a prototype has been developed. The prototype consists of a Raspberry Pi 4B with MOSSE, which is an object tracking algorithm, and SSD300, which is a detection algorithm in computer vision. The steering consists of a gimbal with three brushless motors that control the camera via a regulator. The result was a prototype capable of following a person walking at a maximum speed of 100 pixels per second, or 1 meter per second, in full frame, with a maximum distance of 11.4 meters outdoors, while a cameraman managed to follow a person at 300-800 pixels per second, or 3 meters per second. The prototype is not as good as a cameraman but can be used to follow a person who teaches and walks slowly, provided that the prototype is robust, which is not yet the case. To get better results, a stronger processor and better algorithms than those used with the prototype are needed, as a big problem was the low update rate of the detection algorithm.
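For readers unfamiliar with MOSSE, the sketch below shows how a correlation-filter tracker of that kind can be driven from OpenCV and how a horizontal pixel error could be derived for a gimbal controller. The camera index, the manual ROI initialization (the thesis instead seeds the tracker from SSD300 detections), and the availability of the tracker in the opencv-contrib build are assumptions of this sketch, not the thesis's actual code.

```python
import cv2

# Hypothetical video source; in the thesis setup this would be the Raspberry Pi camera feed.
cap = cv2.VideoCapture(0)

ok, frame = cap.read()
# In the actual pipeline the initial box would come from the SSD300 detector;
# here the user draws it to keep the sketch self-contained.
bbox = cv2.selectROI("init", frame, showCrosshair=False)

# MOSSE lives in the contrib package; older builds expose cv2.TrackerMOSSE_create instead.
tracker = cv2.legacy.TrackerMOSSE_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = (int(v) for v in box)
        # Horizontal pixel error between object centre and image centre;
        # a gimbal regulator would turn this into a pan command.
        error_x = (x + w / 2) - frame.shape[1] / 2
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
```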
294

MACHINE LEARNING FOR RESILIENT AND SUSTAINABLE ENERGY SYSTEMS UNDER CLIMATE CHANGE

Min Soo Choi (16790469) 07 August 2023 (has links)
Climate change is recognized as one of the most significant challenges of the 21st century. Anthropogenic activities have led to a substantial increase in greenhouse gases (GHGs) since the Industrial Revolution, with the energy sector being one of the biggest contributors globally. The energy sector now faces unique challenges, not only due to decarbonization goals but also due to the increased risk of climate extremes under climate change.

This dissertation focuses on leveraging machine learning, specifically with unstructured data such as images, to address many of the unprecedented challenges faced by energy systems. The dissertation begins (Chapter 1) by providing an overview of the risks posed by climate change to modern energy systems and then explains how machine learning applications can help address these risks. By harnessing the power of machine learning and unstructured data, this research aims to contribute to the development of more resilient and sustainable energy systems, as described briefly below.

Accurate forecasting of generation is essential for mitigating the risks associated with the increased penetration of intermittent and non-dispatchable variable renewable energy (VRE). In Chapters 2 and 3, deep learning techniques are proposed to predict solar irradiance, a crucial factor in solar energy generation, in order to address the uncertainty inherent in solar energy. Specifically, Chapter 2 introduces a cost-efficient, fully exogenous solar irradiance forecasting model that effectively incorporates atmospheric cloud dynamics using satellite imagery. Building upon Chapter 2, Chapter 3 extends the model to a fully probabilistic framework that not only forecasts the future point value of irradiance but also quantifies the uncertainty of the prediction, which is particularly important for high-risk decision making in energy systems.

While the energy system is a major contributor to GHG emissions, it is also vulnerable to climate change risks. Given the essential role of energy systems infrastructure in modern society, ensuring reliable and sustainable operations is of utmost importance. However, our understanding of reliability analysis in electricity transmission networks is limited by the lack of access to large-scale transmission network topology datasets; previous research has mostly relied on proxy or synthetic datasets. Chapter 4 addresses this research gap by proposing a novel deep learning-based object detection method that uses satellite images to construct a comprehensive large-scale transmission network dataset.
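One common way to turn a point forecaster into a probabilistic one, as Chapter 3 is described as doing, is to train it against a quantile (pinball) loss. The abstract does not specify the dissertation's actual formulation, so the snippet below is only a generic illustration with made-up irradiance values.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    # Quantile (pinball) loss for a single quantile level q in (0, 1):
    # under-predictions are weighted by q, over-predictions by (1 - q).
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Toy example: score a forecast of the 0.9 quantile of irradiance (W/m^2)
y_true = np.array([412.0, 530.0, 101.0])
y_pred = np.array([450.0, 500.0, 150.0])
print(pinball_loss(y_true, y_pred, q=0.9))
```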
295

DETECTION AND SUB-PIXEL LOCALIZATION OF DIM POINT OBJECTS

Mridul Gupta (15426011) 08 May 2023 (has links)
Detection of dim point objects plays an important role in many imaging applications such as early warning systems, surveillance, astronomy, and microscopy. In satellite imaging, natural phenomena such as clouds can confound object detection methods. We propose an object detection method that uses spatial, spectral, and temporal information to reject detections that are not consistent with a moving object, achieving a high probability of detection with a low false alarm rate. We propose another method for dim object detection using convolutional neural networks (CNN); it augments a conventional space-based detection processing chain with a lightweight CNN to improve detection performance. To evaluate the performance of our proposed methods, we used a set of curated satellite images and generated receiver operating characteristics (ROC).

Most satellite images have adequate spatial resolution and signal-to-noise ratio (SNR) for the detection and localization of common large objects, such as buildings. In many applications, however, the spatial resolution of the imaging system is not enough to localize a point object or two closely-spaced objects (CSOs) that are described by only a few pixels (or less than one pixel), and a low SNR, as when the objects are dim, increases the difficulty. We describe a method to estimate the objects' amplitudes and spatial locations with sub-pixel accuracy using non-linear optimization and information from multiple spectral bands. We also propose a machine learning method that minimizes a cost function derived from the maximum likelihood estimation of the observed image to determine an object's sub-pixel spatial location and amplitude. We derive the Cramér-Rao lower bound and compare the proposed estimators' variance with this bound.
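To make the sub-pixel estimation idea concrete, the sketch below fits an amplitude-scaled Gaussian point-spread function to a single-band image patch with non-linear least squares. The Gaussian PSF shape, the single band, and the least-squares cost are simplifying assumptions; the thesis works with multiple spectral bands and a cost derived from maximum likelihood estimation.

```python
import numpy as np
from scipy.optimize import least_squares

def psf_model(params, xx, yy):
    # Point object modeled as an amplitude-scaled Gaussian PSF plus a constant background
    amp, x0, y0, sigma, bg = params
    return bg + amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

def localize(patch, sigma_init=1.2):
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    p0 = [patch.max() - patch.min(), w / 2, h / 2, sigma_init, patch.min()]
    res = least_squares(lambda p: (psf_model(p, xx, yy) - patch).ravel(), p0)
    amp, x0, y0, *_ = res.x
    return amp, x0, y0   # sub-pixel amplitude and location estimates

# Toy usage: a synthetic dim point at (x=4.3, y=5.7) in a noisy 11x11 patch
yy, xx = np.mgrid[0:11, 0:11]
truth = psf_model([5.0, 4.3, 5.7, 1.2, 10.0], xx, yy)
patch = truth + np.random.default_rng(1).normal(0, 0.5, truth.shape)
print(localize(patch))
```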
296

Multitask Deep Learning models for real-time deployment in embedded systems / Deep Learning-modeller för multitaskproblem, anpassade för inbyggda system i realtidsapplikationer

Martí Rabadán, Miquel January 2017 (has links)
Multitask Learning (MTL) was conceived as an approach to improve the generalization ability of machine learning models. When applied to neural networks, multitask models take advantage of sharing resources to reduce total inference time, memory footprint, and model size. We propose MTL as a way to speed up deep learning models for applications in which multiple tasks need to be solved simultaneously, which is particularly useful in embedded, real-time systems such as the ones found in autonomous cars or UAVs. In order to study this approach, we apply MTL to a computer vision problem in which both object detection and semantic segmentation tasks are solved, based on the Single Shot Multibox Detector and Fully Convolutional Networks with skip connections respectively, using a ResNet-50 as the base network. We train multitask models on two different datasets: Pascal VOC, which is used to validate the decisions made, and a combination of datasets with aerial-view images captured from UAVs. Finally, we analyse the challenges that appear during the process of training multitask networks and try to overcome them. These challenges hinder the capacity of our multitask models to reach the performance of the best single-task models trained without the limitations imposed by applying MTL. Nevertheless, multitask networks benefit from sharing resources and are 1.6x faster, lighter, and use less memory compared to deploying the single-task models in parallel, which becomes essential when running them on a Jetson TX1 SoC, as the parallel approach does not fit into memory. We conclude that MTL has the potential to give superior performance as far as the object detection and semantic segmentation tasks are concerned, in exchange for a more complex training process that requires overcoming challenges not present in the training of single-task models.
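The resource-sharing idea can be sketched as a single ResNet-50 trunk feeding two lightweight task heads, as below. The heads are deliberate placeholders (the thesis uses SSD and an FCN with skip connections), the anchor and class counts are made-up values, and PyTorch/torchvision is used for brevity even though the thesis does not state its framework; the sketch only shows how one forward pass through the shared trunk serves both tasks.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MultiTaskNet(nn.Module):
    """Shared ResNet-50 trunk with a toy detection head and a toy segmentation head."""
    def __init__(self, num_classes=21, num_anchors=6):
        super().__init__()
        backbone = resnet50(weights=None)  # torchvision >= 0.13 API
        self.trunk = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 2048, H/32, W/32)
        # Placeholder detection head: per-cell class scores + 4 box offsets per anchor
        self.det_head = nn.Conv2d(2048, num_anchors * (num_classes + 4), kernel_size=3, padding=1)
        # Placeholder segmentation head: 1x1 conv then upsample back to input resolution
        self.seg_head = nn.Sequential(
            nn.Conv2d(2048, num_classes, kernel_size=1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.trunk(x)          # computed once, shared by both tasks
        return self.det_head(feats), self.seg_head(feats)

model = MultiTaskNet()
det_out, seg_out = model(torch.randn(1, 3, 224, 224))
print(det_out.shape, seg_out.shape)
```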
297

Partially Observable Markov Decision Processes for Faster Object Recognition

Olafsson, Björgvin January 2016 (has links)
Object recognition in the real world is a big challenge in the field of computer vision. Given the potentially enormous size of the search space, it is essential to be able to make intelligent decisions about where in the visual field to obtain information, in order to reduce the computational resources needed. In this report a POMDP (Partially Observable Markov Decision Process) learning framework, using a policy gradient method and information rewards as a training signal, has been implemented and used to train fixation policies that aim to maximize the information gathered in each fixation. The purpose of such policies is to make object recognition faster by reducing the number of fixations needed. The trained policies are evaluated by simulation and compared with several fixed policies. Finally, it is shown that it is possible to use the framework to train policies that outperform the fixed policies for certain observation models.
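A compressed sketch of the policy-gradient-with-information-reward recipe is given below: a linear softmax policy picks fixation locations, the reward is the entropy reduction of a belief over object classes, and REINFORCE updates the policy parameters. The belief update is a random toy stand-in for a real observation model, and all sizes and the learning rate are assumptions; it is meant only to show the shape of the training loop, not the report's actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n_locations, n_features = 5, 8
theta = np.zeros((n_locations, n_features))   # linear softmax policy parameters

def policy(belief_features):
    logits = theta @ belief_features
    p = np.exp(logits - logits.max())
    return p / p.sum()

def entropy(b):
    b = b[b > 0]
    return -np.sum(b * np.log(b))

alpha = 0.1
for episode in range(200):
    belief = np.full(4, 0.25)                 # uniform belief over 4 object classes
    feats = rng.standard_normal(n_features)   # random features standing in for a belief summary
    grads, rewards = [], []
    for t in range(3):                        # three fixations per episode
        p = policy(feats)
        a = rng.choice(n_locations, p=p)
        # Toy observation sharpens (or blurs) the belief; reward = information gained
        h_before = entropy(belief)
        belief = belief * rng.dirichlet(np.ones(4) * 2.0)
        belief /= belief.sum()
        rewards.append(h_before - entropy(belief))
        # grad of log pi(a|s) for a linear-softmax policy: feats * (1[k=a] - p_k)
        g = -np.outer(p, feats)
        g[a] += feats
        grads.append(g)
    G = sum(rewards)                          # undiscounted return
    for g in grads:
        theta += alpha * G * g                # REINFORCE update
```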
298

Detecting Faulty Tape-around Weatherproofing Cables by Computer Vision

Sun, Ruiwen January 2020 (has links)
With the roll-out of 5G, more radio towers will be set up and more cables installed. However, a large proportion of radio units are mounted high up in the open, which makes it difficult for human technicians to maintain the systems. Under these circumstances, automatic detection of faults in radio cabinets is crucial. Cables and connectors are usually covered with weatherproofing tape, and one of the most common problems is that the tape is not wound tightly around the cables and connectors. This makes the tape stick out from the cable and look like a waving flag, which may seriously damage the radio systems. The thesis aims at detecting this flagging tape and addressing the issue. This thesis experiments with two methods for object detection: a convolutional neural network, and OpenCV with image processing. The former uses the YOLO (You Only Look Once) network for training and testing, while in the latter method a connected-component method is applied for the detection of big objects such as the cables and a line segment detector is responsible for extracting the flagging-tape boundary. Multiple parameter configurations, structurally and functionally distinct, were developed to find the most suitable way to meet the requirement. Furthermore, precision and recall are used to evaluate the quality of the system output, and larger experiments using different parameters were performed to improve the results. The results show that the best way of detecting faulty weatherproofing is the image processing method, with which recall is 71% and precision reaches 60%. This method performs better than YOLO on flagging-tape detection. The method shows the great potential of this kind of object detection, and a detailed discussion of its limitations is also presented in the thesis. / With 5G, more cables will be installed as more radio towers are set up. However, a large proportion of the radio units are mounted high up in the open, which makes it difficult for human technicians to maintain the systems. Under these circumstances, automatic detection of faults in radio cabinets is crucial. Cables and connectors are usually covered with weatherproofing tape, and one of the most common problems is that the tape is not wound tightly around the cables and connectors. This makes the tape come loose from the cable and look like a waving flag, which can seriously damage the radio systems. The thesis aims to detect this flagging tape and address the problem. The thesis experiments with two methods for object detection: a convolutional neural network, and OpenCV with image processing. The former uses the YOLO (You Only Look Once) network for training and testing, while in the latter method a connected-component method is used to detect large objects such as the cables and a line segment detector is responsible for extracting the flagging-tape boundary. Multiple parameter configurations, structurally and functionally distinct, were developed to find the most suitable way to meet the requirements. Precision and recall are used to evaluate the quality of the system output, and to improve the results, larger experiments were performed with different parameters. The results show that the best way to detect faulty weatherproofing is the image processing method, with which recall is 71% and precision reaches 60%. This method performs better than YOLO for flagging-tape detection. The method shows the great potential of this kind of object detection, and a detailed discussion of its limitations is also presented in the thesis.
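A rough sketch of the image-processing route, combining connected components for the large cable regions with a line segment detector for tape edges, could look like the following. The file path, thresholds, and area cut-off are placeholder assumptions, the built-in LSD implementation is only available in some OpenCV builds, and the thesis's exact pipeline and parameters are not reproduced here.

```python
import cv2
import numpy as np

# Hypothetical input: a grayscale photo of a taped cable ("cable.png" is a placeholder path).
img = cv2.imread("cable.png", cv2.IMREAD_GRAYSCALE)

# 1) Connected components on a thresholded image to isolate large structures (the cables).
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
cable_mask = np.zeros_like(binary)
for i in range(1, n_labels):                      # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 5000:         # area threshold is an assumed tuning value
        cable_mask[labels == i] = 255

# 2) Line segments inside the cable regions; availability of this detector varies by OpenCV build.
lsd = cv2.createLineSegmentDetector()
lines, _, _, _ = lsd.detect(cv2.bitwise_and(img, img, mask=cable_mask))

# Long segments that deviate strongly from the cable's main axis would be
# candidate "flagging tape" edges for further filtering.
if lines is not None:
    print(f"{len(lines)} line segments found")
```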
299

A Path Planning Approach for Context Aware Autonomous UAVs used for Surveying Areas in Developing Regions / En Navigeringsstrategi för Autonoma Drönare för Utforskning av Utvecklingsregioner

Kringberg, Fredrika January 2018 (has links)
Developing regions are often characterized by large areas that are poorly reachable or explored. The mapping and census of roaming populations in these areas are often difficult and sporadic. A recent spark in the development of small aerial vehicles has made them the perfect tool to efficiently and accurately monitor these areas. This paper presents an approach to aid area surveying through the use of Unmanned Aerial Vehicles. The two main components of this approach are an efficient on-device deep learning object identification component, able to capture and infer images with acceptable performance (latency and accuracy), and a dynamic path planning approach informed by the object identification component. In particular, this thesis illustrates the development of the path planning component, which exploits potential field methods to dynamically adapt the path based on inputs from the vision system. It also describes the integration work performed to implement the approach on a prototype platform, with the aim of achieving autonomous flight with onboard computation. The path planning component was developed with the purpose of gaining information about the populations detected by the object identification component, while considering the limited energy and computational power available onboard a UAV. The developed algorithm was compared to navigation using a predefined path, where the UAV does not react to the environment. Results from the comparison show that the algorithm provides more information about the objects of interest, with a very small change in flight time. The integration of the object identification and path planning components on the prototype platform was evaluated in terms of end-to-end latency, power consumption, and resource utilization. Results show that the proposed approach is feasible for area surveying in developing regions. Parts of this work have been published in the DroNet workshop, co-located with MobiSys, under the title Surveying Areas in Developing Regions Through Context Aware Drone Mobility. The work was carried out in collaboration with Alessandro Montanari, Alice Valentini, Cecilia Mascolo and Amanda Prorok. / Developing regions are often characterized by vast areas that are hard to reach and unexplored. Mapping and counting the population in these areas are difficult tasks that happen sporadically. Recent advances in small aerial vehicles have made them perfect tools for monitoring these areas efficiently and accurately. This report presents a strategy for facilitating the exploration of these areas with the help of drones. The two main components of this strategy are an efficient machine learning component for object identification with acceptable performance in terms of latency and accuracy, and a dynamic navigation component that is informed by the object identification component. In particular, this thesis illustrates the development of the navigation component, which uses potential fields to dynamically adapt the path based on information from the object identification system. It also describes the integration work carried out to implement the strategy on a prototype platform, with the goal of achieving autonomous flight with all computation performed onboard. The navigation component was developed to maximize the information about the populations detected by the object identification component, taking into account the limited energy and computing power onboard a drone. The developed algorithm was compared with navigation along a predefined path, where the drone does not react to its surroundings. Results from the comparison show that the algorithm provides more information about the objects of interest, with a very small change in flight time. The integration of the object identification component and the navigation component on the prototype platform was evaluated in terms of latency, power consumption, and resource utilization. The results show that the proposed strategy is feasible for mapping and exploring developing regions. Parts of this work have been published in the DroNet workshop, co-located with MobiSys, under the title Surveying Areas in Developing Regions Through Context Aware Drone Mobility. The work was carried out in collaboration with Alessandro Montanari, Alice Valentini, Cecilia Mascolo and Amanda Prorok.
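A bare-bones version of the potential-field idea the thesis builds on is sketched below: an attractive potential pulls the UAV toward a goal (for example a detected population cluster) while repulsive potentials push it away from points to avoid, and the path follows the negative gradient. The gains, influence radius, step size, and toy coordinates are all assumptions; the thesis's information-driven extensions are not reproduced.

```python
import numpy as np

def attractive_grad(pos, goal, k_att=1.0):
    # Gradient of 0.5 * k_att * ||pos - goal||^2
    return k_att * (pos - goal)

def repulsive_grad(pos, obstacles, k_rep=2.0, influence=3.0):
    grad = np.zeros(2)
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:
            # Gradient of 0.5 * k_rep * (1/d - 1/influence)^2 w.r.t. the position
            grad += k_rep * (1.0 / influence - 1.0 / d) / d**3 * diff
    return grad

def plan(start, goal, obstacles, step=0.1, max_iters=500):
    path, pos = [np.array(start, float)], np.array(start, float)
    for _ in range(max_iters):
        grad = attractive_grad(pos, goal) + repulsive_grad(pos, obstacles)
        pos = pos - step * grad / (np.linalg.norm(grad) + 1e-9)   # unit-step gradient descent
        path.append(pos.copy())
        if np.linalg.norm(pos - goal) < 0.2:
            break
    return np.array(path)

# Toy usage: fly toward a detected cluster while avoiding two no-fly points
goal = np.array([10.0, 10.0])
obstacles = [np.array([5.0, 5.0]), np.array([7.0, 8.0])]
print(plan([0.0, 0.0], goal, obstacles)[-1])
```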
300

Low-Cost UAV Swarm for Real-Time Object Detection Applications

Valdovinos Miranda, Joel 01 June 2022 (has links) (PDF)
With unmanned aerial vehicles (UAVs), also known as drones, becoming readily available and affordable, applications for these devices have grown immensely. One type of application is the use of drones to fly over large areas and detect desired entities. For example, a swarm of drones could detect marine creatures near the surface of the ocean and provide users with the location and type of animal found. However, even with the reduction in cost of drone technology, such applications remain costly due to the use of custom hardware with built-in advanced capabilities. Therefore, the focus of this thesis is to compile an easily customizable, low-cost drone design with the necessary hardware for autonomous behavior, swarm coordination, and on-board object detection capabilities. Additionally, this thesis outlines the network architecture needed to handle the interconnection and bandwidth requirements of the drone swarm. The drone's on-board system uses a PixHawk 4 flight controller to handle flight mechanics, a Raspberry Pi 4 as a companion computer for general-purpose computing power, and an NVIDIA Jetson Nano Developer Kit to perform object detection in real time. The implemented network follows the 802.11s standard for multi-hop communications with the HWMP routing protocol. This topology allows drones to forward packets through the network, significantly extending the flight range of the swarm. Our experiments show that the selected hardware and implemented network can provide direct point-to-point communications at a range of up to 1000 feet, with extended range possible through message forwarding. The network also provides sufficient bandwidth for bandwidth-intensive data such as live video streams. With an expected flight time of about 17 minutes, the proposed design offers a low-cost drone swarm solution for mid-range aerial surveillance applications.
