  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Camera Based Deep Learning Algorithms with Transfer Learning in Object Perception

Hu, Yujie January 2021 (has links)
The perception system is the key for autonomous vehicles to sense and understand the surrounding environment. As the cheapest and most mature sensor, the monocular camera creates a rich and accurate visual representation of the world. The objective of this thesis is to investigate whether camera-based deep learning models with transfer learning can achieve 2D object detection, License Plate Detection and Recognition (LPDR), and highway lane detection in real time. The You Only Look Once version 3 (YOLOv3) algorithm, with and without transfer learning, is applied to the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset to detect cars, cyclists, and pedestrians. This application shows that objects can be detected in real time and that transfer learning boosts detection performance. The Convolutional Recurrent Neural Network (CRNN) algorithm with a pre-trained model is applied to multiple License Plate (LP) datasets for real-time LP recognition. The optimized model is then used to recognize Ontario LPs and achieves high accuracy. The Efficient Residual Factorized ConvNet (ERFNet) algorithm with transfer learning and a cubic spline model are modified and implemented on the TuSimple dataset for lane segmentation and interpolation. The detection performance and speed are comparable with other state-of-the-art algorithms. / Thesis / Master of Applied Science (MASc)
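Detection results like those above are conventionally scored by Intersection-over-Union (IoU) against ground-truth boxes; the KITTI benchmark, for example, counts a car detection as correct when IoU exceeds 0.7. A minimal sketch of the metric (an illustration, not code from the thesis):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle: max of the top-left corners, min of the bottom-right.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

For instance, two 10x10 boxes offset by half their width overlap in a 5x10 region, giving an IoU of 50 / 150 = 1/3.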
82

ML-Based Optimization of Large-Scale Systems: Case Study in Smart Microgrids and 5G RAN

Zhou, Hao 10 August 2023 (has links)
The recent advances in machine learning (ML) have brought revolutionary changes to every field. Many novel applications, such as face recognition and natural language processing, have demonstrated the great potential of ML techniques. Indeed, ML can significantly enhance the intelligence of many existing systems, including the smart grid, wireless communications, mechanical engineering, and so on. For instance, a microgrid (MG), a distribution-level power system, can exchange energy with the main grid or work in islanded mode, which enables higher flexibility for the smart grid. However, it suffers from considerable management complexity because it includes multiple entities such as renewable energy resources, energy storage systems (ESSs), loads, etc., and each entity may have unique observations and policies for making autonomous decisions. Similarly, 5G networks are designed to provide lower latency and higher throughput and reliability for a large number of user devices, but the evolving network architecture also leads to great complexity for network management. 5G network management should jointly consider various user types and network resources in a dynamic wireless environment, and the integration of new techniques, such as reconfigurable intelligent surfaces (RISs), requires more efficient algorithms for network optimization. Consequently, intelligent management schemes are crucial for scheduling network resources. In this work, we aim to develop state-of-the-art ML techniques to improve the performance of large-scale systems. As case studies, we focus on MG energy management and 5G radio access network (RAN) management. Multi-agent reinforcement learning (MARL) is a natural fit for MG energy management, treating each entity as an independent agent. We further investigate how communication failures affect MG energy trading by using Bayesian deep reinforcement learning (BA-DRL).
On the 5G side, we use MARL, transfer reinforcement learning (TRL), and hierarchical reinforcement learning (HRL) to improve network performance. In particular, we study the performance of these algorithms under various scenarios, including radio resource allocation for network slicing, joint radio and computation resource allocation for mobile edge computing (MEC), and joint radio and cache resource allocation for edge caching. Additionally, we investigate how HRL can improve the energy efficiency (EE) of RIS-aided heterogeneous networks. The findings of this research highlight the capabilities of various ML techniques in different application domains. Firstly, different MG entities can be well coordinated by applying MARL, enabling intelligent decision-making for each agent. Secondly, Bayesian theory can be used to solve the partially observable Markov decision process (POMDP) problems caused by communication failures in MARL. Thirdly, MARL is capable of balancing the heterogeneous requirements of different slices in 5G networks, guaranteeing satisfactory overall network performance. Then, we find that TRL can significantly improve the convergence performance of conventional reinforcement learning or deep reinforcement learning by transferring knowledge from experts to learners, which is demonstrated in a 5G network slicing case study. Finally, we find that long-term and short-term decisions are well coordinated by HRL, and the proposed cooperative hierarchical architecture achieves higher throughput and EE than conventional algorithms.
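The expert-to-learner knowledge transfer underlying TRL can be illustrated with tabular Q-learning on a toy chain environment, where a learner is warm-started from an expert's Q-table instead of from zeros. This is only an illustrative sketch, not the thesis's MARL/TRL implementation; the environment and hyperparameters are invented:

```python
import random

def q_learning(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1,
               q_init=None, seed=0):
    """Tabular Q-learning on a 4-state chain; reaching state 3 yields reward 1.

    Passing q_init warm-starts the learner from an expert's table -- the
    simplest form of transfer reinforcement learning.
    """
    rng = random.Random(seed)
    n_states, n_actions, goal = 4, 2, 3
    # Copy the expert's table if given, otherwise start from zeros.
    q = [row[:] for row in q_init] if q_init else [[0.0] * n_actions
                                                   for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)                       # explore
            else:
                a = max(range(n_actions), key=lambda x: q[s][x])   # exploit
            s_next = min(s + 1, goal) if a == 1 else s  # action 1 = step right
            r = 1.0 if s_next == goal else 0.0
            # Standard temporal-difference update.
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q
```

A warm-started learner, e.g. `q_learning(episodes=20, q_init=expert)`, already encodes the expert's preference for stepping right and needs far fewer episodes, which is the convergence benefit the abstract attributes to TRL.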
83

Object Discovery in Novel Environments for Efficient Deterministic Planning

Frank, Ethan 26 May 2023 (has links)
No description available.
84

Breast Abnormality Diagnosis Using Transfer and Ensemble Learning

Azour, Farnoosh 02 June 2022 (has links)
Breast cancer is the second most fatal cancer both in Canada and across the globe; however, early detection can substantially raise the survival rate. Researchers have therefore been working to develop Computer-Aided Diagnosis (CADx) systems. Traditional CAD systems depend on manual feature extraction, which has provided radiologists with poor detection and diagnosis tools. Recently, however, the application of Convolutional Neural Networks (CNNs), one of the most impressive deep learning-based methods, and in particular the transfer learning technique, has revolutionized the performance and development of these systems. One issue in medical diagnosis is distinguishing between breast mass lesions and calcifications (small deposits of calcium). This work offers a solution using transfer learning and ensemble learning (majority voting) at the first stage, later replacing the voting strategy with soft voting. Regardless of the abnormality's type (mass or calcification), the severity of the abnormality plays a key role; in this study, we therefore went further and created a CADx pathology diagnosis system. More specifically, after comparing multi-classification results with a two-staged abnormality diagnosis system, we propose the two-staged binary classifier as our final model. We thus offer a novel breast cancer diagnosis system using a wide range of pre-trained models. To the best of our knowledge, we are the first to integrate a wide range of state-of-the-art pre-trained models, notably including EfficientNet, for the transfer learning part and to subsequently employ ensemble learning. With the application of pre-trained CNN-based models, i.e., transfer learning, we are able to overcome the lack of large datasets.
Moreover, with the EfficientNet family offering better results with fewer parameters, we achieved promising results in terms of accuracy and AUC score, and ensemble learning was then applied to make the network more robust. After 10-fold cross-validation, the breast abnormality classifier achieved an accuracy of 0.96 ± 0.03 and an AUC score of 0.96, while the pathology diagnosis stage achieved an accuracy of 0.85 ± 0.08 and an AUC score of 0.81.
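The distinction between the two ensemble strategies mentioned above, majority (hard) voting over predicted labels versus soft voting over averaged class probabilities, can be sketched in a few lines. The class labels and probabilities below are invented for illustration, not taken from the thesis:

```python
from collections import Counter

def hard_vote(predictions):
    """Majority vote: pick the class label predicted by the most models."""
    return Counter(predictions).most_common(1)[0][0]

def soft_vote(prob_lists):
    """Soft vote: average per-class probabilities across models, take the argmax."""
    n = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Soft voting can overturn a hard majority when the dissenting model is far more confident, which is one reason it often improves on plain majority voting.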
85

Performance enhancement of wide-range perception issues for autonomous vehicles

Sharma, Suvash 13 May 2022 (has links) (PDF)
Due to the mission-critical nature of autonomous driving, the underlying scene understanding algorithms should be given special care during their development, with precise consideration of accuracy and run-time. Accuracy must be treated strictly: if compromised, it leads to faulty interpretation of the environment, which may ultimately result in accidents. Run-time is equally important, as delayed understanding of the scene hampers the real-time response of the vehicle, again risking accidents. Both depend on factors such as the design and complexity of the algorithms, the nature of the objects or events encountered in the environment, weather-induced effects, etc. In this work, several novel scene understanding algorithms based on semantic segmentation are devised. First, a transfer learning technique is proposed to transfer knowledge from a data-rich domain to a data-scarce off-road driving domain for semantic segmentation, so that the learned information is efficiently transferred from one domain to another while reducing run-time and increasing accuracy. Second, the performance of several segmentation algorithms is assessed under rain conditions ranging from light to severe, and two methods for achieving robustness are proposed. Third, a new method of removing rain from the input images is proposed. Since autonomous vehicles are rich in sensors, each capable of representing different types of information, it is worth fusing the information from all available sensors. Fourth, a fusion mechanism with a novel algorithm that applies local and non-local attention in a cross-modal setting, combining RGB camera images and lidar-based images for road detection using semantic segmentation, is executed and validated for different driving scenarios.
Fifth, a conceptually new method of off-road driving trail representation, called Traversability, is introduced. To establish the correlation between a vehicle’s capability and the level of difficulty of the driving trail, a new dataset called CaT (CAVS Traversability) is introduced. This dataset is very helpful for future research in several off-road driving applications including military purposes, robotic navigation, etc.
86

Marine Habitat Mapping Using Image Enhancement Techniques & Machine Learning

Mureed, Mudasar January 2022 (has links)
The mapping of habitats is the first step in policies that target the environment, as well as in spatial planning and management. Biodiversity plans are always centered around habitats; therefore, constant monitoring of these delicate species in terms of health, changes, and extinction is a must. Human activities are constantly growing, resulting in the destruction of land and marine habitats: land habitats are being destroyed through air pollution and deforestation, while marine habitats are being destroyed by ocean acidification, industrial waste, and pollution. This dissertation focuses on aquatic habitats, mainly coral reefs. An estimated 27% of coral reef ecosystems have been destroyed, and a further 30% are at risk of damage in the coming years. Coral reefs occupy 1% of the ocean floor, yet they provide a home to 30% of marine organisms. To analyze the health of these aquatic habitats, they need to be assessed through habitat mapping, which shows the geographic distribution of different habitats within a particular area. Marine habitats are typically mapped using camera imagery, but the quality of underwater images suffers from the characteristics of the marine environment, resulting in blurry images or images containing particles that obscure large parts of the scene. To overcome this, underwater image enhancement algorithms are used to preprocess the images. Many such algorithms target different characteristics of the marine environment, but there is no consensus among researchers on a single underwater technique that can be used for any marine dataset. In this dissertation, experiments with seven popular image enhancement techniques were conducted to reach a decision about a single underwater approach for all datasets. The datasets include EILAT, EILAT2, RSMAS, and MLC08.
Two state-of-the-art deep convolutional neural networks for habitat mapping, DenseNet and MobileNet, were also tested. The best results were achieved by combining Contrast Limited Adaptive Histogram Equalization (CLAHE) as the underwater image enhancement technique with DenseNet as the deep convolutional network.
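CLAHE builds on ordinary histogram equalization, applying it in local tiles with a clipped contrast limit. The global variant, which conveys the core idea of redistributing gray levels via the cumulative histogram, can be sketched as follows (an illustration on a nested list of pixel values, not the dissertation's implementation):

```python
def equalize(gray, levels=256):
    """Global histogram equalization on a 2D list of integer gray levels."""
    flat = [v for row in gray for v in row]
    # Histogram of gray levels, then its cumulative sum (the CDF).
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # smallest nonzero CDF value
    n = len(flat)
    # Standard mapping: stretch the CDF over the full output range.
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[v] for v in row] for row in gray]
```

CLAHE differs by computing such a mapping per tile, clipping histogram peaks to limit noise amplification, and interpolating between tiles; underwater images benefit because their contrast is compressed by light attenuation.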
87

TwinLossGAN: Domain Adaptation Learning for Semantic Segmentation

Song, Yuehua 19 August 2022 (has links)
Most semantic segmentation methods based on Convolutional Neural Networks (CNNs) rely on supervised pixel-level labelling. Because pixel-level labelling is time-consuming and laborious, synthetic images are generated by software with the label information already embedded in the data, so labelling can be done automatically. This advantage makes synthetic datasets widely used for training deep learning models for real-world cases. Still, compared to supervised learning with real-world labelled images, models trained on synthetic datasets are not very accurate when applied to real-world data. Researchers have therefore turned to Unsupervised Domain Adaptation (UDA), which transfers knowledge learned from one domain to another: a model can be trained on synthetic data and then apply what it has learned to real-world problems. UDA is an essential part of transfer learning. It aims to bring the feature distributions of the two domains as close as possible, so that the knowledge and distribution learned in the source domain's feature space can be migrated to the target space and improve prediction accuracy in the target domain. However, compared with traditional supervised learning models, the accuracy of UDA for scene segmentation of real images is low because the domain gap between the source and target domains is too large: the image distribution information the model learns from the source domain cannot be applied to the target domain, which limits the development of UDA. We therefore propose a new UDA model called TwinLossGAN, which reduces the domain gap in two steps. The first step is to mix images from the source and target domains.
The purpose is to let the model learn image features from both domains. Mixing is performed by selecting a synthetic image from the source domain and a real-world image from the target domain. The two selected images are input to the segmenter to obtain semantic segmentation results separately, and the segmentation results are fed into the mixing module. The mixing module uses the ClassMix method to copy and paste some segmented objects from one image into the other using segmentation masks, generating inter-domain composite images and the corresponding pseudo-labels. In the second step, we modify a Generative Adversarial Network (GAN) to further reduce the gap between domains. The original GAN has two main parts: a generator and a discriminator. In our proposed TwinLossGAN, the generator performs semantic segmentation on the source domain images and the target domain images separately, and the segmentations are trained in parallel. The source domain synthetic images are segmented, and the loss is computed using the synthetic labels. At the same time, the generated inter-domain composite images are fed to the segmentation module, which compares its semantic segmentation results with the pseudo-labels and calculates the loss. These twin losses serve as the generator loss across the GAN iterations. The GAN discriminator examines whether the semantic segmentation results originate from the source or target domain. We used data from GTA5 and SYNTHIA as the source domain and images from CityScapes as the target domain. The accuracy of our proposed TwinLossGAN was much higher than that of the base UDA models.
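The ClassMix step described above, copying the pixels of selected classes from one image into another via segmentation masks, can be sketched as follows. This is a simplified illustration on nested lists; the real method operates on image tensors and mixes the labels the same way to form the pseudo-label:

```python
def class_mix(img_a, seg_a, img_b, classes_to_paste):
    """Paste pixels of the selected classes from image A onto image B.

    seg_a holds a predicted class id per pixel of img_a; applying the same
    rule to the label maps yields the pseudo-label of the mixed image.
    """
    mixed = [row[:] for row in img_b]  # start from a copy of image B
    for y, row in enumerate(seg_a):
        for x, cls in enumerate(row):
            if cls in classes_to_paste:   # mask: pixel belongs to a chosen class
                mixed[y][x] = img_a[y][x]
    return mixed
```

In the TwinLossGAN setting, image A would be a synthetic source image and image B a real target image, so the composite sits between the two domains.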
88

Adapting multiple datasets for better mammography tumor detection / Anpassa flera dataset för bättre mammografi-tumördetektion

Tao, Wang January 2018 (has links)
In Sweden, women aged 40 to 74 undergo regular breast screening every 18-24 months. The screening mainly involves obtaining a mammogram and having radiologists analyze it to detect any sign of breast cancer. However, reading a mammography image requires an experienced radiologist, and the shortage of radiologists reduces hospitals' operating efficiency. Moreover, mammography from different facilities increases the difficulty of diagnosis. Our work proposes a deep learning segmentation system that can adapt to mammography from various facilities and locate the position of the tumor. We train and test our method on two public mammography datasets and run several experiments to find the best parameter settings for our system. The test segmentation results suggest that our system could serve as an auxiliary tool for breast cancer diagnosis and improve diagnostic accuracy and efficiency.
89

Mobile Object Detection using TensorFlow Lite and Transfer Learning / Objektigenkänning i mobila enheter med TensorFlow Lite

Alsing, Oscar January 2018 (has links)
With the advancement in deep learning in the past few years, we are able to create complex machine learning models for detecting objects in images, regardless of the characteristics of the objects to be detected. This development has enabled engineers to replace existing heuristics-based systems in favour of machine learning models with superior performance. In this report, we evaluate the viability of using deep learning models for object detection in real-time video feeds on mobile devices in terms of object detection performance and inference delay as either an end-to-end system or feature extractor for existing algorithms. Our results show a significant increase in object detection performance in comparison to existing algorithms with the use of transfer learning on neural networks adapted for mobile use.
90

DevOps for Data Science System

Zhang, Zhongjian January 2020 (has links)
Commercialization potential is important to data science: whether the problems data science encounters in production can be solved determines the success or failure of its commercialization. Recent research shows that DevOps theory is a great approach to solving the problems software engineering encounters in production, and from a product perspective, data science and software engineering both need to provide digital services to customers. It is therefore worth studying the feasibility of applying DevOps to data science. This paper describes an approach to developing a delivery pipeline for a data science system applying DevOps practices. I applied four practices in the pipeline: version control, a model server, containerization, and continuous integration and delivery. However, DevOps is not a theory designed specifically for data science, which means the currently available DevOps practices cannot cover all the problems of data science in production. I extended the set of DevOps practices to handle that kind of problem with a practice from data science: transfer learning, which I studied and applied in the thesis project. This paper describes an approach of parameter-based transfer, where parameters learned from one dataset are transferred to another dataset. I studied the effect of transfer learning on fitting a model to a new dataset. First I trained a convolutional neural network on 10,000 images. Then I experimented with the trained model on another 10,000 images, retraining it in three ways: training from scratch, loading the trained weights, and freezing the convolutional layers. The results show that for image classification, when the dataset changes but is similar to the old one, transfer learning is a useful practice for adjusting the model without retraining from scratch. Freezing the convolutional layers is a good choice if the new model just needs to achieve a similar level of performance as the old one.
Loading the weights is a better choice if the new model needs to achieve better performance than the original one. In conclusion, there is no need to be limited by the set of existing DevOps practices when we apply DevOps to data science.
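The freezing strategy compared above amounts to excluding some parameter groups from the gradient update while the rest continue to train. A minimal sketch of that mechanic (illustrative only; the parameter names and values are invented, and real frameworks implement this via per-parameter trainability flags):

```python
def sgd_step(params, grads, frozen, lr=0.1):
    """One SGD step that leaves frozen parameter groups untouched.

    params and grads map a group name (e.g. "conv", "head") to a value;
    names listed in frozen keep their pretrained value.
    """
    return {name: (w if name in frozen else w - lr * grads[name])
            for name, w in params.items()}
```

With `frozen={"conv"}`, the pretrained "conv" weights stay fixed while the "head" keeps learning, which mirrors freezing a network's convolutional layers and fine-tuning only its classifier.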
