131

Unsupervised Feature Extraction of Clothing Using Deep Convolutional Variational Autoencoders / Oövervakad extrahering av kännetecknande drag av kläder genom djupa självkodande neurala faltningsnätverk

Blom, Fredrik January 2018
As online retail continues to grow, large amounts of valuable data are generated, such as transaction and search history and, specifically for fashion retail, similarly structured images of clothing. By using unsupervised learning, it is possible to tap into this almost unlimited supply of data. This thesis set out to determine to what extent generative models – in particular, deep convolutional variational autoencoders – can be used to automatically extract representative features from images of clothing in a completely unsupervised manner. A review of autoencoder variants, covering both reconstruction quality and the ability to generate new realistic samples, suggests that there exists an optimal size of the latent vector in relation to the complexity of the image data. Furthermore, by weighting the latent loss and generation loss in the loss function, it was possible to disentangle the learned features such that each feature captured a unique defining characteristic of the clothing items (here t-shirts and tops).
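Weighting the latent (KL) loss against the generation (reconstruction) loss, as described above, is the idea behind the beta-VAE objective. The sketch below illustrates it in PyTorch; the small convolutional architecture, the input size and the beta value are assumed for illustration and are not the thesis's exact model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        # Encoder for 1x28x28 inputs (e.g. Fashion-MNIST-sized images)
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 14x14
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 7x7
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 7 * 7, latent_dim)
        self.fc_logvar = nn.Linear(64 * 7 * 7, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 7 * 7)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        x_hat = self.dec(self.fc_dec(z).view(-1, 64, 7, 7))
        return x_hat, mu, logvar

def vae_loss(x, x_hat, mu, logvar, beta=4.0):
    # Generation (reconstruction) loss plus beta-weighted latent (KL) loss;
    # beta > 1 pushes towards disentangled features, as in beta-VAE.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl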
132

Evaluating rain removal image processing solutions for fast and accurate object detection / Utvärdering av regnborttagningsalgoritmer för snabb och pålitlig objektigenkänning

Köylüoglu, Tugay, Hennicks, Lukas January 2019
Autonomous vehicles are an important topic in modern-day research, for both the private and public sector. One of the reasons why self-driving cars have not yet reached the consumer market is the level of uncertainty involved. This is often tackled with multiple sensors of different kinds, which helps make the vehicle's system more robust. Radars, lidars and cameras are the sensors typically used, and the costs can rise quickly, which is not always feasible for different markets. This could be addressed by using fewer, but more robust, sensors for visualization. This thesis addresses one particular failure mode for camera sensors: reduced view range caused by rainy weather. A Kalman filter and a discrete wavelet transform with bilateral filtering are evaluated as rain removal algorithms and tested with the state-of-the-art object detection algorithm You Only Look Once (YOLOv3). Filtered videos in daylight and evening light were tested with YOLOv3, and the results show that the accuracy is not improved enough to be worth implementing in autonomous vehicles. With the graphics card available for this thesis, YOLOv3 is not fast enough for a vehicle driving at 110 km/h to stop in time when an obstacle appears 80 m ahead; an Nvidia Titan X, however, is assumed to be fast enough. There is potential within the research area, and this thesis suggests evaluating other object detection methods as future work.
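The 110 km/h, 80 m scenario can be sanity-checked with a simple stopping-distance model: detection latency adds travel distance on top of the idealized braking distance. In this sketch the friction coefficient and the detector frame rates are illustrative assumptions, not values from the thesis.

# Rough stopping-distance check for the 110 km/h / 80 m scenario.
def stopping_distance(speed_kmh, detector_fps, mu=0.7, g=9.81):
    v = speed_kmh / 3.6                    # speed in m/s
    reaction = v * (1.0 / detector_fps)    # distance covered while detecting
    braking = v ** 2 / (2 * mu * g)        # idealized braking distance
    return reaction + braking

for fps in (1, 5, 30):                     # assumed detector frame rates
    d = stopping_distance(110, fps)
    print(f"{fps:>2} FPS -> {d:5.1f} m needed (budget: 80 m)")
# 1 FPS needs ~98.5 m (too far); 5 FPS ~74.1 m; 30 FPS ~69.0 m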
133

Autonoma fordon - Dess funktion, möjligheter och risker / Autonomous vehicles - its function, possibilities and risks

Khoogar, Alireza January 2018
Today's society is heavily dependent on a number of critical infrastructures, of which electricity generation, telecommunications systems and transport systems are a few. These critical infrastructures are highly interlinked and interdependent. They are all more or less vulnerable and risk losing part or all of their functionality. Society aims to safeguard all of its critical infrastructure as well as possible against potential threats and tries to minimize the risk of adverse events. What should constantly be sought are better methods of quantifying and, in the long run, managing these risks; the risks are tied to established risk levels with defined acceptance limits. In a society that is highly dependent on electronic systems, there may be antagonists who intend to disable or limit functions in the infrastructure by means of electromagnetic interference. This makes, for example, autonomous vehicles vulnerable if no safety measures are taken, which means that risk levels and acceptable risks associated with autonomous vehicles need to be identified and established. The purpose of this thesis is to describe what an autonomous vehicle is, identify which risks exist, and determine how much impact these risks have on autonomous vehicles in the event of an IEMI (intentional electromagnetic interference) attack. The thesis includes a background study covering how autonomous vehicles work and which risks intentional electromagnetic interference may pose to them. The results show that the components suspected of being exposed to electromagnetic interference should undergo a risk analysis, in order to validate the risk and the consequences of the electromagnetic interference.
134

Deep Perceptual Loss for Improved Downstream Prediction

Grund Pihlgren, Gustav January 2021
No description available.
135

[pt] DETECÇÃO VISUAL DE FILEIRA DE PLANTAÇÃO COM TAREFA AUXILIAR DE SEGMENTAÇÃO PARA NAVEGAÇÃO DE ROBÔS MÓVEIS / [en] VISUAL CROP ROW DETECTION WITH AUXILIARY SEGMENTATION TASK FOR MOBILE ROBOT NAVIGATION

IGOR FERREIRA DA COSTA 07 November 2023
Autonomous robots for agricultural tasks have been researched to a great extent in recent years, as they could greatly improve field efficiency. Navigating an open crop field, however, is still a great challenge. RTK-GNSS is an excellent tool for tracking the robot's position, but it needs precise mapping and planning, while also being expensive and signal-dependent. As such, onboard systems that can sense the field directly to guide the robot are a good alternative. Those systems detect the rows with image processing techniques and estimate the position by applying algorithms, such as the Hough transform or linear regression, to the obtained mask. In this work, a direct approach is presented by training a neural network model to obtain the position of crop lines directly from an RGB image. While the camera in these kinds of systems usually looks down at the field, a camera near the ground is proposed to take advantage of the tunnels or walls of plants formed between the rows. A simulation environment for evaluating both the model's performance and the camera placement was developed and made available on GitHub. Four datasets for training the models are also proposed: two for the simulations and two for the real-world tests. The results from the simulation are shown across different resolutions and stages of plant growth, indicating the system's capabilities and limitations. Some of the best configurations are then verified in two types of agricultural environments.
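The classical pipeline mentioned above (segment the vegetation into a mask, then fit row lines with the Hough transform) can be sketched with OpenCV as follows. The Excess Green threshold and the Hough parameters are illustrative assumptions, not values from the thesis.

import cv2
import numpy as np

def detect_crop_rows(bgr_image):
    # Excess Green index: a common heuristic for vegetation segmentation.
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    exg = 2 * g - r - b
    mask = (exg > 20).astype(np.uint8) * 255  # assumed threshold
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Probabilistic Hough transform on the mask edges yields candidate rows.
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=30)
    return mask, ([] if lines is None else lines[:, 0])

# Usage: mask, rows = detect_crop_rows(cv2.imread("field.png"))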
136

Autonoma fordons beslutsfattande i nödsituationer : Ett etiskt dilemma / Autonomous vehicles' decision-making in emergency situations : An ethical dilemma

Dahlström, Viktor, Stenlund, Sebastian January 2022
Many major car manufacturers predict that autonomous vehicles controlled by AI (Artificial Intelligence) will be the future of the automotive industry. This means that the AI makes the decisions even in emergency situations where people can be injured, and policies and guidelines are therefore required for how autonomous vehicles should act in such situations (Faggella, 2020). This study focuses on how young adults view this dilemma. The purpose of this study is to investigate how young adults believe an AV (Autonomous Vehicle) should act in emergency situations, and to understand what gives rise to their way of thinking. To achieve the purpose of the study and answer the research questions, qualitative data were collected through semi-structured interviews. The collected data were analyzed with the help of a quantitative data analysis, and the themes, patterns and connections that were identified are presented in the results and analyzed in the discussion (chapter 7). The results generated by the empirical material show that young adults' views on how autonomous vehicles should act in emergency situations are largely in line with what previous studies have concluded. The results indicate that young adults find it most important to save as many lives as possible, but also to follow the law.
137

Benchmarking structure from motion algorithms with video footage taken from a drone against laser-scanner generated 3D models

Martell, Angel Alfredo January 2017
Structure from motion is a novel approach to generate 3D models of objects and structures. The dataset simply consists of a series of images of an object taken from different positions. The ease of data acquisition and the wide array of available algorithms make the technique easily accessible. The structure from motion method identifies features in all the images of the dataset, such as edges with gradients in multiple directions, tries to match these features between all the images, and then computes the relative motion the camera was subject to between any pair of images. It builds a 3D model from the correlated features, producing a 3D point cloud with colour information of the scanned object. Different implementations of the structure from motion method use different approaches to solve the feature-correlation problem between the images of the dataset, different methods for detecting the features, and different alternatives for sparse and dense reconstruction. These differences cause variations in the final output across distinct algorithms. This thesis benchmarked these algorithms in accuracy and processing time. For this purpose, a terrestrial 3D laser scanner was used to scan structures and buildings to generate a ground-truth reference against which the structure from motion algorithms were compared. A video feed was then captured from a drone with a built-in camera flying around the structure or building, to generate the input for the structure from motion algorithms. Different structures were considered, taking into account how rich or poor in features they are, since this impacts the result of the structure from motion algorithms. The structure from motion algorithms generated 3D point clouds, which were then analysed with the tool CloudCompare to benchmark how similar they are to the laser-scanner-generated data, and the runtime was recorded for comparison across all algorithms. Subjective aspects were also analysed, such as how easy each algorithm is to use and how complete the produced model looks in comparison to the others. The comparison found that there is no absolute best algorithm, since every algorithm excels in different aspects. Some algorithms can generate a model very fast, scaling their execution time linearly with the size of the input, but at the expense of accuracy. Others take a long time for dense reconstruction but generate almost complete models even in the presence of featureless surfaces, like COLMAP's modified PatchMatch algorithm. The structure from motion methods are able to generate models with an accuracy of up to 3 cm when scanning a simple building, where Visual Structure from Motion and Open Multi-View Environment ranked among the most accurate. It is worth highlighting that the accuracy error grows as the complexity of the scene increases. Finally, it was found that the structure from motion method cannot correctly reconstruct structures with reflective surfaces, or with repetitive patterns when the images are taken from mid to close range, as the produced errors can be as high as 1 m on a large structure.
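The feature matching and relative-motion step described above can be sketched for a single image pair with OpenCV. The ORB detector, the ratio test and the camera intrinsics K are assumed examples; the benchmarked pipelines use their own detectors and matching strategies.

import cv2
import numpy as np

def relative_pose(img1, img2, K):
    # Detect and describe features in both images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, decomposed into rotation R and a
    # unit-scale translation t between the two camera poses.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed intrinsics
# Usage: R, t = relative_pose(cv2.imread("a.png", 0), cv2.imread("b.png", 0), K)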
138

Online Camera-IMU Calibration

Karlhede, Arvid January 2022
This master thesis project was done together with Saab Dynamics in Linköping in the spring of 2022 and aims to perform an online IMU-camera calibration using an AprilTag board. Experiments are conducted on two different types of datasets: the public dataset Euroc and internal datasets from Saab. The calibration is done iteratively by solving a series of nonlinear optimization problems without any initial knowledge of the sensor configuration. The method is largely based on work by Huang and collaborators. Besides finding the transformation between the IMU and the camera, the biases in the IMU and the time delay between the two sensors are also explored. By comparing the resulting transformation with Kalibr, the current state-of-the-art offline calibration toolbox, it is possible to conclude that the model can find and correct for the biases in the gyroscope; it is therefore important to include these biases in the model. The model is able to roughly find the time shift between the two sensors but has more difficulty correcting for it. The thesis also explores ways of compiling a good dataset for calibration. Results show that it is desirable to avoid rapid movements, as well as images gathered at distances from the AprilTag board that vary a lot. A shorter exposure time is also useful so as not to lose AprilTag detections.
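One common way to initialize the time delay between camera and IMU is to cross-correlate the angular speed measured by the gyroscope with the angular speed implied by consecutive camera poses. The sketch below illustrates that generic idea on synthetic signals; it is not the thesis's exact method, and it assumes both signals have been resampled to a common rate.

import numpy as np

def estimate_time_offset(gyro_speed, cam_speed, dt):
    # Both inputs: 1-D angular-speed magnitudes sampled every dt seconds.
    g = gyro_speed - gyro_speed.mean()
    c = cam_speed - cam_speed.mean()
    corr = np.correlate(g, c, mode="full")
    lag = np.argmax(corr) - (len(c) - 1)  # negative => camera lags the gyro
    return lag * dt

# Synthetic check: camera signal delayed by 5 samples (0.05 s) vs. the gyro.
t = np.linspace(0, 10, 1000)
gyro = np.sin(2 * np.pi * 0.5 * t)
cam = np.roll(gyro, 5)
print(estimate_time_offset(gyro, cam, dt=0.01))  # ~ -0.05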
139

AUTONOMA VAPENSYSTEM : ARGUMENTATIONSANALYS AV DEN DEONTOLOGISKA ARGUMENTATIONEN / Autonomous weapon systems: An argument analysis of the deontological argumentation

Olausson, Per January 2022
The ethical implications of autonomous weapon systems are a highly debated topic. While research and development of autonomous weapon systems is ongoing, non-governmental organizations seek to ban the technology, and ethicists give conflicting answers as to what is right and what is wrong. Arguments opposing the use of autonomous weapon systems seem to dominate the debate, however, particularly when deontological arguments opposing autonomous weapon systems are balanced against those advocating the technology. The purpose of this study is to evaluate deontological arguments opposing the use of autonomous weapon systems using argument analysis. This is done in order to assess the deontological case for opposing autonomous weapon systems. The findings of this study are that, although influential deontological arguments opposing autonomous weapon systems are more numerous than supporting ones, the deontological case for opposing autonomous weapon systems is weak in both tenability and relevance. The main tenability concerns are the application of theory in premises and conceptual incoherence. The main relevance concern is variation in the way autonomous weapon systems are defined. These weaknesses show that the analysed deontological arguments opposing the use of autonomous weapon systems should not alone dictate the direction of the ethical debate.
140

Event-Based Visual SLAM : An Explorative Approach

Rideg, Johan January 2023
Simultaneous Localization And Mapping (SLAM) is an important topic within the field of robotics, aiming to localize an agent in an unknown or partially known environment while simultaneously mapping the environment. The ability to perform robust SLAM is especially important in hazardous environments such as natural disasters, firefighting and space exploration, where human exploration may be too dangerous or impractical. In recent years, neuromorphic cameras have been made commercially available. This new type of sensor does not output conventional frames but instead an asynchronous signal of events at microsecond resolution, and it is capable of capturing details in complex lighting scenarios where a standard camera would be either under- or overexposed, making neuromorphic cameras a promising solution in situations where standard cameras struggle. This thesis explores a set of different approaches to virtual frames, a frame-based representation of events, in the context of SLAM. UltimateSLAM, a project fusing events, grayscale frames and IMU data, is investigated using virtual frames of fixed and varying frame rate, both with and without motion compensation. The resulting trajectories are compared to the trajectories produced when using grayscale frames, and the numbers of detected and tracked features are compared. We also use a traditional visual SLAM project, ORB-SLAM, to investigate Gaussian-weighted virtual frames and grayscale frames reconstructed from the event stream using a recurrent network model. While virtual frames can be used for SLAM, the event camera is not a plug-and-play sensor and requires a good choice of parameters when constructing virtual frames, relying on pre-existing knowledge of the scene.
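The virtual frames discussed above amount to accumulating events from a time window into an image, optionally weighting each event by a Gaussian in time. A minimal sketch follows; the (t, x, y, polarity) event format and the sensor resolution are assumptions matching common event-camera datasets, not a specific camera's API.

import numpy as np

def virtual_frame(events, t0, t1, shape=(180, 240), gaussian=False):
    # events: array of (t, x, y, p) rows, with polarity p in {-1, +1}.
    frame = np.zeros(shape, dtype=np.float32)
    win = events[(events[:, 0] >= t0) & (events[:, 0] < t1)]
    if gaussian:
        # Weight events near the window centre more, as in the
        # Gaussian-weighted virtual frames mentioned above.
        mid, sigma = 0.5 * (t0 + t1), (t1 - t0) / 6.0
        weights = np.exp(-0.5 * ((win[:, 0] - mid) / sigma) ** 2)
    else:
        weights = np.ones(len(win))
    np.add.at(frame, (win[:, 2].astype(int), win[:, 1].astype(int)),
              win[:, 3] * weights)
    return frame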
