151

Texture Enhancement in 3D Maps using Generative Adversarial Networks

Birgersson, Anna, Hellgren, Klara January 2019 (has links)
In this thesis we investigate the use of GANs for texture enhancement. To achieve this, we have studied whether synthetic satellite images generated by GANs improve the texture in satellite-based 3D maps. We investigate two GANs: SRGAN and pix2pix. SRGAN increases the pixel resolution of the satellite images by generating upsampled images from low-resolution images. Pix2pix, in contrast, performs image-to-image translation, translating a source image to a target image without changing the pixel resolution. We trained the GANs with two different approaches, named SAT-to-AER and SAT-to-AER-3D, where SAT, AER and AER-3D are different datasets provided by the company Vricon. In the first approach, aerial images were used as ground truth; in the second, rendered images from an aerial-based 3D map were used as ground truth. The procedure of enhancing the texture in a satellite-based 3D map was divided into two steps: the generation of synthetic satellite images and the re-texturing of the 3D map. Synthetic satellite images generated by two SRGAN models and one pix2pix model were used for the re-texturing. The best results were obtained using SRGAN in the SAT-to-AER approach, where the re-textured 3D map had enhanced structures and an increased perceived quality. SRGAN also produced a good result in the SAT-to-AER-3D approach, where the re-textured 3D map had a changed color distribution and the road markers were easier to distinguish from the ground. The images generated by the pix2pix model gave the worst results. In the SAT-to-AER approach, even though the synthetic satellite images generated by pix2pix were somewhat enhanced and contained less noise, they had no significant impact on the re-texturing. In the SAT-to-AER-3D approach, none of the investigated models based on the pix2pix framework produced successful results.
We concluded that GANs can be used as texture enhancers with both aerial images and images rendered from an aerial-based 3D map as ground truth. The use of GANs as texture enhancers has great potential, and there are several interesting areas for future work.
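The two GANs tackle different tasks: SRGAN changes the pixel resolution, pix2pix does not. A minimal NumPy sketch of the resolution change alone, using nearest-neighbour upsampling as the naive baseline a super-resolution model such as SRGAN is trained to beat (the 4x factor and the toy data are illustrative assumptions, not the Vricon setup):

```python
import numpy as np

def nearest_upsample(img, factor):
    """Naive nearest-neighbour upsampling: each source pixel becomes a
    factor x factor block. A super-resolution GAN would instead fill the
    new pixels with generated high-frequency detail."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# Toy stand-in for a low-resolution satellite tile (8x8 grayscale).
rng = np.random.default_rng(0)
low_res = rng.random((8, 8))

high_res = nearest_upsample(low_res, 4)
print(high_res.shape)  # (32, 32): 4x the pixel resolution
```

Pix2pix would instead map a (32, 32) input to a (32, 32) output, changing texture content but not resolution.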
152

Entwurf und Implementierung einer echtzeitfähigen Entwicklungsumgebung für Lern- und Evolutionsexperimente mit autonomen Roboter

Pantzer, Thomas 20 October 2017 (has links)
Building blocks on the way to in-vivo robot evolution. An autonomous robot operating in an unknown environment must reliably solve problems such as localization, navigation and returning to its base station before it can carry out its actual task. In the past, various approaches were developed to improve, for example, localization [HPG97][SG98] and navigation [MLP+98][MM80], or to build efficient controllers. It turned out that many methods developed in computer simulation worked sufficiently well on a given piece of hardware only after substantial adaptation. The hardware properties omitted from the usually simplified computer models, and the differences between the simulated and the real world, often cause unforeseen side effects. It is therefore necessary to test new algorithms and methods on 'real' hardware as well. In the following, a development environment is presented with which new algorithms can be tested on concrete hardware.
153

Ein Billardroboter: Praktische Realisierung von ausgewählten Konzepten der Robotik

Müller, Arnd 20 October 2017 (has links)
This thesis describes the design, construction and evaluation of a robot system that masters the basic elements of the game of billiards. It was realized as a vehicle modeled on the characteristics of mobile robots, which moves on a billiard table and executes billiard shots with a kind of cue. All processes are captured and monitored by a digital camera mounted above the table; control is handled by a personal computer. The thesis focuses on problems of image recognition, path planning and the technical implementation. The solution was built with relatively simple technical means and on a small financial budget.
154

15 Jahre Künstliche Intelligenz an der TU Chemnitz

Steinmüller, Johannes, Langner, Holger, Ritter, Marc, Zeidler, Jens 11 July 2008 (has links)
This volume of the Informatikberichte is dedicated to the scientific life's work of Prof. Werner Dilger. Since October 1993 he has done outstanding work in research and teaching at the Faculty of Computer Science of TU Chemnitz. Thanks to the contributions of numerous authors, this volume sheds light on a wide variety of aspects of artificial intelligence.
155

To drone, or not to drone : A qualitative study in how and when information from UxV should be distributed in rescue missions at sea

Laine, Rickard January 2020 (has links)
Swedish maritime rescue consists of a number of resources from various organizations that work together towards a common goal: to save people in need. Information is a significant factor in maritime rescue missions. Whether you are a rescuer at the accident scene or are coordinating the rescue mission from the control center, information gives you better situation awareness and knowledge of the situation, which creates better conditions for achieving the mission's goal. Applying Unmanned Vehicles (UxV) in Swedish maritime rescue adds another resource that can provide additional necessary information. In this study, several methods have been used to find out where in a mission information from UxVs could contribute. The study identifies three critical situations where there is a need for a UxV. This result in turn leads to further questions, such as who should receive the new information and how it affects the information flow as a whole. Information visualization proves to be an important factor here: clear and easily understood visualization can support the recipients of the information in their work without disturbing the flow or coordination of that work.
156

Automatic Gait Recognition : using deep metric learning / Automatisk gångstilsigenkänning

Persson, Martin January 2020 (has links)
Recent improvements in pose estimation have opened up new areas of application. One of them is gait recognition: the task of identifying persons based on their unique style of walking, which is increasingly being recognized as an important method of biometric identification. This thesis has explored the possibility of using a pose estimation system, OpenPose, together with deep Recurrent Neural Networks (RNNs), to see whether sequences of 2D poses carry sufficient information for gait recognition. To make this possible, a new multi-camera dataset consisting of persons walking on a treadmill was gathered, dubbed the FOI dataset. The results show that this approach has some promise. It achieved an overall classification accuracy of 95.5% on classes it had seen during training and 83.8% on classes it had not seen during training. It was, however, unable to recognize sequences from camera angles it had not seen during training. For that to be possible, more data pre-processing will likely be required.
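The identification step of such a metric-learning pipeline can be sketched without the network itself: assume the RNN has already mapped each pose sequence to an embedding vector, and classify a query by its nearest gallery embedding. The embeddings below are random toy stand-ins, not OpenPose output:

```python
import numpy as np

def classify_by_embedding(query, gallery, labels):
    """Nearest-neighbour classification in embedding space, the way a
    metric-learned gait model is typically evaluated: identity is the
    label of the gallery embedding with highest cosine similarity."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q  # cosine similarity to every enrolled person
    return labels[int(np.argmax(sims))]

# Toy embeddings standing in for RNN outputs over 2D-pose sequences.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(4, 16))
labels = np.array(["anna", "bo", "cia", "dan"])
query = gallery[2] + 0.05 * rng.normal(size=16)  # noisy sample of "cia"
print(classify_by_embedding(query, gallery, labels))  # expected: "cia"
```

Classifying unseen identities works the same way: enroll one embedding per new person and compare distances, no retraining needed.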
157

Evaluation of Face Recognition Accuracy in Surveillance Video

Tuvskog, Johanna January 2020 (has links)
Automatic Face Recognition (AFR) can be useful in the forensic field when identifying people in surveillance footage. AFR systems commonly use deep neural networks, which perform well as long as the image quality stays above a certain level. This is a problem when applying AFR to surveillance data, since the quality of those images can be very poor. In this thesis the CNN FaceNet has been used to evaluate how different quality parameters influence the accuracy of face recognition. The goal is to draw conclusions about how to improve recognition by exploiting or avoiding certain parameters, depending on the conditions. The parameters experimented with are face angle, image quality, occlusion, colour and lighting. This was achieved by using datasets with different properties or by altering the images. The parameters are meant to simulate situations that can occur in surveillance footage and that are difficult for the network to handle. Three models with different numbers of embeddings and different training data have been evaluated. The results show that the two models trained on the VGGFace2 dataset perform much better than the one trained on CASIA-WebFace. Every model's performance drops on low-quality images compared to high-quality images, because the training data consist mostly of high-quality images. In some cases the recognition results can be improved by altering the images, for example by using one frontal and one profile image when trying to identify a person, or by occluding parts of the face outline if it gets recognized as other persons with similar face shapes. One main improvement would be to extend the training datasets with more low-quality images; to some extent this could be achieved with data augmentation such as artificial occlusion and down-sampled images.
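FaceNet-style systems decide whether two faces match by comparing the distance between their embeddings against a threshold. A hedged sketch with toy 128-D vectors (the threshold of 1.1 is only illustrative; a real system fits it on validation data, and real embeddings come from the CNN, not a random generator):

```python
import numpy as np

def same_person(emb_a, emb_b, threshold=1.1):
    """Face verification by L2 distance between unit-normalized
    embeddings; below the threshold counts as the same person."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(np.linalg.norm(a - b)) < threshold

rng = np.random.default_rng(2)
face = rng.normal(size=128)               # toy 128-D embedding
same = face + 0.1 * rng.normal(size=128)  # same face, slight noise
other = rng.normal(size=128)              # a different face
print(same_person(face, same), same_person(face, other))
```

Degraded image quality pushes embeddings of the same person further apart, which is exactly why the accuracy drops the thesis measures show up as threshold failures.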
158

NAVIGATION AND PLANNED MOVEMENT OF AN UNMANNED BICYCLE

Baaz, Hampus January 2020 (has links)
A conventional bicycle is a stable system given adequate forward velocity. However, the velocity region of stability is limited and depends on the geometric parameters of the bicycle. An autonomous bicycle is not just about maintaining balance but also about controlling where the bicycle is heading. Path following has been accomplished with bicycles and motorcycles in simulation for a while, and car-like vehicles have followed paths in the real world, but few bicycles or motorcycles have done so. The goal of this work is to follow a planned path with a physical bicycle without exceeding the dynamic limitations of the bicycle. Using an iterative design process, controllers for direction and position are developed and improved. Kinematic models are also compared in their ability to simulate the bicycle's movement, and in how well controllers tuned in simulation translate to outdoor driving. The results show that the bicycle can follow a turning path on a residential road without human interaction, and that some simulation behaviours do not translate to the real world.
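The kinematic models compared in work like this are typically variants of the standard kinematic bicycle model, which captures heading and position but ignores the lean dynamics that make real riding hard. A sketch under assumed parameters (the wheelbase, steer angle and Euler integration step are illustrative, not the thesis's values):

```python
import math

def bicycle_step(x, y, heading, v, steer, wheelbase, dt):
    """One Euler step of the kinematic bicycle model, referenced at the
    rear axle: position advances along the heading, and the heading
    rate follows v / L * tan(steer)."""
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += v / wheelbase * math.tan(steer) * dt
    return x, y, heading

# Hold a constant left steer at 3 m/s for 5 s: the model traces an arc.
x, y, h = 0.0, 0.0, 0.0
for _ in range(500):
    x, y, h = bicycle_step(x, y, h, v=3.0, steer=0.1, wheelbase=1.1, dt=0.01)
print(round(x, 2), round(y, 2), round(h, 2))
```

A gap between this model's arc and the measured trajectory is one concrete form of the simulation-to-reality mismatch the thesis reports.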
159

Tracking motion in mineshafts : Using monocular visual odometry

Suikki, Karl January 2022 (has links)
LKAB has a mineshaft trolley used for scanning mineshafts. It is suspended down into a mineshaft by wire and scans the shaft on both descent and ascent using two LiDAR (Light Detection And Ranging) sensors and an IMU (Inertial Measurement Unit) used for tracking the position. With good tracking, the LiDAR scans could be used to create a three-dimensional model of the mineshaft for future monitoring, planning and visualization. Tracking with an IMU alone is very unstable, since most IMUs are susceptible to disturbances and drift over time; we therefore strive to track the movement using monocular visual odometry instead. Visual odometry tracks movement based on video or images: it is the process of retrieving the pose of a camera by analyzing a sequence of images from one or multiple cameras. The mineshaft trolley is also equipped with a camera that films the descent and ascent, and we aim to use this video for tracking. We present a simple algorithm for visual odometry and test its tracking on multiple datasets: KITTI datasets of traffic scenes accompanied by their ground-truth trajectories, mineshaft data intended for the mineshaft trolley operator, and self-captured data accompanied by an approximate ground-truth trajectory. The algorithm is feature-based, meaning that it tracks recognizable keypoints across consecutive images. We compare the performance of our algorithm on the different datasets using two different feature detection and description systems, ORB and SIFT. We find that our algorithm performs well on the KITTI datasets using both ORB and SIFT, whose largest total trajectory errors are 3.1 m and 0.7 m respectively over 51.8 m moved, compared against the ground-truth trajectories.
Visual inspection of the tracking of the self-captured dataset shows that the algorithm can perform well on data that has not been as carefully captured as the KITTI datasets. We find, however, that we cannot track the movement with the current data from the mineshaft. This is because the algorithm finds too few matching features in consecutive images, which breaks the pose estimation of the visual odometry. We compare how ORB and SIFT find features in the mineshaft images and find that SIFT performs better by finding more features. The mineshaft data was never intended for visual odometry, and it is not suitable for this purpose. We argue that tracking could work in the mineshaft if the visual conditions are improved, with more even lighting and better camera placement, or if visual odometry can be combined with other sensors, such as an IMU, that assist it when it fails.
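The failure mode described here, too few surviving feature matches, sits in the descriptor-matching step of a feature-based pipeline. A NumPy sketch of Lowe-style ratio-test matching on toy descriptors (a real pipeline would use OpenCV's ORB or SIFT descriptors and a BFMatcher; the dimensions and data here are illustrative):

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.75):
    """Lowe-style ratio test: accept a match only when the nearest
    descriptor in the other image is clearly closer than the second
    nearest. Poor texture (as in the mineshaft footage) leaves few
    survivors, which breaks the downstream pose estimation."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(3)
desc_b = rng.random((50, 32))                            # frame t+1
desc_a = desc_b[:10] + 0.01 * rng.normal(size=(10, 32))  # 10 true matches
print(len(ratio_match(desc_a, desc_b)))  # expect (close to) 10
```

The surviving correspondences are what the essential-matrix estimation consumes; below a handful of matches, no camera pose can be recovered.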
160

Multi-Modal Deep Learning with Sentinel-1 and Sentinel-2 Data for Urban Mapping and Change Detection

Hafner, Sebastian January 2022 (has links)
Driven by the rapid growth in population, urbanization is progressing at an unprecedented rate in many places around the world. Earth observation has become an invaluable tool to monitor urbanization on a global scale by either mapping the extent of cities or detecting newly constructed urban areas within and around cities. In particular, the Sentinel-1 (S1) Synthetic Aperture Radar (SAR) and Sentinel-2 (S2) MultiSpectral Instrument (MSI) missions offer new opportunities for urban mapping and urban Change Detection (CD) due to the capability of systematically acquiring wide-swath high-resolution images with frequent revisits globally. Current trends in both urban mapping and urban CD have shifted from employing traditional machine learning methods to Deep Learning (DL) models, specifically Convolutional Neural Networks (CNNs). Recent urban mapping efforts achieved promising results by training CNNs on available built-up data using S2 images. Likewise, DL models have been applied to urban CD problems using S2 data with promising results. However, the quality of current methods strongly depends on the availability of local reference data for supervised training, especially since CNNs applied to unseen areas often produce unsatisfactory results due to their insufficient across-region generalization ability. Since multitemporal reference data are even more difficult to obtain, unsupervised learning was suggested for urban CD. While unsupervised models may perform more consistently across different regions, they often perform considerably worse than their supervised counterparts. To alleviate these shortcomings, it is desirable to leverage Semi-Supervised Learning (SSL) that exploits unlabeled data to improve upon supervised learning, especially because satellite data is plentiful. 
Furthermore, the integration of SAR data into the current optical frameworks (i.e., data fusion) has the potential to produce models with better generalization ability because the representation of urban areas in SAR images is largely invariant across cities, while spectral signatures vary greatly.  In this thesis, a novel Domain Adaptation (DA) approach using SSL is first presented. The DA approach jointly exploits Multi-Modal (MM) S1 SAR and S2 MSI to improve across-region generalization for built-up area mapping. Specifically, two identical sub-networks are incorporated into the proposed model to perform built-up area segmentation from SAR and optical images separately. Assuming that consistent built-up area segmentation should be obtained across data modalities, an unsupervised loss for unlabeled data that penalizes inconsistent segmentation from the two sub-networks was designed. Therefore, the use of complementary data modalities as real-world perturbations for Consistency Regularization (CR) is proposed. For the final prediction, the model takes both data modalities into account. Experiments conducted on a test set comprised of sixty representative sites across the world showed that the proposed DA approach achieves strong improvements (F1 score 0.694) upon supervised learning from S1 SAR data (F1 score 0.574), S2 MSI data (F1 score 0.580) and their input-level fusion (F1 score 0.651). The comparison with two state-of-the-art global human settlement maps, namely GHS-S2 and WSF2019, showed that our model is capable of producing built-up area maps with comparable or even better quality. For urban CD, a new network architecture for the fusion of SAR and optical data is proposed. Specifically, a dual stream concept was introduced to process different data modalities separately, before combining extracted features at a later decision stage. The individual streams are based on the U-Net architecture. 
The proposed strategy outperformed other U-Net-based approaches in combination with uni-modal data and MM data with feature-level fusion. Our approach also achieved state-of-the-art performance on the problem posed by a popular urban CD dataset (F1 score 0.600). Furthermore, a new network architecture is proposed to adapt Multi-Modal Consistency Regularization (MMCR) for urban CD. Using bi-temporal S1 SAR and S2 MSI image pairs as input, the MM Siamese Difference (Siam-Diff) Dual-Task (DT) network not only predicts changes using a difference decoder, but also segments buildings for each image with a semantic decoder. The proposed network is trained in a semi-supervised fashion using the underlying idea of MMCR, namely that building segmentation across sensor modalities should be consistent, to learn more robust features. The proposed method was tested on an urban CD task using the 60 sites of the SpaceNet7 dataset. A domain gap was introduced by only using labels for sites located in the Western World, where geospatial data are typically less sparse than in the Global South. MMCR achieved an average F1 score of 0.444 when applied to sites located outside of the source domain, which is a considerable improvement over several supervised models (F1 scores between 0.107 and 0.424). The combined findings of this thesis contribute to the mapping and monitoring of cities on a global scale, which is crucial to support sustainable planning and urban SDG indicator monitoring.
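The core of the consistency-regularization idea can be written in one line: penalize disagreement between the SAR-stream and the optical-stream segmentation maps on unlabeled data. A toy sketch (an MSE penalty is one common choice; the thesis's exact loss formulation may differ):

```python
import numpy as np

def consistency_loss(prob_sar, prob_opt):
    """Unsupervised MM consistency term: mean squared disagreement
    between the two sub-networks' building-probability maps. No labels
    are needed, so it can be computed on any co-registered image pair."""
    return float(np.mean((prob_sar - prob_opt) ** 2))

# Two toy 4x4 building-probability maps from the two sub-networks.
rng = np.random.default_rng(4)
p_sar = rng.random((4, 4))
p_opt_agree = p_sar.copy()
p_opt_disagree = rng.random((4, 4))

print(consistency_loss(p_sar, p_opt_agree))   # 0.0: perfect agreement
print(consistency_loss(p_sar, p_opt_disagree) > 0)  # penalized
```

In training, this term is added to the supervised segmentation loss on labeled sites, pulling the two modality streams toward consistent predictions on the unlabeled majority of the data.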
