
Reinforcement learning for robotic manipulation / Reinforcement learning för manipulering med robot

Arnekvist, Isac January 2017
Reinforcement learning was recently used successfully for real-world robotic manipulation tasks, without the need for human demonstrations, using a normalized advantage function (NAF) algorithm. Limitations on the shape of the advantage function, however, raise doubts about what kinds of policies can be learned with this method. For similar tasks, convolutional neural networks have been used for pose estimation from images taken with fixed-position cameras. For some applications, however, a fixed camera might not be a valid assumption, and it has been shown that the quality of policies for robotic tasks deteriorates severely with even small camera offsets. This thesis investigates the use of NAF for a pushing task with clear multimodal properties. The results are compared with those of a deterministic policy with minimal constraints on the Q-function surface. Methods for pose estimation using convolutional neural networks are further investigated, especially with regard to randomly placed cameras with unknown offsets. By defining the coordinate frame of objects with respect to some visible feature, it is hypothesized that relative pose estimation can be accomplished even when the camera is not fixed and the offset is unknown. NAF is successfully implemented to solve a simple reaching task on a real robotic system, where data collection is distributed over several robots and learning is done on a separate server. Using NAF to learn the pushing task fails to converge to a good policy, both on the real robots and in simulation. Deep deterministic policy gradient (DDPG) is used instead in simulation and successfully learns to solve the task. The learned policy is then applied to the real robots and solves the task in the real setting as well. Pose estimation from fixed-position camera images is learned, and the policy is still able to solve the task using these estimates. By defining a coordinate frame from an object visible to the camera, in this case the robot arm, a neural network learns to regress the pushable object's pose in this frame without the assumption of a fixed camera. However, the predictions were too imprecise to be used for solving the pushing task. With further modifications, this approach could nevertheless prove to be a feasible solution for randomly placed cameras with unknown poses.
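The shape limitation referred to above comes from how NAF parameterizes the action-value function: the advantage term is constrained to be quadratic in the action, so Q is unimodal in the action and its maximizer is always the single action μ(x). A standard statement of this parameterization, written in generic notation rather than the thesis's own, is:

```latex
Q(x, u \mid \theta) = V(x \mid \theta_V) + A(x, u \mid \theta_A), \qquad
A(x, u \mid \theta_A) = -\tfrac{1}{2}\,\bigl(u - \mu(x \mid \theta_\mu)\bigr)^{\top}
P(x \mid \theta_P)\,\bigl(u - \mu(x \mid \theta_\mu)\bigr)
```

where P(x) is a state-dependent positive-definite matrix. Because A ≤ 0 with equality only at u = μ(x), the greedy policy is always unimodal in the action, which is why a task with several equally good pushing directions is a natural stress test for NAF.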

Topical Classification of Images in Wikipedia : Development of topical classification models followed by a study of the visual content of Wikipedia / Ämneklassificering av bilder i Wikipedia : Utveckling av ämneklassificeringsmodeller följd av studier av Wikipedias bilddata

Vieira Bernat, Matheus January 2023
With over 53 million articles and 11 million images, Wikipedia is the largest encyclopedia in history. The number of users is equally significant, with daily views surpassing 1 billion. Such an enormous system needs task automation to make it possible for the volunteers to maintain it. For textual data, a machine-learning-based system called ORES already automates tasks such as article quality estimation and article topic routing. A visual counterpart needs to be developed to support tasks such as vandalism detection in images and to enable a better understanding of Wikipedia's visual data. Researchers from the Wikimedia Foundation identified a hindrance to implementing the visual counterpart of ORES: the images of Wikipedia lack topical metadata. This work therefore aims to develop a deep learning model that classifies images into a set of topics, which have been pre-determined in parallel work. State-of-the-art image classification models and methods to mitigate the existing class imbalance are used. The conducted experiments show, among other things, that: using data that takes the label hierarchy into account performs better; resampling techniques are ineffective at mitigating imbalance due to the high label co-occurrence; sample weighting improves the metrics; and initializing parameters pre-trained on ImageNet rather than randomly yields better metrics. Moreover, we find interesting outlier labels that, despite having fewer samples, obtain better performance metrics, which is believed to be due either to bias from pre-training or simply to more signal in the label. The distribution of the visual data predicted by the models is also presented. Finally, some qualitative examples of the model's predictions on images are given, demonstrating the model's ability to find correct labels that are missing from the ground truth.
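As an illustration of the sample-weighting idea mentioned above, one common recipe for multi-label imbalance is to weight each label's loss term by the ratio of negatives to positives for that label. The sketch below is a minimal, hypothetical PyTorch example; the tensor shapes, label count and weighting heuristic are assumptions, not the thesis setup.

```python
import torch
import torch.nn as nn

# Hypothetical multi-label setup: N images, C topic labels (multi-hot targets).
targets = torch.randint(0, 2, (1000, 20)).float()    # stand-in for real annotations

# Per-label positive frequency; rarer labels get larger weights.
pos_freq = targets.mean(dim=0).clamp(min=1e-6)
pos_weight = (1.0 - pos_freq) / pos_freq              # negatives-to-positives ratio

# BCEWithLogitsLoss applies pos_weight to the positive term of each label,
# which up-weights rare labels without resampling the (co-occurring) images.
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(1000, 20, requires_grad=True)    # stand-in for model outputs
loss = criterion(logits, targets)
loss.backward()
print(float(loss))
```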

Operation and Area Restriction of Autonomous Wheel Loaders Using Colour Markings

Fernkvist, Jonathan, Hamzic, Inas January 2023
This thesis aims to create a system using colour markings for Volvo's autonomous wheel loaders that determines their restricted area and operation using the sensors available on the machine. The wheel loader shall be able to interpret and distinguish different colours of spray paint and, depending on the colour, act accordingly. Six different colours across two colour types are evaluated to find the most suitable ones for the system. Multiple tests are presented throughout the thesis to find the best-performing approach that meets the system's requirements. The system is evaluated in various weather conditions to determine how weather affects its performance. The thesis also compares two different line-following approaches for distinguishing and tracking the colour markings: one based on edge detection using Canny edge detection and the Hough transform, and one using histogram analysis and a sliding-window search. While the wheel loader is in operation, it collects GPS coordinates to create a map of the path taken by the wheel loader and the locations of various tasks. The evaluation shows that red, green and blue in the fluorescent colour type are the most suitable colours for such a system. The line-following algorithm that uses a perspective warp, a histogram and a sliding-window search proved the most accurate for line detection and tracking. Furthermore, the evaluation showed that the performance of the system varied with the weather conditions.
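To make the histogram and sliding-window approach mentioned above concrete, the sketch below shows a minimal version of the classic pipeline: threshold a colour in HSV, warp the mask to a top-down view, locate the line base with a column histogram, then walk a window up the image. It is an illustrative OpenCV example under assumed image sizes, HSV ranges and warp points, not the thesis implementation.

```python
import cv2
import numpy as np

def track_colour_line(bgr, hsv_lo=(40, 80, 80), hsv_hi=(80, 255, 255),
                      n_windows=9, margin=40):
    """Return pixel coordinates of a colour marking using a sliding-window search."""
    h, w = bgr.shape[:2]

    # 1. Colour threshold in HSV (the range here is a green-ish placeholder).
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))

    # 2. Perspective warp to a top-down view (source points are assumptions).
    src = np.float32([[w*0.40, h*0.60], [w*0.60, h*0.60], [w*0.95, h*0.95], [w*0.05, h*0.95]])
    dst = np.float32([[w*0.25, 0], [w*0.75, 0], [w*0.75, h], [w*0.25, h]])
    warped = cv2.warpPerspective(mask, cv2.getPerspectiveTransform(src, dst), (w, h))

    # 3. Column histogram over the lower half gives the starting column of the line.
    histogram = np.sum(warped[h // 2:, :], axis=0)
    x_current = int(np.argmax(histogram))

    # 4. Slide fixed-height windows upwards, re-centering on the detected pixels.
    ys, xs = warped.nonzero()
    window_h = h // n_windows
    line_idx = []
    for win in range(n_windows):
        y_lo, y_hi = h - (win + 1) * window_h, h - win * window_h
        good = ((ys >= y_lo) & (ys < y_hi) &
                (xs >= x_current - margin) & (xs < x_current + margin)).nonzero()[0]
        line_idx.append(good)
        if len(good) > 50:                       # enough pixels: follow the line
            x_current = int(xs[good].mean())
    line_idx = np.concatenate(line_idx)
    return xs[line_idx], ys[line_idx]            # pixels belonging to the marking
```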

Learning a Grasp Prediction Model for Forestry Applications

Olofsson, Elias January 2024
Since the advent of machine learning and machine vision methods, progress has been made in tackling the long-standing research question of autonomous grasping of arbitrary objects using robotic end-effectors. Building on these efforts, we focus on a subset of the general grasping problem concerning the automation of a forwarder. This forestry vehicle collects and transports felled and cut tree logs in a forest environment to a nearby roadside landing. The forwarder must safely and energy-efficiently grip logs to minimize fuel consumption and reduce loading times. In this thesis project, we develop a data-driven model for predicting the expected outcome of grasping attempts made by the forwarder's crane. For a given pile of logs, such a model can estimate the optimal horizontal location and angle for applying the claw grapple, enabling effective grasp planning. We utilize physics-based simulations to create a ground truth dataset of 12 500 000 simulated grasps distributed across 5000 randomly generated log piles. Our semi-generative, supervised model is a fully convolutional network that inputs the orthographic depth image of a pile and returns images predicting the corresponding grasps' initial grapple angle and outcome metrics as a function of position. Over five folds of cross-validation, our model predicted the number of grasped logs and the initial grapple angle with a normalized root mean squared error of 15.77(2)% and 2.64(4)%, respectively. The grasps' energy efficiency and energy waste were similarly predicted with a relative error of 14.43(2)% and 21.06(3)%.
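As a rough illustration of the kind of fully convolutional predictor described above, the sketch below maps a single-channel depth image to several per-position output maps (for example a grapple-angle map and an outcome map). It is a minimal, hypothetical PyTorch architecture with assumed layer sizes, not the model from the thesis.

```python
import torch
import torch.nn as nn

class GraspFCN(nn.Module):
    """Fully convolutional net: depth image in, per-position prediction maps out."""
    def __init__(self, out_maps: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_maps, kernel_size=1),   # no fully connected layers
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (batch, 1, H, W) -> (batch, out_maps, H, W)
        return self.net(depth)

# Example: one 256x256 orthographic depth image of a (synthetic) log pile.
model = GraspFCN(out_maps=2)
pred = model(torch.rand(1, 1, 256, 256))
angle_map, outcome_map = pred[:, 0], pred[:, 1]
print(angle_map.shape, outcome_map.shape)   # torch.Size([1, 256, 256]) for each map
```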

Localization of Combat Aircraft at High Altitude using Visual Odometry

Nilsson Boij, Jenny January 2022
Most of the navigation systems used in today's aircraft rely on Global Navigation Satellite Systems (GNSS). However, GNSS is not fully reliable: it can, for example, be jammed by attacks on the space or ground segments of the system, or denied in inaccessible areas. To ensure successful navigation, it is therefore of great importance to be able to continuously establish the aircraft's location without relying on external reference systems. Localization is one of many sub-problems in navigation and is the focus of this thesis. This brings us to the field of visual odometry (VO), which involves determining position and orientation with the help of images from one or more camera sensors. To date, however, most VO systems have been demonstrated on ground vehicles and low-flying multi-rotor systems. This thesis seeks to extend VO to new applications by exploring it in a fairly new context: a fixed-wing piloted combat aircraft, for vision-only pose estimation at extremely large scene depths. A major part of this research work is the data gathering, where the data is collected using the flight simulator X-Plane 11. Three different flight routes are flown, a straight line, a curve and a loop, under two visual conditions: in clear weather with daylight and during sunset. The method used in this work is ORB-SLAM3, an open-source library for visual simultaneous localization and mapping (SLAM). It has shown excellent results in previous works and has become a benchmark method in the field of visual pose estimation. ORB-SLAM3 tracks the straight line of 78 km very well at an altitude above 2700 m. The absolute trajectory error (ATE) is 0.072% of the total distance traveled in daylight and 0.11% during sunset. These results are of the same magnitude as those of ORB-SLAM3 on the EuRoC MAV dataset. For the curved trajectory of 79 km, the ATE is 2.0% and 1.2% of the total distance traveled in daylight and sunset, respectively. The longest flight route, of 258 km, shows the challenges of visual pose estimation. Although loop closures succeed in daylight, the ATE is 3.6% of the total distance traveled. During sunset the features do not possess enough invariant characteristics to close loops, resulting in an even larger ATE of 14% of the total distance traveled. Hence, to be able to properly rely on vision for localization, more sensor information is needed. Since all aircraft already possess an inertial measurement unit (IMU), future work naturally includes incorporating IMU data into the system. Nevertheless, the results from this research show that vision is useful, even at the high altitudes and speeds used by a combat aircraft.
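For reference, the ATE figures above are the root mean square of the translational error between estimated and ground-truth positions, reported here as a percentage of the distance flown. The sketch below shows one plausible way to compute that number with NumPy, assuming the two trajectories are already time-synchronized and expressed in the same frame (the rigid alignment that trajectory-evaluation tools normally perform is omitted).

```python
import numpy as np

def ate_percent(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """ATE (RMSE of position error) as a percentage of total distance traveled.

    Both arrays have shape (N, 3) and are assumed aligned and synchronized.
    """
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    ate_rmse = np.sqrt(np.mean(errors ** 2))
    path_length = np.sum(np.linalg.norm(np.diff(ground_truth, axis=0), axis=1))
    return 100.0 * ate_rmse / path_length

# Toy example: a 78 km straight line with small synthetic estimation noise.
t = np.linspace(0.0, 78_000.0, 2_000)
gt = np.stack([t, np.zeros_like(t), np.full_like(t, 2700.0)], axis=1)
est = gt + np.random.normal(scale=25.0, size=gt.shape)
print(f"ATE: {ate_percent(est, gt):.3f}% of distance traveled")
```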

Generation of Synthetic Data for Sustainable Fashion Using a Diffusion Model

Jonsson, Simon January 2024
The fashion industry is a significant contributor to greenhouse gas emissions and textile waste, prompting the need for sustainable practices. This thesis explores the use of diffusion models for generating synthetic data to enhance datasets used in machine learning, specifically focusing on second-hand fashion. Diffusion models, known for their ability to create high-quality images, offer potential solutions to the imbalance and quality issues in existing datasets. The study investigates how image generation and editing through diffusion models can improve datasets, the effectiveness of different prompting strategies, and the performance of synthetic data in machine learning models compared to real data. The methodology involves using the Kandinsky 2.2 inpainting model to generate and edit images, followed by manual and automated classification to evaluate image quality. Experiments demonstrate that diffusion models can plausibly improve dataset quality by adding and removing damage in images, although fully automating this process remains challenging. The results indicate that augmenting the datasets with synthetic images can potentially enhance the performance of the model, although the variability of the results suggests the need for further research. This thesis contributes to the field of sustainable fashion by proposing innovative methods for dataset augmentation using state-of-the-art generative models, aiming to support the development of efficient and automated sorting processes in the textile industry.
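As an illustration of the inpainting-based editing described above, the sketch below adds simulated damage inside a masked region of a garment photo with the Kandinsky 2.2 inpainting model, accessed here through the Hugging Face diffusers library (an assumption; the thesis does not state its tooling). The prompt, file names and generation settings are placeholders, not the thesis configuration.

```python
import torch
from diffusers import AutoPipelineForInpainting
from PIL import Image

# Kandinsky 2.2 decoder inpainting pipeline (downloads weights on first use).
pipe = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint",
    torch_dtype=torch.float16,
).to("cuda")

# Garment photo and a white-on-black mask marking where to synthesize damage.
image = Image.open("garment.jpg").convert("RGB").resize((768, 768))
mask = Image.open("damage_mask.png").convert("L").resize((768, 768))

result = pipe(
    prompt="a worn second-hand sweater with a small hole and frayed fabric",
    negative_prompt="blurry, low quality",
    image=image,
    mask_image=mask,
    num_inference_steps=50,
).images[0]

result.save("garment_damaged.png")
```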

Can I open it? : Robot Affordance Inference using a Probabilistic Reasoning Approach

Aguirregomezcorta Aina, Jorge January 2024
Modern autonomous systems should be able to interact with their surroundings in a flexible yet safe manner. To guarantee this behavior, such systems must learn how to approach unseen entities in their environment by inferring relationships between actions and objects, called affordances. This research project introduces a neuro-symbolic AI system capable of inferring affordances using attribute detection and knowledge representation as its core principles. The attribute detection module employs a visuo-lingual image captioning model to extract the key object attributes of a scene, while the cognitive knowledge module infers the affordances of those attributes using conditional probability. The practical capabilities of the neuro-symbolic AI system are assessed by implementing a simulated robot system that interacts within the problem space of jars and bottles. The neuro-symbolic AI system is evaluated on its caption-inference capabilities using image captioning and machine translation metrics. The scores registered in the evaluation show a successful attribute captioning rate of more than 71%. The robot simulation is evaluated in a Unity virtual environment by interacting with 50 jars and bottles, equally divided between lifting and twisting affordances. The robot system successfully interacts with all the objects in the scene thanks to the robustness of the architecture, but fails in the inference process in 24 of the 50 iterations. Contrary to previous works that approach the problem as a classification task, this study shows that affordance inference can be successfully implemented using a cognitive visuo-lingual method. The study's results justify further investigation into the use of neuro-symbolic AI approaches to affordance inference.
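To make the conditional-probability step concrete, the sketch below scores affordances given detected attributes with a simple naive-Bayes-style combination over a small probability table. The attributes, affordances and numbers are invented for illustration and are not taken from the thesis.

```python
from typing import Dict, List

# P(attribute | affordance): illustrative, hand-made values.
LIKELIHOODS: Dict[str, Dict[str, float]] = {
    "twist-open": {"screw cap": 0.90, "wide lid": 0.30, "glass body": 0.60},
    "lift":       {"screw cap": 0.50, "wide lid": 0.70, "glass body": 0.55},
}
PRIORS = {"twist-open": 0.5, "lift": 0.5}   # assumed uniform priors

def infer_affordance(attributes: List[str]) -> Dict[str, float]:
    """Posterior over affordances given detected attributes (naive Bayes style)."""
    scores = {}
    for affordance, prior in PRIORS.items():
        p = prior
        for attr in attributes:
            # Unknown attributes get a neutral likelihood of 0.5.
            p *= LIKELIHOODS[affordance].get(attr, 0.5)
        scores[affordance] = p
    total = sum(scores.values())
    return {a: p / total for a, p in scores.items()}

# Attributes as they might come from the image-captioning module.
print(infer_affordance(["screw cap", "glass body"]))
```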

Cutting Tool Container Inspection : Stereo vision and monocular artificial intelligence depth estimation at Sandvik Coromant

Benkowski, Gustav January 2024
This thesis explores and evaluates solutions for the inspection of cutting tool containers at Sandvik Coromant, focusing on the transition from current vision systems utilizing infrared (IR) light to new methods compatible with recycled polypropylene (PP) plastic containers. The primary goal is to evaluate the effectiveness of stereo vision and artificial intelligence (AI) for depth estimation, ensuring that the containers are properly populated with cutting tools. Various methods and algorithms are tested to determine their accuracy and speed, to meet the time requirements of the production line at Sandvik Coromant. The results indicate that, while traditional IR-based systems excel in processing speed and robustness, monocular artificial intelligence methods offer adaptability that could be utilized with the new container material. Future work will involve further optimization and real-world testing to confirm these findings.
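As background for the stereo option discussed above, depth follows from disparity via depth = f·B/d for a rectified image pair with focal length f (in pixels) and baseline B. The sketch below is a generic OpenCV example with assumed calibration values and file names, not the Sandvik Coromant setup.

```python
import cv2
import numpy as np

# Rectified left/right images of the container (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters would need tuning for the real scene.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

# depth = f * B / d, with an assumed focal length (px) and baseline (m).
focal_px, baseline_m = 1200.0, 0.10
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]

# An empty pocket would show up as a region deeper than the surrounding tray surface.
print("median scene depth [m]:", np.median(depth_m[valid]))
```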

Konceptuell utveckling av interiören hos en framtida fullt autonom bil / Conceptual development of an interior in a future fully autonomous car

Edvardsson, Felicia, Warberg, Therése January 2016
The goal of this thesis project has been to collect information for a technical consulting company in order to increase their knowledge about autonomous systems and vehicular communication. The status of how various actors work with these active safety systems, and how the systems are implemented in current and future vehicles, has been investigated through extensive literature studies, interviews and market research. The autonomous systems can collect information from the surroundings through sensors and contribute to smoother traffic flow, increased safety, lighter cars and a better environment. Through vehicle communication, vehicles can communicate with each other and with the infrastructure in order to guarantee a safe ride. In 2030 the inner city consists of autonomous, electrified public transport that moves people on demand, while private cars are partly prohibited. Potential needs of the human in a fully autonomous car have been identified, and various product development methods have been applied in order to develop two conceptual solutions for a future car interior. The solutions show the interaction between human and system, since entertainment and comfort become important in a fully autonomous car. Each solution is state-owned and holds four passengers. In both solutions, the seats are placed so as to facilitate communication between the passengers. The passengers can be entertained or informed, individually or collectively, by text, sound and images.

Object Tracking Achieved by Implementing Predictive Methods with Static Object Detectors Trained on the Single Shot Detector Inception V2 Network / Objektdetektering Uppnådd genom Implementering av Prediktiva Metoder med Statiska Objektdetektorer Tränade på Entagningsdetektor Inception V2 Nätverket

Barkman, Richard Dan William January 2019
In this work, the possibility of realising object tracking by implementing predictive methods with static object detectors is explored. The static object detectors are obtained as models trained with a machine learning algorithm, in other words a deep neural network. Specifically, the single shot detector inception v2 network is used to train such models. Predictive methods are incorporated with the aim of improving the obtained models' precision, i.e. their performance with respect to accuracy. Namely, Lagrangian mechanics is employed to derive equations of motion for three different scenarios in which the object is to be tracked. These equations of motion are implemented as predictive methods by discretising them and combining them with four different iterative formulae. In ch. 1, the fundamentals of supervised machine learning, neural networks and convolutional neural networks, as well as the workings of the single shot detector algorithm, approaches to hyperparameter optimisation and other relevant theory, are established. This includes derivations of the relevant equations of motion and the iterative formulae with which they were implemented. In ch. 2, the experimental set-up used during data collection is described, along with the manner in which the acquired data was used to produce training, validation and test datasets. This is followed by a description of how random search was used to train 64 models on 300×300 datasets and 32 models on 512×512 datasets. These models are subsequently evaluated on their performance with respect to camera-to-object distance and object velocity. In ch. 3, the trained models are verified to possess multi-scale detection capabilities, as is characteristic of models trained on the single shot detector network. While this holds irrespective of the resolution of the dataset a model has been trained on, the performance with respect to varying object velocity is found to be significantly more consistent for the lower-resolution models, as they operate at a higher detection rate. Ch. 3 continues with an evaluation of the implemented predictive methods. This is done by comparing the resulting deviations when they are used to predict the missing data points of a collected detection pattern at varying sampling percentages. The best predictive methods turn out to be those that make use of the fewest previous data points. This follows from the fact that the data on which the evaluations were made contained a considerable amount of noise, which the implemented iterative formulae do not take into account. Moreover, the lower-resolution models benefit more from the predictive methods than those trained on the higher-resolution datasets, because of the higher detection frequency they can employ. In ch. 4, it is argued that the concept of combining predictive methods with static object detectors to obtain an object tracker is promising. Moreover, the models obtained with the single shot detector network are concluded to be good candidates for such applications, owing to their high detection rates and multi-scale detection capability. However, the predictive methods studied in this thesis should be replaced with, or extended into, methods that can account for noise.
A key finding is that the single shot detector inception v2 models trained on a low-resolution dataset outperform those trained on a high-resolution dataset in certain regards, namely in performance with respect to object velocity and in that the predictive methods performed better on them, owing to the higher detection rate possible on lower-resolution frames.
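As a minimal illustration of the kind of discretised predictor discussed above, the sketch below extrapolates the next bounding-box centre from the last few detections using constant-velocity and constant-acceleration finite differences. It is a generic example with invented sample points, not one of the four iterative formulae derived in the thesis, and like them it does not account for measurement noise.

```python
import numpy as np

def predict_next(points: np.ndarray, dt: float = 1.0, order: int = 2) -> np.ndarray:
    """Extrapolate the next 2-D position from previous detections.

    points: array of shape (k, 2) with the most recent detections, oldest first.
    order:  1 = constant velocity (needs 2 points), 2 = constant acceleration (needs 3).
    """
    p = points[-1]
    v = (points[-1] - points[-2]) / dt                      # finite-difference velocity
    if order == 1 or len(points) < 3:
        return p + v * dt
    a = (points[-1] - 2 * points[-2] + points[-3]) / dt**2  # finite-difference acceleration
    return p + v * dt + 0.5 * a * dt**2

# Invented detection centres (pixels) sampled at a constant detection rate.
track = np.array([[100.0, 200.0], [112.0, 198.0], [126.0, 195.0]])
print(predict_next(track, order=1))   # constant-velocity guess
print(predict_next(track, order=2))   # constant-acceleration guess
```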
