471.
Pose Estimation in an Outdoors Augmented Reality Mobile Application. Nordlander, Rickard. January 2018.
This thesis proposes a solution to the pose estimation problem for mobile devices in an outdoor environment. The proposed solution is intended for use within an augmented reality application to visualize large objects such as buildings. As such, the system needs to provide pose estimates that are both accurate and stable, under real-time requirements. The proposed solution combines inertial navigation for orientation estimation with a vision-based support component that reduces noise in the inertial orientation estimate. A GNSS-based component provides the system with an absolute position reference. The orientation and position estimation were tested in two separate experiments. The orientation estimate was tested with the camera in a static position and orientation, and was accurate and stable down to a few fractions of a degree. The position estimation achieved centimeter-level stability under optimal conditions. Once the position had converged to a location, it was stable down to a couple of centimeters, which is sufficient for outdoor augmented reality applications.
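The fusion strategy above can be sketched as a complementary filter: the fast but drifting gyro integration is continuously pulled toward a slower absolute reference, here the vision-based orientation. This is a minimal one-dimensional sketch under assumed parameter names and a typical blend factor, not the thesis implementation.

```python
def complementary_filter(angle, gyro_rate, vision_angle, dt, alpha=0.98):
    """Blend integrated gyro rate (fast, drifting) with an absolute
    vision-based angle (slow, drift-free). alpha near 1 trusts the gyro
    for short-term dynamics; the small (1 - alpha) share removes drift."""
    predicted = angle + gyro_rate * dt                     # dead-reckoned step
    return alpha * predicted + (1 - alpha) * vision_angle  # drift correction
```

Applied once per sample, any constant offset between the gyro estimate and the vision reference decays geometrically at rate `alpha`, which is what gives the filter its drift-free long-term behavior.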
472.
Correspondence-based pairwise depth estimation with parallel acceleration. Bartosch, Nadine. January 2018.
This report covers the implementation and evaluation of a stereo vision correspondence-based depth estimation algorithm on a GPU. The results and feedback are used for a multi-view camera system in combination with Jetson TK1 devices for parallelized image processing; the aim of this system is to estimate the depth of the scenery in front of it, and the performance of the algorithm plays the key role. Alongside the implementation, the objective of this study is to investigate the advantages of parallel acceleration, inter alia the differences from execution on a CPU, which are significant for all the functions; the overheads particular to a GPU application, such as memory transfers between the CPU and the GPU; and the challenges of real-time and concurrent execution. The study has been conducted with the aid of CUDA on three NVIDIA GPUs with different characteristics, and with knowledge gained through an extensive literature study of depth estimation algorithms, stereo vision and correspondence, as well as CUDA in general. Using the full set of components of the algorithm while expecting (near) real-time execution is utopian in this setup and implementation; the slowing factors include, inter alia, the semi-global matching. Investigating alternatives shows that disparity maps of a certain accuracy can also be achieved by local methods, such as the Hamming distance alone combined with a filter that refines the results. Furthermore, it is demonstrated that the kernel launch configuration and the usage of GPU memory types such as shared memory are crucial for GPU implementations and have an impact on the performance of the algorithm. Concurrency alone proves to be a more complicated task, especially in the desired way of realization.
For future work and refinement of the algorithm, it is therefore recommended to invest more time into further optimization possibilities with regard to shared memory, and into integrating the algorithm into the actual pipeline.
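To make the local alternative concrete, the sketch below shows Hamming-distance matching on census-transformed windows, on the CPU and for a single pixel. In the actual work this kind of per-pixel cost would run inside a CUDA kernel over the whole image; the function names, window size, and image layout here are illustrative assumptions.

```python
def hamming_distance(a, b):
    """Number of differing bits between two census codes."""
    return bin(a ^ b).count("1")

def census_transform(img, x, y, win=1):
    """Encode a pixel's neighborhood as bits: 1 where neighbor < center."""
    center = img[y][x]
    bits = 0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            if dx == 0 and dy == 0:
                continue
            bits = (bits << 1) | (1 if img[y + dy][x + dx] < center else 0)
    return bits

def best_disparity(left, right, x, y, max_disp):
    """Winner-takes-all: the disparity whose census codes differ least."""
    ref = census_transform(left, x, y)
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d < 1:  # keep the census window inside the right image
            break
        cost = hamming_distance(ref, census_transform(right, x - d, y))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Because the cost for each pixel and disparity is independent, this loop nest maps directly onto one GPU thread per pixel, which is what makes the local method attractive for parallel acceleration.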
473.
The evolution and erosion of a service-oriented architecture in enterprise software : A study of a service-oriented architecture and its transition to a microservice architecture / En Service-orienterad arkitekturs erosion och evolution. Karlsson, Eric. January 2018.
In this thesis project, a company's continuously evolved service-oriented software architecture was studied for signs of architectural erosion. The architecture has been developed continuously over some time, and the company has experienced a reduction in architectural quality and felt that the architecture no longer fulfilled its design goals; it therefore decided to start working on a replacement architecture based on the microservice architectural style. This thesis project aimed to study how the quality of the current architecture has changed during its evolution, to find the causes of these changes in quality, and to estimate how the planned microservice migration will affect them. The study was performed in three steps. First, a suite of suitable quality metrics was gathered based on the stated architectural design goals and on what information could be extracted from the history of the implemented architecture. A tool was developed to model the architecture and to gather the quality metrics from the current architecture, tracking how it has changed over one year's worth of development and evolution. Secondly, the causes of these changes in architectural quality were investigated through interviews with a wide range of developers who had worked on the architecture and on the web application it provides the structure for. The interviews focused on architectural knowledge, the consideration given to the architecture's design during component development, the maintenance of existing components and of the architecture, as well as questions regarding specific components and anomalies. Thirdly and finally, the effects of the microservice migration on the quality of the current architecture were estimated by performing microservice reengineering on the model used to evaluate the current architecture.
The analysis that the tools developed during this thesis enabled showed that consistency violations, structural problems and the level of coupling have substantially increased over the version history that the model tracked. The developer interviews revealed that this erosion was caused by, among other reasons, the abandonment of some architectural design decisions, a lack of architectural knowledge on certain topics, and non-optimal development conditions and priorities. The microservice reengineering showed how the migration could be used to improve the measured quality metrics, and that a migration combined with other architectural-erosion prevention and repair methods could create an architecture that is more modular and erosion tolerant.
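To illustrate the kind of coupling metric such a modelling tool can gather, the sketch below computes efferent coupling, afferent coupling, and Martin's instability from a component dependency graph. The metric choice and all names are assumptions made for illustration; the thesis's own metric suite is not specified here.

```python
def coupling_metrics(dependencies):
    """dependencies: dict mapping a component to the set of components it uses.

    Returns efferent coupling (Ce, outgoing), afferent coupling (Ca, incoming)
    and Martin's instability I = Ce / (Ca + Ce) per component."""
    components = set(dependencies) | {d for deps in dependencies.values() for d in deps}
    ce = {c: len(dependencies.get(c, set())) for c in components}
    ca = {c: sum(c in deps for deps in dependencies.values()) for c in components}
    instability = {c: ce[c] / (ca[c] + ce[c]) if ca[c] + ce[c] else 0.0
                   for c in components}
    return ce, ca, instability
```

Tracking such numbers over the version history is one way the rise in coupling described above becomes measurable rather than anecdotal: a component whose instability and afferent coupling both climb is a typical erosion hotspot.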
474.
Design and Development of a Wireless Multipoint E-stop System for Autonomous Haulers. Karlsson, Alexander. January 2018.
Safety-related functions are important in autonomous industrial applications and are covered by an extensive body of standards. The implementation of safety-related systems is commonly done by an external company, at great cost and with limited flexibility. Thus, the objective of this thesis was to develop and implement a safety-related system using off-the-shelf products and to analyse how well it can comply with the established standards for safety-related functions. This work has sought to review the current standards for safety functions, the effects of harsh radio environments on safety-related systems, and how to validate the safety function. The system development process was used to gain knowledge by first building the concept based on a pre-study. After the pre-study was finished, the process moved on to the development of software, designed to maintain a wireless heartbeat as well as to prevent collisions between the autonomous and manually driven vehicles at a quarry, and to the implementation of the system in real hardware. Finally, a set of software (simulations) and hardware (measurements in an open-pit mine) tests were performed to test the functionality of the system. The wireless tests showed that the system adhered to the functional requirements set by the company; however, the evaluation according to ISO 13849-1 resulted in performance level B, which is insufficient for a safety-related function. This work demonstrates that it is not possible to develop a safety-related system using the off-the-shelf products chosen without hardware redundancy.
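The wireless heartbeat can be sketched as a fail-safe watchdog: the E-stop is asserted whenever the link has been silent longer than a timeout, and also before any heartbeat has ever arrived. This is a hypothetical minimal illustration, not the implemented system; the class name and timeout are invented.

```python
class HeartbeatWatchdog:
    """Fail-safe watchdog: silence on the radio link asserts the E-stop."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_beat = None  # no heartbeat received yet

    def beat(self, now_s):
        """Record a received heartbeat at time now_s (seconds)."""
        self.last_beat = now_s

    def estop_asserted(self, now_s):
        """True if no heartbeat yet, or the link has gone silent too long."""
        return self.last_beat is None or (now_s - self.last_beat) > self.timeout_s
```

The fail-safe direction is the important design choice: loss of communication can only ever stop the haulers, never keep them moving, which is the behavior the safety standards require of such a function.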
475.
Röstigenkänning med Movidius Neural Compute Stick / Voice recognition with Movidius Neural Compute Stick. Vidmark, Stefan. January 2018.
Företaget Omicron Ceti AB köpte en Intel Movidius Neural Compute Stick (NCS), som är en usb-enhet där neurala nätverk kan laddas in för att processa data. Min uppgift blev att studera hur NCS används och göra en guide med exempel. Med TensorFlow och hjälpbiblioteket TFLearn gjordes först ett testnätverk för att prova hela kedjan från träning till användning med NCS. Sedan tränades ett nätverk att kunna klassificera 14 olika ord. En mängd olika utformningar på nätverket testades, men till slut hittades ett exempel som blev en bra utgångspunkt och som efter lite justering gav en träffsäkerhet på 86% med testdatat. Vid inläsning i mikrofon så blev resultatet lite sämre, med 67% träffsäkerhet. Att processa data med NCS tog längre tid än med TFLearn men använde betydligt mindre CPU-kraft. I mindre system såsom en Raspberry Pi går det däremot inte ens att använda TensorFlow/TFLearn, så huruvida det är värt att använda NCS eller inte beror på det specifika användningsscenariot. / The company Omicron Ceti AB bought an Intel Movidius Neural Compute Stick (NCS), a USB device onto which neural networks can be loaded to process data. My assignment was to study how the NCS is used and to write a guide with examples. Using TensorFlow and the TFLearn helper library, a test network was first built to try the whole pipeline, from network training to inference on the NCS. A network was then trained to classify 14 different words. Many different network configurations were tried until a good starting point was found which, after some tuning, reached an accuracy of 86% on the test data. The accuracy when speaking into a microphone was somewhat worse, at 67%. Processing data with the NCS took longer than with TFLearn but used considerably less CPU power. On smaller systems such as a Raspberry Pi, however, TensorFlow/TFLearn cannot be used at all, so whether the NCS is worth using depends on the specific usage scenario.
476.
Engineering Content-Centric Future Internet Applications. Perkaz, Alain. January 2018.
The Internet as we know it today has evolved continuously since its creation, radically changing the means of communication and the ways in which commerce is operated globally. From the World Wide Web to two-way video calls, it has shifted the ways people communicate and societies function. The Internet was first conceived as a network that would enable communication between multiple trusted and known hosts, but as time passed it evolved notably. Due to the significant adoption of Internet-connected devices (phones, personal computers, tablets...), the initial device homogeneity has shifted towards an extremely heterogeneous environment in which many different devices consume and publish resources, also referred to as services. As the number of connected devices and resources increases, it becomes critical to build systems that enable the autonomic publication, consumption, and retrieval of those resources. As the inherent complexity of systems continues to grow, it is essential to set boundaries on their achievable capabilities. The traditional approaches to network-based computing are not sufficient, and new reference approaches are needed. In this context the term Future Internet (FI) emerges: a worldwide execution environment connecting large sets of heterogeneous and autonomic devices and resources. In such environments, systems leverage service annotations to fulfil emerging goals and dynamically organise resources based on interests. Although research has been conducted in these areas, active research continues on extensible machine-readable annotation of services, dynamic service discovery, architectural approaches for decentralised systems, and interest-focused dynamic service organisation. These concepts are explained in the next section, as they serve to contextualise the problem statement and research questions presented later.
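As a toy illustration of annotation-based publication and interest-driven discovery, the sketch below lets services register machine-readable tags and lets clients query by interest. All class, method, and tag names are invented for this sketch and are not taken from the thesis.

```python
class ServiceRegistry:
    """Minimal interest-based service discovery over annotation tags."""

    def __init__(self):
        self._services = {}  # service name -> set of annotation tags

    def publish(self, name, annotations):
        """Register a service together with its machine-readable annotations."""
        self._services[name] = set(annotations)

    def discover(self, interests):
        """Return services whose annotations cover every requested interest."""
        wanted = set(interests)
        return sorted(n for n, tags in self._services.items() if wanted <= tags)
```

In an FI setting the registry itself would be decentralised and the annotations far richer (typed, extensible vocabularies rather than flat tags), but the publish/discover split shown here is the core interaction.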
477.
Can gamification motivate exercise? : A user experiment regarding a normal exercise app compared to a gamified exercise app. Sundberg, Jonathan. January 2018.
Background. Exercising regularly is difficult; some people stop exercising either because it is not fun or because they do not see any results from the effort they put in. Combining exercise and gaming, also known as exergaming, is a way to pair the fun and entertainment of games with the health benefits of exercise. Objectives. The objective of this study is to conduct an experiment to find out whether people are more interested in an exercise app that has been gamified than in a normal exercise app. Gamifying something means taking something that is not considered a game medium and infusing it with game-related aspects. Methods. A prototype was created to show the participants both kinds of exercise app: one normal app resembling an everyday exercise app, and one gamified app which shows the user their progress in a fashion similar to role-playing games, with levels and quests. The participants of the test tried both apps and then voted in a survey for the one they liked the most. Results. While only about 60% of the participants had prior experience with exercise apps, 90% would rather choose the gamified app over the normal app. 95% of the participants were regular gamers. Conclusions. The vast majority of the participants preferred the gamified version of the app over the normal one, specifically mentioning that they find it more interesting and that they enjoy the upfront progression system a lot more, since they are used to it from the games they play in their free time.
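A levels-and-quests progression of the kind the gamified prototype shows is typically driven by a levelling curve where each level costs more experience than the last. The sketch below is a generic example of such a curve; the base cost and growth factor are invented, not taken from the prototype.

```python
def level_from_xp(xp, base=100, growth=1.5):
    """Map accumulated experience points to a level.

    Level 1 is the start; each subsequent level costs `growth` times more
    XP than the previous one, the usual RPG-style escalating curve."""
    level = 1
    needed = base
    while xp >= needed:
        xp -= needed
        level += 1
        needed = int(needed * growth)
    return level
```

The escalating cost is what makes early levels arrive quickly (immediate, visible progress for new users) while keeping long-term progression meaningful, which matches the "upfront progression" the participants said they enjoyed.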
478.
Chinese Text Classification Based On Deep Learning. Wang, Xutao. January 2018.
Text classification has always been a concern in the area of natural language processing, especially nowadays as data are getting massive due to the development of the internet. The recurrent neural network (RNN) is one of the most popular methods for natural language processing, due to its recurrent architecture which gives it the ability to process serialized information. Meanwhile, the convolutional neural network (CNN) has shown its ability to extract features from visual imagery. This paper combines the advantages of the RNN and the CNN and proposes a model called BLSTM-C for Chinese text classification. BLSTM-C begins with a bidirectional long short-term memory (BLSTM) layer, a special kind of RNN, to obtain a sequence output based on both the past and the future context. It then feeds this sequence to a CNN layer, which is utilized to extract features from the preceding sequence. We evaluate the BLSTM-C model on several tasks, such as sentiment classification and category classification, and the results show our model's remarkable performance on these text tasks.
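As a framework-free conceptual aid, the CNN stage can be thought of as sliding a small kernel over the sequence the BLSTM emits, producing local features of adjacent positions. The scalar sketch below illustrates only that sliding-window idea; it is not the BLSTM-C implementation, which operates on feature vectors with learned kernels.

```python
def conv1d(sequence, kernel):
    """Valid-padding 1-D convolution (cross-correlation form, as in deep
    learning): slide the kernel over the sequence and sum the products."""
    k = len(kernel)
    return [sum(kernel[j] * sequence[i + j] for j in range(k))
            for i in range(len(sequence) - k + 1)]
```

For example, the kernel `[1, 0, -1]` responds to local change across three adjacent positions, which is the sense in which a CNN layer "extracts features" from the BLSTM's sequence output.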
479.
FTTX-Analysverktyg anpassat för Telias nät / FTTX analysis tool designed for Telia's network. Brännback, Andreas. January 2018.
Ett verktyg har utvecklats i programmeringsspråket Python, som analyserar status för uppkopplingar hos Fibre to the X (FTTX)-kunder i Telias nät. Systemet består av en moduluppdelad struktur, där alla analysfunktioner av samhörande typer är uppbyggda i egna moduler. Alla moduler lagras som individuella kodfiler. Systemet är designat för att enkelt kunna vidareutvecklas genom att tillägga fler analysmoduler i framtida projekt. För att utföra en analys på en specifik kund, hämtar systemet tekniska dataparametrar via den switch som kunden sitter uppkopplad mot. Dessa parametrar jämförs därefter med förbestämda värden för att hitta avvikelser. Simple network management protocol (SNMP) och Telnet är de primära protokollen som används för att hämta relevant data. Systemet har enbart Hypertext Transfer Protocol (HTTP) som input och output. Resultatet av en analys redovisas som Extensible Markup Language (XML) mot den server som ursprungligen ställde förfrågan om att starta en analys. XML-svaret innehåller både tekniska dataparametrar kring kundens uppkoppling samt ett analyssvar baserat på dessa tekniska parametrar. Utförligheten i svaret på en utförd analys varierar en aning beroende på switchtypen kunden sitter uppkopplad mot. Switchar av äldre hårdvarutyper presenterar generellt sett mindre kundportsdata jämfört med modernare varianter. Mindre kundportsdata leder till sämre utförlighet i analyssvaret. Därför lämpar sig detta analysverktyg bättre mot de modernare switcharna som finns i Telias nät. / A tool for analyzing the connection status of Fibre to the X (FTTX) customers in Telia's network has been developed in the Python programming language. The system has a modular structure in which analysis functions of related types are bundled into their own modules, each stored as an individual code file. The system is designed to be easy to extend by adding more analysis modules in future projects.
To perform an analysis of a specific customer, the system retrieves technical data parameters from the switch the customer is connected to, and compares these parameters against predetermined values to find deviations. Simple Network Management Protocol (SNMP) and Telnet are the primary protocols used to retrieve data. Hypertext Transfer Protocol (HTTP) is used to transfer data as system input and output. The result of an analysis is sent as Extensible Markup Language (XML) back to the server that originally requested the analysis. The XML reply contains technical data parameters describing the customer's connection status and an analytical response based on those parameters. The amount of data presented in the XML response varies slightly depending on the type of switch the customer is connected to. Switches of older hardware types generally present less customer port data than more modern switches. Less customer port data leads to a less detailed analytical response, and therefore this analysis tool is better suited to the modern switches found in Telia's network.
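The XML reply described above can be sketched with the Python standard library as follows. The element and attribute names (`analysis`, `parameters`, `result`) are invented for illustration; the thesis does not specify the actual schema.

```python
import xml.etree.ElementTree as ET

def build_analysis_xml(port_data, deviations):
    """Wrap retrieved switch-port parameters and the analytical verdict in XML.

    port_data: dict of parameter name -> value read from the switch.
    deviations: list of human-readable deviation descriptions (empty = OK)."""
    root = ET.Element("analysis")
    params = ET.SubElement(root, "parameters")
    for name, value in port_data.items():
        ET.SubElement(params, "parameter", name=name).text = str(value)
    result = ET.SubElement(root, "result")
    result.text = "ok" if not deviations else "; ".join(deviations)
    return ET.tostring(root, encoding="unicode")
```

Keeping the raw parameters and the verdict in separate elements mirrors the reply described above: the requesting server gets both the technical data and an analytical response derived from it.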
480.
Evaluation of a Centralized Substation Protection and Control System for HV/MV Substation. Ljungberg, Jens. January 2018.
Today, conventional substation protection and control systems are of a widely distributed character. One substation can easily have as many as 50 data processing points that all perform similar algorithms on voltage and current data. There is also only limited communication between protection devices, and each device is only aware of the bay in which it is installed. With the intent of implementing a substation protection system that is simpler, more efficient and better suited to future challenges, Ellevio AB implemented a centralized system in a primary substation in 2015. It comprises five components that each handle one type of duty: data processing, communication, voltage measurement, current measurement and breaker control. Since its implementation, the centralized system has been in parallel operation with the conventional one, meaning that it performs station-wide data acquisition, processing and communication but is unable to trip the station breakers. The only active functionality of the centralized system is the voltage regulation. This work is an evaluation of the centralized system, studying its protection functionality, voltage regulation, fault response and output-signal correlation with the conventional system. It was found that the centralized system required the implementation of a differential protection function, as well as protection of the capacitor banks and the busbar coupling, to provide protection equivalent to that of the conventional system. The voltage regulation showed unsatisfactorily long regulation times, which may have been a result of low time resolution. The fault response and signal correlation were deemed satisfactory.
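Voltage regulation in a substation is typically done by stepping a transformer's tap changer only when the measured voltage leaves a deadband around the setpoint. The sketch below shows that general deadband logic; the setpoint, deadband width, and sign convention are illustrative assumptions, not the evaluated system's settings.

```python
def regulate(voltage_pu, tap, deadband=0.0125):
    """One regulation decision: return the new tap position.

    voltage_pu: measured busbar voltage in per-unit of the 1.0 pu setpoint.
    Steps one tap at a time; inside the deadband nothing happens, which
    avoids hunting but also bounds how fast large deviations are corrected."""
    error = voltage_pu - 1.0
    if error > deadband:
        return tap - 1   # voltage too high: tap down
    if error < -deadband:
        return tap + 1   # voltage too low: tap up
    return tap
```

Because each decision moves at most one tap step per measurement cycle, the time resolution of the voltage measurements directly bounds how quickly the regulator converges, which is consistent with the low time resolution suggested above as a cause of the long regulation times.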