1

Increasing productivity in software testing : Visualizing and managing arbitrarily structured messages and message queues to increase productivity and usability

Pedersen, Jakob January 2015 (has links)
This thesis investigates how data sources, message queues, and messages can be generalized in such a way that they allow for easy configuration and setup in a front-end visualization application. It also covers increasing the productivity of the application testers and the usability of the user interface. An analysis of one of Dewire’s test tools gave insightful information to identify what was needed in the proof-of-concept application and resulted in a list of requirements. The information gained from Dewire also indicated what technologies to use and resulted in a research phase. Different design proposals were presented and one was chosen to be implemented. An agile approach was chosen as the method for the implementation phase to emphasize flexibility; it was set to be iterative and carried out in close communication with people at Dewire. The implementation resulted in a proof-of-concept application with a GUI that allows users to configure data sources, message queues, and messages. The messages are uploaded in XML format and the GUI allows for modification through HTML forms which mirror the XML files. The user is also able to send these messages as JMS messages. Responses to these JMS messages are also shown in the GUI and saved in a database. The results suggest that accomplishing the common task of selecting a connection tree and sending a message takes 45% less time in the proof-of-concept application compared to Dewire’s tool. Accomplishing the common task of altering a message and sending it takes 79% less time in the proof-of-concept application compared to Dewire’s tool. The results also suggest that theory of computer-human interaction has been applied during the implementation to accomplish a usable UI. It is assessed that data sources, message queues, and messages can be easily configured in a GUI. Further, it is assessed that productivity has been increased compared to the former tool used.
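The abstract does not show how the JMS send is wired up, so the following is only a minimal sketch of sending an XML payload as a JMS text message. It assumes ActiveMQ's client library; the broker URL, queue name, and payload are hypothetical and not taken from the thesis.

    // Minimal sketch: send an XML string as a JMS TextMessage (assumes ActiveMQ client).
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class JmsXmlSender {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("test.messages");   // hypothetical queue name
            MessageProducer producer = session.createProducer(queue);

            // The GUI described in the thesis edits XML via HTML forms; here the
            // payload is just a hard-coded XML string for illustration.
            TextMessage message = session.createTextMessage("<order><id>42</id></order>");
            producer.send(message);
            connection.close();
        }
    }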
2

Implementace služby poskytující frontu zpráv v technologii cloud computing / Implementation of Message Queue as a Service in Cloud Computing

Hanus, Tomáš January 2018 (has links)
This thesis discusses different ways of communication between components of a distributed system. It describes communication based on message exchange while also covering other alternatives, and it adds details about various models of message exchange, various message types, and various specifications. The commercial tools ActiveMQ, RabbitMQ, and Kafka are presented, with special emphasis on how these tools exchange messages, their scalability options, and other characteristics. A web service is designed according to the described features; its main purpose is the management and monitoring of a tool of the user's choice and easy replacement of this tool with another one. The designed application is implemented in the Kotlin language for the selected tool, RabbitMQ. The implemented solution allows a simple exchange of messages through a REST API.
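As a rough companion to the abstract, the sketch below publishes one message to RabbitMQ. The thesis implements its service in Kotlin; for consistency with the other sketches in this listing the official RabbitMQ Java client is used instead, and the host and queue name are hypothetical.

    // Minimal sketch: declare a queue and publish one message with the RabbitMQ Java client.
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;

    public class RabbitPublish {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                      // hypothetical broker host
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {
                // durable queue, not exclusive, not auto-delete, no extra arguments
                channel.queueDeclare("demo-queue", true, false, false, null);
                channel.basicPublish("", "demo-queue", null,
                        "hello from the queue service".getBytes(StandardCharsets.UTF_8));
            }
        }
    }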
3

Scalability of Topic Map Systems

Hoyer, Marcel 26 February 2018 (has links)
The purpose of this thesis was to find approaches to solving major performance and scalability issues for Topic Maps-related data access and the merging process, especially regarding the management of multiple, heterogeneous topic maps with different sizes and structures. Hence the scope of the research was mainly focused on the Maiana web application with its underlying MaJorToM and TMQL4J back-end.
4

Aktiv felhantering av loggdata / Active error handling of log data

Åhlander, Mattias January 2020 (has links)
The main goal of this project has been to investigate how a message queue can be used to handle error codes in log files more actively. The project has followed the Design Science Research Methodology for the development and implementation of the solution. A model of the transaction system was developed and emulated in newly developed applications. Two experiments were performed: the first tested a longer run time with intervals between messages, and the second measured how long it takes to send 20 000 messages. The first experiment showed that the message queue was able to handle messages sent over two hours. The second experiment showed that the system took 14 minutes and 45 seconds to send and handle all messages, which gives a high throughput of 22.5 messages per second without any messages being lost. The implemented consumer application received all messages and successfully counted the number of error codes in the received data. The experiments that have been carried out demonstrate that a message queue can be implemented to handle error codes in log files more actively. Future work may include an evaluation of the security of the system, comparisons of performance against other message queues, running the experiments on more powerful computers, and an implementation of machine learning to classify the log data.
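As a sanity check on the reported figures, 20 000 messages in 14 minutes and 45 seconds (885 s) is roughly 22.6 messages per second, consistent with the stated 22.5. The sketch below illustrates the consumer-side counting of error codes; it stands in for the broker with an in-memory queue, since the abstract does not name the message queue product used, and the log line format is hypothetical.

    // Toy consumer: take log lines from a queue and tally error codes.
    import java.util.Map;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    public class ErrorCodeCounter {
        // The broker is modelled with an in-memory BlockingQueue; the system in the
        // thesis uses an actual message queue between producer and consumer.
        private static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000);
        private static final Map<String, AtomicLong> counts = new ConcurrentHashMap<>();

        public static void main(String[] args) throws InterruptedException {
            // Hypothetical log lines with an "ERROR <code>" pattern.
            queue.put("2020-05-01 12:00:01 ERROR 500 payment failed");
            queue.put("2020-05-01 12:00:02 ERROR 404 resource missing");
            queue.put("2020-05-01 12:00:03 ERROR 500 payment failed");

            while (!queue.isEmpty()) {
                String line = queue.take();
                for (String token : line.split("\\s+")) {
                    if (token.matches("\\d{3}")) {             // naive three-digit error-code match
                        counts.computeIfAbsent(token, k -> new AtomicLong()).incrementAndGet();
                        break;
                    }
                }
            }
            counts.forEach((code, n) -> System.out.println("error " + code + ": " + n));
        }
    }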
5

ParCam : Applikation till Android för tolkning av parkeringsskyltar / ParCam : An Android application for interpreting parking signs

Forsberg, Tomas January 2020 (has links)
It is not always easy to accurately interpret a parking sign. The driver is expected to keep track of what every road sign, direction, prohibition, and amendment means, both by itself and in combination with the others. In addition, the driver must also keep track of the time, date, whether it is a holiday, the week number, etc. This can make the driver unsure of the rules, or lead to interpreting the rules incorrectly, which can result in hefty fines or even a towed vehicle. By developing a mobile application that can analyze a photograph of a parking sign and quickly give the driver the verdict, the interpretation process can be made easy. The purpose of this study has been to examine available technology within image and text analysis and then develop a prototype of an Android application that can interpret a photograph of a parking sign and quickly give the correct verdict with the help of said technology. The constructed prototype was evaluated partly by user tests, to evaluate the application's usability, and partly by functionality tests, to evaluate the accuracy of the analysis process. Based on the results from the tests, a conclusion was drawn that the application gave a very informative and clear verdict, which was correct most of the time, but it ran into problems with certain signs and under more demanding environmental circumstances. The tests also showed that the interface was perceived as easy to understand and use, though less interaction from the user was desired. There is great potential for future development of ParCam, where the focus will be on increasing the automation of the process.
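The abstract does not describe the rule engine itself, so the following is only a toy sketch of the interpretation step that would follow text recognition: a single sign line (hours on weekdays) checked against a given date and time. The rule model and values are invented for illustration and do not reflect the thesis's actual implementation.

    // Toy sketch of evaluating one parking-sign rule against a point in time (Java 16+ for records).
    import java.time.DayOfWeek;
    import java.time.LocalDateTime;
    import java.time.LocalTime;

    public class ParkingRuleCheck {
        // Simplified model of one line on a parking sign, e.g. "8-18" on weekdays.
        // Real signs combine several such lines plus holidays, week numbers, and other conditions.
        record Rule(DayOfWeek from, DayOfWeek to, LocalTime start, LocalTime end) {
            boolean restricts(LocalDateTime t) {
                DayOfWeek d = t.getDayOfWeek();
                boolean dayInRange = d.getValue() >= from.getValue() && d.getValue() <= to.getValue();
                LocalTime time = t.toLocalTime();
                return dayInRange && !time.isBefore(start) && time.isBefore(end);
            }
        }

        public static void main(String[] args) {
            // "8-18" on weekdays: restriction applies Monday-Friday 08:00-18:00 (illustrative only).
            Rule weekdays = new Rule(DayOfWeek.MONDAY, DayOfWeek.FRIDAY,
                                     LocalTime.of(8, 0), LocalTime.of(18, 0));
            LocalDateTime now = LocalDateTime.of(2020, 5, 4, 9, 30);   // a Monday morning
            System.out.println(weekdays.restricts(now) ? "Restriction applies" : "No restriction now");
        }
    }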
6

Evaluation of push/pull based load balancing in a distributed logging environment / Utvärdering av lastbalanseringsmetoder i en distribuerad loggmiljö

Nilstadius, Gustaf, Duda, Robin January 2016 (has links)
This report compares the characteristics of push/pull load balancing techniques used in the context of a logging system. The logging system is expected to handle a large volume of events. The load balancing techniques are evaluated with a focus on throughput during high load. The testing scenarios include the use of a traditional load balancer (push-based) and the use of messaging queues (pull-based and indirectly context aware) in its place. The ultimate goal of the report is to determine the feasibility of using a messaging queue rather than a traditional load balancer in a distributed logging system. Tests were conducted measuring the throughput of multiple setups with different load balancers. The conclusion of this report is that both messaging queues and load balancing are equally feasible in a logging context.
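To make the push/pull distinction concrete, the toy sketch below contrasts a push-style round-robin dispatcher with pull-style workers draining a shared queue. It is not the benchmark used in the report; the event names and worker counts are arbitrary.

    // Toy contrast of push (dispatcher assigns) vs pull (workers take when free).
    import java.util.List;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class PushVsPull {
        public static void main(String[] args) throws InterruptedException {
            List<String> events = List.of("e1", "e2", "e3", "e4", "e5", "e6");

            // Push: a dispatcher assigns events round-robin, regardless of how busy each worker is.
            int workers = 2;
            for (int i = 0; i < events.size(); i++) {
                System.out.println("push: worker-" + (i % workers) + " <- " + events.get(i));
            }

            // Pull: workers take events from a shared queue only when they are free,
            // which is what a broker-backed consumer does.
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(events.size());
            for (String e : events) queue.put(e);
            Runnable worker = () -> {
                String e;
                while ((e = queue.poll()) != null) {
                    System.out.println("pull: " + Thread.currentThread().getName() + " <- " + e);
                }
            };
            Thread w0 = new Thread(worker, "worker-0");
            Thread w1 = new Thread(worker, "worker-1");
            w0.start(); w1.start();
            w0.join(); w1.join();
        }
    }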
7

Push-based low-latency solution for Tracked Resource Set protocol : An extension of Open Services for Lifecycle Collaboration specification

Ning, Xufei January 2017 (has links)
Currently, the development of embedded systems requires a variety of software and tools. Moreover, most of this software and these tools are standalone applications; they are unconnected and their data can be inconsistent and duplicated. This increases both the heterogeneity and the complexity of the development environment. To address this situation, tool integration solutions based on Linked Data are used, as they provide scalable and sustainable integration across different engineering tools. Different systems can access and share data by following the Linked-Data-based Open Services for Lifecycle Collaboration (OSLC) specification. OSLC uses the Tracked Resource Set (TRS) protocol to enable a server to expose a resource set and to enable a client to discover resources in that set. Currently, the TRS protocol relies on client pull for the client to update its data and synchronize with the server. However, this method is inefficient and time consuming. Moreover, high-frequency pulling may introduce an extra burden on the network and server, while low-frequency pulling increases the system's latency as seen by the client. A push-based low-latency solution for the TRS protocol was implemented using Message Queue Telemetry Transport (MQTT) technology. The TRS server uses MQTT to push the update patch (called a ChangeEvent) to the TRS client, and the client then updates its content according to this ChangeEvent. As a result, the TRS client synchronizes with the TRS server in real time. Furthermore, a TRS adaptor was developed for Atlassian's JIRA, a widely used project and issue management tool. This JIRA-TRS adaptor provides a TRS provider with the ability to share data via JIRA with other software or tools that utilize the TRS protocol. In addition, a simulator was developed to simulate the operations in JIRA over a period of time (specifically the create, modify, and delete actions regarding issues) and acts as a validator to check whether the data in the TRS client matches the data in JIRA. An evaluation of the push-based TRS system shows an average synchronization delay of around 30 milliseconds. This is a huge improvement compared with the original TRS system, which synchronized every 60 seconds.
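A minimal sketch of the client side of such a push-based design is shown below, assuming the Eclipse Paho MQTT client for Java; the broker URL and topic name are hypothetical, and the ChangeEvent payload is simply printed rather than applied to a local resource set.

    // Minimal sketch: subscribe to ChangeEvent pushes over MQTT (assumes Eclipse Paho v3 client).
    import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
    import org.eclipse.paho.client.mqttv3.MqttCallback;
    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class TrsChangeEventSubscriber {
        public static void main(String[] args) throws Exception {
            // Broker URL and topic are hypothetical; the thesis does not publish them.
            MqttClient client = new MqttClient("tcp://localhost:1883", MqttClient.generateClientId());
            client.setCallback(new MqttCallback() {
                @Override public void connectionLost(Throwable cause) {
                    System.err.println("connection lost: " + cause);
                }
                @Override public void messageArrived(String topic, MqttMessage message) {
                    // In the push-based design, each message carries a ChangeEvent patch
                    // that the TRS client applies to its local copy of the resource set.
                    System.out.println("ChangeEvent on " + topic + ": " + new String(message.getPayload()));
                }
                @Override public void deliveryComplete(IMqttDeliveryToken token) { }
            });
            client.connect();
            client.subscribe("trs/change-events");
        }
    }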
8

Network protocol for distribution and handling of data from JAS 39 Gripen / Nätverksprotokoll för distribuering och hantering av data från JAS 39 Gripen

Karlsson, Jonathan January 2015 (has links)
On board the JAS 39 Gripen aircraft, a measuring system, the Data Acquisition System (DAS), sends sensor data to a server on the ground. In this master thesis, a unified API for distribution and handling of the sensor data is designed and implemented. The work was carried out at Saab Aeronautics, Linköping, during 2014. During flights, the engineers at Saab need to monitor different sensors in the aircraft, including the exact commands of the pilots. All that data is serialized and sent via radio link to a server at Saab. The current data distribution solution includes several clients that need to connect to the server. Each client has its own connection protocol, making the system complex and difficult to maintain. An API is needed in order to make the clients connect in a unified manner. This would also enable future clients to implement the API and start receiving sensor data from the server. The research conducted in the thesis project was centered on the different choices that exist for designing such an API. The question that needed answering was: how can an existing complex system be replaced by a publish-subscribe system, and what would the benefits be in terms of latency and flexibility of the system? The design would have to be flexible enough to support multiple clients. The research question was answered with a design utilizing ZMQ, pthreads, and a design pattern. The result is a flexible system that was sufficiently fast for the requirements set at Saab and open to future extensions. The thesis work also included designing a unified API with requirements on latency and functionality. The resulting API was designed using the publish-subscribe design pattern, the network library Zero Message Queue (ZMQ), and the threading library pthreads. The resulting system supports multiple coexisting servers and clients that request sensor data. A new feature is that the clients can send calculations performed on samples to other clients. To demonstrate that the solution provides a unified framework, two existing clients and the server were developed with the proposed API. To test the latency requirements, tests were performed in the control room at Saab.
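The sketch below shows the basic ZeroMQ publish-subscribe pattern such an API builds on, using the JeroMQ binding for consistency with the other examples in this listing; the port, topic prefix, and sample value are hypothetical, and the threading layer (pthreads in the thesis) is omitted.

    // Minimal ZeroMQ pub-sub sketch (assumes the JeroMQ binding).
    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class SensorPubSub {
        public static void main(String[] args) throws InterruptedException {
            try (ZContext context = new ZContext()) {
                // Publisher standing in for the ground server that fans out DAS samples.
                ZMQ.Socket pub = context.createSocket(SocketType.PUB);
                pub.bind("tcp://*:5556");                       // hypothetical port

                // Subscriber standing in for a client interested in one sensor topic.
                ZMQ.Socket sub = context.createSocket(SocketType.SUB);
                sub.connect("tcp://localhost:5556");
                sub.subscribe("altitude".getBytes(ZMQ.CHARSET));

                Thread.sleep(200);                              // allow the subscription to propagate
                pub.send("altitude 10432.7");
                System.out.println("received: " + sub.recvStr());
            }
        }
    }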
9

A Framework for Interoperability on the United States Electric Grid Infrastructure

Laval, Stuart 01 January 2015 (has links)
Historically, the United States (US) electric grid has been a stable, one-way power delivery infrastructure that supplies centrally generated electricity to a predictable consuming demand. However, the US electric grid is now undergoing a huge transformation from a simple and static system to a complex and dynamic network, which is starting to interconnect intermittent distributed energy resources (DERs), portable electric vehicles (EVs), and load-altering home automation devices that create bidirectional power flow or stochastic load behavior. For this grid of the future to effectively embrace the high penetration of these disruptive and fast-responding digital technologies without compromising its safety, reliability, and affordability, plug-and-play interoperability within the field area network must be enabled between operational technology (OT), information technology (IT), and telecommunication assets, so that they integrate seamlessly and securely into the electric utility's operations and planning systems in a modular, flexible, and scalable fashion. This research proposes a potential approach to simplifying the translation and contextualization of operational data on the electric grid without routing it to the utility datacenter for a control decision. The methodology integrates modern software technology from other industries, along with utility industry-standard semantic models, to overcome information siloes and enable interoperability. By leveraging industrial engineering tools, a framework is also developed to help devise a reference architecture and a use-case application process that is applied and validated at a US electric utility.
