61

Evaluation of a synchronous leader-based group membership protocol / Utvärdering av ett synkront ledarbaserat protokoll för gruppmedlemskap

Tengroth, Anton, Vong, Chi January 2018 (has links)
A group membership protocol is a mechanism that handles mobile nodes in a dynamic environment and maintains these nodes in a membership. Such nodes can, for instance, be the growing number of connected devices, which leads to more dynamic groups of devices in systems such as distributed systems. In this thesis, a synchronous leader-based group membership protocol (SLMP) is evaluated. Through simulations in which the SLMP handles nodes joining and crashing at different frequencies in a noisy environment, while varying the length of the timeout, the frequency of nodes joining and crashing, and the packet loss rate, we establish that all these parameters affect the performance of the protocol in different ways. When nodes join and crash at a high frequency it is wise to use a short timeout, but if the packet loss rate is also high, the performance of the protocol decreases. However, even with a high packet loss rate the protocol can still deliver a good service, provided that the timeout is long enough and the rate at which nodes join and crash is not too high.
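The abstract does not detail the protocol's internals, so as a rough, hypothetical illustration of how the timeout, join/crash rate, and packet loss interact, here is a minimal Python simulation of a leader that evicts members whose heartbeat acknowledgements have been missing for a full timeout; it is not the SLMP implementation evaluated in the thesis.

```python
import random

def simulate_membership(rounds=1000, n_nodes=10, timeout=3,
                        loss_rate=0.2, crash_prob=0.01, seed=1):
    """Toy leader-based membership: evict a member after `timeout`
    consecutive missed heartbeat acks (illustrative only)."""
    random.seed(seed)
    alive = {n: True for n in range(n_nodes)}   # ground truth
    missed = {n: 0 for n in range(n_nodes)}     # leader's bookkeeping
    members = set(alive)
    false_evictions = 0
    for _ in range(rounds):
        for n in list(members):
            if alive[n] and random.random() < crash_prob:
                alive[n] = False                # node crashes silently
            ack = alive[n] and random.random() >= loss_rate
            missed[n] = 0 if ack else missed[n] + 1
            if missed[n] >= timeout:            # leader evicts the node
                members.discard(n)
                if alive[n]:
                    false_evictions += 1        # a live node was evicted
    return len(members), false_evictions

print(simulate_membership(timeout=3, loss_rate=0.3))
print(simulate_membership(timeout=8, loss_rate=0.3))
```

In this toy model a short timeout combined with a high loss rate evicts live nodes more often, while a longer timeout tolerates the noise, mirroring the trade-off described in the abstract.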
62

An ALARP Stop-Test Decision for the Worst-Case Timing Characteristics of Safety-Critical Systems

Malekzadeh, Mahnaz January 2016 (has links)
Safety-critical systems are those in which failure can lead to loss of life or catastrophic damage to the environment. Timeliness is an important requirement in safety-critical systems; it relates to the notion of response time, i.e., the time a system takes to respond to stimuli from the environment. If the response time exceeds a specified time interval, a catastrophe might occur.

Stringent timing requirements make testing a necessary and important process, with which not only the correct system functionality but also the system's timing behaviour has to be verified. However, a key issue for testers is to determine when to stop testing: stopping too early may leave defects in the system, or lead to a catastrophe due to the high severity of undiscovered defects, while stopping too late wastes time and resources. To date, researchers and practitioners have mainly focused on the design and application of diverse testing strategies, leaving the critical stop-test decision a largely open issue, especially with respect to timeliness.

In the first part of this thesis, we propose a novel approach to making a stop-test decision in the context of testing the worst-case timing characteristics of systems. More specifically, we propose a convergence algorithm that informs the tester whether further testing would reveal significant new insight into the timing behaviour of the system, and if not, suggests that testing be stopped. The convergence algorithm looks at the observed response times achieved by testing and examines whether the Maximum Observed Response Time (MORT) has recently increased; when this is no longer the case, it investigates whether the distribution of response times has changed significantly. When no significant new information about the system is revealed during a given period of time, it is concluded, with some statistical confidence, that more testing of the same nature is not going to be useful. However, other testing techniques may still achieve significant new findings.

Furthermore, the convergence algorithm is evaluated based on the As Low As Reasonably Practicable (ALARP) principle, which is an underpinning concept in most safety standards. ALARP involves weighing benefit against the associated cost. In order to evaluate the convergence algorithm, it is shown that the sacrifice, here testing time, would be grossly disproportionate to the benefit attained, which in this context is any further significant increase in the MORT after stopping the test.

Our algorithm includes a set of tunable parameters. The second part of this work improves the algorithm's performance and scalability through the following steps: firstly, it is determined whether the parameters do affect the algorithm; secondly, the most influential parameters are identified and tuned. This process is based on the Design of Experiments (DoE) approach.

Moreover, the algorithm is required to be robust, which in this context means that the algorithm provides valid stop-test decisions across a required range of task sets. For example, if the number of tasks in the system varies from 10 to 50 and the tasks' periods change from the range [200 μs, 400 μs] to the range [200 μs, 1000 μs], the algorithm's performance should not be adversely affected. In order to achieve robustness, firstly, the task set parameters with the greatest influence on the algorithm's performance are identified using the Analysis of Variance (ANOVA) approach. Secondly, it is examined whether the algorithm is sound over the required ranges of those parameters, and if not, the situations in which the algorithm's performance significantly degrades are identified. These situations will be used in our future work to stress test the algorithm and to tune it so that it becomes robust across the required ranges.

Finally, the convergence algorithm was shown to be successful when applied to task sets with similar characteristics. However, we observed some experiments in which the algorithm could not suggest a proper stop-test decision in compliance with the ALARP principle, e.g., it stopped sooner than expected. Therefore, we examine whether the algorithm itself can be further improved, focusing on the statistical test it uses and whether another test would perform better.
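The abstract describes the convergence check but not the exact statistical test, so the following is a hedged Python sketch of one way such a stop-test rule could look: a two-sample Kolmogorov-Smirnov test from SciPy stands in for the unspecified test, and the window size and significance level are assumed parameters, not values from the thesis.

```python
import random
from scipy.stats import ks_2samp

def suggest_stop(response_times, window=200, alpha=0.05):
    """Illustrative stop-test rule in the spirit of the convergence algorithm:
    keep testing while the MORT still grows or the response-time distribution
    still shifts between consecutive windows."""
    if len(response_times) < 2 * window:
        return False                            # not enough observations yet
    recent = response_times[-window:]
    earlier = response_times[-2 * window:-window]
    if max(recent) > max(response_times[:-window]):
        return False                            # MORT increased recently
    _, p_value = ks_2samp(earlier, recent)      # distribution change between windows?
    return p_value >= alpha                     # no significant change: suggest stopping

# Synthetic response times (e.g. in microseconds), for illustration only.
random.seed(0)
times = [random.gauss(300, 20) for _ in range(1500)]
print(suggest_stop(times))
```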
63

Traveling Into The Future: The Development of a Currency Exchange Application

Dahlén, Olle, Urby, Fredrik, B. Fredriksson, Sebastian, Dabérius, Kevin, Qaderi, Idris, Wilson, Magnus, Axelsson, Martin, Ask Åström, Anton, Nilsson, Gustaf January 2016 (has links)
This thesis examines the experiences and results from the development of the web application Cash-on-Arrival, an online currency exchange service where customers can order their currency exchange ahead of their travel to save money compared to exchanging their money upon arrival at their destination. The thesis mainly focuses on answering the research questions concerning how a secure and usable web application can be developed and how it can save money for travelers. In order to establish what design and features customers preferred, market research in the form of a survey and a market plan was carried out in the pre-study phase. The thesis then presents how features and solutions were implemented. The result of the development is the web application Cash-on-Arrival, where the focus was on the security and usability of the application. The conclusions drawn point to the need for a secure site with a quick and efficient buying process to encourage users to use the service. The research question concerning the currency exchange optimization has not been fully answered.
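As a purely illustrative back-of-the-envelope example of the savings argument (the rates, fees, and amount below are assumptions, not figures from the thesis), ordering currency ahead at a better rate can be compared to exchanging on arrival:

```python
def exchange_cost(amount_foreign, rate_home_per_foreign, fee_home=0.0):
    """Cost in home currency of obtaining `amount_foreign` (hypothetical fees)."""
    return amount_foreign * rate_home_per_foreign + fee_home

# Assumed figures: 500 units of foreign currency, a pre-ordered rate of 9.20
# with a 25 fee versus an airport-kiosk rate of 9.65 with a 60 fee.
pre_ordered = exchange_cost(500, rate_home_per_foreign=9.20, fee_home=25)
on_arrival = exchange_cost(500, rate_home_per_foreign=9.65, fee_home=60)
print(f"pre-ordered: {pre_ordered:.2f}  on arrival: {on_arrival:.2f}  "
      f"saving: {on_arrival - pre_ordered:.2f}")
```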
64

Authentication system based on hand-arm-movements

Lindström, Johan, Sonnert, Adrian January 2016 (has links)
This study treats behavioral biometric authentication, with a focus on creating an application that uses hand-arm movements to identify users. The system is modelled to achieve a minimal false rejection rate (FRR) and false acceptance rate (FAR). Experiments in which several test subjects performed hand-arm movements using our device were carried out in order to gauge the FRR and FAR of the system. The FRR and FAR achieved were 35% and 15.8%, respectively. The study concludes that hand-arm movements may be useful for authentication, but further research is required.
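For reference, FRR and FAR are simple ratios over authentication attempts; the sketch below shows the standard definitions with made-up attempt counts rather than the thesis's data.

```python
def frr_far(false_rejections, genuine_attempts, false_acceptances, impostor_attempts):
    """False rejection rate and false acceptance rate as fractions of attempts."""
    return false_rejections / genuine_attempts, false_acceptances / impostor_attempts

# Hypothetical counts: 7 of 20 genuine attempts rejected, 3 of 19 impostor attempts accepted.
frr, far = frr_far(7, 20, 3, 19)
print(f"FRR = {frr:.1%}, FAR = {far:.1%}")
```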
65

Security in software : How software companies work with security during a software development process / Säkerhet i mjukvara : Hur mjukvaruföretag arbetar med säkerhet under en mjukvaruutvecklingsprocess

Lindmark, Fanny, Kvist, Hanna January 2016 (has links)
Motivated by the recent interest in privacy and software security, this study examines how a number of software development companies work during the development process to develop software that is as secure as possible. The study is based on four interviews with four software companies located in Linköping, Sweden. The interviews followed a semi-structured format to make it possible to compare the companies' answers with each other. This structure was chosen to give each company the liberty to express what they valued and considered important during a software development process. The aim was to analyze how, and whether, these companies work with security when developing software, and to see what differences and similarities could be established. We found differences between the companies' perspectives on security and their ways of working. Some valued secure software more than others and took more measures to ensure it. We also found some similarities in their views on the importance of secure software and in their ways of working with it. The differences and similarities were related to the size of the companies, their resources, the types of products they develop, and the types of clients they have.
66

Ransomwarehotet ur kommunalt perspektiv : Erfarenheter, utmaningar, förebyggande åtgärder och incidenthantering / The ransomware threat from a Swedish municipal point of view : Experiences, challenges, security measures and incident management

Larsson, Petter January 2016 (has links)
Ransomware is a type of malicious software that takes a victim's computer hostage, for example by locking the entire computer or by encrypting (locking and rendering unusable) files such as photos and documents, and then demands a ransom to unlock the computer or decrypt (restore) the files. Ransomware has become increasingly common in recent years and is now widespread in Sweden as well. One example of its spread in Sweden is the emails that have been sent to Swedish users since September 2015, claiming to come from Postnord. When recipients opened the email and tried to download the consignment note it claimed to contain, their personal files were encrypted by ransomware and a ransom was demanded for the files to be returned. This study examines the impact of this unpleasant type of malware on the IT work of municipalities in Skaraborg, the experiences and challenges the municipalities' IT departments associate with ransomware, and how they handle the threat. Interviews were used to collect the information. The results show that ransomware has hit most of the municipalities, but that the IT departments have been able to restore most of what was lost from backups. The results also show that a major challenge for the municipalities is educating their users in how this type of attack can be recognized, and thereby avoiding infection. Finally, the results show that the problem may grow as a new EU data protection regulation comes into force.
67

The Indie Developer’s guide to immersive tweens and animation : What you need to know as a programmer to animate and increase immersion

Öhrström, Fredrik January 2016 (has links)
Some games grab your attention more than others; some do it so well that people lose track of time and their surroundings. Why does this happen, and how can the effect be harnessed for your own game? This report studies what immersion is and subjects related to it, such as richness and flow, and then what kinds of simple animations and effects that build on these concepts can be created in a 2D puzzle game. Most of the effects, animations and ideas can probably be carried over to other game types without much difficulty if you want a more immersive product. Finally, the player experience is tested with two surveys, PANAS and IEQ, to see whether players were immersed. The results go over what kinds of effects were implemented, and the surveys showed that most players were somewhat immersed and that they enjoyed the graphics of the game.
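As a minimal illustration of the kind of simple animation the report refers to (this is not code from the thesis), a tween is just an interpolation between two values over time, usually shaped by an easing function:

```python
def ease_in_out_cubic(t):
    """Smooth easing: slow start, fast middle, slow end, for t in [0, 1]."""
    return 4 * t ** 3 if t < 0.5 else 1 - (-2 * t + 2) ** 3 / 2

def tween(start, end, t, easing=ease_in_out_cubic):
    """Interpolate from start to end at normalized time t using an easing curve."""
    return start + (end - start) * easing(t)

# Example: slide a sprite's x position from 0 to 100 over 10 frames.
frames = 10
for frame in range(frames + 1):
    print(f"frame {frame:2d}: x = {tween(0.0, 100.0, frame / frames):6.2f}")
```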
68

Human resource matching through query augmentation for improving search context

Tran, Huy January 2016 (has links)
The objective of this thesis is to investigate how to match a company's human resources with job assignments received from clients. A common problem is that it is difficult for computers to distinguish the semantic context a word is in; for words with multiple interpretations it is hard to determine which meaning is the correct one in a given context. The proposed solution is to use ontologies to implement query augmentation that improves the definition of the context by letting users add suggestions of relevant words. The intuition is that by incrementally adding words, the context narrows, making it easier to search for any consultant matching a specific assignment. The query augmentation is realized in a web application created in NodeJS and AngularJS. The experiments measure the performance of the query augmentation in terms of precision, recall and F-measure. The thesis also looks into how to store document-based résumés in the .docx file format and how to enable proper querying over the database of résumés. The Apache-based frameworks Solr and Lucene, with their inverted indexing and support for HTTP requests, are used in this thesis to solve this problem. The results indicate that the query augmentation is somewhat too restrictive, because it only permits AND conditions. That said, the query augmentation was able to narrow down the search context. Future work would include adding additional query conditions and expanding the visualization of the query augmentation.
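To make the AND-based narrowing and the evaluation metrics concrete, here is a small hedged sketch; the function names and toy data are hypothetical, and the Solr-style query is only built as a string rather than sent to a server.

```python
def augment_query(field, terms):
    """Build a Solr-style query string that ANDs all user-added terms for one field."""
    return " AND ".join(f'{field}:"{t}"' for t in terms)

def precision_recall_f1(retrieved, relevant):
    """Standard set-based information-retrieval metrics."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(augment_query("skills", ["java", "android", "agile"]))
# skills:"java" AND skills:"android" AND skills:"agile"

# Toy evaluation: consultants returned by the augmented query vs. the known matches.
print(precision_recall_f1(retrieved=["c1", "c3", "c7"], relevant=["c1", "c2", "c3"]))
```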
69

Static Code Analysis of C++ in LLVM

Kvarnström, Olle January 2016 (has links)
Just like the release of the Clang compiler, the advent of LLVM in the field of static code analysis already shows great promise. When given the task of covering rules not ideally covered by a commercial contender, the end result is not only overwhelmingly positive; the implementation time is also only a fraction of what was initially expected. While LLVM's support for sophisticated AST analysis is remarkable, and is the main reason for these positive results, its support for data flow analysis is not yet up to par. Despite this, as well as a lack of thorough documentation, LLVM should already be a strong rival to any commercial tool today.
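As an illustration of the kind of AST-based rule checking the abstract alludes to (this is not the thesis's implementation, and the rule and its threshold are made up), the Python libclang bindings can walk a translation unit's AST and flag simple violations; libclang must be installed and discoverable for this to run.

```python
import clang.cindex as ci

SOURCE = """
int add(int a, int b) { return a + b; }
int messy(int a, int b, int c, int d, int e, int f) { return a; }
"""

def check_parameter_count(tu, max_params=5):
    """Flag function declarations exceeding an (assumed) parameter limit."""
    findings = []
    for cursor in tu.cursor.walk_preorder():
        if cursor.kind in (ci.CursorKind.FUNCTION_DECL, ci.CursorKind.CXX_METHOD):
            params = list(cursor.get_arguments())
            if len(params) > max_params:
                findings.append((cursor.spelling, cursor.location.line, len(params)))
    return findings

index = ci.Index.create()
tu = index.parse("example.cpp", args=["-std=c++17"],
                 unsaved_files=[("example.cpp", SOURCE)])
for name, line, count in check_parameter_count(tu):
    print(f"{name} at line {line}: {count} parameters exceed the assumed limit")
```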
70

Semi-automated annotation of histology images : Development and evaluation of a user friendly toolbox / Semi-automatisk uppmärkning av histologibilder : Utveckling och utvärdering av en användarvänlig verktygslåda

Sanner, Alexander, Petré, Fredrik January 2016 (has links)
Image segmentation has many areas of application, one of them being in medical science. When segmenting an image, there are many automatic approaches that generally do not let the user change the outcome. This is a problem if the segmentation is badly done. On the other hand, there is the manual approach, which is slow and cumbersome since it relies heavily on the user's effort. This thesis presents a semi-automated approach that allows user interaction combined with computer-assisted segmentation, realized in a hi-fi prototype. The prototype made use of SLIC superpixels, which the user could merge through interactions to create segments. The prototype was iteratively developed and tested to ensure high usability and user satisfaction. The final prototype was also tested quantitatively to examine whether the process of segmenting images had been made more efficient compared to a manual approach. It was found that users obtained a better result with the prototype than with the manual approach when the same amount of time was spent segmenting. Although users could not segment images faster with the prototype than with the manual process, it is believed that it could be made more efficient with superpixels that follow the natural borders of the image better. / Image segmentation has many areas of application, one of them being medical research. There are many automatic algorithms for segmenting images that generally do not let the user change the result of the segmentation, which is problematic when the segmentation is badly done. On the other hand, there are fully manual approaches that are more cumbersome and time-consuming since they demand more of the user. This work presents a semi-automated approach that allows user interaction together with computer assistance for segmenting images. The semi-automated approach was realized in a hi-fi prototype that uses SLIC superpixels. The superpixels can be merged through user interaction, and the merged superpixels constitute the segments. The prototype was developed and tested iteratively to achieve high usability and satisfied users. The prototype was also tested quantitatively to examine whether the segmentation process had become more time-efficient compared to already existing software. It was found that users achieved higher precision in the segmentation when using the prototype compared to the existing software when the same amount of time was spent. Although the segmentation was shown not to be more time-efficient, it was believed that the time efficiency of the prototype could be increased by making the superpixels follow the natural regions of the image better.
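For readers unfamiliar with SLIC superpixels, the snippet below is an illustrative over-segment-then-merge sketch using scikit-image; it is not the thesis's toolbox, the example image is a stand-in for a histology slide, and the selected superpixel IDs are made up.

```python
import numpy as np
from skimage import data, segmentation

# Over-segment an example image into SLIC superpixels.
image = data.astronaut()                       # stand-in for a histology image
labels = segmentation.slic(image, n_segments=400, compactness=10, start_label=1)

# "User interaction": pretend the user clicked these superpixels to form one segment.
selected_ids = [12, 13, 27]                    # hypothetical superpixel IDs
segment_mask = np.isin(labels, selected_ids)   # boolean mask of the merged segment

print(labels.max(), "superpixels;", int(segment_mask.sum()), "pixels in the merged segment")
```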
