31

Cookie-varning på steroider : Ramverk på samtyckestjänst för webbsidor enligt GDPR / Cookie warning on steroids : Framework for consent service on web pages according to GDPR

Mattsson, Jonas, Öberg, Axel January 2018 (has links)
The objective of this study is to develop a framework of design principles that can guide the construction of usable and GDPR-compliant consent solutions. With the forthcoming implementation of the GDPR (25 May 2018), new methods are needed to manage consent on web pages that in any way handle personal data. To provide a stable foundation for the work, theory was compiled on the subject area: the theoretical frame of reference consists of the GDPR itself, including Privacy by Design and Privacy by Default, together with design and usability principles. The approach and method for developing the framework are based on the design process of Arvola (2014). Within that process, qualitative data was collected from a company and from a targeted audience. The interviewed company is Meramedia, which was developing a consent solution of its own during our work and was therefore a relevant source of information. The data collection with the targeted audience of potential users contributed an increased understanding of how users feel and think about this type of solution, including questions and concerns regarding personal data management and design aspects. The empirical material was then analyzed against the theory, allowing the framework to be updated with new content and new principles that arose during the data collection, in order to answer the purpose of the study. The conclusion is that a framework comprising 11 principles would facilitate the work of developing a consent solution. The principles are: Suitable Reduction; Response; Logic & Unity; Adaptation; Generality & Reuse; Divergence; Invitation; Simplicity & Efficiency; Legal, Correct & Open; Data Limitations; and Predefined Choices. The meaning of each principle is presented in the conclusion, which also shows a design proposal based on the framework, illustrating the importance of all the principles. The work is rounded off with a reflection on the study and on future work in the area. The GDPR takes effect on 25 May 2018, and new challenges in consent management may well emerge once the law is in force, which will probably open up new perspectives.
32

Behave and PyUnit : A Testers Perspective

Borgenstierna, Johan January 2018 (has links)
A comparison between two testing frameworks, Behave and PyUnit, is presented. PyUnit is TDD-driven, while Behave is BDD-driven. The SBTS method shows that Behave enforces better software quality than PyUnit in the maintainability branch. The Gherkin language used in Behave is easy to read and widens the pool of potential testers. However, Behave's tests are not as fine-grained in their coverage as PyUnit's, since Behave is limited to the behaviour of the system.
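To illustrate the contrast the abstract draws, here is a minimal sketch of the same requirement tested in both styles; the toy `add` function, the scenario and the step names are illustrative assumptions, not material from the thesis:

```python
# The PyUnit (unittest) test is runnable as-is; the Behave part is
# shown in comments because it belongs in a standard behave project
# layout (features/ directory) and assumes the `behave` package.

import unittest


def add(a, b):
    return a + b


# --- PyUnit (TDD): assertions phrased at the code level ---
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)


# --- Behave (BDD): the same requirement phrased in Gherkin ---
# features/addition.feature:
#     Feature: Addition
#       Scenario: Add two numbers
#         Given the numbers 2 and 3
#         When they are added
#         Then the result is 5
#
# features/steps/addition_steps.py:
#     from behave import given, when, then
#
#     @given("the numbers {a:d} and {b:d}")
#     def step_numbers(context, a, b):
#         context.a, context.b = a, b
#
#     @when("they are added")
#     def step_add(context):
#         context.result = context.a + context.b
#
#     @then("the result is {expected:d}")
#     def step_result(context, expected):
#         assert context.result == expected

if __name__ == "__main__":
    unittest.main()
```

The Gherkin scenario reads as plain English, which is the readability advantage the abstract attributes to Behave; the unittest version speaks directly in terms of the code under test.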
33

Social Engineering : En kvalitativ studie om hur organisationer hanterar social engineering / Social engineering : A qualitative study of how organizations handle social engineering

Loggarfve, Robin, Rydell, Johan January 2018 (has links)
Traditionally, weaknesses in technology are exploited to gain unauthorized access to information, but there are other sophisticated methods and approaches that can be more effective. Social engineering is the art of deceiving, manipulating and exploiting social aspects; the method targets the weakest link in information security, the human factor. The purpose of the study is to investigate how organizations handle social engineering. It also aims to highlight and inform about the subject, with the ambition of raising awareness. The study was conducted together with three organizations, with which qualitative interviews were held. It examined the organizations' awareness, the most common social engineering attacks, and preventive work. The results show that awareness was good in the IT departments but weaker in the organizations' other departments. The main threats social engineering poses to organizations are economic loss and information leakage, and the most common approaches proved to be phishing and spear phishing. Finally, the study concludes that education and dissemination of information is the most successful method of preventing social engineering. It finds that no complete protection exists and that more education is required to raise awareness of social engineering. A security system is no stronger than its weakest link, and therefore more resources should be devoted to preventive work.
34

Real-time Vision-based Fall Detection : with Motion History Images and Convolutional Neural Networks

Haraldsson, Truls January 2018 (has links)
Falls among the elderly are a major health concern worldwide due to their serious consequences, such as higher mortality and morbidity. As the elderly are the fastest growing age group, an important challenge for society is to provide support in their everyday activities. Given the social and economic advantages of automatic fall detection, such systems have attracted attention from the healthcare industry. The emerging trend of smart homes and the increasing number of cameras in our daily environments create an excellent opportunity for vision-based fall detection. In this work, an automatic real-time vision-based fall detection system is presented. It uses motion history images to capture temporal features in a video sequence; spatial features are then extracted efficiently for classification using a depthwise convolutional neural network. The system is evaluated on three public fall detection datasets and compared to other state-of-the-art approaches.
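As a sketch of the motion-history-image idea the abstract builds on: each new frame brightens pixels where motion occurred and decays the rest, so the MHI encodes recent motion as intensity. This minimal NumPy version uses illustrative parameter values and random frames, not the thesis's actual pipeline:

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=30, threshold=25):
    """One motion-history-image update step.

    Pixels where inter-frame change exceeds `threshold` are set to
    `tau`; all other pixels decay by 1 toward 0, so recent motion is
    bright and older motion fades. `tau` and `threshold` are
    illustrative values, not taken from the thesis.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    decayed = np.maximum(mhi - 1, 0)
    return np.where(diff > threshold, tau, decayed)

# Dummy video: ten random grayscale frames stand in for camera input.
h, w = 240, 320
frames = (np.random.rand(10, h, w) * 255).astype(np.uint8)
mhi = np.zeros((h, w), dtype=np.int16)
for prev, cur in zip(frames, frames[1:]):
    mhi = update_mhi(mhi, prev, cur)
```

In the system described above, the resulting MHI frame would then be passed to the depthwise convolutional network for classification.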
35

Test Generation For Digital Circuits – A Mapping Study On VHDL, Verilog and SystemVerilog

Alape Vivekananda, Ashish January 2018 (has links)
Researchers have proposed different methods for testing digital logic circuits. The need for such testing has become more important than ever due to the growing complexity of these systems. During development, testing targets design defects as well as manufacturing and wear-out defects; failures in digital systems can be caused by design errors, the use of inherently probabilistic devices, and manufacturing variability. Research in this area has also focused on designing digital logic circuits for better testability, and automated test generation has been used to create tests that can quickly and accurately identify faulty components. Examples of such methods are ad hoc techniques, the scan path technique for testable sequential circuits, and the random scan technique. With the research domain maturing and the number of related studies increasing, it is essential to systematically identify, analyse and classify the papers in this area. The systematic mapping study of digital circuit testing performed in this thesis provides an overview of the research trends in the domain and the empirical evidence. To restrict the scope of the mapping study we focus only on some of the most widely used and well-supported hardware description languages (HDLs): Verilog, SystemVerilog and VHDL. Our results suggest that most methods proposed for test generation of digital circuits focus on the behavioural and register-transfer levels. Fault-independent test generation is the most frequently applied test goal, and simulation is the most common experimental evaluation method. The majority of papers published in this area are conference papers, and the publication trend shows growing interest. 63% of papers execute the proposed test method, and an equal percentage evaluate it experimentally; from the mapping study we inferred that papers that execute the proposed test method also evaluate it.
36

Community-based Influence Maximization framework for Social Networks

Wangchuk, Tshering January 2018 (has links)
Influence Maximization (IM) in social networks plays a considerable role in viral marketing, targeted advertising and campaign promotion. However, the IM problem is very challenging: it is NP-hard, and scaling to social networks with millions of nodes and edges is difficult due to the computational complexity involved. Recently, solving this problem through community-detection-based methods has become popular, since dividing the network into smaller, more manageable groups called "communities" reduces the search space. As part of the larger research effort, we reiterate a framework, inspired by a collection of work by Alfalahi et al. (2013), that addresses the IM problem and its limitations through community detection and a fuzzy-logic-inspired approach. Since the work is still under development, this project reports on the IM field through literature review and communicates a design of the IM framework inspired by the previous work. We also present our version of the blueprint (algorithm design) of the framework as a five-step approach. For this report, we implement and evaluate steps 1 and 2. Step 1 preprocesses the input network with a similarity measure, which according to a previous study by Alfalahi et al. (2013a) helps algorithms detect better community structure (a clearer, more accurate assignment of nodes to communities); we test whether this holds. Step 2 implements community detection in social networks. We benchmark three candidate algorithms, chosen for their performance in previous community detection studies, and report which algorithm should be used in the proposed framework based on experiments on simulated data. We use Normalized Mutual Information (NMI) and modularity (Q) as evaluation metrics for the accuracy of the communities detected by the candidate algorithms. Our results show that similarity-based preprocessing does not improve the community structure and thus may not be required in the framework. We also found that Louvain should be the algorithm used to detect communities in social networks, since it outperforms both CNM and Infomap on Q and NMI.
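The step-2 benchmark described above can be sketched roughly as follows, assuming networkx >= 2.8 (for `louvain_communities`) and scikit-learn; the planted-partition graph is an illustrative stand-in for the thesis's simulated data:

```python
import networkx as nx
from networkx.algorithms import community
from sklearn.metrics import normalized_mutual_info_score

# Planted-partition graph: 4 communities of 25 nodes each, dense
# inside and sparse between.
G = nx.planted_partition_graph(4, 25, p_in=0.3, p_out=0.01, seed=42)

# Ground truth: planted_partition_graph numbers nodes block by block,
# so node n belongs to community n // 25.
truth = [n // 25 for n in G.nodes]

# Louvain community detection.
parts = community.louvain_communities(G, seed=42)

# Modularity (Q) of the detected partition.
q = community.modularity(G, parts)

# NMI of the detected partition against the planted ground truth.
label_of = {n: i for i, part in enumerate(parts) for n in part}
pred = [label_of[n] for n in G.nodes]
nmi = normalized_mutual_info_score(truth, pred)

print(f"Q = {q:.3f}, NMI = {nmi:.3f}")
```

Swapping in other detectors (e.g. greedy modularity for CNM) on the same graph and metrics gives the kind of head-to-head comparison the abstract reports.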
37

Investigating Mobile Broadband Coverage in Rural Areas

Söderlund, Gustaf January 2018 (has links)
With an increasing demand for mobile data traffic and a growing expectation of continuous Internet connectivity, it is important to investigate the characteristics of mobile cellular networks. The consequences of insufficient capacity will grow as the cloud and other Internet-dependent services not only make us dependent but become a way of living. The present study aims to identify areas without mobile network coverage in Värmland County in central Sweden. An additional aim is to find statistical relationships between network performance metrics such as throughput, signal strength and latency. With data collected over an eight-month period, network characteristics have been investigated for the three Swedish mobile operators Tre, Telia and Telenor. The data analysis shows multiple regions where at least one operator is unable to provide sufficient performance, as well as regions strongly overrepresented by a specific operator. The analysis also shows a correlation between signal strength and key performance metrics such as throughput and network delay.
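The kind of per-operator correlation analysis mentioned can be sketched as below; the column names, synthetic values and the linear relationship baked into them are assumptions for illustration only, not the study's data:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for drive-test measurements.
rng = np.random.default_rng(0)
n = 500
rsrp = rng.uniform(-120, -70, n)  # signal strength (dBm)
df = pd.DataFrame({
    "operator": rng.choice(["Tre", "Telia", "Telenor"], n),
    "rsrp_dbm": rsrp,
    "throughput_mbps": 0.8 * (rsrp + 120) + rng.normal(0, 5, n),
    "latency_ms": 200 - 1.2 * (rsrp + 120) + rng.normal(0, 10, n),
})

# Pearson correlation between signal strength and each performance
# metric, computed per operator.
for op, grp in df.groupby("operator"):
    r_tp = grp["rsrp_dbm"].corr(grp["throughput_mbps"])
    r_lat = grp["rsrp_dbm"].corr(grp["latency_ms"])
    print(f"{op}: corr(signal, throughput) = {r_tp:.2f}, "
          f"corr(signal, latency) = {r_lat:.2f}")
```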
38

A Scalable Recommender System for Automatic Playlist Continuation

Bennett, Jack January 2018 (has links)
As major companies like Spotify, Deezer and Tidal look to improve their music streaming products, they repeatedly opt for features that engage users and lead to a more personalised user experience. Automatic playlist continuation enables these platforms to offer their users a seamless, smooth interface to enjoy music, own their experience, and discover new songs and artists. This report details a recommender system for automatic playlist continuation: recommending music tracks to users who are creating new playlists or curating existing ones. The recommendation framework given in this report provides accurate and pertinent track recommendations, but also addresses scalability, practical implementation and decision transparency, so that commercial enterprises can deploy such a system more easily and develop a winning strategy for their user experience. Furthermore, the recommender system does not require a rich and varied supply of user data; it needs only basic input such as the title of the playlist, the tracks currently in it, and the artists associated with those tracks. To accomplish these goals, the system relies on user-based collaborative filtering, a simple, well-established recommendation method, supported by web-scraping and topic-modelling algorithms that creatively use the supplied data to paint a more holistic picture of the kind of playlist the user would like. The system was developed using the Million Playlist Dataset, released by Spotify in 2018 as part of the Recommender Systems Challenge, and evaluated using R-precision, normalised discounted cumulative gain, and a proprietary metric called Recommended Song Clicks, which reflects the number of times a user would have to refresh the list of recommendations if the current Spotify user interface were used to present them. Over an 80:20 train-test split, the scores were 0.343, 0.224 and 15.73 respectively.
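A minimal sketch of the user-based collaborative filtering at the core of such a system, treating each playlist as a "user" and scoring candidate tracks by similarity-weighted occurrence among the nearest playlists; the toy matrix and parameters are illustrative, not drawn from the Million Playlist Dataset:

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import cosine_similarity

# Toy playlist-track matrix: rows are playlists ("users"), columns
# are tracks; 1 means the track is in the playlist.
R = csr_matrix(np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1],
]))

def continue_playlist(R, query_idx, k=2, n_rec=2):
    """Recommend tracks for playlist `query_idx` from its k most
    similar playlists, excluding tracks it already contains."""
    sims = cosine_similarity(R[query_idx], R).ravel()
    sims[query_idx] = 0.0                    # ignore self-similarity
    neighbours = np.argsort(sims)[::-1][:k]  # k nearest playlists
    # Similarity-weighted track occurrence among the neighbours.
    scores = sims[neighbours] @ R[neighbours].toarray()
    # Exclude tracks already in the query playlist.
    scores[R[query_idx].toarray().ravel() > 0] = -np.inf
    return np.argsort(scores)[::-1][:n_rec]

print(continue_playlist(R, query_idx=0))  # track indices to append
```

At Million-Playlist scale the matrix is large and sparse, which is where the scalability concerns the abstract raises come in.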
39

Inomhuslokalisering med Bluetooth 5 / Indoor positioning with Bluetooth 5

Hellsin, Beppe January 2018 (has links)
No description available.
40

Optimization of graphical performance in a motion-based web game : Improving design and implementation of a game measured by frame rate

Therén, Oskar January 2017 (has links)
This thesis uses the Chrome Timeline tool, the Firefox Canvas Debugger and an FPS module to evaluate performance issues in a motion-based web game built with the Phaser framework. For each issue, an explanation of how it was found and a proposed solution are given. The game that forms the basis of this work receives input through a WebGL-based camera module that uses shaders to interpret the data. Some solutions may be specific to this particular project, while others are more generally applicable; a few pointers are given on what can be graphically demanding when developing in JavaScript. The game's themes and features are further developed from a performance point of view, and in total eight different improvements are discussed. The tools and metrics used are also evaluated: for example, the Timeline tool is considered useful for web developers, though it has some drawbacks related to WebGL.
