  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Towards color compatibility in fashion using machine learning

Wang, Xinhui January 2019 (has links)
Fashion analyses, such as trend prediction and fashion recommendation, have become a hot topic. Color, as one of the dominant features of clothing, has great influence on people's shopping behavior, and understanding popular colors and color combinations is of high business value. In this thesis, we investigate compatible color combinations in fashion. We tackle this problem in two parts. First, we implement a semantic segmentation model for fashion images to segment the different clothing items in daily photos. We employ DeepLab v2 trained on the ModaNet dataset, reaching 0.64 mIoU and 0.96 accuracy on the test set. Our experimental results achieve state-of-the-art performance compared to other models proposed in this field. Second, we propose two color recommendation approaches, matrix factorization and item-to-item collaborative filtering, in order to study color combinations in fashion and make recommendations based on the study outcomes. The item-to-item collaborative filtering model quantifies the compatibility between colors and achieves high-quality color recommendations with a hit rate of 0.49.
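The item-to-item collaborative filtering idea in the abstract above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: it assumes hypothetical inputs where each "outfit" is a list of color names, builds a color co-occurrence matrix, and ranks candidate colors by cosine similarity of their co-occurrence vectors.

```python
from collections import defaultdict
from math import sqrt

def build_cooccurrence(outfits):
    """Count how often each pair of colors appears together in an outfit."""
    co = defaultdict(lambda: defaultdict(int))
    for outfit in outfits:
        colors = set(outfit)
        for a in colors:
            for b in colors:
                if a != b:
                    co[a][b] += 1
    return co

def cosine_similarity(co, a, b):
    """Cosine similarity between the co-occurrence vectors of colors a and b."""
    keys = set(co[a]) | set(co[b])
    dot = sum(co[a][k] * co[b][k] for k in keys)
    na = sqrt(sum(v * v for v in co[a].values()))
    nb = sqrt(sum(v * v for v in co[b].values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(co, color, k=3):
    """Recommend the k colors whose co-occurrence profile is most similar."""
    candidates = [c for c in co if c != color]
    ranked = sorted(candidates,
                    key=lambda c: cosine_similarity(co, color, c),
                    reverse=True)
    return ranked[:k]
```

In practice the thesis derives such combinations from segmented clothing regions rather than hand-labelled outfit lists; the sketch only shows the recommendation step.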
22

WinBro: A Window and Broadcast-based Parallel Streaming Graph Partitioning Framework for Apache Flink

Ackva, Adrian January 2019 (has links)
The past years have shown an increasing demand to process data of various kinds and sizes in real time. A common representation for many real-world scenarios is a graph, which captures relations between entities, such as users of social networks or pages on the Internet. These graphs grow over time and can easily exceed the capacity of a single machine. Graph partitioning is used to divide a graph into multiple subgraphs on different servers. Traditional partitioning techniques work in an offline manner, where the whole graph is available before partitioning. Due to the recently increased demand for real-time analysis, online partitioning algorithms have been introduced. They are able to partition a graph arriving as a stream, also referred to as a streaming graph, without any pre-processing step. The goal of a good graph partitioning algorithm is to maintain data locality while balancing the partitions' load. Although different algorithms have been shown to achieve both goals for real-world graphs, they often need to maintain state. However, modern stream processing systems, such as Apache Flink, use a shared-nothing architecture and operate in a data-parallel manner; they therefore do not allow information to be exchanged across parallel computations. These systems usually rely on hash-based partitioning, a fast, stateless technique that ignores the graph structure. Hence, it can lead to longer analysis times for streaming applications that could benefit from preserved structure. This work develops a state-sharing parallel streaming graph partitioner for Apache Flink, called WinBro, implementing well-performing partitioning algorithms. To this end, existing streaming graph partitioning algorithms are studied for possible implementation and then integrated into WinBro. For validation, experiments were run on real-world graphs, measuring partitioning quality and partitioning speed. Moreover, the performance of different streaming applications using WinBro was measured and compared with the default hash-based partitioning method. Results show that the new partitioner WinBro provides better partitioning quality in terms of data locality and higher performance for applications that depend on locality in their input data. Nonetheless, the hash-based partitioner shows the highest throughput and better performance for data-locality-agnostic streaming applications.
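The stateless hash-based baseline the abstract contrasts against can be sketched in a few lines. This is an illustrative sketch, not WinBro or Flink's actual code: it assumes edges arrive as `(src, dst)` pairs and assigns each edge purely from a hash of its source vertex, which is fast and needs no state but ignores graph structure entirely.

```python
import hashlib

def hash_partition(edge, num_partitions):
    """Stateless hash-based partitioning: the partition of an edge is
    derived only from a hash of its source vertex, so no state is kept
    and the graph structure is ignored."""
    src, dst = edge
    digest = hashlib.md5(str(src).encode()).hexdigest()
    return int(digest, 16) % num_partitions
```

A stateful partitioner like those studied in the thesis would instead consult previously seen assignments before deciding, which is exactly the state-sharing problem WinBro addresses in a data-parallel setting.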
23

SSC: Single-Shot Multiscale Counter : Counting Generic Objects in Images

Vavassori, Luca January 2019 (has links)
Counting objects in pictures is a computer vision task that has been explored in the past years, achieving state-of-the-art results thanks to the rise of convolutional neural networks. Most of the work has focused on specific, limited domains, predicting the number of instances of just one category, such as people, cars, cells, or animals. Little effort has been devoted to methods that count the instances of different classes at the same time. This thesis explores the different approaches present in the literature to understand their strengths and weaknesses, and ultimately to improve the accuracy and reduce the inference time of models that estimate the number of multiple elements. First, new techniques are applied on top of previously proposed algorithms to lower the prediction error. Second, the possibility of adapting an object detector to the counting task while avoiding localization prediction is investigated. As a result, a new model called Single-Shot Multiscale Counter is proposed, based on the architecture of the Single-Shot Multibox Detector. It achieves an 11% lower prediction error on the ground-truth count (from an mRMSE of 0.42 to 0.35) and an inference time 16x to 20x faster than the models found in the literature (from 1.25s to 0.049s).
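The mRMSE figures quoted above are a per-class root-mean-square error on counts, averaged over classes. The thesis does not spell out its exact implementation here, so the following is a plausible sketch under that standard definition, with hypothetical input shapes (one count vector per image, one entry per class).

```python
from math import sqrt

def mrmse(true_counts, pred_counts):
    """Mean RMSE over object classes.

    true_counts, pred_counts: lists of per-image count vectors, where
    true_counts[i][c] is the number of instances of class c in image i.
    For each class we take the RMSE of counts over all images, then
    average those per-class RMSEs.
    """
    num_images = len(true_counts)
    num_classes = len(true_counts[0])
    total = 0.0
    for c in range(num_classes):
        se = sum((true_counts[i][c] - pred_counts[i][c]) ** 2
                 for i in range(num_images))
        total += sqrt(se / num_images)
    return total / num_classes
```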
24

Streaming Graph Partitioning with Graph Convolutional Networks

Zwolak, Michal January 2020 (has links)
In this work, we present a novel approach to the streaming graph partitioning problem which handles unbounded streams. Graph partitioning is the process of dividing a graph into groups of nodes or edges. Traditional, offline partitioning methods require a priori access to the entire graph and make multiple passes over the data in order to compute partitions. Recently, however, the demand for real-time analysis of graph data has sparked interest in online partitioning. In this approach, the graph arrives as a stream of nodes or edges, which are assigned to partitions as they arrive and are never reassigned. Additionally, in modern systems, where graphs grow constantly, the streams are unbounded. The main goals of graph partitioning are preserving data locality, so that related items belong to the same partitions, and load balance, so that partitions have similar sizes. State-of-the-art streaming graph partitioning algorithms fulfil these two requirements. However, they make their partitioning decisions based on internal state, which grows as new items arrive; they are therefore incapable of processing unbounded streams, since at some point the state will exceed the memory capacity of the machine the algorithm runs on. Moreover, modern stream processors run in distributed environments, where synchronizing shared state is an expensive operation. In the proposed approach, in addition to structural information about the graph, we utilise attributes associated with vertices, such as a user's location, age, or previous actions. To do so, we employ a graph convolutional network (GCN), a recent method for graph representation learning that can embed both the structural and feature-based characteristics of each vertex into a low-dimensional space. We then feed these representations into a neural network which assigns incoming items to partitions. Such a method requires only the networks' parameter values in order to make a partitioning decision, so the size of the state remains constant regardless of the length of the stream. We present both unsupervised and supervised approaches to train the proposed framework, and describe how to apply the trained models to partition a streaming graph. We evaluate the performance of our method on three real-world graph datasets and compare it with the state-of-the-art HDRF algorithm as well as a simple, stateless hash-based approach. The experimental results show the generalisation capabilities of our models. Moreover, our methods can yield up to 16% lower replication factor than hash partitioning, which corresponds to only a 1% increase compared to HDRF. At the same time, we reduce the state requirements from linear to constant, which for a graph with 230k vertices and 5.7M edges translates to a 125 times smaller state and allows unbounded streams to be processed. Nevertheless, the latency of our methods is about 20 times higher than that of HDRF.
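The replication factor used to compare these partitioners measures how many partitions each vertex ends up replicated to under an edge (vertex-cut) partitioning. A minimal sketch of that metric, assuming edge assignments are available as `((src, dst), partition_id)` pairs:

```python
from collections import defaultdict

def replication_factor(edge_assignments):
    """Replication factor of a vertex-cut partitioning: the average
    number of distinct partitions each vertex appears in. A factor of
    1.0 means no vertex is ever replicated; higher values mean worse
    data locality.

    edge_assignments: iterable of ((src, dst), partition_id) pairs.
    """
    replicas = defaultdict(set)
    for (src, dst), pid in edge_assignments:
        replicas[src].add(pid)
        replicas[dst].add(pid)
    return sum(len(p) for p in replicas.values()) / len(replicas)
```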
25

Cookie warning on steroids : Framework for consent service on web pages according to GDPR

Mattsson, Jonas, Öberg, Axel January 2018 (has links)
This study aims to develop a framework of design principles that can guide the construction of usable and GDPR-compliant consent solutions. With the implementation of the GDPR (25 May 2018), new methods are needed to manage consent on web pages that handle personal data in any way. To provide a stable foundation for the work, theory related to the subject area was compiled: the theoretical frame of reference consists of the GDPR itself, including Privacy by Design and Privacy by Default, as well as design and usability principles. The approach to developing the framework is based on the design process of Arvola (2014). Within this process, qualitative data was collected both from a company and from a target audience. The interviewed company is Meramedia, which during our work was itself developing a consent solution, making it a relevant source of information. The data collection with the target audience of potential users contributed an increased understanding of how users feel and think about this type of solution, including questions and concerns regarding personal data management and design aspects. The empirical data was then analysed using the theory, which allowed the framework to be updated with new content and new principles that arose during the data collection, in order to answer the purpose of the study. The conclusion is that a framework comprising 11 principles would facilitate the work of developing a consent solution: Suitable reduction; Response; Logic & Unity; Adaptation; Generality & Reuse; Divergence; Invitation; Simplicity & Efficiency; Legal, Correct & Open; Data limitations; Predefined choices. The meaning of each principle is presented in the conclusion, together with a design proposal based on the framework that illustrates the role of all principles. The work closes with a reflection on the process and on future directions: as the GDPR takes effect on 25 May 2018, new challenges in consent management may well emerge once the law is in force, opening up new perspectives.
26

Metallic antenna design based on topology optimization techniques

Hassan, Emadeldeen January 2013 (has links)
No description available.
27

Behave and PyUnit : A Tester's Perspective

Borgenstierna, Johan January 2018 (has links)
A comparison between two testing frameworks, Behave and PyUnit, is presented. PyUnit is TDD-driven, while Behave is BDD-driven. The SBTS method shows that Behave enforces better software quality than PyUnit in the maintainability branch. The Gherkin language used in Behave is easy to read and widens the pool of potential testers. However, Behave does not cover tests at as fine a granularity as PyUnit, since Behave is limited to the behaviour of the system.
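The contrast between the two styles can be illustrated with a small sketch. The function under test and its names are hypothetical, not from the thesis; the PyUnit (unittest) tests are executable, while the equivalent Behave behaviour is shown as a Gherkin scenario in a comment, since Behave scenarios live in separate feature files with step implementations.

```python
import unittest

def withdraw(balance, amount):
    """Toy function under test: withdraw an amount from a balance."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# The same behaviour in Behave would be described in Gherkin, e.g.:
#   Scenario: Withdrawing more than the balance
#     Given an account with balance 100
#     When I withdraw 150
#     Then the withdrawal is rejected

class WithdrawTest(unittest.TestCase):
    def test_withdraw_reduces_balance(self):
        self.assertEqual(withdraw(100, 30), 70)

    def test_overdraft_is_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(100, 150)
```

Run with `python -m unittest <file>`. The Gherkin scenario reads almost like prose, which is the readability advantage noted above, while the unittest assertions pin down exact values and exceptions at a finer granularity.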
28

Social Engineering : A qualitative study of how organizations handle social engineering

Loggarfve, Robin, Rydell, Johan January 2018 (has links)
Traditionally, weaknesses in technology are exploited to gain unauthorized access to information, but there are other sophisticated methods and approaches that can be more effective. Social engineering is the art of deceiving, manipulating, and exploiting social aspects. The method targets the weakest link in information security: the human factor. The purpose of this study is to investigate how organizations handle social engineering. It also aims to highlight and inform about the subject, with the ambition of raising awareness. The study was conducted together with three organizations, with which qualitative interviews were held. It examined the organizations' awareness, the most common social engineering attacks, and preventive work. The results show that awareness was good in the IT departments but worse in the other departments of the organizations. The main threats social engineering poses to organizations are economic loss and information leakage. The most common attack approaches turned out to be phishing and spear phishing. Finally, the study concludes that education and dissemination of information is the most successful method of preventing social engineering. It finds that no complete protection exists and that more education is required to raise awareness of social engineering. A security system is no stronger than its weakest link, and therefore more resources should be put into preventive work.
29

Real-time Vision-based Fall Detection : with Motion History Images and Convolutional Neural Networks

Haraldsson, Truls January 2018 (has links)
Falls among the elderly are a major health concern worldwide due to their serious consequences, such as higher mortality and morbidity. As the elderly are the fastest-growing age group, an important challenge for society is to support them in their everyday activities. Given the social and economic advantages of an automatic fall detection system, such systems have attracted attention from the healthcare industry. The emerging trend of smart homes and the increasing number of cameras in our daily environments create an excellent opportunity for vision-based fall detection systems. In this work, an automatic real-time vision-based fall detection system is presented. It uses motion history images to capture temporal features in a video sequence; spatial features are then extracted efficiently for classification using a depthwise convolutional neural network. The system is evaluated on three public fall detection datasets and compared to other state-of-the-art approaches.
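A motion history image (MHI) encodes recent motion in a single frame: pixels where motion was just detected are set to a maximum value, and older motion decays frame by frame. A minimal pure-Python sketch of one update step, assuming a precomputed binary motion mask (e.g. from thresholded frame differencing):

```python
def update_mhi(mhi, motion_mask, tau):
    """One update step of a motion history image.

    mhi:         2D list of numbers, the current history image
    motion_mask: 2D list of 0/1 flags, 1 where motion was detected
                 between consecutive frames
    tau:         history length; pixels with fresh motion are set to
                 tau, and older motion decays by 1 per frame toward 0
    """
    for y in range(len(mhi)):
        for x in range(len(mhi[y])):
            if motion_mask[y][x]:
                mhi[y][x] = tau
            else:
                mhi[y][x] = max(0, mhi[y][x] - 1)
    return mhi
```

In the thesis this temporal summary is what the depthwise convolutional network classifies; a production implementation would use array operations (e.g. OpenCV or NumPy) rather than nested loops.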
30

Test Generation For Digital Circuits – A Mapping Study On VHDL, Verilog and SystemVerilog

Alape Vivekananda, Ashish January 2018 (has links)
Researchers have proposed different methods for testing digital logic circuits. The need for such testing has become more important than ever due to the growing complexity of these systems. During development, testing focuses on design defects as well as on manufacturing and wear-out defects. Failures in digital systems can be caused by design errors, the use of inherently probabilistic devices, and manufacturing variability. Research in this area has also addressed the design of digital logic circuits for better testability. In addition, automated test generation has been used to create tests that can quickly and accurately identify faulty components; examples include ad hoc techniques, the scan path technique for testable sequential circuits, and the random scan technique. With the research domain maturing and the number of related studies increasing, it is essential to systematically identify, analyse, and classify the papers in this area. The systematic mapping study of digital circuit testing performed in this thesis aims to provide an overview of the research trends and empirical evidence in this domain. To restrict the scope of the mapping study, we focus only on some of the most widely used and well-supported hardware description languages (HDLs): Verilog, SystemVerilog, and VHDL. Our results suggest that most methods proposed for test generation of digital circuits target the behavioural and register-transfer levels. Fault-independent test generation is the most frequently applied test goal, and simulation is the most common experimental test evaluation method. The majority of papers published in this area are conference papers, and the publication trend shows growing interest. 63% of the papers execute the proposed test method, and an equal percentage evaluate it experimentally; from this we infer that papers which execute their proposed test method also evaluate it.
