About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

The rise of Brutalism and Antidesign : And their implications on web design history

Brage, Ellen January 2019 (has links)
The following bachelor thesis is written by a student of New Media Design within Informatics at the School of Engineering, Jönköping University. The background of this study is the emergence of the web design trends brutalism and antidesign, which have been argued to originate from styles used during early periods of the web’s history. Furthermore, a lack of cultural analysis within web design has been identified: the visual evolution of the world wide web is not sorted into distinct and widely acknowledged periods or categories, as is the case in most other cultural fields such as music and art. The emergence and popularity of brutalism and antidesign were identified as potential cases of visual styles returning from the past. They were therefore considered opportunities to examine visual periods in web design and to predict where the field is heading. The study was conducted using the qualitative method of semi-structured interviews. The empirical data was analysed using thematic analysis and compared with theories derived from literature studies. The study found three reasons behind the rise of brutalism and antidesign in web design: the world wide web’s coming of age, reactions against the mainstream web, and an interest in retro trends. The study also aimed to find the possible implications of their emergence for the aesthetic evolution of web design. It was found that brutalism and antidesign are part of a large number of experimental and retro trends that will continue to emerge. Though they are unlikely to directly affect mainstream web design in its current state, they may be seen as design movements, which may be viewed as a step towards visual categories within web design.
22

A study on load balancing within microservices architecture

Sundberg, Alexander January 2019 (has links)
This study addresses load balancing algorithms for networked systems with a microservices architecture. In microservices applications, functionality and logic have been split into small pieces referred to as services. Such divisions allow for higher levels of scalability and distributivity than are obtainable with more classical architectures, where functionality and logic are packaged into large, non-separable applications. As a first step, we investigate existing load balancing algorithms in the literature. A conclusion reached from this literature survey is that there is a lack of proposed load balancing algorithms for microservices, and it is not obvious how to adapt existing algorithms to the architecture under consideration. In particular, many existing algorithms incorporate queues, which should be avoided for microservices, where the small services should be served quickly. Hence, we provide modified and new candidates for load balancing, one of which is a probabilistic approach in which the selection distribution is a function of the service providers' load. The algorithms are implemented in a microservices simulation environment developed in Erlang by Ericsson AB. We consider a range of scenarios for evaluation, in which, amongst other things, we vary the number of service consumers and providers. To evaluate the load balancing algorithms, we perform statistical analysis, where first and second order moments are computed for relevant metrics under the different scenarios considered. A conclusion drawn from the results is that the algorithm referred to as "Round Robin" performed best across the various simulation scenarios. This study serves as a stepping stone for further investigations. We outline several possible extensions, such as more in-depth statistical analysis accounting for the time-varying aspects of the systems (currently omitted), as well as other classes of algorithms.
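The two candidate families the abstract mentions can be sketched outside the Erlang simulator. The following Python sketch (class and method names are illustrative, not the thesis' implementation) contrasts queueless round-robin dispatch with a probabilistic pick weighted inversely by each provider's current load:

```python
import itertools
import random

class RoundRobin:
    """Queueless dispatch: forward each request immediately to the
    next provider in a fixed cyclic order, ignoring load."""

    def __init__(self, providers):
        self._cycle = itertools.cycle(providers)

    def pick(self, loads):
        return next(self._cycle)

class LoadWeighted:
    """Probabilistic dispatch: pick a provider with probability
    inversely proportional to its reported load, so lightly loaded
    providers receive most of the traffic."""

    def __init__(self, providers):
        self._providers = providers

    def pick(self, loads):
        weights = [1.0 / (1.0 + loads[p]) for p in self._providers]
        return random.choices(self._providers, weights=weights, k=1)[0]
```

With uniform loads the two behave similarly; the weighted pick only pays off when provider loads diverge, which matches the scenario variation described above.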
23

Light scattering in two-dimensional inhomogeneous paper : Analysis using general radiative transfer theory

Nukala, Madhuri January 2019 (has links)
Modeling light scattering is important in diverse research fields such as paper and print, optical tomography, remote sensing, and computer rendering of images. Particularly in the paper and printing industry, light scattering simulations play a significant role in understanding the optical response of paper in relation to its properties. Light scattering models are used in paper and print for improving the paper making process, designing new paper qualities, and evaluating printing techniques. The models most widely used for light scattering calculations in the paper and printing industry are based on the Kubelka-Munk theory. The theory proposed by Kubelka and Munk, a special case of radiative transfer theory, has several limitations: it can only be applied to homogeneous media with isotropic scattering and diffuse illumination. Real paper, and print in particular, do not satisfy these assumptions. These limitations of the Kubelka-Munk model encouraged scientists to develop models based on angle-resolved geometry to account for anisotropic scattering of light in paper and print, but in a single spatial dimension. To correctly represent spatial inhomogeneities like ink dots, which spread as a function of the depth, length and width of the paper, one-dimensional (1D) models are insufficient. In addition to angle-resolved geometry, multi-dimensional models are necessary to analyze light scattering effects in a printed paper. The method used in this thesis, unlike the Kubelka-Munk method, employs the general radiative transfer formulation to obtain the reflectances of paper with inhomogeneities such as ink dots. These ink dots, printed on a plain sheet of paper, are considered to spread not only as a function of depth but also as a function of the length or width of the paper.
First, a numerical solution method comprising a combination of discrete ordinates and finite differences is developed to solve the general two-dimensional (2D) radiative transfer equation (RTE), with the two dimensions representing the depth and length of the paper. The solver is validated by comparing the results obtained with Monte Carlo simulations adapted to suit paper optics and with DORT2002. For isotropic scattering, and for angles close to the normal direction, good agreement is observed among all three solvers. As the anisotropy factor increases, the present solver needs a higher number of radiation streams for convergence. The 2D radiative transfer (RT) solver is then applied to printed paper and the reflectances obtained are analyzed. The ink distribution is considered to be non-uniform, such that the density of ink decreases linearly with depth. The dots are separated by a distance in order to study the interference pattern of the intensity distribution, which is useful in understanding defects like print mottle, print density and optical dot gain. The reflectances obtained are analyzed with respect to medium parameters such as the thickness of the paper sample, its optical parameters and the asymmetry factor. The illuminating and viewing angles and the depth of ink penetration also influence the optical response and appearance of print. It is observed that the reflectance of dots largely depends on the illuminating and viewing angles, with an apparent increase in the size of the dots seen more prominently when viewed across the line. A 2D RT solver is superior to a 1D RT solver in capturing the interference pattern of radiation, as observed in the results presented in this thesis. A 1D RT solver uses independent columns to approximate the radiation in the lateral direction; it assumes that the layers in the lateral direction are homogeneous and that the radiation from the columns does not interfere.
The independent column approximation pays little attention to the lateral variations in intensity. / At the time of the defence the following paper was unpublished: paper 1 (manuscript).
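For reference, the homogeneous, diffusely illuminated special case that the thesis moves beyond is compact enough to write down. A minimal sketch of the standard Kubelka-Munk reflectance of an optically thick layer (function name is illustrative), with absorption coefficient K and scattering coefficient S:

```python
import math

def km_reflectance_inf(K, S):
    """Kubelka-Munk reflectance of an optically thick, homogeneous
    layer under diffuse illumination:
        R_inf = 1 + K/S - sqrt((K/S)^2 + 2*K/S)
    Requires S > 0. Valid only under the homogeneity / isotropic
    scattering assumptions that real print violates."""
    a = K / S
    return 1.0 + a - math.sqrt(a * a + 2.0 * a)
```

A non-absorbing medium (K = 0) reflects everything (R = 1), and reflectance falls monotonically as absorption grows; capturing anything beyond this, such as laterally spreading ink dots, is exactly what requires the 2D RT solver described above.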
24

Random numbers for generation in web games : And how their quality affects the end user.

Jacobsson, Linus January 2019 (has links)
No description available.
25

IoT-NETZ: Spoofing Attack Mitigation in IoT Network

Mohammadnia, Hamzeh January 2019 (has links)
The phenomenal growth of the Internet of Things (IoT) and the popularity of mobile stations have rapidly increased the demand for WLAN networks (known as IEEE 802.11 or WiFi). WLAN is a low-cost alternative to the cellular network and, operating in unlicensed spectrum, underpins the plan of embedding the Internet in everything, anywhere. At the same time, monitoring the number of IoT and WiFi-enabled devices across residential and enterprise networks is not trivial. Therefore, future WiFi network architecture requires an agile management paradigm to provide internal support and security for WiFi networks. The operation of IoT and mobile device applications relies on the scalability and high-performance computing of clouds. Cloud computing has completely centralized the current data center networking architecture, and it provides computation-intensive, high-speed network, and real-time responses to IoT requests. IoT-to-cloud communication is at the heart of network security concerns and is in grievous need of constant security improvement across the inter-networking. Analyses of IoT-generated traffic have observed that a significant number of massive spoofing-oriented attacks targeting cloud services are launched from compromised IoT devices. A review of prior research on the network attacks most commonly conducted by IoT devices reveals a challenging and common characteristic frequently exploited in numerous massive Internet attacks: spoofing. This work surveys the existing solutions that have been deployed to protect both traditional and softwarized network paradigms. It then proposes an approach that protects IoT-hosting networks by employing Software-Defined Wireless Networking (SDWN) within the proposed model to mitigate spoofing-oriented network attacks.
In addition, the proposed solution provides an environmental sustainability feature by saving power consumption in networking devices during network operation. The practical improvement in the proposed model is measured and evaluated within the emulated environment of Mininet-WiFi.
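At the core of most spoofing defences surveyed here is a consistency check between a packet's claimed identity and where it actually entered the network. A minimal, hypothetical controller-side sketch of that idea (not the thesis' SDWN implementation; class and field names are illustrative):

```python
class BindingTable:
    """Hypothetical spoofing check: remember the first (IP, attachment
    port) observed for each MAC address, and flag any later packet
    whose claimed source contradicts that binding."""

    def __init__(self):
        self._bindings = {}  # mac -> (ip, ingress_port)

    def check(self, mac, ip, port):
        # First sighting establishes the binding; later packets must match.
        bound = self._bindings.setdefault(mac, (ip, port))
        return bound == (ip, port)  # False signals possible spoofing
```

A real SDWN controller would install flow rules to drop flagged traffic and would age out bindings as stations roam; this sketch only illustrates the identity/location consistency check itself.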
26

MeteorShower: geo-replicated strongly consistent NoSQL data store with low latency : Achieving a sequentially consistent key-value store with low latency

Guan, Xi January 2016 (has links)
According to the CAP theorem, strong consistency is usually compromised in the design of NoSQL databases. Poor performance is often observed when a strong data consistency level is required, especially when a system is deployed in a geographical environment. In such an environment, servers need to communicate through cross-datacenter messages, whose latency is much higher than that of messages within a data center. However, maintaining strong consistency usually involves extensive use of cross-datacenter messages. Thus, the large cross-datacenter communication delay is one of the dominant reasons for the poor performance of most algorithms that achieve strong consistency in a geographical environment. This thesis work proposes a novel data consistency algorithm, I-Write-One-Read-One, based on Write-One-Read-All. The novel approach allows a read request to be served by a local read. It also reduces the cross-datacenter consistency-synchronization message delay from a round trip to a single trip. Moreover, the consistency model achieved by I-Write-One-Read-One is stronger than sequential consistency but looser than linearizability. In order to verify the correctness and effectiveness of I-Write-One-Read-One, a prototype, MeteorShower, is implemented on Cassandra. Furthermore, in order to reduce time skews among nodes, NTP servers are deployed. Compared to Cassandra with a Write-One-Read-All consistency setup, MeteorShower has almost the same write performance but much lower read latency in a real geographical deployment. The higher the cross-datacenter network delay, the more evident the read performance improvement. Like Cassandra, MeteorShower also has excellent horizontal scalability, with performance growing linearly in the number of nodes per data center.
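The appeal of local reads can be illustrated with a deliberately simplified toy, not the I-Write-One-Read-One algorithm itself: writes carry a (timestamp, datacenter) tag and propagate to peers in a single one-way trip, each replica applies them in total timestamp order, and reads never leave the local replica (all names are illustrative):

```python
class Replica:
    """Toy replica: writes tagged (timestamp, dc_id) arrive via one-way
    messages; reads are answered purely from local state. Applying the
    log in (timestamp, dc_id) order makes replicas converge regardless
    of arrival order. This omits the stability/waiting logic a real
    sequentially consistent store needs."""

    def __init__(self):
        self._log = []    # (timestamp, dc_id, key, value)
        self._store = {}

    def apply(self, ts, dc, key, value):
        self._log.append((ts, dc, key, value))
        # Naively re-apply the whole log in total order (fine for a toy).
        self._store = {}
        for _ts, _dc, k, v in sorted(self._log):
            self._store[k] = v

    def read(self, key):
        return self._store.get(key)  # local read, no cross-DC round trip
</antml_code_ignore>```

The point of the sketch is the cost model: a read touches no remote datacenter, and write propagation is a single trip rather than the round trip that quorum-style reads would require.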
27

A Continuous Dataflow Pipeline For Low Latency Recommendations

Ge, Wu January 2016 (has links)
The goal of building a recommender system is to generate personalized recommendations for users. Recommender systems have great value in multiple business verticals like video on demand, news, advertising and retailing. In order to recommend to each individual, a large amount of personal preference data needs to be collected and processed. Processing big data usually takes a long time, and the long delay from data entering the system to results being generated means that recommender systems can only benefit returning users. This project is an attempt to build a recommender system as a service with low latency, to make it applicable to more scenarios. In this paper, different recommendation algorithms and distributed computing frameworks are studied and compared to identify the most suitable design. Experimental results revealed a logarithmic relationship between recommendation quality and training data size in collaborative filtering. By applying this finding, a low-latency recommendation workflow is achieved by reducing training data size and creating parallel computing partitions at minimal cost in prediction quality. In this project the calculation time is successfully limited to 3 seconds (instead of 25 for the control) while maintaining 90% of the prediction quality.
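The logarithmic quality-versus-size relationship is what makes the latency reduction possible: most of the prediction quality is reached with a small fraction of the data, so the fitted curve can be inverted to find the smallest training set that still meets a quality target. A sketch with illustrative coefficients (the function name and the values of a and b are assumptions, not the thesis' fitted numbers):

```python
import math

def min_training_size(a, b, full_size, target_fraction=0.9):
    """Given a fitted quality curve q(n) = a*ln(n) + b with a > 0,
    return the smallest training-set size whose predicted quality
    reaches target_fraction of the full-data quality."""
    q_full = a * math.log(full_size) + b
    target = target_fraction * q_full
    # Invert q(n) = target  =>  n = exp((target - b) / a)
    return math.ceil(math.exp((target - b) / a))
```

Because the curve flattens logarithmically, the required n is typically a small fraction of full_size, which is the lever the 3-second workflow pulls.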
28

Fraud detection in online payments using Spark ML

Amaya de la Pena, Ignacio January 2017 (has links)
Frauds in online payments cause billions of dollars in losses every year. To reduce them, traditional fraud detection systems can be enhanced with the latest advances in machine learning, which usually require distributed computing frameworks to handle the large size of the available data. Previous academic work has failed to address fraud detection in real-world environments. To fill this gap, this thesis focuses on building a fraud detection classifier on Spark ML using real-world payment data. Class imbalance and non-stationarity reduced the performance of our models, so experiments to tackle those problems were performed. Our best results were achieved by applying undersampling and oversampling on the training data to reduce the class imbalance. Updating the model regularly to use the latest data also helped diminish the negative effects of non-stationarity. A final machine learning model that leverages all our findings has been deployed at Qliro, an important online payments provider in the Nordics. This model periodically sends suspicious purchase orders for review to fraud investigators, enabling them to catch frauds that were missed before.
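The undersampling/oversampling combination can be illustrated in plain Python (the actual work ran on Spark ML; the function name, the 4:1 intermediate ratio, and the dict-based rows are all illustrative assumptions):

```python
import random

def rebalance(rows, label_key, ratio=1.0, seed=7):
    """Reduce class imbalance for training: undersample the majority
    (legitimate) class, then oversample the minority (fraud) class with
    replacement until minority/majority reaches `ratio`.
    `rows` is a list of dicts with a 0/1 label under `label_key`."""
    rng = random.Random(seed)
    pos = [r for r in rows if r[label_key] == 1]  # fraud, minority
    neg = [r for r in rows if r[label_key] == 0]
    # Undersample the majority to at most 4x the minority (illustrative).
    neg = rng.sample(neg, min(len(neg), 4 * len(pos)))
    # Oversample the minority with replacement up to the target ratio.
    target = int(ratio * len(neg))
    pos = pos + [rng.choice(pos) for _ in range(max(0, target - len(pos)))]
    return pos + neg
```

Resampling is applied to the training split only; evaluating on resampled data would overstate performance on the true, imbalanced distribution.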
29

Exploring consensus mediating arguments in online debates

Kaas Johansen, Andreas January 2017 (has links)
This work presents a first venture into the search for features that define the rhetorical strategy known as Rogerian rhetoric. Rogerian rhetoric is a conflict-solving rhetorical strategy intended to find common ground instead of polarizing debates further by presenting strong arguments and counterarguments, as is often done in debates. The goal of the thesis is to lay the groundwork for others tempted to model consensus-mediating arguments: a feature exploration and an evaluation of machine learning in this domain. To evaluate different sets of features, statistical testing is applied to check whether the distributions of certain features differ between consensus-mediating and non-consensus-mediating comments. Machine learning in this domain is evaluated using support vector machines and different feature sets. The results show that, on this data, the consensus-mediating comments do have some characteristics that differ from other comments, some of which may generalize across debates. Furthermore, since consensus-mediating arguments proved to be rare, these comments form a minority class; classifying them with machine learning techniques requires that overfitting be addressed, and the results suggest that the strategy applied to deal with overfitting is highly important. Due to the bias inherent in the hand-annotated dataset, the results should be considered provisional; more studies using debates from more domains, with either expert or crowdsourced annotations, are necessary to take the research further and produce results that generalize well.
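The distributional comparison described above can be carried out with an assumption-light permutation test on any single feature: shuffle the group labels many times and ask how often a difference as large as the observed one arises by chance. A sketch (the feature values in the usage example are illustrative, not the thesis' data):

```python
import random
from statistics import mean

def permutation_test(sample_a, sample_b, n_permutations=5000, seed=1):
    """Two-sided permutation test on the difference of means between two
    samples, e.g. a feature over consensus-mediating vs. other comments.
    Returns an estimated p-value: the fraction of label shuffles whose
    mean difference is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(mean(sample_a) - mean(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_permutations
```

Unlike a t-test, this makes no normality assumption, which suits skewed text-derived features; with a rare minority class, though, small sample sizes limit the power of any such test.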
30

Sort Merge Buckets: Optimizing Repeated Skewed Joins in Dataflow

Nardelli, Andrea January 2019 (has links)
The amount of data being generated and consumed by today’s systems and applications is staggering and increasing at a vertiginous rate. Many businesses and entities rely on the analysis of this data, and the insights gained from it, to deliver their services. Due to its massive scale, it is not possible to process this data on a single machine; instead, parallel processing on multiple workers through horizontal scaling is required. However, even simple operations become complicated in a parallel environment. One such operation is the join, widely used to connect data by matching on the value of a shared key. Data-intensive platforms are used to make it easier to perform this and other operations at scale. In 2004, MapReduce was presented, revolutionizing the field by introducing a simpler programming model and a fault-tolerant, scalable execution framework. MapReduce’s legacy went on to inspire many processing frameworks, including contemporary ones such as Dataflow, used in this work. The Dataflow programming model (2015) is a unified programming model for parallel processing of data-at-rest and data-in-motion. Despite much work on optimizing joins in parallel processing, few approaches tackle the problem from a data perspective rather than an engine perspective, so solutions end up tied to the execution engine. The reference implementation of Dataflow, Apache Beam, abstracts the execution engine away, requiring solutions that are platform-independent. This work addresses the optimization of repeated joins, in which the same operation is repeated multiple times by different consumers, e.g., user-specific decryption. These joins might also be skewed, creating an uneven work distribution among the workers with a negative impact on performance.
The solution introduced, sort merge buckets, is tested on Cloud Dataflow, the platform that implements the eponymous model, achieving promising results compared to the baseline both in terms of compute resources and network traffic. Sort merge buckets uses fewer CPU resources after two join operations and shuffles less data after four, for non-skewed inputs. Skew-adjusted sort merge buckets is robust to all types and degrees of skewness tested, and is better than a single join operation in cases of extreme skew.
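The idea behind sort merge buckets can be sketched in a few lines: pay the shuffle and sort once by writing hash-partitioned, sorted buckets, and then serve every subsequent join of the same data as a cheap per-bucket merge. A plain-Python illustration (not the Beam implementation; both sides must use the same bucket count and hash):

```python
def write_bucketed(records, key, num_buckets):
    """One-time preparation: hash-partition on the join key and sort
    each bucket by that key. Repeated joins over this data then need
    no further shuffle or sort."""
    buckets = [[] for _ in range(num_buckets)]
    for r in records:
        buckets[hash(r[key]) % num_buckets].append(r)
    return [sorted(b, key=lambda r: r[key]) for b in buckets]

def merge_join(bucket_a, bucket_b, key):
    """Sorted-merge join of two buckets from the same hash partition,
    emitting the cross product of rows that share a key."""
    out, i, j = [], 0, 0
    while i < len(bucket_a) and j < len(bucket_b):
        ka, kb = bucket_a[i][key], bucket_b[j][key]
        if ka < kb:
            i += 1
        elif ka > kb:
            j += 1
        else:
            j0 = j
            while j < len(bucket_b) and bucket_b[j][key] == ka:
                out.append({**bucket_a[i], **bucket_b[j]})
                j += 1
            i += 1
            # Rewind j if the next left row repeats the same key.
            if i < len(bucket_a) and bucket_a[i][key] == ka:
                j = j0
    return out
```

The skew-adjusted variant described above would additionally split unusually large buckets so that a single hot key does not overload one worker; that refinement is omitted here.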
