361 |
Near Real-time Detection of Masquerade attacks in Web applications : catching imposters using their browsing behavior
Panopoulos, Vasileios January 2016 (has links)
This thesis details research on Machine Learning techniques that are central to performing Anomaly and Masquerade attack detection. The main focus is put on Web Applications because of their immense popularity and ubiquity. This popularity has led to an increase in attacks, making them the most targeted entry point for violating a system. Specifically, a group of attacks, ranging from identity theft using social engineering to cross-site scripting attacks, aims at exploiting users and masquerading as them. Masquerade attacks are even harder to detect due to their resemblance to normal sessions, thus posing an additional burden. Concerning prevention, the diversity and complexity of those systems make it harder to define reliable protection mechanisms. Additionally, new and emerging attack patterns make manually configured and Signature-based systems less effective, given the need to continuously update them with new rules and signatures. This leads to a situation where they eventually become obsolete if left unmanaged. Finally, the huge amount of traffic makes manual inspection of attacks and False alarms an impossible task. To tackle those issues, Anomaly Detection systems are proposed, using powerful and proven Machine Learning algorithms. Gravitating around the context of Anomaly Detection and Machine Learning, this thesis initially establishes several basic definitions, such as user behavior, normality, and normal and anomalous behavior. Those definitions aim to set the context that the proposed method targets and to state the theoretical premises. To ease the transition into the implementation phase, the underlying methodology is also explained in detail. Naturally, the implementation is also presented, where, starting from server logs, a method is described for pre-processing the data into a form suitable for classification.
This preprocessing phase combines several statistical analyses and normalization methods (Univariate Selection, ANOVA) to clean and transform the given logs and perform feature selection. Furthermore, given that the proposed detection method is based on the source and request URLs, a method of aggregation is proposed to mitigate user-privacy and classifier over-fitting issues. Subsequently, two popular classification algorithms (Multinomial Naive Bayes and Support Vector Machines) have been tested and compared to determine which one performs better in the given situations. Each of the implementation steps (pre-processing and classification) requires a number of different parameters to be set, and thus a method called Hyper-parameter optimization is defined, which searches for the parameters that improve the classification results. Moreover, the training and testing methodology is outlined alongside the experimental setup. The Hyper-parameter optimization and training phases are the most computationally intensive steps, especially given a large number of samples/users. To overcome this obstacle, a scaling methodology is also defined and evaluated to demonstrate its ability to handle larger data sets. To complete this framework, several other options have also been evaluated and compared against each other to challenge the method and implementation decisions. Examples of this are the "Transitions-vs-Pages" dilemma, the block restriction effect, the DR usefulness and the classification parameters optimization. Moreover, a Survivability Analysis is performed to demonstrate how the produced alarms could be correlated, affecting the resulting detection rates and interval times. The implementation of the proposed detection method and the outlined experimental setup lead to interesting results. The data set used to produce this evaluation is also provided online to promote further investigation and research in this field.
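The Multinomial Naive Bayes classifier mentioned above can be sketched in a few lines over per-user request-URL counts. This is a minimal illustration in plain Python, not the thesis's implementation; the feature names and user labels in the usage example are hypothetical.

```python
import math
from collections import defaultdict

def train_mnb(samples, alpha=1.0):
    """Fit a multinomial Naive Bayes model.

    samples: list of (label, feature_counts) pairs, where feature_counts
    maps a feature (e.g. a request-URL index) to its count in one session.
    Returns per-class log-priors and Laplace-smoothed log-likelihoods.
    """
    class_docs = defaultdict(int)
    class_feature = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for label, counts in samples:
        class_docs[label] += 1
        for f, c in counts.items():
            class_feature[label][f] += c
            vocab.add(f)
    n_docs = sum(class_docs.values())
    model = {}
    for label in class_docs:
        total = sum(class_feature[label].values())
        log_prior = math.log(class_docs[label] / n_docs)
        log_like = {
            f: math.log((class_feature[label][f] + alpha)
                        / (total + alpha * len(vocab)))
            for f in vocab
        }
        model[label] = (log_prior, log_like)
    return model

def predict_mnb(model, counts):
    """Return the label maximising the posterior log-probability."""
    def score(entry):
        log_prior, log_like = entry
        return log_prior + sum(c * log_like[f]
                               for f, c in counts.items() if f in log_like)
    return max(model, key=lambda lbl: score(model[lbl]))
```

For example, a model trained on `("alice", {"login": 2, "inbox": 5})` and `("mallory", {"admin": 4, "export": 3})` attributes a new session `{"inbox": 3}` to "alice". The smoothing parameter `alpha` is exactly the kind of value the thesis's hyper-parameter optimization step would search over.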
|
362 |
Usability of a subscription-based book service : How can a web application for a subscription-based book service be designed to reduce the number of clicks and improve subjective user experiences for given tasks in the aspect of usability?
Amolegbe, Elvira, Berggren, Jakob, Burman, Frans, Falk, Axel, Folkunger, Philip, Nyreröd Granath, Lars, Waller, Alice, Wallhem, Max, Åström, Daniel January 2022 (has links)
To increase the number of customers buying from a web application, theory states that the application needs to be easy to use. In this study, navigability and usability were investigated and tested on a web-based book-subscription service. The purpose of this study is to reach new conclusions regarding what is important for navigability and usability in subscription services that target students. A web application was created for an online book club subscription service for students at Linköping University. It was created in two iterations, where each iteration was tested on 12 students, which according to theory is a sufficient number of testers. The first iteration was designed based on theory and the second was redesigned based on input from the test users. Counting the number of clicks taken by the user was the main method used for evaluating navigability, while Concurrent Thinking Aloud, System Usability Scale and Retrospective Probing Procedure were the methods used to measure usability. The most considerable changes made between the two iterations were the updating of the filtering function on the find-book-club page and the implementation of a search function on the same page. This was also the page where the largest improvement in the number of clicks was made. In line with the theory, usability improved in the tests after changes that test users had commented on were made. The number of clicks decreased in the second iteration in most, but not all, categories. This result does not imply that the user experience failed to improve, since some of the functionality requested by users required additional clicks.
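The System Usability Scale mentioned above has a standard scoring rule: each odd-numbered item contributes (response − 1), each even-numbered item contributes (5 − response), and the 0–40 sum is multiplied by 2.5 to give a 0–100 score. A small sketch (the responses in the usage example are hypothetical, not from the study):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    Odd-numbered items (positively worded) contribute response - 1;
    even-numbered items (negatively worded) contribute 5 - response.
    The raw 0-40 sum is scaled by 2.5 onto a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    raw = sum((r - 1) if i % 2 == 0 else (5 - r)
              for i, r in enumerate(responses))  # i == 0 is item 1 (odd)
    return raw * 2.5
```

A respondent answering 5 to every odd item and 1 to every even item scores 100; neutral answers (all 3s) score 50.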
|
363 |
Exploring the intersection of design, reflection and sustainable food shopping practices : The case of the EcoPanel
Bohné, Ulrica January 2016 (has links)
Food production has been shown to have considerable negative impacts on the environment. A means to reduce this is to choose organic products when shopping for food. Through the case of the EcoPanel, a web application prototype that visualises the organic proportion of the household’s food shopping, the thesis explores the intersection between design, reflection and sustainable food shopping practices. In order to contextualise the role of the EcoPanel, the text discusses the concept of food shopping practice, both from the perspective of social practice theory (SPT), and the more focused food choice perspective. The studies show that it is fundamental to understand the complexity of choosing food, and the habitual aspect of practice, in order to understand the role of reflection in food shopping practice, and consequently the role of a tool for reflective decision-making, like the EcoPanel. We have used a research through design approach to develop the EcoPanel prototype. In an iterative process we probed how the EcoPanel could be designed to be as relevant and accessible for the users as possible. Essential in the process were the iterative user feedback sessions. The way in which the users answered the questions from the sessions formed the guiding principles for the development of the design. A central question in the thesis is to explore in what ways the users’ access to their individual sustainable grocery data provided by the EcoPanel affects their food shopping practices. The studies include monitoring sixty-five users of the EcoPanel over five months, a survey regarding aspects of lifestyle and attitudes to food, and interviews with ten of the users. The long-term study shows an increased organic purchase level (17%) for the EcoPanel users in comparison to the reference group. We also see that when the users receive feedback on their organic food purchases through the EcoPanel, they can make more reflective decisions. 
This is shown to be highly relevant and creates meaning for the users in several different ways. From this result, in combination with the result of the long-term study, we can conclude that the EcoPanel supports more sustainable food practices. The last question in the thesis is to understand how SPT can be useful for design practice. SPT offers a view that goes beyond the traditional interaction perspective and points to the importance of approaching complex issues, such as sustainability challenges, with an awareness that also includes social and cultural aspects of the context. Besides being pertinent to sustainability issues, this view also provides value to designers in their emerging roles of dealing with more socially embedded concerns, such as social innovation and design for public policies.
|
364 |
Compiling SDL Multiplayer Games to WebAssembly with Emscripten for use in Web Browsers / Kompilera SDL multiplayer spel till WebAssembly med Emscripten för användning i webbläsare
Falkmer, Oscar, Norrman, Martin January 2022 (has links)
Collecting and deploying online games made by inexperienced developers can be hard. This is something KTH (Royal Institute of Technology) has a problem with in a course involving SDL and SDL_Net programming. A good solution to this problem is to host these games on a website. An easy-to-use way of compiling and deploying multiplayer games and game servers written in C as web applications and web servers was needed, specifically for games written in C using the SDL and SDL_Net libraries. The Emscripten compiler toolchain was used to compile game and server code from C to WebAssembly, which could then be used through the generated JavaScript functions. Communication between the client and the server was handled by WebSockets. As far as possible, Emscripten-specific functions were hidden behind C libraries emulating the format of SDL_Net. The finished solution consisted of two new libraries, one for the server and the other for the client, which successfully emulated the TCP parts of the SDL_Net library. The browser's event scheduler requires applications to be able to return control back to it, which meant that the game code's endlessly looping functions had to be rewritten to enable rendering in the browser.
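The main-loop rewrite described above follows the pattern Emscripten imposes with its main-loop API: a blocking `while` loop is inverted into a step function that the browser's scheduler calls once per frame. The pattern can be sketched generically; Python is used here only for brevity, and the `Game` class and callback queue are stand-ins for the real SDL game and the browser event loop, not code from the thesis.

```python
class Game:
    """Hypothetical game state advanced one frame at a time."""
    def __init__(self, frames_to_run):
        self.frames_to_run = frames_to_run
        self.frames_rendered = 0

    def step(self):
        """One iteration of what used to be the body of `while (running)`."""
        self.frames_rendered += 1
        return self.frames_rendered < self.frames_to_run  # keep running?

def run_blocking(game):
    """Original style: loops until done, never yields to the scheduler."""
    while game.step():
        pass

def run_scheduled(game, schedule):
    """Inverted style: registers a per-frame callback and returns control
    immediately, the way the browser requires."""
    def tick():
        if game.step():
            schedule(tick)  # re-register for the next frame
    schedule(tick)

def drain(queue):
    """A trivial stand-in for the browser's event loop: a FIFO of callbacks."""
    while queue:
        queue.pop(0)()
```

Both styles render the same frames, but only `run_scheduled` returns control between frames, which is what makes rendering in the browser possible.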
|
365 |
Diseño de un sistema de contenedores inteligentes para mejorar la recolección de residuos sólidos domiciliarios en el distrito de San Martin de Porres / Design of a smart container system to improve household solid waste collection in the district of San Martin de Porres
Vera Villanueva, Carlos Alberto 17 April 2021 (has links)
This degree work proposes a network of smart containers located within the San Diego Vipol urbanization and the design of a prototype able to monitor the garbage level online and determine the location of each container. For this, five intelligent nodes were incorporated: the end node, the Gateway, the network server, the web application and the client that supervises the physical and geographic variables of each container via the web. The MQTT communication protocol is used for communication between the various nodes. The end node processes the data to be sent over a star-topology wireless network through the LoRa RFM95 915 MHz transceiver. This in turn communicates with the Gateway node, which sends the data over the Internet through a Wi-Fi connection. All data is received and stored on the network server, for which a virtual private server (VPS) on the Amazon Web Services (AWS) platform was used. Likewise, for hosting the web application and storing the database, a web hosting panel called Vesta CP is installed, a powerful and lightweight panel offering services such as WEB, DATABASE, MAIL, DNS, FTP and FIREWALL, which allow the monitoring web application to be configured. The web application displays all the values measured at the end node at 10-second intervals. / Thesis
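The per-container monitoring described above boils down to converting an ultrasonic distance reading into a fill percentage and publishing it under a per-container MQTT topic. A minimal sketch; the container depth, topic layout and payload shape are assumptions for illustration, not taken from the thesis:

```python
import json

CONTAINER_DEPTH_CM = 100.0  # assumed distance from sensor to container floor

def fill_percent(distance_cm, depth_cm=CONTAINER_DEPTH_CM):
    """Convert an ultrasonic distance-to-surface reading into a fill level.

    The sensor looks down from the lid, so a small distance means the
    container is nearly full. The result is clamped to the 0-100 range.
    """
    level = (depth_cm - distance_cm) / depth_cm * 100.0
    return max(0.0, min(100.0, level))

def make_reading(container_id, distance_cm, lat, lon):
    """Build an MQTT-style (topic, payload) pair for one container."""
    topic = f"containers/{container_id}/status"
    payload = json.dumps({
        "fill_percent": round(fill_percent(distance_cm), 1),
        "lat": lat,
        "lon": lon,
    })
    return topic, payload
```

The end node would compute such a pair every 10 seconds and hand it to its MQTT client for publishing toward the Gateway.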
|
366 |
Konvertera klasskomponenter till funktioner med React-Hooks — Riktlinjer för utvecklare / Converting Class Components to Functions Using React-Hooks — Developer Guidelines
Qing, He, Dong, Wang January 2022 (has links)
The maintainability of a system deteriorates with the wide use of class-based components. Researchers have dedicated their work to addressing the common problems of Class Components: huge components, duplicated logic and complex patterns. Recently, replacing Class Components by converting them to Functions has been proposed as a solution to eradicate such problems, but a practical paradigm to guide such conversion projects is still lacking. Furthermore, adoption of the solution depends on the code-quality improvements measured on the newly generated functional codebase in a practical environment. Based on their Web platform, the Glodon company provided a codebase conversion from Class to Functions with the support of React-Hooks techniques. In this research, a design science study was employed as the main methodology to build a set of guidelines for conducting the conversion project. By applying the guidelines and observing the data output from the Glodon project, the researchers studied the feasibility and effectiveness of the conversion from Class to Functions in a practical environment. Other methods were also used to collect data and understand the research questions: an interview was used to identify the pain points of the class-based code, and code review and open discussion with the developers were conducted to evaluate the effectiveness of the code conversion. Finally, the examples and considerations produced in the project were analyzed and summarized to optimize the conversion guideline, which was evaluated through a survey and will help other developers in their future conversion practice.
|
367 |
Informační systém pro evidenci a tisk zásilek / Delivery Information System
Petřík, Radek January 2008 (has links)
The goal of this work was to create a web application easing shippers' and carriers' work with consignments. The purpose of the application is keeping records of packages combined with label printing. An overview of present-day web application development techniques is provided first, and recent package delivery systems are then discussed. The application specification is the next topic. The implementation process using C# and ASP.NET and the results of this work are analyzed in the last chapters.
|
368 |
Automatický design webových aplikací / Automatic Design of Web Applications
Žilka, Radek January 2009 (has links)
This project is focused on the design and implementation of software that generates web information system source code in ASP.NET 2.0. The information system requires an SQL database. The source code generated by this application serves as the basis for further development in Visual Studio 2008. The project also surveys similar generators.
|
369 |
Převody mezi regulárními gramatikami, regulárními výrazy a konečnými automaty / Mutual Transformations of Regular Grammars, Regular Expressions and Finite Automata
Podhorský, Michal Unknown Date (has links)
This work describes models of formal language theory: finite automata, regular grammars and regular expressions. A web application that converts among these models is implemented.
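One of the transformations such a tool performs, converting a right-linear (regular) grammar to an equivalent finite automaton, follows a standard textbook construction: each nonterminal becomes a state, a production A -> aB becomes a transition, A -> a leads to a fresh final state, and A -> ε makes A accepting. A sketch (the grammar in the usage example is made up, not from the thesis):

```python
def grammar_to_nfa(productions, start):
    """Convert a right-linear grammar to an NFA.

    productions: list of (head, body) pairs where body is "aB" (terminal
    then nonterminal), "a" (terminal only) or "" (epsilon).
    Returns (transitions, start, accepting); transitions maps
    (state, symbol) -> set of successor states. "FINAL" is a fresh state.
    """
    nonterminals = {head for head, _ in productions}
    trans = {}
    accepting = {"FINAL"}
    for head, body in productions:
        if body == "":                        # A -> epsilon: A accepts
            accepting.add(head)
        elif len(body) == 2 and body[1] in nonterminals:
            trans.setdefault((head, body[0]), set()).add(body[1])
        elif len(body) == 1:                  # A -> a: go to the final state
            trans.setdefault((head, body[0]), set()).add("FINAL")
        else:
            raise ValueError(f"not right-linear: {head} -> {body}")
    return trans, start, accepting

def nfa_accepts(nfa, word):
    """Simulate the NFA by tracking the set of currently reachable states."""
    trans, start, accepting = nfa
    states = {start}
    for ch in word:
        states = set().union(*(trans.get((s, ch), set()) for s in states))
        if not states:
            return False
    return bool(states & accepting)
```

For the grammar S -> aS | bA, A -> b (language a*bb), the resulting automaton accepts "bb" and "abb" but rejects "ab".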
|
370 |
REFLEKTOR - webový systém pro podporu výuky / REFLEKTOR - Web System for Teaching Support
Sitko, Radek January 2007 (has links)
This thesis deals with the design and implementation of a web application supporting interactive tuition in programming within IZP - Course of Programming Essentials. It enables students to submit their own solutions to a specific task and allows them to review tasks completed by other students. The web pages are created with respect to maximum accessibility of their content and functionality; their layout and controls are designed to be accessible both by alternative browsers and by people with specific needs. The application design is created using the UML modelling language. The application itself is implemented using the PHP scripting language together with XHTML, cascading style sheets (CSS) and the MySQL database system. The thesis also covers deployment of the system on the FIT websites and its utilisation by students.
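The peer-review workflow described above needs each submission to be assigned to other students for review, never to its own author. One common scheme is a round-robin rotation; the sketch below is a generic illustration of that scheme, not the thesis's actual assignment logic, and the student names and reviewer count in the example are hypothetical.

```python
def assign_reviews(students, reviews_per_student):
    """Assign each student's submission to other students round-robin.

    Student i reviews the submissions of students i+1 .. i+k (mod n),
    so nobody reviews their own work and every submission receives
    exactly `reviews_per_student` reviews.
    """
    n = len(students)
    if not 0 < reviews_per_student < n:
        raise ValueError("need 0 < reviews_per_student < number of students")
    assignments = {s: [] for s in students}
    for i, reviewer in enumerate(students):
        for k in range(1, reviews_per_student + 1):
            author = students[(i + k) % n]
            assignments[reviewer].append(author)
    return assignments
```

With three students and two reviews each, every submission is reviewed by both of the other students, which is the smallest balanced configuration.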
|