  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Decentralized Identity Management for a Maritime Digital Infrastructure : With focus on usability and data integrity

Fleming, Theodor January 2019 (has links)
When the Internet was created it did not include any protocol for identifying the person behind the computer. Instead, identification has primarily been established by trusting a third party. The rise of Distributed Ledger Technology, however, has made it possible to authenticate a digital identity and build trust without the need for a third party. The Swedish Maritime Administration is currently validating a new maritime digital infrastructure for the maritime transportation industry, with the goal of reducing the number of accidents, fuel consumption and voyage costs. The actors involved have their identities stored in a central registry that relies on trust in a third party. This thesis investigates how a conversion from the centralized identity registry to a decentralized one affects usability and the risk of compromised data integrity. This is done by implementing a Proof of Concept of a decentralized identity registry that replaces the current centralized registry, and comparing the two. The decentralized Proof of Concept's risk of compromised data integrity is 95.1% lower than that of the centralized registry, but this comes at a cost of 53% in efficiency.
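The tamper-evidence that a distributed ledger contributes to data integrity can be illustrated with a minimal hash-chained registry. This is a hedged sketch of the general technique, not the thesis's actual Proof of Concept; the record fields and vessel identifiers are invented for illustration.

```python
import hashlib
import json

def chain_append(ledger, record):
    """Append an identity record, linking it to the hash of the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True).encode()
    entry = {"record": record, "prev": prev_hash,
             "hash": hashlib.sha256(body).hexdigest()}
    ledger.append(entry)
    return entry

def chain_is_intact(ledger):
    """Recompute every link; any modified record breaks the chain."""
    prev_hash = "0" * 64
    for entry in ledger:
        body = json.dumps({"record": entry["record"], "prev": prev_hash},
                          sort_keys=True).encode()
        if entry["hash"] != hashlib.sha256(body).hexdigest() or entry["prev"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
chain_append(ledger, {"id": "vessel:001", "key": "pubkey-A"})  # hypothetical record
chain_append(ledger, {"id": "vessel:002", "key": "pubkey-B"})
assert chain_is_intact(ledger)
ledger[0]["record"]["key"] = "attacker-key"  # tampering is detected
assert not chain_is_intact(ledger)
```

A real decentralized registry additionally replicates the ledger across nodes and reaches consensus on appends, which is where the efficiency cost measured in the thesis arises.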
12

Denial of service : prevention, modelling and detection

Smith, Jason January 2007 (has links)
This research investigates the denial of service problem, in the context of services provided over a network, and contributes to improved techniques for modelling, detecting, and preventing denial of service attacks against these services. While the majority of currently employed denial of service attacks aim to pre-emptively consume the network bandwidth of victims, a significant amount of research effort is already being directed at this problem. This research is instead concerned with addressing the inevitable migration of denial of service attacks up the protocol stack to the application layer. Of particular interest is the denial of service resistance of key establishment protocols (security protocols that enable an initiator and responder to mutually authenticate and establish cryptographic keys for a secure communications channel), which, owing to the computationally intensive activities they perform, are particularly vulnerable to attack. Given the preponderance of wireless networking technologies, this research has also investigated denial of service and its detection in IEEE 802.11 standards based networks. Specific outcomes of this research include:
- investigation of the modelling and application of techniques to improve the denial of service resistance of key establishment protocols;
- a proposal for enhancements to an existing modelling framework to accommodate coordinated attackers;
- design of a new denial of service resistant key establishment protocol for securing signalling messages in next generation, mobile IPv6 networks;
- a comprehensive survey of denial of service attacks in IEEE 802.11 wireless networks;
- discovery of a significant denial of service vulnerability in the clear channel assessment procedure implemented by the medium access control layer of IEEE 802.11 compliant devices; and
- design of a novel, specification-based intrusion detection system for detecting denial of service attacks in IEEE 802.11 wireless networks.
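One standard technique for making key establishment protocols resistant to resource-exhaustion denial of service is the client puzzle: the responder stays cheap and stateless until the initiator has provably spent computation. The sketch below illustrates the general idea with a hash-preimage puzzle; it is not taken from the thesis's protocol, and the difficulty parameter is an arbitrary illustration.

```python
import hashlib
import os

DIFFICULTY = 12  # required leading zero bits; tunes the initiator's work factor

def issue_puzzle():
    """Responder side: a fresh random nonce is cheap to generate."""
    return os.urandom(16)

def solve_puzzle(nonce):
    """Initiator side: brute-force a counter until the hash has enough leading zero bits."""
    counter = 0
    while True:
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return counter
        counter += 1

def verify_solution(nonce, counter):
    """Responder verifies with a single hash before starting expensive key agreement."""
    digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

nonce = issue_puzzle()
solution = solve_puzzle(nonce)       # ~2^12 hashes of work for the initiator
assert verify_solution(nonce, solution)  # one hash of work for the responder
```

The asymmetry (thousands of hashes to solve, one to verify) is what lets a responder shed floods of bogus key-establishment requests cheaply.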
13

A framework for high speed lexical classification of malicious URLs

Egan, Shaun Peter January 2014 (has links)
Phishing attacks employ social engineering to target end-users with the goal of stealing identifying or sensitive information, which is then used in activities such as identity theft or financial fraud. During a phishing campaign, attackers distribute URLs which, along with false information, point to fraudulent resources in an attempt to deceive users into requesting the resource. These URLs are obscured through several techniques that make automated detection difficult. Current methods for detecting malicious URLs face multiple problems which attackers use to their advantage, including: the time required to react to new attacks; shifts in trends in URL obfuscation; and usability problems caused by the latency incurred by the lookups these approaches require. A new method of identifying malicious URLs using Artificial Neural Networks (ANNs) has been shown to be effective by several authors. The simple method of classification performed by ANNs results in very high classification speeds with little impact on usability. Samples used for the training, validation and testing of these ANNs are gathered from PhishTank and the Open Directory. Words selected from the different sections of the samples are used to create a 'Bag-of-Words' (BOW), which is used as a binary input vector indicating the presence of a word in a given sample. Twenty additional features which measure lexical attributes of the sample are used to increase classification accuracy. A framework capable of generating these classifiers in an automated fashion is implemented. The classifiers are automatically stored on a remote update distribution service built to supply updates to classifier implementations. An example browser plugin is created that uses ANNs provided by this service; it can both classify URLs requested by a user in real time and block these requests.
The framework is tested in terms of training time and classification accuracy, along with classification speed and the effectiveness of compression algorithms on the data required to distribute updates. It is concluded that these ANNs can be generated frequently and are small enough to distribute easily. It is also shown that classifications are made at high speed with high accuracy, resulting in little impact on usability.
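The feature-extraction step described above, a binary bag-of-words plus lexical attributes, can be sketched as follows. The vocabulary and the particular lexical features here are invented for illustration; they are not the thesis's actual word list or twenty-feature set.

```python
import re
from urllib.parse import urlparse

# Illustrative vocabulary; the thesis derives its word list from real samples.
VOCAB = ["login", "secure", "account", "update", "verify", "bank", "paypal"]

def url_features(url):
    """Binary bag-of-words vector followed by a few lexical features."""
    parsed = urlparse(url)
    words = set(re.split(r"[^a-z0-9]+", url.lower()))
    bow = [1 if w in words else 0 for w in VOCAB]
    lexical = [
        len(url),                         # total URL length
        url.count("."),                   # number of dots
        url.count("-"),                   # hyphens, common in obfuscated hosts
        sum(c.isdigit() for c in url),    # digit count
        len(parsed.path.split("/")) - 1,  # path depth
    ]
    return bow + lexical

vec = url_features("http://secure-login.example.com/verify/account")
# First 7 entries mark vocabulary hits; the rest are lexical measurements.
print(vec)
```

A vector like this becomes the input layer of the ANN; because extraction needs no network lookups, classification stays fast enough for in-browser use.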
14

Implementation av Self-Sovereign Identity : Applikationsutveckling i React Native och Aries Cloud Agent Python / Implementation of Self-Sovereign Identity : Application development in React Native and Aries Cloud Agent Python

Deubler, Oskar, Stenqvist, Oscar January 2024 (has links)
In today's centralized identity systems, large companies control and store user data in a centralized manner, which poses a risk to users' privacy and personal data. Self-sovereign identity (SSI) decentralizes digital identity management and gives individuals full control over which personal data is shared and with whom. Furthermore, SSI enables verifiable credentials, allowing companies, authorities and individuals to build networks of trust among themselves. This thesis discusses SSI, as well as a project where SSI is practically applied in a mobile software application for the distribution of digital drink tickets. The goal of the project is to develop a prototype demonstrating how SSI can be applied in a general application that involves issuing a verifiable credential in the form of a drink ticket. To realize the project goal, an application has been developed in React Native, and several frameworks for SSI have been studied. Aries Cloud Agent Python (ACA-Py) has been integrated through a REST API to provide SSI functionality to the project. The project has resulted in a working mobile application that can issue and verify digital drink tickets, stored in the digital wallet application of the ticket holder. Minimal personal data is shared with the application, and only with the exclusive approval of the ticket holder. The result confirms the potential of SSI for decentralized identity management.
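The issue-and-verify flow of a verifiable credential can be sketched with a symmetric MAC standing in for the issuer's signature. Real SSI stacks such as ACA-Py use asymmetric keys, DIDs and standardized proof formats; the key, DID string and claim fields below are simplified assumptions for illustration.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # stand-in: real issuers sign with an asymmetric key

def issue_credential(holder_did, claim):
    """Issuer attaches a proof to a claim; the holder stores the result in a wallet."""
    payload = {"holder": holder_did, "claim": claim}
    body = json.dumps(payload, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {**payload, "proof": proof}

def verify_credential(credential):
    """Verifier checks the proof alone; no other personal data is needed."""
    body = json.dumps(
        {"holder": credential["holder"], "claim": credential["claim"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

ticket = issue_credential("did:example:holder1", {"type": "drink-ticket", "count": 1})
assert verify_credential(ticket)
ticket["claim"]["count"] = 99  # tampering invalidates the proof
assert not verify_credential(ticket)
```

The data-minimization point of the abstract shows up here too: verification consumes only the credential itself, not a profile held by a central provider.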
15

Improving internet usability - a framework for domain name policy evaluation.

Rowe, Joshua Luke, josh@email.nu January 2009 (has links)
A domain name is a unique alphanumeric designation that facilitates reference to sets of numbers which actually locate a particular computer on the Internet. Domain names are a fundamental part of the Internet's user interface. Improving the usability of the Internet depends upon effective domain name policy. This study is intended to contribute to improvement in Internet usability for the end users of domain names. Benefits of more usable domain names include: higher sales, customer satisfaction and productivity, and reduced support costs. Domain name policies worldwide vary considerably. Consequently, end users are inconvenienced by contradictory domain name policies, diminishing the predictability of an entity's domain name, and thus decreasing usability for end users. The study objective was to develop criteria with which policy makers can evaluate their domain name policies, in order to improve the usability of domain names for end users. The main research question posed was: What are the criteria for an effective domain name policy? The research methodology included a literature review, domain name policy examination and an ethnographic narrative. The literature review found existing research examining either domain names or usability in isolation. However, research examining the intersection of the two is scarce. The existing research describes domain names as part of the web user interface. In practical terms, this is about how people use domain names to access web sites, email addresses and other Internet resources. It was established that the predictability (and thus usability) of domain names relies on effective domain name policy. The non-standardised and widely delegated process of domain name policy development leads to unpredictable and inconsistent domain names. The narrative recollection presented the researcher's inside perspective on the domain name industry, with a focus on domain name usability. 
The researcher provided first-hand insights into the evolution of the industry and the policy development process, from Australian and international perspectives. To address the problem of poor domain name usability, a framework for domain name policy evaluation is proposed. The framework extends the current research that treats domain names as a user interface by proposing criteria which address usability concerns, allowing policy makers to critically assess domain name policies with end users in mind. Examples of the criteria include: understanding who a domain name's intended and unintended users are, and whether it is consistent with other domain names. The framework has the potential to set an international standard for the critical evaluation of domain name policy, and to become the basis for further research. This study was developed from the researcher's perspective as a participant in the domain name industry; a secondary lens regarding the usability of domain names was then applied. The study has only scratched the surface of how the research fields of domain names and usability may be considered together. The research methodology was primarily qualitative and interpretive. A quantitative study of domain name policies globally could provide further insight into areas including the differences in second level country code domain names and the language implications of domain names.
16

Digitalisering av individuell studieplan : Från PDF till en grund för ett digitalt system / Digitization of the individual study plan : From PDF to a foundation for a digital system

Esplund, Emil, Gosch, Elin January 2020 (has links)
This thesis aims to illustrate whether it is possible to digitize the individual study plan (ISP) at an institution at Uppsala University. Digitalization is considered one of the biggest trends affecting today's society and has been shown to contribute to increased accessibility and efficiency in public administration. Uppsala University currently needs a digital system for handling the ISP, which is managed as a paper document in PDF form and is perceived, according to previous studies, as an inferior planning and monitoring tool. The research work has been carried out based on the Design & Creation research strategy. The result of the research process is an information model, a process model at different levels, and a database. The models and database are based on previous research, existing documents, and empirical material from interviews. Previous research includes a tool for digitalization, problems regarding identifiers, and research on modeling. The existing documents comprise a previous study of the ISP, as well as legislation, guidelines, and general information regarding the ISP. The interviews were conducted with nine informants who use the ISP in their role at Uppsala University. The models and database have been evaluated in a criteria-based interview with a subject-matter expert and in a theory-based evaluation. The research results indicate that it is possible to digitize the ISP using the presented models and database, which can, with slight modifications, be used to build a digital interface and a complete system for the ISP.
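As a rough illustration of how an information model like the one described might be realized as a database, the sketch below builds a minimal relational schema for study plans. All table and column names are assumptions invented for illustration; they are not the thesis's actual models.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    programme TEXT NOT NULL
);
CREATE TABLE study_plan (
    id         INTEGER PRIMARY KEY,
    student_id INTEGER NOT NULL REFERENCES student(id),
    revised_at TEXT NOT NULL          -- plans are revised over time, not overwritten
);
CREATE TABLE planned_activity (
    id          INTEGER PRIMARY KEY,
    plan_id     INTEGER NOT NULL REFERENCES study_plan(id),
    description TEXT NOT NULL,
    credits     REAL NOT NULL,
    completed   INTEGER NOT NULL DEFAULT 0
);
""")
conn.execute("INSERT INTO student VALUES (1, 'Doctoral Student', 'Information Systems')")
conn.execute("INSERT INTO study_plan VALUES (1, 1, '2020-05-01')")
conn.execute("INSERT INTO planned_activity VALUES (1, 1, 'Thesis chapter 1', 7.5, 0)")

row = conn.execute(
    "SELECT s.name, a.description FROM student s "
    "JOIN study_plan p ON p.student_id = s.id "
    "JOIN planned_activity a ON a.plan_id = p.id"
).fetchone()
print(row)
```

A schema of this shape replaces the static PDF with queryable, revision-tracked data, which is what enables a digital interface for planning and follow-up.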
17

Autenticita a digitální informace / Authenticity and Digital Information

Cubr, Ladislav January 2017 (has links)
The dissertation focuses on the authenticity of digitized books in the context of their life cycle (production, preservation, access). First, the OAIS high-level conceptual framework for the lifecycle management of digital documents maintained by organizations is introduced. The current situation of digitized-book lifecycle management is then described, followed by an introduction to relevant conceptualizations of the authenticity of digital documents, which are analyzed and reviewed. A framework for the analysis of authenticity is then established based on these findings. This framework is used to identify authenticity requirements for digitized books and to develop a domain-specific conceptualization of the authenticity of digitized books, including a detailed analysis of the risks threatening authenticity during the lifecycle management of digitized books. Selected topics of this conceptualization are then the source for the next step, which is to develop a recommended practice for maintaining the authenticity of digitized books. This practice is further specified for one partial solution to the problem of maintaining the authenticity of digital documents throughout their life cycle: a persistent identification system.
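One concrete building block of any practice for maintaining authenticity is fixity checking: recording a cryptographic checksum when a digitized object is ingested and recomputing it on access or audit. The sketch below shows this general technique, as commonly used in digital preservation; it is an illustration, not the dissertation's specific recommendation.

```python
import hashlib

def fixity_record(content: bytes) -> dict:
    """At ingest: store a checksum alongside the digitized object's identifier."""
    return {"algorithm": "sha256", "digest": hashlib.sha256(content).hexdigest()}

def fixity_check(content: bytes, record: dict) -> bool:
    """On access or audit: recompute and compare, detecting silent alteration."""
    return hashlib.sha256(content).hexdigest() == record["digest"]

page_scan = b"...page image bytes..."   # stand-in for a digitized book page
record = fixity_record(page_scan)
assert fixity_check(page_scan, record)
assert not fixity_check(page_scan + b"altered", record)
```

A persistent identification system complements this: the identifier guarantees you are checking the same object over time, and the fixity record guarantees its content has not changed.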
18

Designing a geodetic research data management system for the Hartebeesthoek radio astronomy observatory

Coetzer, Glenda Lorraine 11 1900 (has links)
The radio astronomy and space geodesy scientific instrumentation of the Hartebeesthoek Radio Astronomy Observatory (HartRAO) in Gauteng, South Africa, generates large volumes of data. Additional large data volumes will be generated by new geodesy instruments that are currently under construction and implementation, including a lunar laser ranging (LLR) system, seismic and meteorological systems, and a Very Long Baseline Interferometry (VLBI) global observing system (VGOS) radio telescope. The existing HartRAO data management and storage system is outdated, incompatible and has limited storage capacity, which necessitates the design of a new geodetic research data management system (GRDMS). The focus of this dissertation is on providing a contextual framework for the design of the new system, including criteria, characteristics, components, an infrastructure architectural model, and data structuring and organisation. An exploratory research methodology and qualitative research techniques were applied. Results from the interviews conducted and the literature consulted indicate a gap in the literature regarding the design of a data management system specifically for geodetic data generated by HartRAO instrumentation, motivating the development of a conceptual framework for the design of a new GRDMS. The results align with the research questions and objectives set for this study. / Information Science / M.A. (Information Science)
19

EXPLORING GRAPH NEURAL NETWORKS FOR CLUSTERING AND CLASSIFICATION

Fattah Muhammad Tahabi (14160375) 03 February 2023 (has links)
Graph Neural Networks (GNNs) have become popular and prominent deep learning techniques for analyzing structural graph data, owing to their ability to solve complex real-world problems. Because graphs provide an efficient approach to representing abstract concepts, modern research overcomes a limitation of classical graph theory, which requires prior knowledge of the graph structure before traditional algorithms can be employed. GNNs, an impressive framework for representation learning on graphs, have already produced many state-of-the-art techniques for node classification, link prediction, and graph classification tasks. GNNs can learn meaningful representations of graphs by incorporating topological structure, node attributes, and neighborhood aggregation to solve supervised, semi-supervised, and unsupervised graph-based problems. In this study, the usefulness of GNNs has been analyzed primarily from two aspects: clustering and classification. We focus on these two techniques, as they are the most popular strategies in data mining for discerning collected data and employing predictive analysis.
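The neighborhood aggregation at the core of most GNN layers can be sketched in a few lines: each node's representation is updated from its own and its neighbors' features. This toy version uses a plain mean and omits the learned weight matrices and nonlinearities of a real GNN layer; the graph and features are invented for illustration.

```python
def aggregate(features, adjacency):
    """One round of mean-neighborhood aggregation over node feature vectors."""
    new_features = {}
    for node, feat in features.items():
        # Include the node's own features alongside its neighbors' (self-loop).
        neighborhood = [feat] + [features[n] for n in adjacency[node]]
        new_features[node] = [
            sum(vals) / len(neighborhood) for vals in zip(*neighborhood)
        ]
    return new_features

# A tiny path graph: a - b - c, with 2-dimensional node features.
features = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
adjacency = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(aggregate(features, adjacency))
```

Stacking several such rounds, each followed by a learned linear map and nonlinearity, lets information propagate across multiple hops, which is what GNN-based clustering and classification exploit.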
