21

The major security challenges to cloud computing.

Inam ul Haq, Muhammad January 2013 (has links)
Cloud computing is a computing model in which computing resources such as software, hardware and data are delivered as a service through a web browser or light-weight desktop machine over the internet (Wink, 2012). This model removes the need to maintain computing resources locally and hence cuts the cost of valuable resources (Moreno, Montero & Llorente, 2012). A typical cloud is affected by different security issues such as Temporary Denial of Service (TDOS) attacks, user identity theft, session hijacking and flashing attacks (Danish, 2011). The purpose of this study is to bridge the research gap between cloud security measures and the existing security threats. An investigation has been carried out into the existing cloud service models, security standards, currently adopted security measures and the degree of protection they offer. The theoretical study helped in revealing the security issues and their solutions, whereas the empirical study facilitated acknowledging the concerns of users and security analysts regarding those solution strategies. The empirical methods used in this research were interviews and questionnaires, to validate the theoretical findings and to grasp the innovativeness of practitioners dealing with cloud security. With the help of theoretical and empirical research, a two-factor mechanism is proposed that can rule out the possibility of flashing attacks from a remote location and can help in making the cloud components safer. The problem of junk traffic can be solved by configuring the routers to block junk data packets and extraneous queries at the cloud's outer border. This security measure is highly beneficial to cloud security because it offers a security mechanism at the outer boundary of a cloud. The evaluation showed that a DoS attack can become a serious problem if it affects the routers, and that effective isolation of router-to-router traffic will certainly diminish the threat of DoS attacks on routers. It is revealed that data packets that require a session state on the cloud server should be treated separately and with extra security measures, because conventional security measures cannot perform an in-depth analysis of every data packet. This problem can be solved by setting an extra bit in the IP header of those packets that require a state and have a session. Although this change would have to be adopted universally and would take time, it can provide a protocol-independent way to identify packets that require extra care. It will also assist firewalls in dropping packets that request a session state without the state bit being set. Cloud security analysts should consider that the interface and authentication layers should not be merged into a single layer, because doing so endangers the authentication system: the interface is already exposed to the world. The use of login-aiding devices along with secret keys can help in protecting cloud users. Moreover, a new cloud service model, "Dedicated cloud", is proposed in this research work to reinforce cloud security. It was discovered that the right blend of HTTPS and SSL protocols can resolve the problem of session hijacking. The client interface area should be protected by HTTPS, and secure cookies should be sent through an SSL link along with regular cookies. Disallowing multiple sessions and using trusted IP address lists will help even further.
Considerable care has been taken to ensure clarity, validity and trustworthiness in the research work and to present verifiable scientific knowledge in a reader-friendly manner. These security guidelines will enhance cloud security and make a cloud more responsive to security threats. / Program: Master's Programme in Informatics
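The state-bit proposal in this abstract can be made concrete with a short sketch. This is a hypothetical illustration, not code from the thesis: the `Packet` fields, the `state_bit` flag standing in for the proposed IP-header bit, and the `border_filter` rules are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """Simplified view of a parsed IP packet (hypothetical fields)."""
    src: str
    requests_session: bool  # payload asks the server to create session state
    state_bit: bool         # stands in for the proposed extra IP-header bit

def border_filter(packet: Packet, trusted_ips: set) -> bool:
    """Return True if the packet may pass the cloud's outer border.

    Miniature version of the two rules described in the abstract:
    - drop session-requesting packets whose state bit is not set
    - admit session traffic only from trusted IP addresses
    """
    if packet.requests_session and not packet.state_bit:
        return False  # session requested without the state bit: drop
    if packet.requests_session and packet.src not in trusted_ips:
        return False  # untrusted source asking for a session: drop
    return True

# Example: only the well-formed, trusted packet passes.
trusted = {"203.0.113.7"}
print(border_filter(Packet("203.0.113.7", True, True), trusted))    # True
print(border_filter(Packet("198.51.100.9", True, False), trusted))  # False
```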
22

A conceptualized data architecture framework for a South African banking service.

Mcwabeni-Pingo, Lulekwa Gretta. January 2014 (has links)
M. Tech. Business Information Systems / There is currently a high demand in the banking environment for real-time delivery of consistent, quality data for operational information. South African banks have a fast-growing use of, and demand for, quality data; however, they still experience data-management challenges. It is argued that these challenges may be mitigated by a sound data architecture framework. To this end, this study sought to address the data problem by theoretically conceptualizing a data architecture framework that may subsequently be used as a guide to improve data management. The purpose of the study was to explore and describe how data management challenges could be addressed through data architecture.
23

Development Of A Database Management System For Small And Medium Sized Enterprises

Safak, Cigdem 01 May 2005 (has links) (PDF)
Databases and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, control of data resources and management of data are very important. A Database Management System (DBMS) is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. The Windows Distributed Internet Applications (DNA) architecture describes a framework for integrating software technologies in a combined web and client-server model of computing. This thesis focuses on the development of a general database management system for small and medium sized manufacturing enterprises using Windows DNA technology. Defining, constructing and manipulating the institutional, commercial and operational data of the company is the main frame of the work. In addition, by integrating the "Optimization" and "Agent" system components, which were previously developed in the Middle East Technical University, Mechanical Engineering Department, Computer Integrated Manufacturing Laboratory (METUCIM), into the SME DBMS, a unified information system is developed. The "Optimization" system was developed to calculate optimum cutting conditions for turning and milling operations. The "Agent" system was implemented to control and send work orders to the available manufacturing cell in METUCIM. The components of these systems are redesigned to share a single database together with the newly developed "SME Information System" application program, in order to control data redundancy and to provide data sharing and data integrity.
24

Blockkedjor: Teknik för bevaring av dataintegritet i industriella nätverk = Blockchains: Technology for preserving data integrity in industrial networks

Hansson, Martin, Magnusson, Olof January 2018 (has links)
In a perfect world, all data is handled in a secure and verifiable manner to prevent information from being changed, stolen or blocked. Today's infrastructure is based on centralized systems shaped around a few actors such as governments, authorities and institutions. This model has not kept pace with digital development, which has led to ever more information being stored and managed online. The blockchain has great potential to decentralize how we store and manage data through efficiency, transparency and security. Blockchain technology has a variety of application areas, such as finance, medicine and logistics, but can be summed up as a technology whose algorithms create a distributed ledger of the stored information: a technique for getting data replicated, synchronized, shared and spread geographically over a number of places. The purpose of the blockchain is to serve as a ledger of previous transactions in such a way that every node on the network holds a copy of the chain, whereby all participants can verify with the others on the network that the chain has not been manipulated. This raises the questions: What does the landscape look like today? Which techniques are most appropriate for industrial systems? What is required to get started with blockchain technology in industrial networks?
The purpose of the study is to investigate the most important techniques in the area and to reason about the suitability of the different techniques, taking into consideration the characteristics relevant to industrial systems. An experiment is also conducted on how to use the blockchain technique in a simple scenario taken from industry. In summary, the blockchain is seen as an innovation with the potential to change how information is securely distributed in industrial systems. The result of this study is a survey and a demonstration that can lay the groundwork for decisions about how blockchains could be used in the future.
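The tamper-evidence property this abstract describes — every node holds a copy of the chain and can verify that it has not been manipulated — follows from each block embedding a hash of its predecessor. A minimal sketch, assuming SHA-256 hash-chained blocks rather than any specific platform examined in the thesis:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Append a new block linked to the current chain head."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify_chain(chain: list) -> bool:
    """Any edit to an earlier block breaks every later link."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["prev_hash"] != expected_prev or block["hash"] != block_hash(body):
            return False
    return True

chain: list = []
append_block(chain, "sensor reading 21.4 C")
append_block(chain, "sensor reading 21.6 C")
print(verify_chain(chain))      # True
chain[0]["data"] = "tampered"
print(verify_chain(chain))      # False: the manipulation is detected
```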
25

Prioritising data quality challenges in electronic healthcare systems in South Africa

Botha, Marna 10 1900 (has links)
Data quality is one of many challenges experienced in electronic healthcare (e-health) services in South Africa. The collection of substandard-quality data leads to inappropriate information for health and management purposes. Evidence of data quality challenges in e-health systems led to the purpose of this study, namely to prioritise the data quality challenges experienced by data users of e-health systems in South Africa. The study followed a sequential QUAL-quan mixed-method research design. After a literature review on the background of e-health and the current status of research on data quality challenges, a qualitative study was conducted to verify and extend the identified possible e-health data quality challenges, followed by a quantitative study to prioritise the challenges experienced by data users. Data users of e-health systems in South Africa served as the unit of analysis. The data collection process included interviews with four data quality experts to verify and extend the possible e-health data quality challenges identified from the literature. This was followed by a survey targeting 100 data users of e-health systems in South Africa, for which 82 responses were received. A prioritised list of e-health data quality challenges was compiled from the research results; this list can assist data users of e-health systems in South Africa to improve the quality of data in those systems. The most important e-health data quality challenge is a lack of training for e-health systems data users. The prioritised list allowed for evidence-based recommendations which can assist health institutions in South Africa to ensure future data quality in e-health systems. / Computing / M. Sc. (Computing)
26

Query authentication in data outsourcing and integration services

Chen, Qian 27 August 2015 (has links)
Owing to the explosive growth of data driven by e-commerce, social media, and mobile apps, data outsourcing and integration have become two popular Internet services. These services involve one or more data owners (DOs), many requesting clients, and a service provider (SP). The DOs outsource/synchronize their data to the SP, and the SP provides query services to the requesting clients on behalf of the DOs. However, as a third-party server, the SP might alter (leave out or forge) the outsourced/integrated data and query results, intentionally or not. To address this trust issue, the SP is expected to deliver its services in an authenticatable manner, so that the correctness of the service results can be verified by the clients. Unfortunately, existing work on query authentication cannot preserve the privacy of the data being queried. Furthermore, almost all previous studies assume only a single data source/owner, while data integration services usually combine data from multiple sources. In this dissertation, we take the first step to study the authentication of location-based queries with confidentiality and to investigate authenticated online data integration services. Cost models, security analysis, and experimental results consistently show the effectiveness and robustness of our proposed schemes under various system settings and query workloads.
27

Management of integrity constraints for multi-scale geospatial data = Gerenciamento de restrições de integridade para dados geoespaciais multi-escala

Longo, João Sávio Ceregatti, 1987- 22 August 2018 (has links)
Advisor: Claudia Maria Bauzer Medeiros / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Work on multi-scale issues concerning geospatial data presents countless challenges that have long been attacked by GIScience (Geographic Information Science) researchers. Indeed, a given real-world problem must often be studied at distinct scales in order to be solved. Another factor to be considered is the possibility of maintaining the history of changes at each scale. Moreover, one of the main goals of multi-scale environments is to guarantee the manipulation of information without any contradiction among the different representations. The concept of scale goes beyond issues of space, since it also applies, for instance, to time. These problems are analyzed in this dissertation, resulting in the following contributions: (a) the proposal of the DBV (Database Version) multi-scale model to handle data at multiple scales from a database perspective; (b) the specification of multi-scale integrity constraints; (c) the implementation of a platform supporting the model and constraints, tested with real multi-scale data / Master's degree in Computer Science
28

Authenticated query processing in the cloud

Xu, Cheng 19 February 2019 (has links)
With recent advances in data-as-a-service (DaaS) and cloud computing, outsourcing data to the cloud has become a common practice. In a typical scenario, the data owner (DO) outsources the data and delegates the query processing service to a service provider (SP). However, as the SP is often an untrusted third party, the integrity of the query results cannot be guaranteed and must therefore be authenticated. A typical approach to this issue is to let the SP provide a cryptographic proof, which the clients can use to verify the soundness and completeness of the query results. Despite extensive research on authenticated query processing for outsourced databases, existing techniques have considered only limited query types. They fail to address a variety of needs demanded by enterprise customers, such as supporting aggregate queries over set-valued data, enforcing fine-grained access control, and using distributed computing paradigms. In this dissertation, we take the first step to comprehensively investigate authenticated query processing in the cloud that fulfills the aforementioned requirements. Security analysis and performance evaluation show that the proposed solutions and techniques are robust and efficient under a wide range of system settings.
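The cryptographic proof referred to above is commonly realized with a Merkle hash tree: the DO signs the tree's root, and the SP returns each query result together with the sibling hashes needed to recompute that root. The sketch below shows the textbook binary-tree construction and the client-side inclusion check; it is a generic illustration, not necessarily this dissertation's exact scheme.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Build the root over hashed leaves, duplicating the last on odd levels."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(leaf: bytes, index: int, siblings: list, root: bytes) -> bool:
    """Client-side check: recompute the root from a result and its proof path."""
    node = h(leaf)
    for sib in siblings:
        # At each level, hash with the sibling in left/right order.
        node = h(sib + node) if index % 2 else h(node + sib)
        index //= 2
    return node == root

# The DO would sign and publish the root; the SP returns a result plus proof.
leaves = [b"row0", b"row1", b"row2", b"row3"]
root = merkle_root(leaves)
proof = [h(b"row3"), h(h(b"row0") + h(b"row1"))]  # siblings for leaf index 2
print(verify_inclusion(b"row2", 2, proof, root))    # True: result is genuine
print(verify_inclusion(b"forged", 2, proof, root))  # False: tampering detected
```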
29

AgroString: Exploring Distributed Ledger for Effective Data Management in Smart Agriculture

Tirumala Vangipuram, Lakshmi Sukrutha 07 1900 (has links)
Creating a robust supply chain is one of the factors behind sturdier agriculture; much agricultural produce is wasted while goods are stored and transported. The AgroString system (Section 3) collects real-time temperature and humidity data from an IoAT edge device and performs secure data storage and transmission through a distributed ledger. Research and studies are being conducted to forecast the availability of clear groundwater with the help of traditional techniques to meet worldwide food requirements; collecting quality data from various groundwater sites for storage and sharing for further analysis has become a significant challenge. Our second contribution, G-DaM (Section 4), increases the value and reliability of groundwater data by implementing a distributed ledger with a public blockchain, Ethereum, on the edge layer. Agriculture uses 65% of the world's freshwater for farming, half of which is wasted; the same holds for energy. We therefore design an insurance system called IncentiveChain (Section 5), which uses a distributed ledger on the edge to incentivize farmers whenever they use resources at the needed level while achieving similar or greater agricultural yield. In this research, we address some of the problems in data management and implement state-of-the-art distributed ledger designs and computing capabilities on the edge layer to show performance improvements in data handling for smart farming.
