21

A conceptualized data architecture framework for a South African banking service.

Mcwabeni-Pingo, Lulekwa Gretta. January 2014 (has links)
M. Tech. Business Information Systems / There is currently high demand in the banking environment for the real-time delivery of consistent, quality data for operational information. South African banks show fast-growing use of and demand for quality data, yet they still experience data management challenges. It is argued that these challenges may be mitigated by a sound data architecture framework. To this end, this study sought to address the data problem by theoretically conceptualizing a data architecture framework that may subsequently be used as a guide to improve data management. The purpose of the study was to explore and describe how data management challenges could be addressed through Data Architecture.
22

Development Of A Database Management System For Small And Medium Sized Enterprises

Safak, Cigdem 01 May 2005 (has links) (PDF)
Databases and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, control of data resources and management of data are very important. A Database Management System (DBMS) is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. The Windows Distributed Internet Applications (DNA) architecture describes a framework for bringing software technologies together in an integrated web and client-server model of computing. This thesis focuses on the development of a general database management system for small and medium sized manufacturing enterprises using Windows DNA technology. Defining, constructing and manipulating the institutional, commercial and operational data of the company forms the main frame of the work. In addition, by integrating the "Optimization" and "Agent" system components, previously developed in the Middle East Technical University, Mechanical Engineering Department, Computer Integrated Manufacturing Laboratory (METUCIM), into the SME DBMS, a unified information system is developed. The "Optimization" system was developed to calculate optimum cutting conditions for turning and milling operations. The "Agent" system was implemented to control and send work orders to the available manufacturing cell in METUCIM. The components of these systems are redesigned to share a single database together with the newly developed "SME Information System" application program in order to control data redundancy and to provide data sharing and data integrity.
23

Blockchains: A technology for preserving data integrity in industrial networks

Hansson, Martin, Magnusson, Olof January 2018 (has links)
In a perfect world, all data is handled in a secure and verifiable manner to prevent information from being changed, stolen or blocked. Today's infrastructure is based on centralized systems shaped around a few actors such as government organizations, authorities and institutions. This model has not kept pace with digital development, which has led to ever more information being stored and managed online. The blockchain has great potential to decentralize how we store and manage data through efficiency, transparency and security. Blockchain technology has a variety of application areas, such as finance, medicine and logistics, but can be summed up as a technology whose algorithms create a distributed ledger of the stored information: a technique for keeping data replicated, synchronized, shared and spread geographically over a number of places. The purpose of the blockchain is to serve as a ledger of previous transactions in such a way that every node on the network holds a copy of the chain, whereby all participants can verify with the others on the network that the chain has not been manipulated. This raises the questions: What does the landscape look like today? Which techniques are most appropriate for industrial systems? What is required to get started with blockchain technology in industrial networks?
The purpose of the study is to investigate the most important techniques in the area and to assess which are most suitable, taking into consideration the characteristics relevant to industrial systems. An experiment is also conducted on how to use blockchain technology in a simple scenario taken from industry. In summary, the blockchain is an innovation with the potential to change how information is securely distributed in industrial systems. The result of this study is a survey and a demonstration that can lay the groundwork for decisions about how blockchains could be used in the future.
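The ledger mechanism summarized above rests on hash-linking: each block commits to the hash of its predecessor, so any retroactive change to a block invalidates every later link, and each node holding a copy can recheck the whole chain. A minimal sketch of that idea in Python (illustrative only, not the thesis's implementation; consensus, signatures and replication are omitted):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash of a block's contents (which include the predecessor's hash)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Link a new block to the chain via the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)   # hash computed before the field exists
    chain.append(block)

def verify_chain(chain: list) -> bool:
    """Recompute every hash and every link; tampering breaks at least one."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # link to the predecessor is broken
    return True

# Build a small chain of sensor readings, then tamper with one block.
chain = []
for reading in ["23.4 C", "23.7 C", "24.1 C"]:
    append_block(chain, reading)
assert verify_chain(chain)
chain[1]["data"] = "99.9 C"                   # simulated manipulation
assert not verify_chain(chain)
```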
24

Prioritising data quality challenges in electronic healthcare systems in South Africa

Botha, Marna 10 1900 (has links)
Data quality is one of many challenges experienced in electronic healthcare (e-health) services in South Africa. The collection of substandard-quality data yields information that is unsuitable for health and management purposes. Evidence of data quality challenges in e-health systems motivated the purpose of this study: to prioritise the data quality challenges experienced by data users of e-health systems in South Africa. The study followed a sequential QUAL-quan mixed-methods research design. After a literature review on the background of e-health and the current status of research on data quality challenges, a qualitative study was conducted to verify and extend the possible e-health data quality challenges identified; a quantitative study to prioritise the challenges experienced by data users followed. Data users of e-health systems in South Africa served as the unit of analysis. Data collection included interviews with four data quality experts to verify and extend the challenges identified from the literature, followed by a survey targeting 100 data users of e-health systems in South Africa, which drew 82 responses. A prioritised list of e-health data quality challenges was compiled from the results; it can assist data users of e-health systems in South Africa to improve the quality of data in those systems. The most important e-health data quality challenge is a lack of training for e-health systems data users. The prioritised list allowed for evidence-based recommendations that can assist health institutions in South Africa to ensure future data quality in e-health systems. / Computing / M. Sc. (Computing)
25

Query authentication in data outsourcing and integration services

Chen, Qian 27 August 2015 (has links)
Owing to the explosive growth of data driven by e-commerce, social media, and mobile apps, data outsourcing and integration have become two popular Internet services. These services involve one or more data owners (DOs), many requesting clients, and a service provider (SP). The DOs outsource/synchronize their data to the SP, and the SP provides query services to the requesting clients on behalf of the DOs. However, as a third-party server, the SP might alter (leave out or forge) the outsourced/integrated data and query results, intentionally or not. To address this trust issue, the SP is expected to deliver its services in an authenticatable manner, so that the correctness of the service results can be verified by the clients. Unfortunately, existing work on query authentication cannot preserve the privacy of the data being queried. Furthermore, almost all previous studies assume only a single data source/owner, while data integration services usually combine data from multiple sources. In this dissertation, we take the first step toward studying the authentication of location-based queries with confidentiality and investigating authenticated online data integration services. Cost models, security analysis, and experimental results consistently show the effectiveness and robustness of our proposed schemes under various system settings and query workloads.
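The standard building block behind such authenticated queries is the Merkle hash tree: the DO publishes (and signs) the tree root, and the SP returns each result together with the sibling hashes needed to recompute that root. A toy sketch of the general technique, not the dissertation's specific scheme (which further addresses confidentiality and multiple owners):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Bottom-up root of a Merkle hash tree over the leaf hashes."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    """Sibling hashes from the leaf up to the root (the verification object)."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list, root: bytes) -> bool:
    """Client-side check: rebuild the root from the result and the proof."""
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

records = [b"alice:100", b"bob:250", b"carol:75", b"dave:310"]
root = merkle_root(records)                 # published/signed by the DO
proof = merkle_proof(records, 1)            # returned by the SP with the result
assert verify(b"bob:250", proof, root)      # genuine result is accepted
assert not verify(b"bob:999", proof, root)  # forged result is rejected
```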
26

Management of integrity constraints for multi-scale geospatial data = Gerenciamento de restrições de integridade para dados geoespaciais multi-escala

Longo, João Sávio Ceregatti, 1987- 22 August 2018 (has links)
Advisor: Claudia Maria Bauzer Medeiros / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Work on multi-scale issues concerning geospatial data presents countless challenges that have long been attacked by GIScience (Geographic Information Science) researchers. Indeed, a given real-world problem must often be studied at distinct scales in order to be solved. Another factor to be considered is the possibility of maintaining the history of changes at each scale. Moreover, one of the main goals of multi-scale environments is to guarantee the manipulation of information without any contradiction among the different representations. The concept of scale goes beyond issues of space, since it also applies, for instance, to time. These problems are analyzed in this thesis, resulting in the following contributions: (a) the proposal of the DBV (Database Version) multi-scale model to handle data at multiple scales from a database perspective; (b) the specification of multi-scale integrity constraints; (c) the implementation of a platform to support the model and constraints, tested with real multi-scale data / Master's / Computer Science / Master in Computer Science
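As a purely hypothetical illustration of what one multi-scale integrity constraint could look like (not one of the constraints actually specified in the dissertation): a generalized, coarse-scale representation of a feature should not diverge too far from its detailed counterpart.

```python
from math import dist

def length(polyline):
    """Total length of a polyline given as a list of (x, y) points."""
    return sum(dist(a, b) for a, b in zip(polyline, polyline[1:]))

def consistent_across_scales(fine, coarse, tolerance=0.05):
    """Toy multi-scale integrity constraint: generalization (the coarse
    representation) may shorten a feature, but by no more than the
    tolerated fraction of its detailed (fine-scale) length."""
    lf, lc = length(fine), length(coarse)
    return lc <= lf and (lf - lc) / lf <= tolerance

# Two representations of the same road at different scales.
fine   = [(0, 0), (1, 0.2), (2, 0), (3, 0.2), (4, 0)]
coarse = [(0, 0), (2, 0.1), (4, 0)]
print(consistent_across_scales(fine, coarse))   # True: within tolerance
```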
27

Authenticated query processing in the cloud

Xu, Cheng 19 February 2019 (has links)
With recent advances in data-as-a-service (DaaS) and cloud computing, outsourcing data to the cloud has become common practice. In a typical scenario, the data owner (DO) outsources the data and delegates the query processing service to a service provider (SP). However, as the SP is often an untrusted third party, the integrity of the query results cannot be guaranteed and must therefore be authenticated. To tackle this issue, a typical approach is to have the SP provide a cryptographic proof, which the clients can use to verify the soundness and completeness of the query results. Despite extensive research on authenticated query processing for outsourced databases, existing techniques have considered only limited query types. They fail to address a variety of needs demanded by enterprise customers, such as supporting aggregate queries over set-valued data, enforcing fine-grained access control, and using distributed computing paradigms. In this dissertation, we take the first step to comprehensively investigate authenticated query processing in the cloud that fulfills the aforementioned requirements. Security analysis and performance evaluation show that the proposed solutions and techniques are robust and efficient under a wide range of system settings.
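One common ingredient of such proofs is the completeness check for range queries: the DO signs each pair of neighbouring keys, and the SP returns one extra record on each side of the queried range, so the client can detect any omitted result. A generic sketch under those assumptions (an HMAC stands in for real digital signatures; this is not the dissertation's specific construction):

```python
import hmac, hashlib
from bisect import bisect_left, bisect_right

SECRET = b"data-owner-signing-key"   # stand-in for the DO's signature key

def sign_pair(a: int, b: int) -> bytes:
    """DO-side: 'sign' each pair of neighbouring keys in the sorted data."""
    return hmac.new(SECRET, f"{a}|{b}".encode(), hashlib.sha256).digest()

def answer_range(keys, sigs, lo, hi):
    """SP-side: return matches plus one boundary key on each side, with the
    signatures chaining the returned run together. Assumes the query lies
    strictly inside the key domain (sentinel keys would handle the edges)."""
    i = bisect_left(keys, lo)
    j = bisect_right(keys, hi)
    return keys[i - 1:j + 1], sigs[i - 1:j]

def verify_range(run, proof, lo, hi):
    """Client-side: valid neighbour signatures plus boundary keys outside
    the range together rule out dropped (incomplete) results."""
    if run[0] >= lo or run[-1] <= hi:
        return False
    return all(hmac.compare_digest(sign_pair(a, b), s)
               for (a, b), s in zip(zip(run, run[1:]), proof))

keys = [3, 8, 15, 22, 31, 40]                        # DO's sorted data
sigs = [sign_pair(a, b) for a, b in zip(keys, keys[1:])]
run, proof = answer_range(keys, sigs, 10, 35)        # query the range [10, 35]
print(verify_range(run, proof, 10, 35))              # True: complete answer
print(verify_range(run[:-1], proof[:-1], 10, 35))    # False: upper boundary missing
```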
29

An Effective Implementation of Operational Inventory Management

Sellamuthu, Sivakumar 16 January 2010 (has links)
This Record of Study describes the Doctor of Engineering (DE) internship experience at the Supply Chain Systems Laboratory (SCSL) at Texas A&M University. The objective of the internship was to design and develop automation tools to streamline lab operations related to inventory management projects and, in the process, to adapt and/or extend theoretical inventory models to real-world business complexity and data integrity problems. A holistic approach to automation was taken to satisfy both short-term and long-term needs subject to organizational constraints. A comprehensive software productivity tool was designed and developed that considerably reduced the time and effort spent on non-value-adding activities, standardizing and streamlining activities related to data analysis. Real-world factors that significantly influence the data analysis process were identified and incorporated into the model specifications, which helped develop an operational inventory management model that accounts for the business complexity and data integrity issues commonly encountered during implementation. Many organizational issues, including new business strategies, human resources, administration, and project management, were also addressed during the course of the internship.
30

An integrity study of Ceph: A software-based storage platform

Lundberg, Sebastian, Sönnerfors, Peter January 2015 (has links)
Modern organizations typically operate an expensive IT-based infrastructure, with high-performance functionality and security to ensure that daily operations progress. Software-based storage platforms are an option for reducing these costs: by imitating advanced technology that already exists in traditional storage platforms, the same functionality can be implemented as software on commodity hardware. Ceph is one alternative in use today that intends to provide this choice. We regard software-based storage solutions as largely unproven, and a series of pre-designed tests was conducted to examine whether Ceph can guarantee data integrity. What we observed was a failure to maintain data integrity after Ceph completed its recovery process while the storage cluster experienced high utilization.
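The kind of integrity test the study describes can be approximated by checksumming everything written to the cluster before the fault and comparing again after recovery. A rough sketch (the mount point is a made-up example, and this is not the authors' actual test harness):

```python
import hashlib
from pathlib import Path

def checksum_tree(root: str) -> dict:
    """Map every file under root to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

# Before the fault: record checksums of everything on the cluster mount.
before = checksum_tree("/mnt/ceph-test")     # hypothetical mount point

# ... induce an OSD failure and wait for Ceph to finish recovery ...

# After recovery: any mismatching or missing file is an integrity violation.
after = checksum_tree("/mnt/ceph-test")
violations = {path for path, digest in before.items()
              if after.get(path) != digest}
print(f"{len(violations)} corrupted or missing file(s)")
```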
