101 |
Toward better server-side Web security / Son, Sooel, 25 June 2014
Server-side Web applications are constantly exposed to new threats as new technologies emerge. For instance, forced browsing attacks exploit incomplete access-control enforcement to perform security-sensitive operations (such as database writes without proper permission) by invoking unintended program entry points. SQL command injection attacks (SQLCIA) have evolved into NoSQL command injection attacks targeting the increasingly popular NoSQL databases. They may expose internal data, bypass authentication or violate security and privacy properties. Preventing such Web attacks demands defensive programming techniques that require repetitive and error-prone manual coding and auditing. This dissertation presents three methods for improving the security of server-side Web applications against forced browsing and SQL/NoSQL command injection attacks. The first method finds incomplete access-control enforcement. It statically identifies access-control logic that mediates security-sensitive operations and finds missing access-control checks without an a priori specification of an access-control policy. Second, we design, implement and evaluate a static analysis and program transformation tool that finds access-control errors of omission and produces candidate repairs. Our third method dynamically identifies SQL/NoSQL command injection attacks. It computes shadow values to track user-injected data and then parses the original database query in tandem with its shadow value to determine whether any user-injected parts serve as code. Remediating Web vulnerabilities and blocking Web attacks are essential for improving Web application security. Automated security tools help developers remediate Web vulnerabilities and block Web attacks while minimizing error-prone human factors. This dissertation describes automated tools implementing the proposed ideas and explores their applications to real-world server-side Web applications. Automated security tools are effective for identifying server-side Web application security holes and a promising direction toward better server-side Web security.
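The shadow-value idea behind the third method can be illustrated with a minimal sketch. The code below is not the dissertation's implementation: it assumes a toy tokenizer and naive string concatenation, and the helper names (`build_query`, `is_injection`) are invented for illustration. The core idea survives, though: build a shadow query in which the user-supplied value is replaced by a benign placeholder of the same length, parse both queries in tandem, and flag an attack when the token structures diverge, i.e. when user input has turned into code.

```python
# Minimal sketch of the shadow-value idea (illustrative only, not the
# dissertation's implementation).
import re

# Toy tokenizer: string literals, words, and single punctuation characters.
TOKEN = re.compile(r"'(?:[^']|'')*'|\w+|[^\s\w]")

def tokenize(sql):
    # Collapse all string literals to the marker STR so only structure is compared.
    return [("STR" if t.startswith("'") else t.upper()) for t in TOKEN.findall(sql)]

def build_query(template, user_value):
    # Naive, vulnerable string concatenation: exactly the case the technique detects.
    return template.replace("?", "'" + user_value + "'")

def is_injection(template, user_value):
    real = build_query(template, user_value)
    shadow = build_query(template, "a" * len(user_value))  # benign value, same length
    return tokenize(real) != tokenize(shadow)               # differing structure => code injected

if __name__ == "__main__":
    tmpl = "SELECT * FROM users WHERE name = ?"
    print(is_injection(tmpl, "alice"))          # False: input stays data
    print(is_injection(tmpl, "x' OR '1'='1"))   # True: input becomes code
```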
|
102 |
ProxStor : flexible scalable proximity data storage & analysis / Giannoules, James Peter, 17 February 2015
ProxStor is a cloud-based human proximity storage and query system taking advantage of both the near ubiquity of mobile devices and the growing digital infrastructure in our everyday physical world, commonly referred to as the Internet of Things (IoT). The combination provides the opportunity for mobile devices to identify when they enter and leave the proximity of a space based upon this uniquely identifying infrastructure information. ProxStor provides a low-overhead interface for storing these proximity events while additionally offering search and query capabilities to enable a richer class of location-aware applications. ProxStor scales up to store and manage more than one billion objects, while enabling future horizontal scaling across multiple systems working together to support even more objects. A single seamless web interface is presented to client systems. More than 18 popular graph database systems are supported behind ProxStor. Performance benchmarks run on the Neo4j and OrientDB graph database systems are compared to determine the feasibility of the design.
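ProxStor's actual storage schema is not described in the abstract, so the following is only a hedged sketch of how a proximity check-in event might be recorded in one of its supported back ends (Neo4j, via the official Python driver). The `Person`/`Location` labels, the `VISITED` relationship and the `record_checkin` helper are assumptions made for illustration, not ProxStor's design.

```python
# Hypothetical sketch of recording a proximity check-in event in Neo4j,
# one of the graph back ends benchmarked above.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def record_checkin(device_id, beacon_id, timestamp):
    # MERGE keeps person and location nodes unique; each visit adds one edge.
    query = (
        "MERGE (p:Person {device: $device}) "
        "MERGE (l:Location {beacon: $beacon}) "
        "CREATE (p)-[:VISITED {arrived: $ts}]->(l)"
    )
    with driver.session() as session:
        session.run(query, device=device_id, beacon=beacon_id, ts=timestamp)

record_checkin("phone-42", "building-A-lobby", "2015-02-17T09:30:00Z")
driver.close()
```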
|
103 |
An artefact to analyse unstructured document data stores / Botes, André Romeo, January 2014
Structured data stores have been the dominating technologies for the past few decades. Although dominant, structured data stores lack the functionality to handle the ‘Big Data’ phenomenon. A new technology has recently emerged which stores unstructured data and can handle the ‘Big Data’ phenomenon. This study describes the development of an artefact to aid in the analysis of NoSQL document data stores in terms of relational database model constructs. Design science research (DSR) is the methodology implemented in the study; it is used to assist in understanding, designing and developing the problem, artefact and solution. The study explores the existing literature on DSR, in addition to structured and unstructured data stores. The literature review formulates the descriptive and prescriptive knowledge used in the development of the artefact. The artefact is developed using a series of six activities derived from two DSR approaches. The problem domain is derived from the existing literature and a real application environment (RAE). The reviewed literature provided a general problem statement. A representative from NFM (the RAE) is interviewed for a situation analysis providing a specific problem statement. An objective is formulated for the development of the artefact and suggestions are made to address the problem domain in support of the artefact’s objective. The artefact is designed and developed using the descriptive knowledge of structured and unstructured data stores, combined with prescriptive knowledge of algorithms, pseudocode, continuous design and object-oriented design. The artefact evolves through multiple design cycles into a final product that analyses document data stores in terms of relational database model constructs. The artefact is evaluated for acceptability and utility, which provides credibility and rigour to the research in the DSR paradigm. Acceptability is demonstrated through simulation and utility is evaluated using the real application environment (RAE). A representative from NFM is interviewed for the evaluation of the artefact. Finally, the study is communicated by describing its findings, summarising the artefact and looking into future possibilities for research and application. / MSc (Computer Science), North-West University, Vaal Triangle Campus, 2014
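The abstract does not show what such an analysis looks like concretely, so here is a small illustrative sketch (not the study's artefact) of the underlying idea: sampling documents from a document store and describing them in relational terms, with each distinct field treated as a candidate column and the observed value types as candidate column domains.

```python
# Illustrative sketch of analysing sample documents in terms of relational constructs.
from collections import defaultdict

def candidate_schema(documents):
    # Each distinct field becomes a candidate column; observed Python types
    # become candidate column domains.
    columns = defaultdict(set)
    for doc in documents:
        for field, value in doc.items():
            columns[field].add(type(value).__name__)
    return {field: sorted(types) for field, types in columns.items()}

sample = [
    {"_id": 1, "name": "Ada", "age": 36},
    {"_id": 2, "name": "Bob", "tags": ["vip"]},  # missing/extra fields are typical of document stores
]
print(candidate_schema(sample))
# {'_id': ['int'], 'name': ['str'], 'age': ['int'], 'tags': ['list']}
```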
|
104 |
Consequences of converting a data warehouse based on a STAR-schema to a column-oriented-NoSQL-database / Bodegård Gustafsson, Rebecca, January 2018
Data warehouses based on the relational model have been a popular technology for many years, because they are very reliable due to their ACID properties (Atomicity, Consistency, Isolation, and Durability). However, the new demands placed on databases today, driven by increasing amounts of data and changing data structures, mean that the relational model is not always the optimal choice. NoSQL is the name of a group of databases that are less bound by schemas and are therefore more scalable and easier to change. They are also adapted for massive parallel processing and are therefore suited to handling large amounts of data. Of all the NoSQL databases, column-oriented databases are the most similar to the relational model, since they also consist of tables. This study has therefore converted a relational data warehouse based on a STAR schema to a column-oriented NoSQL database and evaluated the implementation by comparing query times between the relational data warehouse and the column-oriented NoSQL database. Scrambled financial data from a business in Sweden was used to perform the conversion and to test it by running a few typical queries. The results show that the mapping works, but the query times in the NoSQL database are significantly longer.
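The thesis's exact target system and schema are not given in the abstract, so the sketch below only illustrates the general shape of such a conversion: a fact row from a STAR schema is denormalized into a single wide row, with the dimension attributes folded in so that the converted store can answer the usual queries without joins. The dimension tables, field names and row-key format are invented for the example.

```python
# Hedged sketch of denormalizing a STAR-schema fact row into one wide row
# for a column-oriented NoSQL store.
date_dim = {1: {"year": 2017, "month": "Jan"}}
account_dim = {10: {"account_name": "Sales", "account_group": "Revenue"}}

def to_wide_row(fact):
    # Row key chosen to match the common query pattern (per date and account).
    row = {"row_key": f"{fact['date_id']}:{fact['account_id']}"}
    row.update(date_dim[fact["date_id"]])        # fold in dimension attributes
    row.update(account_dim[fact["account_id"]])
    row["amount"] = fact["amount"]
    return row

fact = {"date_id": 1, "account_id": 10, "amount": 12500.0}
print(to_wide_row(fact))
# {'row_key': '1:10', 'year': 2017, 'month': 'Jan',
#  'account_name': 'Sales', 'account_group': 'Revenue', 'amount': 12500.0}
```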
|
105 |
Proposição de um modelo e sistema de gerenciamento de dados distribuídos para internet das coisas – GDDIoT (Proposal of a model and system for distributed data management for the Internet of Things – GDDIoT) / Cruz Huacarpuma, Ruben, 27 July 2017
Doctoral thesis (doutorado), Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2017.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). / The development of the Internet of Things (IoT) has led to a considerable increase in the number and variety of devices connected to the Internet. Smart objects such as sensors have become a regular part of our environment, installed in cars and buildings, as well as smartphones and other devices that continuously collect data about our lives even without our intervention. With such connected smart objects, a broad range of applications has been developed and deployed, including those dealing with massive volumes of data. This thesis proposes and implements a data management model for an IoT environment, contributing the specification of functionalities and the conception of techniques for collecting, filtering, storing and visualizing data conveniently and efficiently. An important characteristic of this work is its ability to integrate multiple, distinct IoT middlewares in a non-intrusive manner. The implementation was evaluated through different case studies on smart-system scenarios: a smart home system, a smart transportation system, and a comparison between GDDIoT and an IoT middleware.
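GDDIoT's actual interfaces are not shown in the abstract; the snippet below is only an illustrative sketch of the collect-filter-store steps it describes, with sensor readings passed through a plausibility filter and persisted to a NoSQL store (MongoDB via pymongo). The database and collection names and the filter thresholds are assumptions.

```python
# Illustrative sketch of a collect-filter-store step for IoT sensor readings.
from pymongo import MongoClient

readings = [
    {"sensor": "temp-01", "value": 21.7, "ts": "2017-07-27T10:00:00Z"},
    {"sensor": "temp-01", "value": -999, "ts": "2017-07-27T10:01:00Z"},  # sentinel/error value
]

def valid(reading):
    # Assumed plausibility filter for a temperature sensor.
    return -40.0 <= reading["value"] <= 85.0

client = MongoClient("mongodb://localhost:27017")
collection = client["gddiot"]["readings"]  # hypothetical database/collection names
collection.insert_many([r for r in readings if valid(r)])
```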
|
106 |
Uma Abordagem para a Modelagem de Desempenho e de Elasticidade para Bancos de Dados em Nuvem / A performance modeling and elasticity approach for cloud NoSQL databases / Victor Aguiar Evangelista de Farias, 22 January 2016
CoordenaÃÃo de AperfeiÃoamento de Pessoal de NÃvel Superior / A computaÃÃo em nuvem à um paradigma de computaÃÃo emergente e bem sucedido
que oferece serviÃos por demanda. Com o crescimento exponencial da quantidade de dados
utilizados pelas aplicaÃÃes atuais, os bancos de dados NoSQL, que sÃo sistemas inerentemente
distribuÃdos, tÃm sido usados para gerenciar dados na Nuvem. Nesse cenÃrio, Ã fundamental
que os provedores de serviÃos em nuvem garantam a Qualidade de ServiÃo (QoS) por meio do
cumprimento do contrato Service Level Agreement (SLA) enquanto reduz os custos operacionais
relacionados a overprovisioning e underprovisioning. Mecanismos de QoS podem se beneficiar
fortemente de modelos de desempenho preditivos que estimam o desempenho para uma dada
configuraÃÃo do sistema NoSQL e da carga de trabalho. Com isso, estratÃgias de elasticidade
podem aproveitar esses modelos preditivos para fornecer meios de adicionar e remover recursos
computacionais de forma mais confiÃvel. Este trabalho apresenta uma abordagem para
modelagem de desempenho genÃrica para banco de dados NoSQL em termos de mÃtricas de
desempenho baseadas no SLA capaz de capturar o efeitos nÃo-lineares causados pelo aspectos
de concorrÃncia e distribuiÃÃo. Adicionalmente, Ã apresentado um mecanismo de elasticidade
para adicionar e remover nÃs sistema NoSQL baseado em modelos de desempenho. Resultados
de avaliaÃÃo experimental confirmam que a modelagem de desempenho estima as mÃtricas de
forma acurada para vÃrios cenÃrios de carga de trabalho e configuraÃÃes do sistema. Por fim, a
nossa estratÃgia de elasticidade à capaz de garantir a QoS enquanto utiliza os recursos de forma
eficiente. / Cloud computing is a successful, emerging paradigm that supports on-demand
services. With the exponential growth of data generated by present applications, NoSQL
databases which are inherently distributed systems have been used to manage data in the cloud.
In this scenario, it is fundamental for cloud providers to guarantee Quality of Service (QoS) by
satisfying tho Service Level Agreement (SLA) contract while reducing the operational costs
related to both overprovisioning and underprovisioning. Thus QoS mechanisms can greatly
benefit from a predictive model that estimates SLA-based performance metrics for a given cluster
and workload configuration. Therewith, elastic provisioning strategies can benefit from these
predictive models to provide a reliable mechanism to add and remove resources reliably. In this
work, we present a generic performance modeling for NoSQL databases in terms of SLA-based
metrics capable of capturing non-linear effects caused by concurrency and distribution aspects.
Moreover we present a elastic provisioning mechanism based on performance models. Results
of experimental evaluation confirm that our performance modeling can accurately estimate the
performance under a wide range of workload configurations and also that our elastic provisioning
approach can ensure QoS while using resources efficiently.
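The abstract does not give the fitted model itself, so the following is only a hedged sketch of how a performance-model-driven elasticity decision can look: `predict_latency_ms` stands in for whatever model maps cluster size and workload to an SLA metric, and the thresholds are illustrative.

```python
# Hedged sketch of model-driven elastic provisioning for a NoSQL cluster.
def predict_latency_ms(nodes, ops_per_sec):
    # Illustrative placeholder for the fitted performance model.
    return 5.0 + 0.002 * ops_per_sec / nodes

def plan_cluster_size(current_nodes, ops_per_sec, sla_ms=50.0, slack=0.7):
    if predict_latency_ms(current_nodes, ops_per_sec) > sla_ms:
        return current_nodes + 1   # scale out: SLA at risk
    if current_nodes > 1 and predict_latency_ms(current_nodes - 1, ops_per_sec) < slack * sla_ms:
        return current_nodes - 1   # scale in: over-provisioned
    return current_nodes

print(plan_cluster_size(current_nodes=3, ops_per_sec=80000))  # -> 4 under this placeholder model
```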
|
107 |
Document Oriented NoSQL Databases : A comparison of performance in MongoDB and CouchDB using a Python interface / Dokumentorienterade NoSQL-databaser : En jämförelse av prestanda i MongoDB och CouchDB vid användning av ett Pythongränssnitt / Henricsson, Robin, January 2011
For quite some time relational databases, such as MySQL, Oracle and Microsoft SQL Server, have been used to store data for most applications. While they are indeed ACID compliant (meaning interrupted database transactions won't result in lost data or similar nasty surprises) and good at avoiding redundancy, they are difficult to scale horizontally (across multiple servers) and can be slow for certain tasks. With the Web growing rapidly, spawning enormous, user-generated content websites such as Facebook and Twitter, fast databases that can handle huge amounts of data are a must. For this purpose, new database management systems, collectively called NoSQL, are being developed. This thesis explains NoSQL further and compares the write and retrieval speeds, as well as the space efficiency, of two database management systems from the document-oriented branch of NoSQL called MongoDB and CouchDB, both of which use JavaScript Object Notation (JSON) to store their data. The benchmarks performed show that MongoDB is quite a lot faster than CouchDB, both when inserting and when querying, when used with their respective Python libraries and dynamic queries. MongoDB is also more space efficient than CouchDB.
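The exact benchmark code is not shown in the abstract; below is only a hedged illustration of the kind of insert-and-query operations involved, using the pymongo and python-couchdb client libraries. The connection URLs and the 'posts' database/collection names are placeholders, and the CouchDB query uses an ad hoc (temporary) view purely for brevity.

```python
# Hedged illustration of inserting and querying one document in MongoDB and CouchDB.
import couchdb
from pymongo import MongoClient

post = {"author": "robin", "body": "NoSQL benchmark post"}

# MongoDB: insert, then a dynamic query on a field.
mongo_posts = MongoClient("mongodb://localhost:27017")["benchmark"]["posts"]
mongo_posts.insert_one(dict(post))
print(mongo_posts.find_one({"author": "robin"}))

# CouchDB: insert, then an ad hoc (temporary) view doing the same lookup.
couch = couchdb.Server("http://localhost:5984/")
couch_posts = couch["posts"] if "posts" in couch else couch.create("posts")
couch_posts.save(dict(post))
map_fun = "function(doc) { if (doc.author === 'robin') emit(doc._id, doc); }"
for row in couch_posts.query(map_fun):
    print(row.value)
```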
|
108 |
SQL eller NoSQL : En utvärderingsmodell av relationella databassystem och icke-relationella databaslösningar (SQL or NoSQL : An evaluation model of relational database systems and non-relational database solutions) / Sundin, Alexander, January 2017
Today’s digitalization and growing number of web-based applications increase the performance demands placed on relational databases. Traditionally, relational databases have been an infallible solution to any data storage requirement, but with greater amounts of data needing to be stored, this is no longer the case. In addition, relational databases face shortcomings when working in distributed environments, which is a prerequisite heavily linked to the management of large datasets. This has led to an increased use of non-relational databases, the newer of which are more commonly known as NoSQL databases. Apart from having different strengths and areas of specialization, NoSQL databases are collectively designed to handle large amounts of data much more efficiently than their relational counterparts. However, recent studies show that abandoning relational databases completely is a fool's errand, as they remain the most viable option in most situations. This study aims to facilitate the process of determining which type of NoSQL database is best suited as a replacement for an existing relational database, or whether a replacement is warranted at all. To do so, I isolate organizational and technological factors that together help evaluate an existing relational database’s compatibility in its organizational context.
|
109 |
An Improved Design and Implementation of the Session-based SAMBO with Parallelization Techniques and MongoDB / Zhao, Yidan, January 2017
The session-based SAMBO is an ontology alignment system that uses MySQL to store matching results. Currently, SAMBO is able to align most ontologies within acceptable time. However, when it comes to large-scale ontologies, SAMBO fails to reach this target. Thus, the main purpose of this thesis work is to improve the performance of SAMBO, especially when matching large-scale ontologies. To this end, a comprehensive literature study and an investigation of two prominent large-scale ontology systems are carried out with the aim of setting the improvement directions. A detailed investigation of the existing SAMBO is conducted to determine in which respects the system can be improved. Parallel matching process optimization and data management optimization are determined to be the primary optimization goals of the thesis work. Following this, a few relevant techniques are studied and compared. Finally, an optimized design is proposed and implemented. System testing results of the improved SAMBO show that both the parallel matching process optimization and the data management optimization contribute greatly to improving the performance of SAMBO. However, the execution time of SAMBO when aligning large-scale ontologies with database interaction is still unacceptable.
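SAMBO itself is not implemented in Python and its matchers are far richer than this, so the snippet below is only a hedged, language-neutral sketch of the two optimizations evaluated above: computing concept-pair similarities in parallel and storing the match suggestions in MongoDB instead of MySQL. The trivial string matcher and the 'sambo.suggestions' collection are illustrative stand-ins.

```python
# Hedged sketch of parallel concept matching with results stored in MongoDB.
from itertools import product
from multiprocessing import Pool
from pymongo import MongoClient

def similarity(pair):
    a, b = pair
    # Trivial matcher stand-in for SAMBO's real (string/structural) matchers.
    return {"source": a, "target": b,
            "score": 1.0 if a.lower() == b.lower() else 0.0}

if __name__ == "__main__":
    onto_a = ["Heart", "Lung", "Kidney"]
    onto_b = ["heart", "liver"]
    with Pool() as pool:  # parallel matching across all concept pairs
        suggestions = pool.map(similarity, product(onto_a, onto_b))
    matches = [s for s in suggestions if s["score"] >= 0.5]
    MongoClient()["sambo"]["suggestions"].insert_many(matches)  # hypothetical collection
```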
|
110 |
Storage and Transformation for Data Analysis Using NoSQL / Lagring och transformation för dataanalys med hjälp av NoSQLNilsson, Christoffer, Bengtson, John January 2017 (has links)
It can be difficult to choose the right NoSQL DBMS, and some systems lack sufficient research and evaluation. There are also tools for moving and transforming data between DBMSs in order to combine systems or to use different systems for different use cases. We have described a use case based on requirements related to the quality attributes Consistency, Scalability, and Performance. For the Performance attribute, the focus is fast insertions and full-text search queries on a large dataset of forum posts. The evaluation was performed on two NoSQL DBMSs and two tools for transforming data between them. The DBMSs are MongoDB and Elasticsearch, and the transformation tools are NotaQL and Compose's Transporter. The purpose is to evaluate three different NoSQL systems: pure MongoDB, pure Elasticsearch, and a combination of the two. The results show that MongoDB is faster when performing simple full-text search queries, but otherwise slower. This means that Elasticsearch is the primary choice regarding insertion and complex full-text search query performance. MongoDB is, however, regarded as a more stable and well-tested system. When it comes to scalability, MongoDB is better suited for a system where the dataset increases over time, due to its simple addition of more shards, while Elasticsearch is better for a system that starts off with a large amount of data, since it has faster insertion speeds and a more effective process for distributing data among existing shards. In general, NotaQL is not as fast as Transporter, but it can handle aggregations and nested fields, which Transporter does not support. A combined system using MongoDB as the primary data store and Elasticsearch as the secondary data store could be used to achieve fast full-text search queries for all types of expressions, simple and complex.
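The study's benchmark code is not reproduced in the abstract; the sketch below only illustrates what the simple full-text search case looks like in each system, using the pymongo and elasticsearch Python clients. Index names, field names and connection settings are placeholders rather than the study's actual setup.

```python
# Hedged sketch of a simple full-text search in MongoDB vs. Elasticsearch.
from elasticsearch import Elasticsearch
from pymongo import MongoClient, TEXT

doc = {"author": "anon", "body": "NoSQL systems scale horizontally"}

# MongoDB: a text index on the body field, then a $text query.
posts = MongoClient()["forum"]["posts"]
posts.create_index([("body", TEXT)])
posts.insert_one(dict(doc))
mongo_hits = list(posts.find({"$text": {"$search": "scale"}}))

# Elasticsearch: index the document, then a match query on the same field.
es = Elasticsearch("http://localhost:9200")
es.index(index="posts", document=dict(doc), refresh=True)
es_hits = es.search(index="posts", query={"match": {"body": "scale"}})["hits"]["hits"]

print(len(mongo_hits), len(es_hits))
```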
|