191 |
Podpora pro práci s XML u databázového serveru Microsoft SQL Server 2008 / Support for XML in Microsoft SQL Server 2008. Bábíčková, Radka. Unknown Date.
This thesis focuses on XML and related technologies. The XML language is closely linked to databases and to the support databases provide for it. The work presents an overview of the XML support offered by various database products and systems. Support in MS SQL Server 2008 is discussed in more detail, starting with the mapping of relational data to XML and vice versa, continuing with the XML data type and the means of working with it through XQuery. Some indexing techniques are also briefly presented. Finally, the support in MS SQL Server 2008 is demonstrated by means of a sample application, which verifies the theoretical knowledge in practice.
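As an illustration of the SQL Server 2008 features summarized above (the XML data type, XQuery over it, and relational-to-XML mapping), here is a minimal sketch run from Python via pyodbc; the connection string, table and element names are assumptions, not taken from the thesis or its sample application.

```python
# Minimal sketch of SQL Server 2008 XML support, driven from Python via
# pyodbc. Connection string, table and element names are illustrative only.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=localhost;DATABASE=Demo;Trusted_Connection=yes")
cur = conn.cursor()

# Store a document in a typed XML column.
cur.execute("CREATE TABLE Books (Id INT PRIMARY KEY, Doc XML)")
cur.execute("INSERT INTO Books VALUES (1, ?)",
            "<book><title>XML in SQL Server</title><year>2008</year></book>")

# XQuery through the xml data type's value(), query() and exist() methods.
cur.execute("""
    SELECT Doc.value('(/book/title)[1]', 'NVARCHAR(100)') AS Title,
           Doc.query('/book/year')                        AS YearFragment
    FROM Books
    WHERE Doc.exist('/book[year = 2008]') = 1
""")
print(cur.fetchall())

# Mapping relational data back to XML with FOR XML.
cur.execute("SELECT Id FROM Books FOR XML PATH('book'), ROOT('books')")
print(cur.fetchone()[0])
```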
|
192 |
Analyse et évaluation de structures orientées document / Analysis and evaluation of document-oriented structures. Gomez Barreto, Paola. 13 December 2018.
Nowadays, millions of different data sources produce a huge quantity of unstructured and semi-structured data that changes constantly. Information systems must manage these data while still providing scalability and performance. As a result, they have had to adapt to support heterogeneous databases, including NoSQL databases. These databases offer schema-free structures with great flexibility, but without a clear separation of the logical and physical layers. Data can be duplicated, fragmented and/or incomplete, and it can also change as business needs evolve.
The flexibility and absence of schema in document-oriented NoSQL systems such as MongoDB allow new structuring alternatives to be explored without facing such constraints. The choice of structure nevertheless remains important and critical, because there are several impacts to consider and many structuring options to choose from. We therefore propose to return to a design phase in which quality aspects and the impacts of the structure are taken into account, so that the decision can be made in a more informed way.
In this context, we propose SCORUS, a system for the analysis and evaluation of document-oriented structures. It aims to facilitate the study of document-oriented semi-structuring possibilities, such as those offered by MongoDB, and to provide objective metrics that highlight the advantages and disadvantages of each solution with respect to the users' needs. A design process can be composed of a sequence of three phases, each of which can also be performed independently for analysis and tuning purposes. The general strategy of SCORUS consists of:
1. Generation of a set of structuring alternatives: starting from a UML model of the data, automatically produce a large set of possible structuring variants for that data.
2. Evaluation of the alternatives using a set of structural metrics: take a set of structuring variants and compute the metrics against the modeled data.
3. Analysis of the evaluated alternatives: use the metrics to assess the interest of the considered alternatives and to choose the most appropriate one(s).
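SCORUS's actual metrics and generation algorithm are not spelled out in the abstract, so the sketch below is only an assumed illustration of the underlying idea: two MongoDB structuring variants of the same UML-style model (an author with books), compared with a deliberately simple structural metric (document count and nesting depth).

```python
# Two candidate MongoDB structurings for the same conceptual model
# (Author 1..N Book), plus a toy structural metric. Variant names and the
# metric are illustrative assumptions, not SCORUS's own metrics.

embedded_variant = [  # books nested inside the author document
    {"_id": 1, "name": "A. Author",
     "books": [{"title": "NoSQL Design", "year": 2018},
               {"title": "Documents 101", "year": 2016}]},
]

referenced_variant = [  # books kept as separate documents, linked by id
    {"_id": 1, "name": "A. Author", "book_ids": [10, 11]},
    {"_id": 10, "title": "NoSQL Design", "year": 2018},
    {"_id": 11, "title": "Documents 101", "year": 2016},
]

def max_depth(doc) -> int:
    """Maximum nesting depth of a document (dicts/lists)."""
    if isinstance(doc, dict):
        return 1 + max((max_depth(v) for v in doc.values()), default=0)
    if isinstance(doc, list):
        return 1 + max((max_depth(v) for v in doc), default=0)
    return 0

for name, variant in [("embedded", embedded_variant),
                      ("referenced", referenced_variant)]:
    depth = max(max_depth(d) for d in variant)
    print(f"{name}: {len(variant)} documents, max depth {depth}")
```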
|
193 |
Um benchmark para avaliação de técnicas de busca no contexto de análise de Mutantes sql / A benchmark for the evaluation of search techniques in the context of SQL mutation analysis. Queiroz, Leonardo Teixeira. 02 August 2013.
Previous issue date: 2013-08-02 / Fundação de Amparo à Pesquisa do Estado de Goiás - FAPEG / One of the concerns in test Applications Database (ADB) is to keep the operating and
computational costs low. One way to meet this goal is to ensure that the Test Databases (TDB) are small yet effective in revealing defects in SQL statements. Such databases can be built from scratch or obtained by reducing Production Databases (PDB). Reduction involves combinatorial aspects that require a specific search technique. In response to a gap identified in the literature, this work aims to build and provide a benchmark that enables the performance evaluation, using SQL Mutation Analysis, of any search technique intended to perform database reductions. To exercise the search techniques, the benchmark was built with two scenarios, each composed of a PDB and a set of SQL statements. In addition, as a reference for search techniques, it also contains the performance of randomly reduced databases. As a secondary objective, the experiments conducted while building the benchmark were analyzed to answer important questions about which factors are involved in the complexity of SQL statements in the context of Mutation Testing. A key finding was that the restrictiveness of the SQL commands is the factor that most influences statement complexity.
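The benchmark's actual production databases, SQL statements and mutation operators are not listed in the abstract; the sketch below only illustrates the core mechanism of SQL Mutation Analysis used to judge a (reduced) test database: run a statement and its mutants against the data and count how many mutants are killed, i.e. return a different result. The table, rows and mutants are made up.

```python
# Toy illustration of SQL Mutation Analysis over a (reduced) test database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER, salary REAL, dept TEXT);
    INSERT INTO employee VALUES (1, 900, 'IT'), (2, 1500, 'IT'), (3, 1500, 'HR');
""")

original = "SELECT id FROM employee WHERE salary > 1000 AND dept = 'IT'"
mutants = [  # typical operator-replacement mutations of the WHERE clause
    "SELECT id FROM employee WHERE salary >= 1000 AND dept = 'IT'",
    "SELECT id FROM employee WHERE salary > 1000 OR  dept = 'IT'",
    "SELECT id FROM employee WHERE salary < 1000 AND dept = 'IT'",
]

def rows(sql):
    return sorted(conn.execute(sql).fetchall())

expected = rows(original)
killed = sum(rows(m) != expected for m in mutants)
# The >= mutant survives because no row has salary exactly 1000: the data
# lacks a boundary value, which is what a good test database should provide.
print(f"mutation score: {killed}/{len(mutants)}")
```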
|
194 |
Investigating Persistence Layers for Notifications. Ghourchian, Isabel. January 2019.
This work was carried out for Cisco at the Tail-f department. Cisco's main focus is on networking and telecommunication. The Tail-f department develops a network service automation product that allows customers to automate the process of adding, removing and managing devices and services in their network. In the context of this network service, notifications are raised to inform operators when something has happened. The notifications are currently stored in a configuration database that is used for all the data in the network service. The customers have asked for more flexibility and functionality when working with the notification data, since they are limited in the queries they can perform to analyze it. The purpose of this project is to find an alternative way of storing the notifications with better functionality and efficiency. A number of different databases were investigated with respect to the functionality and performance requirements of the system, and ElasticSearch was chosen as the final storage system. ElasticSearch provides flexible schema handling and complex queries, which makes it a suitable choice that fulfills the customers' needs. A generator and a subscriber program were built in order to run tests and insert notification data into ElasticSearch: the generator creates notifications and the subscriber receives them, parses them and inserts them into the storage. The queries and performance of ElasticSearch were then measured. The query results show that the new system can perform much more complex queries than before, such as range queries, filtering and full-text searches. The performance results show that the system can handle around 1000 notifications every other millisecond before it slows down, which is sufficient to satisfy the customers' needs.
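As an assumed illustration of the query flexibility mentioned above, the sketch below sends a combined full-text, filter and range query to Elasticsearch's REST search API; the index name, field names and endpoint URL are hypothetical, not taken from the thesis.

```python
# Sketch of the kind of query the migrated storage supports: full-text match
# on the message combined with an exact filter and a time-range filter.
import requests

ES_URL = "http://localhost:9200"  # assumed local Elasticsearch endpoint

query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"message": "interface down"}}        # full-text search
            ],
            "filter": [
                {"term": {"severity": "critical"}},             # exact filter
                {"range": {"received_at": {"gte": "now-1h"}}}   # range query
            ]
        }
    },
    "size": 20,
    "sort": [{"received_at": "desc"}]
}

resp = requests.post(f"{ES_URL}/notifications/_search", json=query, timeout=10)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])
```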
|
195 |
Vizitų registravimo sistemos projektavimas ir testavimas / Design and testing of a call reporting system. Prelgauskas, Justinas. 10 July 2008.
This work consists of three major parts. The first, engineering part covers the analysis and design of a call reporting system (codename "PharmaCODE"): it presents the essential business environment, requirements and competitor analysis, as well as the main design and architectural decisions. The second part describes how system quality was ensured, mainly by means of static source-code analysis tools and methods; it describes the tools used and presents the main results of the code analysis. In the third part we go deeper into static source-code analysis methods and tools and develop an improved analysis rule. These days, when there are plenty of evolving web-based applications, security is gaining more and more importance. Most of these systems have, and depend on, back-end databases, yet web-based applications are vulnerable to SQL-injection attacks. We present a technique for addressing this problem using secure-coding guidelines and the .NET Framework's static code analysis methods for enforcing those guidelines. This approach lets developers discover vulnerabilities in their code early in the development process. We provide the research and realization of an improved code analysis rule which can automatically discover SQL-injection vulnerabilities in MSIL code; with its help we were able to detect more potential SQL-injection security flaws than its predecessor, a code analysis rule designed by Microsoft.
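The thesis's rule operates on MSIL within the .NET static-analysis framework; as an analogy only, the sketch below shows, in Python, the defect pattern such a rule looks for (SQL assembled by string concatenation) next to the parameterized form that secure-coding guidelines require.

```python
# Analogy only: the thesis's rule analyzes MSIL/.NET code, not Python.
# This shows the pattern such a rule flags and the guideline it enforces.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login_unsafe(name, password):
    # Vulnerable: attacker-controlled values become part of the SQL text.
    sql = ("SELECT * FROM users WHERE name = '" + name +
           "' AND password = '" + password + "'")
    return conn.execute(sql).fetchall()

def login_safe(name, password):
    # Compliant: values are passed as bound parameters, never as SQL text.
    return conn.execute(
        "SELECT * FROM users WHERE name = ? AND password = ?",
        (name, password)).fetchall()

# The classic injection succeeds against the unsafe variant only.
payload = "admin' --"
print(login_unsafe(payload, "wrong"))  # returns the admin row
print(login_safe(payload, "wrong"))    # returns []
```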
|
196 |
Skaitmeninės antžeminės televizijos paslaugos duomenų saugyklos ir OLAP galimybių taikymas ir tyrimas / Digital video broadcasting terrestrial service's data warehouse and OLAP capabilities: research and application. Juškaitis, Renatas. 04 March 2009.
This master's thesis investigates the capabilities of a data warehouse and OLAP tools, and their practical use in an organization that provides a DVB-T (digital video broadcasting, terrestrial) service to end users. The work consists of an analysis of the problem domain and the design and realization of the data warehouse, data integration and data cubes. The realization was done using MS SQL Server 2005 Analysis Services and Integration Services.
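The thesis's warehouse schema and cube definitions are not given in the abstract; the sketch below assumes a minimal star schema (one fact table and a date dimension) and shows the kind of roll-up aggregation that OLAP cubes serve.

```python
# Illustrative star-schema aggregation of the kind an OLAP cube answers.
# The schema (fact_subscription + dim_date) is an assumption, not the
# thesis's actual warehouse design.
import sqlite3

dw = sqlite3.connect(":memory:")
dw.executescript("""
    CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INT, month INT);
    CREATE TABLE fact_subscription (
        date_key INTEGER REFERENCES dim_date(date_key),
        region   TEXT,
        revenue  REAL
    );
    INSERT INTO dim_date VALUES (20090101, 2009, 1), (20090201, 2009, 2);
    INSERT INTO fact_subscription VALUES
        (20090101, 'Vilnius', 120.0), (20090101, 'Kaunas', 80.0),
        (20090201, 'Vilnius', 150.0);
""")

# Roll-up by month and region: the cube's (month x region) cell values.
cube = dw.execute("""
    SELECT d.year, d.month, f.region, SUM(f.revenue) AS revenue
    FROM fact_subscription f JOIN dim_date d USING (date_key)
    GROUP BY d.year, d.month, f.region
""").fetchall()
print(cube)
```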
|
197 |
Intelektuali universiteto akademinių duomenų analizė MS SQL Server 2008 priemonėmis / Intelligent analysis of university data with MS SQL Server 2008 tools. Brukštus, Vaidotas. 23 July 2009.
This work describes research into evaluating the factors that influence students' success at university, based on data mining algorithms. A method was developed to predict whether a prospective student will successfully graduate, using the student's admission scores and the experience of an earlier generation of students. The first part of the work surveys the areas where data mining is applied, details the stages of the data mining process and reviews the most popular data mining tools. The data mining algorithms supported by Microsoft SQL Server 2008 are then analyzed, and four of them are successfully applied to the chosen problem domain. An analytical system was created that can assess the influence of admission scores and of the subjects taught at the university on the chances of graduating successfully. Finally, an experiment was performed to determine which data mining algorithm is most suitable for predicting student dropout.
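The thesis builds its models with the data mining algorithms shipped in MS SQL Server 2008; the sketch below is only an analogous illustration of the prediction task, using a decision tree in scikit-learn and made-up admission scores rather than the thesis's data or models.

```python
# Analogy only: predicting graduation from admission scores with a decision
# tree. Feature layout and data are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Earlier generation of students: [entrance_exam_score, school_average]
X_train = [[14.2, 8.1], [18.5, 9.4], [11.0, 6.9], [19.8, 9.9],
           [12.4, 7.2], [16.7, 8.8], [10.1, 6.0], [17.9, 9.1]]
y_train = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = graduated, 0 = dropped out

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)

# Prospective students: predict whether they are likely to graduate.
applicants = [[13.0, 7.5], [18.0, 9.0]]
print(model.predict(applicants))        # predicted classes
print(model.predict_proba(applicants))  # class probabilities
```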
|
198 |
Datamigration av Content Management Systems (CMS) för Multi-siteapplikationer : En studie på SQL-till-NoSQL migration / Data migration of Content Management Systems (CMS) for Multi-site applications : A study on SQL-to-NoSQL migration. Brown, Elin. January 2018.
This work investigates whether existing multi-site applications built on the CMS WordPress can achieve better performance by moving from WordPress to the newer CMS Keystone JS through a data migration. The migration process is evaluated with a scientific experiment, both to examine whether the migration itself may introduce performance problems and to determine when a migration is relevant and ultimately worth carrying out. The experiment measures response times for various database operations in the original WordPress application and in the migrated Keystone JS application. The measurements showed that the migrated application can achieve up to 59% better response times for subdomain rendering, which confirms that multi-site applications can benefit from a migration to Keystone JS. The migration process itself was not found to have any negative impact on performance.
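WordPress keeps its content in MySQL, while Keystone JS at the time typically persisted to MongoDB, so the sketch below shows the general shape such an SQL-to-NoSQL migration script might take; the connection details and field mapping are assumptions and not the thesis's actual migration procedure.

```python
# Rough shape of an SQL-to-NoSQL content migration: read posts from the
# WordPress MySQL schema and write documents to MongoDB. Connection details
# and the field mapping are illustrative assumptions.
import pymysql
from pymongo import MongoClient

mysql = pymysql.connect(host="localhost", user="wp", password="wp",
                        database="wordpress",
                        cursorclass=pymysql.cursors.DictCursor)
mongo = MongoClient("mongodb://localhost:27017")["keystone"]

with mysql.cursor() as cur:
    cur.execute("""
        SELECT ID, post_title, post_content, post_date, post_status
        FROM wp_posts
        WHERE post_type = 'post' AND post_status = 'publish'
    """)
    docs = [{
        "legacyId": row["ID"],          # keep the SQL key for traceability
        "title": row["post_title"],
        "content": row["post_content"],
        "publishedAt": row["post_date"],
        "state": row["post_status"],
    } for row in cur.fetchall()]

if docs:
    mongo["posts"].insert_many(docs)
print(f"migrated {len(docs)} posts")
```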
|
199 |
Teste baseado na interação entre regras ativas escritas em SQL / Testing based on the interaction of active rules written in SQL. Leitão Junior, Plinio de Sa. 21 December 2005.
Advisors: Mario Jino, Plinio Roberto Souza Vilela. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação.
Abstract: Active databases have been used as an alternative for implementing part of the functionality of applications in several knowledge domains. Their principle is the automatic response to events through the activation of tasks with specific functionality, leading to the execution of active rules. Notwithstanding their widespread use, few research efforts have concentrated on testing active database applications. In this research work we investigate the use of a structural testing technique to reveal the presence of faults, aimed at improving the reliability and overall quality of this kind of software. A family of adequacy criteria is proposed and analysed for active rules written in SQL. Specifically, an interaction model between rules is elaborated in order to abstract interaction associations, which form the basis for the testing requirements. In the context of data-flow-based structural testing, a family of adequacy criteria is defined, called Interaction Between Rules Based Criteria, which demands the coverage of interaction associations. The criteria extend the all-uses criterion by exploiting persistent data flow relations associated with rule interaction. Both theoretical and empirical investigations were performed, showing that the criteria possess fault-detecting ability with polynomial complexity. Manipulation faults and failures were studied, enumerated and used in an experiment that evaluates the criteria's fault-detecting ability at different granularities, i.e. data flow analysis precisions. A tool called ADAPT-TOOL (Active Database APplication Testing TOOL for active rules written in SQL) was built to support the experiment. The results indicate that: (i) the fault-detecting efficacy was 2/3 of the adequate set, reaching higher values for the lower data flow analysis precisions; and (ii) the coverage of interaction associations at higher granularities does not improve the fault-revealing ability. / Doctorate in Computer Engineering (degree: Doctor of Electrical Engineering).
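The thesis's rule model and the ADAPT-TOOL are not reproduced here; the sketch below only illustrates, with made-up SQLite triggers, what an interaction between two active rules looks like: the action of rule R1 performs a data manipulation that fires rule R2, creating the kind of persistent data-flow relation the proposed criteria require tests to cover.

```python
# Two active rules (triggers) that interact: R1's action updates `stock`,
# which fires R2 and writes an audit row. Schema and rules are made up.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA recursive_triggers = ON")  # let trigger actions fire triggers
db.executescript("""
    CREATE TABLE orders (item TEXT, qty INTEGER);
    CREATE TABLE stock  (item TEXT PRIMARY KEY, qty INTEGER);
    CREATE TABLE audit  (item TEXT, old_qty INTEGER, new_qty INTEGER);
    INSERT INTO stock VALUES ('widget', 10);

    -- Rule R1: on a new order, decrement the stock (defines stock.qty).
    CREATE TRIGGER r1_order AFTER INSERT ON orders
    BEGIN
        UPDATE stock SET qty = qty - NEW.qty WHERE item = NEW.item;
    END;

    -- Rule R2: on a stock update, record the change (uses stock.qty).
    CREATE TRIGGER r2_stock AFTER UPDATE ON stock
    BEGIN
        INSERT INTO audit VALUES (NEW.item, OLD.qty, NEW.qty);
    END;
""")

# One INSERT exercises the R1 -> R2 interaction association.
db.execute("INSERT INTO orders VALUES ('widget', 3)")
print(db.execute("SELECT * FROM stock").fetchall())  # [('widget', 7)]
print(db.execute("SELECT * FROM audit").fetchall())  # [('widget', 10, 7)]
```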
|
200 |
Analise de mutantes em aplicações SQL de banco de dados / Mutation analysis for SQL database applications. Cabeça, Andrea Gonçalves. 15 August 2018.
Advisors: Mario Jino, Plinio de Sa Leitão Junior. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação.
Abstract: Testing database applications is crucial for ensuring high-quality software, as undetected faults can result in unrecoverable data corruption. SQL is the most widely used interface language for relational database systems. Our approach aims to achieve better tests by selecting fault-revealing databases. We use mutation analysis on SQL statements and discuss two scenarios for applying strong and weak mutation techniques. A tool to support the automation of the technique has been developed and implemented. Experiments using real applications, real faults and real data were performed to: (i) evaluate the applicability of the approach, and (ii) compare the fault-revealing abilities of input databases. / Master's in Computer Engineering (degree: Master of Electrical Engineering).
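As an assumed illustration of selecting fault-revealing databases, the sketch below ranks two candidate input databases by how many mutants of a SQL statement they kill, comparing full query results in the spirit of strong mutation; the tables, data and mutants are made up.

```python
# Toy illustration of choosing a fault-revealing input database: the
# candidate whose contents kill more SQL mutants is preferred.
import sqlite3

original = "SELECT name FROM account WHERE balance > 100"
mutants = ["SELECT name FROM account WHERE balance >= 100",
           "SELECT name FROM account WHERE balance < 100"]

candidates = {
    "db_without_boundary": [("ana", 50), ("bia", 500)],
    "db_with_boundary":    [("ana", 50), ("bia", 500), ("cid", 100)],
}

def mutation_score(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE account (name TEXT, balance REAL)")
    conn.executemany("INSERT INTO account VALUES (?, ?)", rows)
    expected = sorted(conn.execute(original).fetchall())
    return sum(sorted(conn.execute(m).fetchall()) != expected for m in mutants)

for name, rows in candidates.items():
    print(name, mutation_score(rows), "of", len(mutants), "mutants killed")
```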
|