41

QEEF: An Extensible Query Execution Engine

FAUSTO VERAS MARANHAO AYRES 30 June 2004 (has links)
Query processing in traditional Database Management Systems (DBMS) has been extensively studied in the literature and adopted in industry. Such success is due in part to the performance of their Query Execution Engines (QEE) in supporting the traditional query execution model. The advent of new query scenarios, driven mainly by the web computational model, has motivated research on new execution models, such as adaptive and continuous execution, and on semi-structured data models, such as XML, neither of which is natively supported by traditional query engines. This thesis proposes the development of an extensible QEE that accommodates different execution and data models. Moreover, it treats the execution model and the data model orthogonally, which allows query execution plans (QEP) to be evaluated with fragments in different models. We use a software-framework approach to specify the extensible QEE, producing the Query Execution Engine Framework (QEEF). The extensibility of our solution is captured by an execution meta-model named QUEM (QUery Execution Meta-model), used to express different models in a meta-QEP. During query evaluation, the QEEF pre-processes a meta-QEP and produces a final QEP to be evaluated by the running QEE. As part of the validation of this proposal, QEEF was instantiated for different execution and data models.
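The orthogonality claimed above can be pictured with a small sketch. The following Python toy is illustrative only, not the QEEF API: operators expose a uniform interface, and the execution model (here, classic demand-driven pull evaluation) is a separate, swappable driver, so a continuous or adaptive model could replace it without rewriting the operators.

```python
# A minimal sketch, assuming an iterator-style operator interface; all names
# are illustrative, not the QEEF API. The point is the orthogonality: the
# operators know nothing about how tuples are driven through the plan.

class Operator:
    """Physical operator with the classic open/next/close protocol."""
    def open(self): pass
    def next(self):  # return one tuple, or None when exhausted
        raise NotImplementedError
    def close(self): pass

class Scan(Operator):
    def __init__(self, rows): self.rows = rows
    def open(self): self.it = iter(self.rows)
    def next(self): return next(self.it, None)

class Filter(Operator):
    def __init__(self, child, pred): self.child, self.pred = child, pred
    def open(self): self.child.open()
    def close(self): self.child.close()
    def next(self):
        while (t := self.child.next()) is not None:
            if self.pred(t):
                return t
        return None

def run_pull(plan):
    """One pluggable execution model: demand-driven (pull) evaluation.
    A continuous or adaptive engine would replace this driver function,
    not the operators themselves."""
    plan.open()
    out = []
    while (t := plan.next()) is not None:
        out.append(t)
    plan.close()
    return out

# Usage: evaluate a tiny two-operator plan.
plan = Filter(Scan([{"x": 1}, {"x": 5}, {"x": 9}]), lambda t: t["x"] > 3)
print(run_pull(plan))  # [{'x': 5}, {'x': 9}]
```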
42

Structured data prediction using the Perceptron formulation, with an application to path planning

Coelho, Maurício Archanjo Nunes 18 June 2010 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The path planning problem has several subareas, many of which have been extensively studied in the literature. One of these areas in particular is path determination; the algorithms used to solve this problem depend on reliable cost estimates for the environments or maps involved. The difficulty lies precisely in defining the cost of each type of area or terrain in the maps to be examined, as well as the costs of their possible combinations. The purpose of this work is to show how these costs can be predicted in new environments, based on structured data prediction, by learning a functional mapping between structured, arbitrary input and output domains. The learning problem in question is usually formulated as a maximum-margin convex optimization problem, very similar to the formulation of multiclass support vector machines. As a solution technique, we implemented the MMP algorithm (Maximum Margin Planning) (RATLIFF; BAGNELL; ZINKEVICH, 2006). As a contribution, we developed and implemented two alternative algorithms, the first named Structured Perceptron and the second Structured Perceptron with Margin, both relaxation methods based on the Perceptron formulation. Both were analyzed and compared. Finally, the environments are explored by an intelligent agent using reinforcement learning techniques, making the whole process, from analyzing the environment and discovering costs to exploring it and planning the path, a complete learning process.
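The perceptron-style update at the heart of the contributed algorithms can be sketched compactly. The Python code below is a hedged illustration of the general Structured Perceptron idea for cost learning on a toy 4-connected grid; it is not the thesis's implementation, and the feature layout, learning rate, and nonnegativity clamp are assumptions added to keep the example self-contained.

```python
# A minimal sketch of Structured Perceptron cost learning, MMP-style:
# per-cell features and a weight vector w define traversal costs; each
# update raises the cost of the currently-cheapest path and lowers that of
# the demonstrated (expert) path. All names are illustrative assumptions.
import heapq
import numpy as np

def shortest_path(costs, start, goal):
    """Dijkstra over a 4-connected grid; cost is paid on entering a cell."""
    h, w = costs.shape
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        r, c = u
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= v[0] < h and 0 <= v[1] < w:
                nd = d + costs[v]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1]

def perceptron_update(w, feats, expert_path, start, goal, lr=0.1):
    """One structured-perceptron step over the whole path (the structure)."""
    costs = np.maximum(feats @ w, 1e-3)   # clamp keeps Dijkstra well-defined
    pred = shortest_path(costs, start, goal)
    grad = sum(feats[p] for p in pred) - sum(feats[p] for p in expert_path)
    return w + lr * grad                  # raises cost along the wrong path

# Toy setup: feature 0 = base terrain everywhere, feature 1 = "mud" patch.
feats = np.zeros((4, 4, 2))
feats[..., 0] = 1.0
feats[1:3, 1:3, 1] = 1.0
w = np.array([1.0, -0.5])                 # initially mud looks cheap
expert = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (3, 3)]  # avoids mud
for _ in range(20):
    w = perceptron_update(w, feats, expert, (0, 0), (3, 3))
print(w)  # the mud weight rises until the planner avoids mud like the expert
```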
43

Question Answering over Structured Data

Birger, Mark January 2017 (has links)
This thesis deals with question answering over structured data. In most cases, structured data are represented as linked graphs, and hiding the underlying data structure is essential if such systems are to serve as natural-language interfaces. A question-answering system was designed and developed as part of this work. In contrast to traditional question-answering systems based on linguistic analysis or statistical methods, our system explores the provided graph and generates semantic links from input question-answer pairs. The developed system is independent of the data structure, but for evaluation purposes we used datasets from Wikidata and DBpedia. The quality of the resulting system and of the investigated approach was evaluated on a prepared dataset using standard metrics.
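One way to picture learning from question-answer pairs over a graph is a tiny relation-induction loop over a toy triple store. The Python sketch below is an assumed simplification, not the system's actual algorithm: it induces which relation links a question's entity to its answer, then reapplies that relation to new entities.

```python
# A minimal sketch (an assumption about the general approach, not the thesis
# implementation): a QA pair reveals which graph relation carries the answer,
# and that relation is then reused for structurally similar questions.

# Toy knowledge graph as (subject, relation, object) triples.
triples = [
    ("Prague", "capital_of", "Czechia"),
    ("Brno", "located_in", "Czechia"),
    ("Czechia", "capital", "Prague"),
    ("France", "capital", "Paris"),
]

def learn_relations(entity, answer):
    """From one QA pair, collect relations linking the entity to the answer."""
    return {r for s, r, o in triples if s == entity and o == answer}

def answer(entity, relations):
    """Apply the learned relations to a new entity."""
    return [o for s, r, o in triples if s == entity and r in relations]

# Training pair: "What is the capital of Czechia?" -> "Prague"
rels = learn_relations("Czechia", "Prague")   # {'capital'}
# New question with the same shape: "What is the capital of France?"
print(answer("France", rels))                 # ['Paris']
```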
44

Auditable Computations on (Un)Encrypted Graph-Structured Data

Servio Ernesto Palacios Interiano (8635641) 29 July 2020 (has links)
Graph-structured data is pervasive. Modeling large-scale network-structured datasets requires graph processing and management systems such as graph databases. Further, the analysis of graph-structured data often necessitates bulk downloads/uploads from/to the cloud or edge nodes. Unfortunately, experience has shown that malicious actors can compromise the confidentiality of highly sensitive data stored in the cloud or on shared nodes, even in encrypted form. For particular use cases (multi-modal knowledge graphs, electronic health records, finance), network-structured datasets can be highly sensitive and require auditability, authentication, integrity protection, and privacy-preserving computation in a controlled and trusted environment; that is, traditional cloud computation is not suitable for these use cases. Similarly, many modern applications use a "shared, replicated database" approach to provide accountability and traceability. Those applications often suffer from significant privacy issues, because every node in the network can access a copy of the relevant contract code and data in order to guarantee the integrity of transactions and reach consensus, even in the presence of malicious actors.

This dissertation proposes breaking from the traditional cloud computation model and instead shipping certified, pre-approved, trusted code closer to the data to protect the confidentiality of graph-structured data. Further, our technique runs in a controlled environment on a trusted data-owner node and provides proof of correct code execution. This computation can be audited in the future and provides the building block for automating a variety of real use cases that require preserving data ownership. This project utilizes trusted execution environments (TEEs) but does not rely solely on the TEE architecture to provide privacy for data and code. We examine the drawbacks of using trusted execution environments in cloud settings, and we analyze the privacy challenges raised by using blockchain technologies to provide accountability and traceability.

First, we propose AGAPECert, an Auditable, Generalized, Automated, Privacy-Enabling Certification framework capable of performing auditable computation on private graph-structured data and reporting real-time aggregate certification status without disclosing the underlying private graph-structured data. AGAPECert utilizes a novel mix of trusted execution environments, blockchain technologies, and a real-time graph-based API standard to provide automated, oblivious, and auditable certification. This dissertation introduces two core concepts that provide accountability, data provenance, and automation for the certification process: Oblivious Smart Contracts and Private Automated Certifications. Second, we contribute an auditable and integrity-preserving graph processing model called AuditGraph.io, which utilizes a unique block-based layout and a multi-modal knowledge graph, potentially improving access locality, encryption, and integrity of highly sensitive graph-structured data. Third, we contribute a unique data store and compute engine, TruenoDB, that facilitates the analysis and presentation of graph-structured data and offers better throughput than the state of the art. Finally, this dissertation proposes integrity-preserving streaming frameworks at the edge of the network with personalized graph-based object lookup.
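The auditable-computation building block can be illustrated with a deliberately simplified sketch. In the Python toy below, attestation is reduced to an HMAC and the ledger to a returned receipt; a real deployment would use TEE remote attestation and blockchain anchoring as the dissertation describes. Every identifier here is an assumption for illustration.

```python
# A minimal sketch of auditable computation on private graph data: the data
# owner runs pre-approved code and publishes only a signed digest, so an
# auditor can later verify which code produced which result without seeing
# the data. HMAC stands in for enclave attestation; a blockchain would
# anchor the receipt. All names are illustrative assumptions.
import hashlib
import hmac
import json

OWNER_KEY = b"data-owner-secret"  # stand-in for an enclave attestation key

def run_audited(code_fn, code_id, private_data):
    """Execute approved code on private data; emit a verifiable receipt."""
    result = code_fn(private_data)
    receipt = {
        "code_id": code_id,                                   # which code ran
        "input_digest": hashlib.sha256(
            json.dumps(private_data, sort_keys=True).encode()).hexdigest(),
        "result": result,                                     # aggregate only
    }
    payload = json.dumps(receipt, sort_keys=True).encode()
    receipt["signature"] = hmac.new(OWNER_KEY, payload, "sha256").hexdigest()
    return receipt

def audit(receipt):
    """Auditor checks the signature without access to the private graph."""
    unsigned = {k: v for k, v in receipt.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(OWNER_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])

# Usage: certify a property of a private graph, revealing only the verdict.
graph = {"edges": [["a", "b"], ["b", "c"]], "sensitive": True}
receipt = run_audited(lambda g: len(g["edges"]) >= 2, "min-connectivity-v1", graph)
print(audit(receipt))  # True; the auditor never saw the graph itself
```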
45

Decision Support Systems for Financial Market Surveillance

Alic, Irina 30 November 2016 (has links)
Decision support systems in finance are of great interest not only to research but also to practice. To carry out financial market surveillance, financial supervisory authorities are confronted, on the one hand, with a growing amount of information available online, such as financial blogs and news. On the other hand, fast-emerging trends, such as the steadily growing volume of online data and the development of data mining methods, pose challenges for research. Decision support systems in finance make it possible to provide relevant information to financial supervisory authorities and to compliance officers of financial institutions in a timely manner. This thesis presents IT artifacts that support decision making in financial market surveillance. In addition, it presents an explanatory design theory that captures the requirements of regulators and of compliance officers in financial institutions.
46

Automatic Test Input Generation for Information Systems

Naňo, Andrej January 2021 (has links)
ISAGEN is a tool for the automatic generation of structurally complex test inputs that imitate real communication in the context of modern information systems. Complex, typically tree-structured data currently represents the standard means of transmitting information between nodes in distributed information systems. The automatic generator ISAGEN is founded on the methodology of data-driven testing and uses concrete data from the production environment as the primary characteristic and specification guiding the generation of new, similar data for test cases satisfying given combinatorial adequacy criteria. The main contribution of this thesis is a comprehensive proposal of automated data generation techniques, together with an implementation demonstrating their usage. The created solution enables testers to create more relevant testing data, representing production-like communication in information systems.
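The data-driven strategy lends itself to a compact sketch. Below is an illustrative Python toy, not ISAGEN itself: production-like JSON samples act as the specification, per-field value pools are mined from them, and new tree-structured inputs are emitted for every combination of observed values (a full cartesian product; a real combinatorial criterion such as pairwise coverage would prune this set). All field names are invented for the example.

```python
# A minimal sketch, assuming JSON-like production samples; not the ISAGEN
# implementation. Observed per-field values are pooled, then recombined into
# new tree-structured test inputs.
import itertools
import json

# "Production" samples act as both schema and value source.
samples = [
    {"user": {"role": "admin", "locale": "en"}, "action": "create"},
    {"user": {"role": "guest", "locale": "cs"}, "action": "delete"},
]

def value_pools(msgs, prefix=""):
    """Flatten nested messages into {field_path: set of observed values}."""
    pools = {}
    for m in msgs:
        for k, v in m.items():
            path = prefix + k
            if isinstance(v, dict):
                for p, vals in value_pools([v], path + ".").items():
                    pools.setdefault(p, set()).update(vals)
            else:
                pools.setdefault(path, set()).add(v)
    return pools

def generate(msgs):
    """Yield inputs covering every combination of observed field values."""
    pools = value_pools(msgs)
    paths = sorted(pools)
    for combo in itertools.product(*(sorted(pools[p]) for p in paths)):
        msg = {}
        for path, val in zip(paths, combo):
            *parents, leaf = path.split(".")
            node = msg
            for p in parents:
                node = node.setdefault(p, {})
            node[leaf] = val
        yield msg

for test_input in generate(samples):
    print(json.dumps(test_input))  # 2 * 2 * 2 = 8 structurally valid inputs
```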
47

Streamlining Certification Management with Automation and Certification Retrieval: System Development Using ABP Framework, Angular, and MongoDB

Hassan, Nour Al Dine January 2024 (has links)
This thesis examines the certification management challenge faced by Integrity360. The decentralized approach, characterized by manual processes and disparate data sources, leads to inefficient tracking of certification status and study progress. The main objective of this project was to construct a system that automates data retrieval, ensures a complete audit trail, and increases security and privacy. Leveraging the ASP.NET Boilerplate (ABP) framework, Angular, and MongoDB, an efficient and scalable system was designed and developed following domain-driven design (DDD) principles for a modular and maintainable architecture. The implemented system automates data retrieval from the Credly API, tracks exam information, manages exam vouchers, and implements a robust authentication system with role-based access control. Although time constraints prevented the full-scale implementation of all planned features, such as a dashboard with aggregated charts and automatic report generation, the platform significantly increases the efficiency and precision of employee certification management. Future work will add these advanced functionalities, along with integrations with external platforms, to improve the system and increase its impact on operations at Integrity360.
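The role-based access control mentioned above can be sketched in a few lines. This is an illustrative Python toy with invented role and permission names; the actual system relies on the ABP framework's built-in authorization stack, and nothing here reflects its API.

```python
# A minimal sketch of role-based access control, assuming invented roles and
# permissions: roles map to permission sets, and a decorator guards each
# sensitive operation.
from functools import wraps

ROLE_PERMISSIONS = {
    "admin":    {"certs.read", "certs.write", "vouchers.manage"},
    "manager":  {"certs.read", "vouchers.manage"},
    "employee": {"certs.read"},
}

def require(permission):
    """Allow the wrapped call only if the user's role grants `permission`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(user["role"], set()):
                raise PermissionError(f"{user['name']} lacks {permission}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@require("vouchers.manage")
def assign_voucher(user, employee_id, exam):
    """Hand out an exam voucher; guarded by the RBAC check above."""
    return f"voucher for {exam} assigned to {employee_id} by {user['name']}"

print(assign_voucher({"name": "dana", "role": "manager"}, "e-17", "CISSP"))
# A caller with role "employee" would raise PermissionError instead.
```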
