691

Hydrographic Surface Modeling Through A Raster Based Spline Creation Method

Alexander, Julie G 16 May 2014 (has links)
The United States Army Corps of Engineers relies on accurate and detailed surface models for various construction projects and preventative measures. To aid these efforts, advances in surface model creation are needed. Current methods for model creation include Delaunay triangulation, raster grid interpolation, and Hydraulic Spline grid generation. While these methods produce adequate surface models, there is still room for improvement. A method for raster-based spline creation is presented as a variation of the Hydraulic Spline algorithm. By implementing Hydraulic Splines on raster data instead of vector data, the model creation process is streamlined. This method is shown to be more efficient and less computationally expensive than previous methods of surface model creation, due to the inherent advantages of raster data over vector data.
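The raster advantage the abstract leans on, that gridded samples need no triangulation before interpolation, can be illustrated with a generic bicubic spline fit over a regular grid. This is a minimal sketch assuming SciPy and NumPy, not the thesis's Hydraulic Spline algorithm; the depth values are synthetic.

```python
# Illustrative sketch: fitting a smooth spline surface directly to gridded
# (raster) depth samples. This is NOT the thesis's Hydraulic Spline
# algorithm, only a generic raster-spline analogue.
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical coarse raster of channel depths (rows x cols).
rows = np.arange(0, 50, 5.0)          # northing of each raster row
cols = np.arange(0, 50, 5.0)          # easting of each raster column
depths = np.sin(rows[:, None] / 10.0) * np.cos(cols[None, :] / 10.0)

# Fit a bicubic spline to the raster grid; because the samples already lie
# on a regular grid, no costly vector triangulation (Delaunay) is needed.
spline = RectBivariateSpline(rows, cols, depths, kx=3, ky=3)

# Evaluate the spline on a finer raster to produce the surface model.
fine = spline(np.arange(0, 45, 0.5), np.arange(0, 45, 0.5))
print(fine.shape)  # (90, 90) refined surface
```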
692

A study of three paradigms for storing geospatial data: distributed-cloud model, relational database, and indexed flat file

Toups, Matthew A 13 May 2016 (has links)
Geographic Information Systems (GIS) and related applications of geospatial data were once a small software niche; today nearly all Internet and mobile users utilize some sort of mapping or location-aware software. This widespread use reaches beyond mere consumption of geodata; projects like OpenStreetMap (OSM) represent a new source of geodata production, sometimes dubbed “Volunteered Geographic Information.” The volume of geodata produced and the user demand for geodata will surely continue to grow, so the storage and query techniques for geospatial data must evolve accordingly. This thesis compares three paradigms for systems that manage vector data. Over the past few decades these methodologies have fallen in and out of favor. Today, some are considered new and experimental (distributed), others nearly forgotten (flat file), and others are the workhorse of present-day GIS (relational database). Each is well suited to some use cases and poorly suited to others. This thesis investigates exemplars of each paradigm.
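Of the three paradigms, the indexed flat file is the easiest to show in miniature. The sketch below assumes fixed-length point records and an in-memory offset index, so a feature lookup becomes a single seek; it is a generic illustration, not the exemplar system the thesis studies.

```python
# Sketch of the "indexed flat file" paradigm for point geodata, under the
# assumption of fixed-length records: an index maps feature IDs to byte
# offsets, so a lookup is one seek instead of a full scan.
import struct

RECORD = struct.Struct("<qdd")   # feature id (int64), lon, lat (float64)

def write_points(path, points):
    """Write (id, lon, lat) tuples; return {id: byte_offset} index."""
    index = {}
    with open(path, "wb") as f:
        for pid, lon, lat in points:
            index[pid] = f.tell()
            f.write(RECORD.pack(pid, lon, lat))
    return index

def read_point(path, index, pid):
    with open(path, "rb") as f:
        f.seek(index[pid])             # O(1) jump straight to the record
        return RECORD.unpack(f.read(RECORD.size))

idx = write_points("points.dat", [(1, -90.07, 29.95), (2, 2.35, 48.86)])
print(read_point("points.dat", idx, 2))   # (2, 2.35, 48.86)
```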
693

Database scrambling: Validation and implementation of database scrambling

Öberg, Fredrik January 2019 (has links)
The demands on how personal data is handled have recently become much stricter with new regulations such as the GDPR, which means companies need to review how they store and manage data. Furthermore, an entire industry works on analyzing and anonymizing databases to create test data for companies to use in testing. How can these companies guarantee that they can hand over their databases for this particular purpose? Easit AB wants a system to be built for scrambling databases so that the structure and data in the database are unrecognizable; the database can then be submitted to Easit for analysis. The main objective is to use existing functionality in the Easit Test Engine (ETE) to scramble customers' databases and data beyond recognition, so that the handover of the database can be done without risk, and to validate the scrambling methods that the solution contains.
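As a rough illustration of what database scrambling can involve, the sketch below applies two generic techniques, one-way hashing of identifiers and column shuffling, to a toy SQLite table. It is an assumption-laden stand-in, not Easit's ETE or its validated methods.

```python
# Minimal sketch of two common scrambling techniques on a toy table;
# a generic illustration, not the Easit Test Engine.
import hashlib
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                 [(1, "Alice", "Umea"), (2, "Bob", "Lund"), (3, "Carol", "Gavle")])

# Technique 1: one-way hashing of direct identifiers (irreversible).
for pid, name in conn.execute("SELECT id, name FROM customer").fetchall():
    digest = hashlib.sha256(name.encode()).hexdigest()[:12]
    conn.execute("UPDATE customer SET name = ? WHERE id = ?", (digest, pid))

# Technique 2: shuffling a quasi-identifier column, which preserves the
# column's value distribution but breaks the link to individual rows.
rows = conn.execute("SELECT id, city FROM customer").fetchall()
cities = [c for _, c in rows]
random.shuffle(cities)
conn.executemany("UPDATE customer SET city = ? WHERE id = ?",
                 [(c, pid) for (pid, _), c in zip(rows, cities)])

print(conn.execute("SELECT * FROM customer").fetchall())
```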
694

NoSQL Database Selection Focused on Performance Criteria for Web-driven Applications

Kharboutli, Zacky January 2019 (has links)
This paper delivers a comparative analysis of the performance of three NoSQL technologies in Web applications: graph stores, key-value stores, and document stores. The study aims to assist developers and organizations in picking a suitable NoSQL solution for their applications. For this purpose, three identical e-book applications were developed, each connected to a database from one of the selected technologies, to examine how they perform relative to each other on various performance measures.
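A comparison like this ultimately reduces to timing one workload against interchangeable backends. The sketch below shows such a harness; the two "backends" are in-memory stand-ins, not the actual graph, key-value, or document stores evaluated in the paper.

```python
# Hedged sketch of a benchmark harness of the kind the study implies: the
# same query is timed against pluggable backend callables.
import time
import statistics

def bench(label, query_fn, repetitions=100):
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        query_fn()
        timings.append(time.perf_counter() - start)
    print(f"{label:>10}: median {statistics.median(timings) * 1e6:8.1f} us")

# Stand-ins for "fetch one e-book by id" against two hypothetical stores.
book_by_id = {i: {"title": f"book-{i}"} for i in range(10_000)}
bench("indexed", lambda: book_by_id[4242])
bench("scan", lambda: next(b for i, b in book_by_id.items() if i == 4242))
```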
695

Measurement of discontinuous drug exposure in large healthcare databases

Palmaro, Aurore 20 January 2017 (has links)
The multinational context of pharmacoepidemiology, and the resulting increase in multi-source studies, have generated concerns about conflicting results and the impact of methodological choices on study findings. Increasing confidence in the conclusions derived from these observational studies is a crucial issue, closely tied to the robustness of the evidence produced. In this area, the measurement of drug exposure and the choice of risk window can be crucial. Drug exposure is mostly characterized by discontinuous episodes, marked by changes in dose and the presence of concomitant medications. Given the pharmacokinetic and pharmacodynamic characteristics specific to each drug, the way drug exposure is represented is of great importance. However, the methods used for handling drug exposure episodes in electronic healthcare databases vary widely across studies, and the impact of these methodological choices on study results remains poorly characterized.
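A common way to operationalize discontinuous exposure, merging consecutive dispensings into episodes whenever the gap does not exceed a grace period, can be sketched as follows. This is a generic illustration of the technique, not the thesis's specific methods; the 14-day grace period is an arbitrary example value.

```python
# Build continuous drug-exposure episodes from discontinuous dispensing
# records: consecutive dispensings are merged into one episode when the
# gap between them does not exceed a grace period.
from datetime import date, timedelta

GRACE = timedelta(days=14)

def build_episodes(dispensings):
    """dispensings: date-sorted list of (start_date, days_supplied)."""
    episodes = []
    for start, days in dispensings:
        end = start + timedelta(days=days)
        if episodes and start <= episodes[-1][1] + GRACE:
            # Overlaps or falls within the grace window: extend the episode.
            episodes[-1] = (episodes[-1][0], max(episodes[-1][1], end))
        else:
            episodes.append((start, end))
    return episodes

refills = [(date(2016, 1, 1), 30), (date(2016, 2, 5), 30),   # 5-day gap
           (date(2016, 6, 1), 30)]                            # long break
print(build_episodes(refills))
# -> two episodes: Jan-Mar (gap bridged) and the isolated June refill
```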
696

Evaluation of Signature Databases in the System Snort

Steinvall, Daniel January 2019 (has links)
For many people all over the world, being constantly connected to the Internet is taken for granted. The Internet connects people globally in a way that has never been possible before, which in many ways is a fantastic thing. Unfortunately, this global connection can be abused for malicious purposes, which has led to the need for security solutions such as network intrusion detection systems. One prominent example of such a system is Snort, which is the subject of evaluation in this thesis. This study investigates the ability of signature databases for Snort to detect cyberattacks. In total, we executed 1143 attacks released between 2008 and 2019 and recorded the network traffic. We then analyzed the traffic using three versions of Snort released in 2012, 2016, and 2018. For each version, we used 18 different signature databases dated 2011-2019 from three different publishers. Our results show that there is a significant difference between the publishers' signature databases: the best detected around 70% of the attacks, while the worst detected only around 1%. The configuration of Snort also had a significant impact on the results; with the pre-processor enabled, Snort detected about 15% more attacks than without it.
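The replay-and-count loop such a study implies can be sketched with Snort 2.x's standard offline flags (-r to read a pcap, -c for the configuration, -A console for alert output). The file paths below are placeholders, and this is an assumed reconstruction rather than the authors' actual harness.

```python
# Hedged sketch: replay each recorded attack capture through Snort
# offline and count, per ruleset, how many captures triggered any alert.
import subprocess

PCAPS = ["attacks/example-attack.pcap"]         # hypothetical capture files
RULESETS = ["rules/publisher-a-2019.conf"]      # hypothetical Snort configs

for conf in RULESETS:
    detected = 0
    for pcap in PCAPS:
        result = subprocess.run(
            ["snort", "-q", "-A", "console", "-c", conf, "-r", pcap],
            capture_output=True, text=True)
        # In console alert mode Snort prints one line per alert; any
        # output means at least one rule fired for this capture.
        if result.stdout.strip():
            detected += 1
    print(f"{conf}: detected {detected}/{len(PCAPS)} attacks")
```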
697

Advanced techniques for graph analysis: a multimodal approach over planetary-scale data

Gimenes, Gabriel Perri 12 February 2015 (has links)
Applications such as electronic commerce, computer networks, social networks, and biology (protein interaction), to name a few, produce graph-like data at planetary scale, possibly with millions of nodes and billions of edges. These applications pose challenging problems when the task is to use their data to support decision-making by uncovering non-obvious and potentially useful patterns. To process such data for pattern discovery, researchers and practitioners have used distributed processing resources organized in computational clusters. However, building and managing such clusters can be complex, bringing technical and financial issues that can be prohibitive in a variety of scenarios. It is therefore desirable to process large-scale graphs using only one computational node. To this end, we developed processes and algorithms following three different approaches, building toward an analytical toolset capable of revealing patterns, supporting comprehension, and aiding decision making over planetary-scale graphs.
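One standard single-node trick in this vein is to memory-map a binary edge list and stream over it in chunks, so the graph never has to fit in RAM at once. The sketch below, assuming NumPy and a synthetic edge list, computes out-degrees this way; it illustrates the general approach, not the dissertation's algorithms.

```python
# Single-node processing of a large edge list: memory-map the file and
# stream it in chunks, letting the OS page data in on demand.
import numpy as np

# Create a hypothetical binary edge list (pairs of int32 node ids).
edges = np.random.randint(0, 1_000, size=(100_000, 2)).astype(np.int32)
edges.tofile("edges.bin")

# Memory-map it instead of loading it whole.
emap = np.memmap("edges.bin", dtype=np.int32, mode="r").reshape(-1, 2)

# Out-degree via chunked streaming over the mapped file.
degree = np.zeros(1_000, dtype=np.int64)
CHUNK = 10_000
for lo in range(0, emap.shape[0], CHUNK):
    src = emap[lo:lo + CHUNK, 0]
    degree += np.bincount(src, minlength=1_000)
print("max out-degree:", degree.max())
```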
698

"Desenvolvimento de um Framework para Análise Visual de Informações Suportando Data Mining" / "Development of a Framework for Visual Analysis of Information with Data Mining suport"

Rodrigues Junior, Jose Fernando 22 July 2003 (has links)
This document brings together contributions from numerous works in the fields of Databases, Knowledge Discovery in Databases, Data Mining, and Computer-based Information Visualization, which together structure the research theme of this Master's dissertation: Information Visualization. The relevant theory is reviewed and related in order to support the theoretical and practical conclusions reported in the work. Grounded in this theory, the work makes several contributions to Information Visualization, presented through proposals formalized throughout the text and through practical results in the form of software for visual exploration of information. The presented ideas are based on the visual display of basic statistical analyses, frequency analyses (Frequency Plot), and relevance analyses (Relevance Plot). Contributions to the FastMapDB tool of the Grupo de Bases de Dados e Imagens at ICMC-USP are also reported, together with the results of its use. The Framework foreseen in the original proposal for building visual analysis tools is presented, along with its architecture, characteristics, and usage. Finally, the visualization Pipeline that emerges from joining the visualization Framework and the FastMapDB tool is described. The work closes with a brief analysis of the Information Visualization field based on the studied literature, outlining the state of the art of the discipline and suggesting future work.
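As a rough idea of the kind of frequency-based visual analysis described, the sketch below renders a generic frequency plot of one attribute with matplotlib. It is not the dissertation's Frequency Plot or Relevance Plot techniques, only the basic form they build on; the data is synthetic.

```python
# Generic frequency plot of a single attribute from a dataset.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical attribute values drawn from a database table.
values = np.random.normal(loc=50, scale=12, size=5_000)

counts, bin_edges = np.histogram(values, bins=40)
centers = (bin_edges[:-1] + bin_edges[1:]) / 2

plt.bar(centers, counts, width=bin_edges[1] - bin_edges[0])
plt.xlabel("attribute value")
plt.ylabel("frequency")
plt.title("Frequency plot of one attribute")
plt.savefig("frequency_plot.png")   # write to file; no display needed
```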
699

Translational genomics: integrating clinical and biomolecular data

Miyoshi, Newton Shydeo Brandão 06 February 2013 (has links)
The use of scientific knowledge to promote human health is the main goal of translational science. To make this possible, it is necessary to develop computational methods capable of dealing with the large volume and heterogeneity of information generated on the road between bench and clinical practice. A computational barrier to be overcome is the management and integration of clinical, socio-demographic, and biological data. In this effort, ontologies play a crucial role as a powerful artifact for knowledge representation. Tools for managing and storing clinical data in translational science usually fail because they cannot represent biological data or do not integrate with bioinformatics tools. In the field of genomics there are many biological database models (such as AceDB and Ensembl) that serve as the basis for computational tools for genomic analysis in an organism-independent way. Chado is an ontology-oriented biological database model that has gained popularity due to its robustness and flexibility as a generic platform for biomolecular data. However, neither Chado nor the other biological database models are prepared to represent the clinical information of patients. This project proposes, implements, and validates a framework for data integration, aiming to support translational research by integrating biomolecular data from different omics technologies with clinical and socio-demographic data of patients. The instantiation of this framework resulted in a tool called IPTrans (Integrative Platform for Translational Research), which uses Chado as its genomic data model and an ontology as reference. Chado was extended to represent clinical information through a new Clinical Module, which uses an entity-attribute-value data structure. A pipeline was developed to migrate data from heterogeneous information sources into the integrated database. The framework was validated with clinical data from a teaching hospital and a biomolecular research database of patients with head and neck cancer, as well as data from microarray experiments performed for these patients. The main requirements targeted for the framework were flexibility, robustness, and generality; the validation showed that the proposed system satisfies them, providing the integration needed for data analysis and comparison.
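The entity-attribute-value structure mentioned for the Clinical Module can be shown in miniature: each clinical fact becomes one row, so adding a new attribute never requires changing the schema. The table and column names below are illustrative assumptions, not Chado's or IPTrans's actual schema.

```python
# Minimal entity-attribute-value (EAV) layout for clinical data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE attribute (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE clinical_fact (
    patient_id   INTEGER,
    attribute_id INTEGER REFERENCES attribute(id),
    value        TEXT
);
""")
conn.executemany("INSERT INTO attribute (name) VALUES (?)",
                 [("tumor_site",), ("smoking_status",)])
conn.executemany("INSERT INTO clinical_fact VALUES (?, ?, ?)",
                 [(1, 1, "larynx"), (1, 2, "former"), (2, 1, "oral cavity")])

# Pivot patient 1's facts back into attribute/value pairs.
rows = conn.execute("""
    SELECT a.name, f.value FROM clinical_fact f
    JOIN attribute a ON a.id = f.attribute_id
    WHERE f.patient_id = 1""").fetchall()
print(rows)   # [('tumor_site', 'larynx'), ('smoking_status', 'former')]
```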
700

Control-Flow Patterns in Relational Databases

Braghetto, Kelly Rosa 23 June 2006 (has links)
The representation and execution of business processes have generated important challenges in Computer Science. One such challenge is choosing the best formal foundation for specifying control-flow patterns. Some workflow languages adopt Petri nets or process algebras as their formal foundation. The use of Petri nets to specify classic workflows is a well-known approach; recent research, however, has been introducing modern process algebra extensions as an alternative formal foundation for representing workflows. The first contribution of this research is the definition of the Navigation Plan Definition Language (NPDL), implemented as an extension of SQL. It is an alternative for representing business processes that uses process algebra as its formal foundation. NPDL provides an explicit separation between the specification and execution environments of a workflow. This separation enables the reuse of business steps and the use of process algebra properties not only in process modeling but also in execution control. After a workflow is specified with NPDL, the execution of its steps is carried out and controlled by a tool called NavigationPlanTool, which is the second contribution of this research.
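The process-algebra idea behind NPDL, composing workflow steps with operators such as sequence and choice and then executing the resulting expression, can be sketched generically. The sketch below is not NPDL's syntax or the NavigationPlanTool; the operator names and semantics are simplified assumptions.

```python
# Toy process-algebra workflow: steps composed with sequence and choice
# operators form an expression tree that drives execution.
import random

class Step:
    def __init__(self, name):
        self.name = name
    def run(self):
        print("executing", self.name)

class Seq:                       # P ; Q  - run P, then Q
    def __init__(self, p, q):
        self.p, self.q = p, q
    def run(self):
        self.p.run()
        self.q.run()

class Choice:                    # P + Q  - run exactly one branch
    def __init__(self, p, q):
        self.p, self.q = p, q
    def run(self):
        random.choice([self.p, self.q]).run()

# (receive_order ; (approve + reject)) ; archive
plan = Seq(Seq(Step("receive_order"),
               Choice(Step("approve"), Step("reject"))),
           Step("archive"))
plan.run()
```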
