1171

Systém pro detekci rámce GPON / GPON Frame Detection System

Holík, Martin January 2018 (has links)
This diploma thesis deals with a GPON frame detection system. The theoretical part describes the sub-problems of database design, optical networks, and administration of a database system. The practical part of the thesis focuses on the design of a system for detecting GPON frames and a script for analysing the captured traffic.
1172

Sinkhole Hazard Assessment in Minnesota Using a Decision Tree Model

Gao, Yongli, Alexander, E. Calvin 01 May 2008 (has links)
An understanding of what influences sinkhole formation and the ability to accurately predict sinkhole hazards is critical to environmental management efforts in the karst lands of southeastern Minnesota. Based on the distribution of distances to the nearest sinkhole, sinkhole density, bedrock geology and depth to bedrock in southeastern Minnesota and northwestern Iowa, a decision tree model has been developed to construct maps of sinkhole probability in Minnesota. The decision tree model was converted into cartographic models and implemented in ArcGIS to create a preliminary sinkhole probability map in Goodhue, Wabasha, Olmsted, Fillmore, and Mower Counties. This model quantifies bedrock geology, depth to bedrock, sinkhole density, and neighborhood effects in southeastern Minnesota but excludes potential controlling factors such as structural control, topographic settings, human activities and land use. The sinkhole probability map needs to be verified and updated as more sinkholes are mapped and more information about sinkhole formation is obtained.
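As a purely illustrative sketch of the kind of decision-tree hazard classification described in this abstract, the snippet below trains a scikit-learn decision tree on synthetic grid cells whose features mirror the factors named above. The thesis itself implemented its model as cartographic models in ArcGIS; the feature encodings, values, and labels here are assumptions, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training table: each row is a grid cell with the factors the
# abstract names (sinkhole density, distance to nearest sinkhole, depth to
# bedrock, bedrock geology class). All values are synthetic.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.exponential(2.0, n),      # sinkhole density (sinkholes / km^2)
    rng.uniform(0, 5000, n),      # distance to nearest sinkhole (m)
    rng.uniform(0, 60, n),        # depth to bedrock (m)
    rng.integers(0, 4, n),        # bedrock geology class (categorical code)
])
# Synthetic label: cells with high density and shallow bedrock are "high hazard".
y = ((X[:, 0] > 2) & (X[:, 2] < 15)).astype(int)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Probability of the high-hazard class for a new cell, analogous to one pixel
# of a sinkhole probability map.
cell = np.array([[3.1, 250.0, 8.0, 2]])
print("P(high sinkhole hazard) =", tree.predict_proba(cell)[0, 1])
```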
1173

Database Tuning using Evolutionary and Search Algorithms

Raneblad, Erica January 2023 (has links)
Achieving optimal performance of a database can be crucial for many businesses, and tuning its configuration parameters is a necessary step in this process. Many existing tuning methods involve complex machine learning algorithms and require large amounts of historical data from the system being tuned. However, training machine learning models can be problematic if a considerable amount of computational resources and data storage is required. This paper investigates the possibility of using less complex search algorithms or evolutionary algorithms to tune database configuration parameters, and presents a framework that employs Hill Climbing and Particle Swarm Optimization. The performance of the algorithms is tested on a PostgreSQL database using read-only workloads. Particle Swarm Optimization displayed the largest improvement in query response time, improving it by 26.09% compared to using the configuration parameters' default values. Given the improvement shown by Particle Swarm Optimization, evolutionary algorithms may be promising in the field of database tuning.
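A minimal sketch of how Particle Swarm Optimization might be applied to database configuration tuning in the spirit of the framework described above. The parameter names (shared_buffers, work_mem), their ranges, the swarm hyperparameters, and the stubbed benchmark function are illustrative assumptions; a real framework would apply each candidate configuration to PostgreSQL and measure query response time on the read-only workload.

```python
import random

# Hypothetical parameter ranges (MB); the thesis's actual parameter set is not
# given in the abstract, so these are illustrative assumptions.
PARAM_BOUNDS = {"shared_buffers": (128, 8192), "work_mem": (4, 512)}

def benchmark(params):
    """Placeholder for applying the configuration and timing the workload.
    Stubbed with a synthetic cost surface so the sketch is self-contained."""
    sb, wm = params["shared_buffers"], params["work_mem"]
    return (sb - 4096) ** 2 / 1e6 + (wm - 256) ** 2 / 1e4 + random.random()

def pso(n_particles=8, iterations=30, w=0.7, c1=1.5, c2=1.5):
    names = list(PARAM_BOUNDS)
    # Initialise particle positions and velocities within the bounds.
    pos = [{k: random.uniform(*PARAM_BOUNDS[k]) for k in names} for _ in range(n_particles)]
    vel = [{k: 0.0 for k in names} for _ in range(n_particles)]
    pbest = [dict(p) for p in pos]
    pbest_val = [benchmark(p) for p in pos]
    gbest_val = min(pbest_val)
    gbest = dict(pbest[pbest_val.index(gbest_val)])

    for _ in range(iterations):
        for i in range(n_particles):
            for k in names:
                r1, r2 = random.random(), random.random()
                # Standard velocity update pulled toward personal and global bests.
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
                lo, hi = PARAM_BOUNDS[k]
                pos[i][k] = min(max(pos[i][k] + vel[i][k], lo), hi)
            val = benchmark(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = dict(pos[i]), val
                if val < gbest_val:
                    gbest, gbest_val = dict(pos[i]), val
    return gbest, gbest_val

if __name__ == "__main__":
    best, best_time = pso()
    print("Best configuration found:", best, "estimated response time:", best_time)
```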
1174

Lagring och visualisering av information om stötdämpare / Storage and Visualization of Shock Absorber Information

Settlin, Johan, Ekelund, Joar January 2019 (has links)
Using simulations to understand how a shock absorber's settings affect its characteristics can lead to improved road holding, increased traffic safety, and faster lap times on the racetrack. By visualizing the simulated data, users can get a sense of how the shock absorber's settings will behave in practice. The goal of this work was to design a database that models a shock absorber's characteristics and to visualize these characteristics on a web page. Requirements were gathered through interviews with experts, and background information was collected through literature studies. Based on the gathered requirements and case studies, a relational database containing information about a shock absorber's components and construction was developed, along with a visualization tool that presents the absorber's characteristics on a web page. The database and the visualization tool were then combined into a prototype that enables simulation of a shock absorber's characteristics on the web. The results of the case studies showed that the database management system MySQL and the graph library Chart.js were best suited for the prototype given the collected requirements. The functionality of the prototype was validated by the project's client, and the margin of error for the simulations was below 1%. This implies that the database model is of good quality and that the results are visualized in a correct and comprehensible manner.
1175

Private Table Database Virtualization for DBaaS

Lehner, Wolfgang, Kiefer, Tim 03 November 2022 (has links)
A growing number of applications store data in relational databases. Moving database applications to the cloud faces challenges related to flexible and scalable management of data. The obvious strategy of hosting legacy database management systems (DBMSs) on virtualized cloud resources leads to sub-optimal utilization and performance. However, the layered architecture inside the DBMS allows for virtualization and consolidation above the OS level, which can lead to significantly better system utilization and application performance. Finding an optimal database cloud solution requires finding an assignment from virtual to physical resources as well as configurations for all components. Our goal is to provide a virtualization advisor that aids in setting up and operating a database cloud. By formulating analytic cost, workload, and resource models, the performance of cloud-hosted relational database services can be significantly improved.
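The assignment problem mentioned above can be illustrated with a deliberately simple sketch: a greedy first-fit placement of virtual databases onto physical hosts under a single, assumed capacity dimension. The host names, capacities, and tenant demands are invented for illustration; this is not the advisor or the analytic models described in the paper.

```python
# Illustrative greedy first-fit assignment of virtual databases to physical
# hosts under one resource dimension (assumed to be memory in GB).
hosts = {"host1": 64.0, "host2": 32.0}                 # remaining capacity
virtual_dbs = {"tenantA": 20.0, "tenantB": 30.0, "tenantC": 24.0}

assignment = {}
# Place the most demanding virtual databases first.
for vdb, demand in sorted(virtual_dbs.items(), key=lambda kv: -kv[1]):
    for host, free in hosts.items():
        if free >= demand:
            assignment[vdb] = host
            hosts[host] = free - demand
            break
    else:
        assignment[vdb] = None                         # would need another host

print(assignment)
```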
1176

Automated dust storm detection using satellite images. Development of a computer system for the detection of dust storms from MODIS satellite images and the creation of a new dust storm database.

El-Ossta, Esam E.A. January 2013 (has links)
Dust storms are a natural hazard whose frequency has increased over the last decade across the Sahara desert, Australia, the Arabian Desert, Turkmenistan and northern China. Dust storms increase air pollution, damage human health and communication facilities, lower temperatures, and reduce visibility, delaying both road and air traffic and affecting urban areas, rural areas and farms. It is therefore important to know the causation, movement and radiation effects of dust storms, and monitoring and forecasting efforts are increasing in order to help governments reduce their negative impact. Satellite remote sensing is the most common monitoring method, but its use over sandy ground is still limited because dust and sandy surfaces share similar spectral characteristics. Satellite remote sensing based on true-colour images or on estimates of aerosol optical thickness (AOT), and algorithms such as the deep blue algorithm, also have limitations for identifying dust storms. Many researchers have studied the detection of dust storms during daytime in different regions of the world, including China, Australia, America and North Africa, using a variety of satellite data, but fewer studies have focused on detecting dust storms at night. The key elements of this study are to use data from the Moderate Resolution Imaging Spectroradiometers (MODIS) on the Terra and Aqua satellites to develop a more effective automated method for detecting dust storms during both day and night, and to generate a MODIS dust storm database. / Libyan Centre for Remote Sensing and Space Science / Appendix A was submitted with extra data files which are not available online.
1177

A Survey Of Persistent Graph Databases

Liu, Yufan 23 April 2014 (has links)
No description available.
1178

A graph database management system for a logistics-related service

Walldén, Marcus, Özkan, Aylin January 2016 (has links)
Higher demands on database systems have led to an increased popularity of certain database system types in some niche areas. One such niche area is graph networks, such as social networks or logistics networks. Analyses of such networks often focus on complex relational patterns that sometimes cannot be handled efficiently by traditional relational databases, which has led to the adoption of specialized non-relational database systems. Some of the database systems that have seen a surge in popularity in this area are graph database systems. This thesis presents a prototype of a logistics network-related service using Neo4j, currently the most widely used graph database management system. The logistics network covered by the service is based on existing data from PostNord, Sweden's biggest provider of logistics solutions, and primarily focuses on customer support and business-to-business use. By creating a prototype of the service, this thesis strives to indicate some of the positive and negative aspects of a graph database system, as well as give an indication of how a service using a graph database system could be built. The results indicate that Neo4j is very intuitive and easy to use, which would make it well suited for prototyping and smaller systems, but due to the evaluation method used, more research in this area would need to be carried out in order to confirm these conclusions.
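A minimal sketch of how such a prototype might query a logistics graph through the official Neo4j Python driver. The connection details, the node label Terminal, the relationship type ROUTE, and the shortest-path query are assumptions made for illustration; the thesis's actual PostNord data model is not described in the abstract.

```python
from neo4j import GraphDatabase

# Connection details and the data model (Terminal nodes linked by ROUTE
# relationships) are assumptions for illustration only.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

FIND_PATH = """
MATCH p = shortestPath(
  (a:Terminal {name: $origin})-[:ROUTE*..10]-(b:Terminal {name: $destination}))
RETURN [n IN nodes(p) | n.name] AS terminals
"""

def shortest_route(origin, destination):
    # Run a parameterised Cypher query and return the terminal names on the path.
    with driver.session() as session:
        record = session.run(FIND_PATH, origin=origin, destination=destination).single()
        return record["terminals"] if record else None

print(shortest_route("Stockholm", "Malmö"))
driver.close()
```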
1179

Bookkeeping Procedures for the Application of the Concept of Pre-Allocation of Total Float

Ambani, Nikhil 03 December 2004 (has links)
With the increasing complexity of construction projects, monitoring the project schedule and managing projects effectively is becoming increasingly important. Since most projects are deadline oriented, timely completion is a must. Like every industry, the construction industry lays a lot of emphasis on timely completion, which makes it necessary to monitor the project schedule very closely. A schedule overrun is never predicted at the start of a project, but during the course of the project even the slightest change can result in delays. Under current scheduling practice, float is considered free. It is an expiring resource, and hence the party that uses the float first owns it. The concept endorsed by the courts for analyzing delay claims is the proximate cause concept: the party that is the immediate cause of a particular delay is held responsible for that delay, irrespective of what has happened before in the project. Due to the ambiguous nature of its interpretation, the present approach to float management has become one of the primary reasons for disputes among the participating parties, who are always trying to appropriate float to suit their own interests. This is why total float management has gained this level of importance in today's industry. To handle the issue of total float management more efficiently, Dr. Prateapusanond (2003) proposes a new concept of total float management as an effort towards a more fair and equitable system. This concept respects the dynamic nature of construction projects and recognizes float to be an asset for both parties. The new concept proposes to allocate float in the ratio 50:50 between the parties at the start of the project; this pre-allocated float owned by each party is called the Allowable Total Float (ATF). Implementing the concept ensures that the parties are aware that consuming float in a way that affects critical activities will expose them to potential damages. The concept appears impressive on paper, but its practicality and applicability remain a major concern. This research is aimed at testing the practicality of the proposed concept of pre-allocation of total float. It introduces bookkeeping procedures that facilitate the application of the concept. These procedures have been developed and tested on case studies to make sure that they are robust. Once their ability to handle scheduling issues is determined, the bookkeeping procedures, along with the concept of pre-allocation of total float, are applied to a real construction project. This research presents an in-depth analysis of the nature of the proposed concept of pre-allocation of total float, the scheduling issues which this concept does not address, and certain assumptions which could be used in conjunction with the present concept to make it robust. / Master of Science
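A minimal bookkeeping sketch in the spirit of the 50:50 pre-allocation described above: each party owns half of a path's total float as its Allowable Total Float, and delays are charged against that allocation. The class name, the single-path ledger, and the simple overrun rule are assumptions for illustration, not the bookkeeping procedures developed in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class FloatLedger:
    """Tracks float consumption on one activity path under 50:50 pre-allocation."""
    total_float: float                      # total float of the path (days)
    consumed: dict = field(default_factory=lambda: {"owner": 0.0, "contractor": 0.0})

    def atf(self, party):
        # Allowable Total Float: half of the total float per party.
        return self.total_float / 2.0

    def record_delay(self, party, days):
        """Charge a delay against a party's allocation and report any
        consumption beyond its ATF, which would expose it to damages."""
        self.consumed[party] += days
        overrun = max(0.0, self.consumed[party] - self.atf(party))
        return {"party": party, "consumed": self.consumed[party],
                "atf": self.atf(party), "overrun": overrun}

ledger = FloatLedger(total_float=20)
print(ledger.record_delay("contractor", 7))   # within its 10-day ATF
print(ledger.record_delay("contractor", 5))   # now 2 days beyond its ATF
```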
1180

ANNIS: A graph-based query system for deeply annotated text corpora

Krause, Thomas 11 January 2019 (has links)
This dissertation describes the design and implementation of an efficient system for linguistic corpus queries. The existing system ANNIS is based on a relational database, focuses on supporting corpora with very different kinds of annotations, and uses graphs as a unified representation of these annotations. For this dissertation, a main-memory, solely graph-based successor of ANNIS has been developed. Corpora are divided into edge components, and different implementations for representing and searching these components are used for different types of subgraphs. Operations of the ANNIS Query Language (AQL) are interpreted as a set of reachability queries on the different components, and each component implementation has functions optimized for this type of query. This approach allows exploiting the different structures of the different kinds of annotations without losing the common representation as a graph. Additional optimizations, such as parallel execution of parts of a query, are also implemented and evaluated. Since AQL has an existing implementation that is already provided as a web-based service for researchers, real-life AQL queries could be recorded and used as the basis for benchmarking the new implementation. More than 4000 queries from 18 corpora (most of which are available under an open-access license) have been compiled into a realistic workload that includes very different types of corpora and queries with a wide range of complexity.
The new graph-based implementation was compared against the existing one, which uses a relational database. It executes the workload roughly 10× faster than the baseline, and the experiments show that the different storage implementations for the edge components contributed substantially to this improvement.
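To illustrate the idea of evaluating query operators as reachability queries over separately stored edge components, here is a toy sketch. The component names, nodes, and the plain breadth-first search are invented for illustration and do not reflect the dissertation's actual storage implementations or indexes.

```python
from collections import deque

# Toy annotation graph split into edge components: each component has its own
# adjacency structure and answers reachability queries independently.
components = {
    "dominance": {"cat1": ["tok1", "tok2"], "cat2": ["cat1"]},
    "ordering":  {"tok1": ["tok2"], "tok2": ["tok3"]},
}

def is_reachable(component, source, target, min_dist=1, max_dist=None):
    """Breadth-first reachability inside one edge component; a real storage
    implementation would use an index optimised for its component type."""
    adjacency = components[component]
    queue = deque([(source, 0)])
    seen = {source}
    while queue:
        node, dist = queue.popleft()
        if node == target and dist >= min_dist:
            return True
        if max_dist is not None and dist >= max_dist:
            continue
        for nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return False

# A precedence operator such as "tok1 .1,2 tok3" could then be evaluated as a
# bounded reachability query on the ordering component:
print(is_reachable("ordering", "tok1", "tok3", min_dist=1, max_dist=2))
```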
