1171 |
Lagring och visualisering av information om stötdämpare / Storage and visualization of shock absorber information. Settlin, Johan, Ekelund, Joar January 2019 (has links)
Using simulations to understand how a shock absorber's settings affect its characteristics can lead to improved road holding, increased traffic safety, and faster lap times on the racetrack. By visualizing the simulated data, users can get a sense of how the chosen shock absorber settings will behave in practice. The goal of this work was to design a database that models a shock absorber's characteristics and to visualize those characteristics on a website. Requirements were gathered through interviews with experts, and supporting information was collected through literature studies. Based on the gathered requirements and case studies, a relational database containing information about a damper's components and construction was developed, together with a visualization tool that presents the damper's characteristics on a web page. The database and the visualization tool were then combined into a prototype that enables simulation of a damper's characteristics on the web. The results of the case studies showed that the database management system MySQL and the graph library Chart.js were best suited for the prototype, given the collected requirements. The functionality of the prototype was validated by the project's client, and the margin of error for the simulations was below 1%. This implies that the database model is of good quality and that the results are visualized in a correct and understandable way.
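As an illustration of what such a relational model might look like, the sketch below builds a heavily simplified damper schema in SQLite from Python. The table and column names (damper, component, sim_point) are hypothetical assumptions for illustration only; the thesis does not publish its actual MySQL schema.

```python
import sqlite3

# A minimal, hypothetical sketch of a damper database; the real schema in the
# thesis (implemented in MySQL) is not published, so all names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE damper (
    damper_id   INTEGER PRIMARY KEY,
    model_name  TEXT NOT NULL
);
CREATE TABLE component (
    component_id INTEGER PRIMARY KEY,
    damper_id    INTEGER NOT NULL REFERENCES damper(damper_id),
    name         TEXT NOT NULL,          -- e.g. shim, piston, spring
    parameter    REAL                    -- simplified single setting value
);
CREATE TABLE sim_point (
    damper_id    INTEGER NOT NULL REFERENCES damper(damper_id),
    velocity_mps REAL NOT NULL,          -- shaft velocity
    force_n      REAL NOT NULL           -- simulated damping force
);
""")

conn.execute("INSERT INTO damper VALUES (1, 'Example damper')")
conn.executemany("INSERT INTO sim_point VALUES (1, ?, ?)",
                 [(0.1, 120.0), (0.2, 210.0), (0.5, 430.0)])

# Force-velocity points like these are what a front end (Chart.js in the
# thesis) would plot as the damper curve.
for row in conn.execute(
        "SELECT velocity_mps, force_n FROM sim_point WHERE damper_id = 1"):
    print(row)
```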
|
1172 |
Private Table Database Virtualization for DBaaS. Lehner, Wolfgang, Kiefer, Tim 03 November 2022 (has links)
A growing number of applications store their data in relational databases. Moving database applications to the cloud raises challenges around flexible and scalable management of data. The obvious strategy of hosting legacy database management systems (DBMSs) on virtualized cloud resources leads to suboptimal utilization and performance. However, the layered architecture inside the DBMS allows for virtualization and consolidation above the OS level, which can lead to significantly better system utilization and application performance. Finding an optimal database cloud solution requires finding an assignment from virtual to physical resources as well as configurations for all components. Our goal is to provide a virtualization advisor that aids in setting up and operating a database cloud. By formulating analytic cost, workload, and resource models, the performance of cloud-hosted relational database services can be significantly improved.
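The abstract does not spell out how the advisor's assignment step works; the following is only a minimal sketch of the general idea, under the assumption that each virtual database has a single aggregate resource demand and each physical host a single capacity, placed greedily. This is a deliberate simplification of the multi-dimensional cost, workload, and resource models the paper proposes.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: float          # abstract resource units (CPU and IO folded together)
    assigned: list = field(default_factory=list)
    used: float = 0.0

def assign(virtual_dbs, hosts):
    """Greedy first-fit-decreasing placement of virtual databases on hosts.

    virtual_dbs: list of (name, demand) pairs.
    Returns the hosts with their assignments, or raises if a DB does not fit.
    """
    for name, demand in sorted(virtual_dbs, key=lambda x: -x[1]):
        target = next((h for h in hosts if h.used + demand <= h.capacity), None)
        if target is None:
            raise RuntimeError(f"no host has room for {name}")
        target.assigned.append(name)
        target.used += demand
    return hosts

hosts = [Host("phys-1", 100.0), Host("phys-2", 100.0)]
placement = assign([("tenantA", 60.0), ("tenantB", 55.0), ("tenantC", 30.0)], hosts)
for h in placement:
    print(h.name, h.assigned, f"{h.used}/{h.capacity}")
```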
|
1173 |
Automated dust storm detection using satellite images. Development of a computer system for the detection of dust storms from MODIS satellite images and the creation of a new dust storm database. El-Ossta, Esam E.A. January 2013 (has links)
Dust storms are a natural hazard whose frequency has increased in recent years over the Sahara desert, Australia, the Arabian Desert, Turkmenistan and northern China, and the situation has worsened during the last decade. Dust storms increase air pollution, damage human health and communication facilities, lower temperatures, and reduce visibility, thereby delaying both road and air traffic and affecting urban areas, rural areas and farms alike. It is therefore important to understand the causes, movement and radiative effects of dust storms, and monitoring and forecasting efforts are increasing in order to help governments reduce their negative impact. Satellite remote sensing is the most common detection method, but its use over sandy ground is still limited because dust and sand share similar spectral characteristics; true-colour imagery, estimates of aerosol optical thickness (AOT) and algorithms such as the deep blue algorithm all have limitations for identifying dust storms. Many researchers have studied daytime dust storm detection in different regions of the world, including China, Australia, America and North Africa, using a variety of satellite data, but fewer studies have focused on detecting dust storms at night. The key elements of this study are to use data from the Moderate Resolution Imaging Spectroradiometers (MODIS) on the Terra and Aqua satellites to develop a more effective automated method for detecting dust storms during both day and night, and to generate a MODIS dust storm database. / Libyan Centre for Remote Sensing and Space Science / Appendix A was submitted with extra data files which are not available online.
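The abstract does not describe the detection algorithm itself. A common starting point for thermal dust detection from MODIS, shown here only as a hedged sketch, is the split-window brightness temperature difference between the roughly 11 µm and 12 µm channels (MODIS bands 31 and 32), where strongly negative differences often indicate airborne dust. The threshold value and the assumption that calibrated brightness temperatures are already available as arrays are illustrative choices, not the method developed in the thesis.

```python
import numpy as np

def dust_mask(bt_11um: np.ndarray, bt_12um: np.ndarray,
              threshold: float = -0.5) -> np.ndarray:
    """Flag pixels whose 11-12 micron brightness temperature difference
    falls below a (hypothetical) threshold, a classic split-window dust cue.

    bt_11um, bt_12um: brightness temperatures in kelvin for MODIS bands 31/32.
    Returns a boolean mask that is True where dust is suspected.
    """
    btd = bt_11um - bt_12um
    return btd < threshold

# Tiny synthetic example: one "dusty" pixel with a clearly negative BTD.
bt31 = np.array([[295.0, 290.0], [288.5, 293.0]])
bt32 = np.array([[294.0, 291.2], [288.0, 292.5]])
print(dust_mask(bt31, bt32))
```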
|
1174 |
A Survey Of Persistent Graph Databases. Liu, Yufan 23 April 2014 (has links)
No description available.
|
1175 |
A graph database management system for a logistics-related service. Walldén, Marcus, Özkan, Aylin January 2016 (has links)
Higher demands on database systems have led to an increased popularity of certain database system types in some niche areas. One such niche is graph networks, such as social networks or logistics networks. Analyses of such networks often focus on complex relational patterns that sometimes cannot be solved efficiently by traditional relational databases, which has led to the adoption of specialized non-relational database systems. Among the systems that have seen a surge in popularity in this area are graph database systems. This thesis presents a prototype of a logistics-network-related service built on the graph database management system Neo4j, currently the most widely used graph database management system. The logistics network covered by the service is based on existing data from PostNord, Sweden's biggest provider of logistics solutions, and primarily focuses on customer support and business-to-business analysis. By creating a prototype of the service, this thesis strives to indicate some of the positive and negative aspects of a graph database system, as well as to give an indication of how a service using a graph database system could be created. The results indicate that Neo4j is very intuitive and easy to use, which would make it well suited for prototyping and smaller systems, but due to the evaluation method used, more research in this area is needed to confirm these conclusions.
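A minimal sketch of how such a service might query a logistics graph through the official Neo4j Python driver is shown below. The node label, relationship type and property names (Terminal, ROUTE, name) are hypothetical and are not taken from the thesis or from PostNord's data model; the connection details are placeholders.

```python
from neo4j import GraphDatabase  # official Neo4j Python driver

# Placeholder connection details; a real deployment would use its own
# URI and credentials.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

def reachable_terminals(tx, origin: str, max_hops: int = 3):
    # Hypothetical model: (:Terminal)-[:ROUTE]->(:Terminal); Cypher does not
    # allow a parameter inside the variable-length bound, hence the format.
    query = (
        "MATCH (o:Terminal {name: $origin})-[:ROUTE*1..%d]->(t:Terminal) "
        "RETURN DISTINCT t.name AS name" % max_hops
    )
    return [record["name"] for record in tx.run(query, origin=origin)]

with driver.session() as session:
    names = session.execute_read(reachable_terminals, "Stockholm")
    print(names)

driver.close()
```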
|
1176 |
Bookkeeping Procedures for the Application of the Concept of Pre-Allocation of Total Float. Ambani, Nikhil 03 December 2004 (links)
With the increasing complexity of construction projects, monitoring the project schedule and managing projects effectively is becoming increasingly important. Since most projects are deadline oriented, timely completion is a must. Like every industry, the construction industry places great emphasis on timely completion, which makes it necessary to monitor the project schedule very closely. A schedule overrun is never predicted at the start of a project, yet during the course of the project even the slightest change can result in delays.
Under current scheduling practice, float is considered free. It is an expiring resource, and hence the party that uses the float first owns it. The concept endorsed by the courts for analyzing delay claims is the proximate-cause concept: the party whose action is the immediate cause of a particular delay is held responsible for that delay, irrespective of what has happened earlier in the project. Because its interpretation is ambiguous, the present approach to float management has become one of the primary sources of disputes among the participating parties. Parties to a contract are always trying to appropriate float to suit their own interests, which is why total float management has gained this level of importance in today's industry.
To handle total float management more efficiently, Dr. Prateapusanond (2003) proposes a new concept of total float management as an effort towards a fairer and more equitable system. This concept respects the dynamic nature of construction projects and recognizes float as an asset for both parties. The new concept pre-allocates float in a 50:50 ratio between the parties at the start of the project; the pre-allocated float owned by each party is called the Allowable Total Float (ATF). Implementing this concept ensures that the parties are aware that consuming float in a way that affects critical activities will expose them to potential damages.
This concept is an effort towards a fairer and more equitable system for total float management. It appears impressive on paper, but its practicality and applicability remain a major concern. This research is aimed at testing the practicality of the proposed concept of pre-allocation of total float. It introduces bookkeeping procedures that facilitate the application of the concept; these procedures have been developed and tested on several case studies to make sure they are robust. Once their ability to handle scheduling issues was established, the bookkeeping procedures, along with the concept of pre-allocation of total float, were applied to a real construction project. This research presents an in-depth analysis of the nature of the proposed concept of pre-allocation of total float, the scheduling issues which this concept does not address, and certain assumptions which could be used in conjunction with the present concept to make it more robust. / Master of Science
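As an illustration of the kind of bookkeeping the thesis develops, the following is only a rough sketch, under the assumption that total float is known up front and that each delay can be attributed cleanly to the owner or the contractor; the ledger simply deducts each attributed delay from that party's 50% Allowable Total Float and flags when the allowance is exhausted. It is not the procedure defined in the thesis.

```python
class FloatLedger:
    """Toy bookkeeping for pre-allocated total float (a hypothetical sketch)."""

    def __init__(self, total_float_days: float):
        # 50:50 pre-allocation of total float between the two parties.
        self.atf = {"owner": total_float_days / 2,
                    "contractor": total_float_days / 2}
        self.log = []

    def record_delay(self, party: str, days: float, cause: str):
        self.atf[party] -= days
        self.log.append((party, days, cause, self.atf[party]))
        if self.atf[party] < 0:
            # The party has consumed more than its allowance; under the
            # concept it is now exposed to potential damages.
            print(f"{party} exceeded its Allowable Total Float by "
                  f"{-self.atf[party]:.1f} days ({cause})")

ledger = FloatLedger(total_float_days=20)
ledger.record_delay("contractor", 6, "late material delivery")
ledger.record_delay("owner", 5, "delayed design approval")
ledger.record_delay("contractor", 7, "equipment breakdown")  # exceeds its 10-day ATF
```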
|
1177 |
ANNIS: A graph-based query system for deeply annotated text corpora. Krause, Thomas 11 January 2019
This dissertation describes the design and implementation of an efficient system for linguistic corpus queries. The existing system ANNIS is based on a relational database, is focused on supporting corpora with very different kinds of annotations, and uses graphs as a unified representation of those annotations. For this dissertation, a main-memory, purely graph-based successor to ANNIS has been developed. Corpora are partitioned into edge components, and different implementations for representing and searching these components are used for different types of subgraphs. Operations of the ANNIS Query Language (AQL) are interpreted as a set of reachability queries on the different components, and each component implementation provides functions optimized for this type of query. This approach exploits the different structures of the different kinds of annotations without losing the common representation as a graph. Additional optimizations, such as parallel execution of parts of a query, are also implemented and evaluated. Since AQL has an existing implementation that is already offered to researchers as a web-based service, real-life AQL queries could be recorded and used as the basis for benchmarking the new implementation. More than 4,000 queries over 18 corpora (most of which are available under an open-access license) were compiled into a realistic workload that includes very different types of corpora and queries with a wide range of complexity.
The new graph-based implementation was compared against the existing one, which uses a relational database. It executes the workload roughly 10× faster than the baseline, and the experiments show that the different graph storage implementations contribute a major share of this improvement.
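The following is only a small sketch of the underlying idea of evaluating an AQL-style operator as a reachability test on one edge component, assuming the component is small enough for a plain breadth-first search; the actual system uses several specialized component storages (for example pre-/post-order intervals for trees), which this toy version does not reproduce.

```python
from collections import defaultdict, deque
from typing import Optional

class EdgeComponent:
    """One partition of a corpus graph (e.g. a dominance or coverage component)."""

    def __init__(self):
        self.out_edges = defaultdict(list)

    def add_edge(self, source: int, target: int):
        self.out_edges[source].append(target)

    def is_reachable(self, start: int, goal: int,
                     max_dist: Optional[int] = None) -> bool:
        # Plain BFS; a real component storage would answer this from a
        # precomputed index instead of traversing on every query.
        queue = deque([(start, 0)])
        seen = {start}
        while queue:
            node, dist = queue.popleft()
            if node == goal:
                return True
            if max_dist is not None and dist >= max_dist:
                continue
            for nxt in self.out_edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))
        return False

# Toy syntax tree: node 1 dominates 2 and 3, node 3 dominates 4.
dominance = EdgeComponent()
for s, t in [(1, 2), (1, 3), (3, 4)]:
    dominance.add_edge(s, t)

print(dominance.is_reachable(1, 4))              # True: indirect-dominance-style test
print(dominance.is_reachable(1, 4, max_dist=1))  # False: direct dominance only
```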
|
1178 |
Systém řízení báze dat v operační paměti / In-Memory Database Management System. Pehal, Petr January 2013 (links)
The focus of this thesis is a proprietary database interface for managing tables in main memory. It begins with a short introduction to databases, after which the concept of in-memory database systems is presented and the main advantages and disadvantages of this approach are discussed. The theoretical introduction ends with a brief overview of existing systems. After that, basic information about the energy management system RIS is presented, together with the system's in-memory database interface. The work then focuses on the specification and design of the required modifications and extensions of the interface, followed by the implementation details and test results. In conclusion, the results are summarized and future development is discussed.
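The abstract does not reveal the interface itself; as a hedged illustration of the general idea of an in-memory table store, here is a minimal Python sketch with a dictionary-backed table supporting insert and a predicate-based select. The class and method names are invented for illustration and do not correspond to the RIS interface.

```python
class InMemoryTable:
    """A deliberately tiny in-memory table: rows live in a Python dict keyed
    by primary key, so lookups and updates never touch disk."""

    def __init__(self, name: str, columns: list, key: str):
        self.name = name
        self.columns = columns
        self.key = key
        self.rows = {}           # primary key -> row dict

    def insert(self, row: dict):
        if set(row) != set(self.columns):
            raise ValueError("row does not match table columns")
        self.rows[row[self.key]] = row

    def select(self, predicate=lambda r: True):
        return [r for r in self.rows.values() if predicate(r)]

meters = InMemoryTable("meter", ["meter_id", "site", "reading_kwh"], key="meter_id")
meters.insert({"meter_id": 1, "site": "plant-A", "reading_kwh": 1520.5})
meters.insert({"meter_id": 2, "site": "plant-B", "reading_kwh": 980.0})
print(meters.select(lambda r: r["reading_kwh"] > 1000))
```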
|
1179 |
The design of a database of resources for rational therapy. Steyn, Genevieve Lee 06 1900 (links)
The purpose of this study is to design a database of resources for rational therapy. An investigation of the current health situation and the reorientation towards primary health care (PHC) in South Africa evidenced the need for a database of resources that would meet the demand for rational therapy information placed on the Helderberg College Library by various user groups, and that would contribute to the national health information infrastructure. Rational therapy is viewed as an approach within PHC that is rational, common-sense, wholistic and credible, focusing on the prevention and maintenance of health. A model of the steps in database design was developed. A user study identified users' requirements for the design, and the conceptual schema was then derived: the entities, attributes, relationships and policies were presented and graphically summarised in an Entity-Relationship (E-R) diagram. The conceptual schema is the blueprint for further design and implementation of the database. / Information Science / M.Inf.
|
1180 |
Role-based Data Management. Jäkel, Tobias 29 May 2017 (links) (PDF)
Database systems form an integral component of today's software systems and, as such, are the central point for storing and sharing a software system's data while ensuring global data consistency at the same time. Introducing the primitives of roles and their accompanying metatype distinction into modeling and programming languages results in a novel paradigm of designing, extending, and programming modern software systems. In detail, roles as a modeling concept enable a separation of concerns within an entity. Along with its rigid core, an entity may acquire various roles in different contexts during its lifetime and thus adapts its behavior and structure dynamically at runtime.
Unfortunately, database systems, as an important component and the global consistency provider of such systems, do not keep pace with this trend. The absence of a metatype distinction, in terms of an entity's separation of concerns, in the database system results in various problems for the software system in general, for the application developers, and finally for the database system itself. In the case of relational database systems, these problems are concentrated under the term role-relational impedance mismatch. In particular, the whole software system is designed using different semantics on its various layers; for role-based software systems in combination with relational database systems, this gap in semantics between the applications and the database system increases dramatically. Consequently, the database system can directly represent neither the richer semantics of roles nor the accompanying consistency constraints. These constraints have to be ensured by the applications, and the database system loses its single-point-of-truth characteristic in the software system. As the applications are in charge of guaranteeing global consistency, their development requires more effort in data management. Moreover, the software system's data management is distributed over several layers, which results in an unstructured software system architecture.
To overcome the role-relational impedance mismatch and bring the database system back into its rightful position as the single point of truth in a software system, this thesis introduces the novel and tripartite RSQL approach. It combines a novel database model that represents the metatype distinction as a first-class citizen in the database system, an adapted query language built on that database model, and finally a proper result representation. Precisely, RSQL's logical database model introduces Dynamic Data Types to directly represent the separation of concerns within an entity type on the schema level. On the instance level, the database model defines the notion of a Dynamic Tuple, which combines an entity with the notion of roles and thus allows for dynamic structure adaptations during runtime without changing an entity's overall type.
These definitions build the main data structures on which the database system operates. Moreover, formal operators connecting the query language statements with the database model's data structures complete the database model. The query language, as the external database system interface, features an individual data definition, data manipulation, and data query language. Their statements directly represent the metatype distinction to address Dynamic Data Types and Dynamic Tuples, respectively. As a consequence of the novel data structures, the query processing of Dynamic Tuples is completely redesigned. As the last piece of a complete database integration of the role notion and its accompanying metatype distinction, we specify the RSQL Result Net as the result representation. It provides a novel result structure and features functionality to navigate through query results. Finally, we evaluate all three RSQL components in comparison to a relational database system. This assessment clearly demonstrates the benefits of fully integrating the role concept into the database system.
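To make the role idea concrete, here is a small, hedged Python sketch of an entity acquiring and dropping roles at runtime. The class names and the way role attributes shadow the core are illustrative assumptions and do not reproduce RSQL's Dynamic Data Types or Dynamic Tuples.

```python
class Role:
    """A role adds context-specific state and behavior to a core entity."""
    def __init__(self, **attributes):
        self.attributes = attributes

class Entity:
    """The rigid core; roles can be acquired and dropped without changing
    the entity's overall type."""
    def __init__(self, name: str):
        self.name = name
        self.roles = {}

    def play(self, role_name: str, **attributes):
        self.roles[role_name] = Role(**attributes)

    def drop(self, role_name: str):
        self.roles.pop(role_name, None)

    def get(self, attribute: str, default=None):
        # Role attributes are resolved by scanning currently played roles; a
        # real role-based DBMS would resolve this per context, not by scanning.
        for role in self.roles.values():
            if attribute in role.attributes:
                return role.attributes[attribute]
        return default

person = Entity("Alice")
person.play("Employee", salary=52000, department="R&D")
person.play("Customer", customer_id=17)

print(person.get("department"))   # 'R&D' while playing the Employee role
person.drop("Employee")
print(person.get("department"))   # None: structure adapted at runtime
```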
|