31 |
Storage and visualization of shock absorber data (Lagring och visualisering av information om stötdämpare). Settlin, Johan; Ekelund, Joar (January 2019)
Simulations that give insight into how a shock absorber's settings affect its characteristics can lead to improved road holding, increased road safety, and faster lap times on the racetrack. Visualizing the simulated data gives users an understanding of how the chosen shock absorber settings will behave in practice. The goal of this work was to design a database that models a shock absorber's characteristics and to visualize those characteristics on a website. Requirements were gathered through interviews with experts, and background information was obtained through literature studies. Based on the collected requirements and a set of case studies, a relational database containing information about a damper's components and construction was developed, together with a visualization tool that renders the damper's characteristics on a web page. The database and the visualization tool were then combined into a prototype that enables simulation of a damper's characteristics on the web. The case studies showed that the database management system MySQL and the graph library Chart.js were best suited for the prototype, given the collected requirements. The functionality of the prototype was validated by the project's client, and the margin of error for the simulations was below 1%. This implies that the database model produced is of good quality and that the results are visualized in a correct and comprehensible manner.
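As a rough illustration of such a storage-and-visualization pipeline, the sketch below stores simulated damper measurements in a relational table and shapes a query result for a chart library. The thesis used MySQL and Chart.js; sqlite3 and plain JSON stand in here so the example is self-contained, and all table and column names are invented.

```python
# Hypothetical sketch: storing simulated damper force-velocity data in a
# relational table and retrieving it for a chart. The thesis used MySQL and
# Chart.js; sqlite3 and JSON are used here only to keep the example
# self-contained. All table and column names are invented for illustration.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE damper_curve (
        damper_id   INTEGER,
        velocity_ms REAL,   -- shaft velocity in m/s
        force_n     REAL    -- damping force in newtons
    )
""")

# Insert a few simulated sample points for one damper setting.
samples = [(1, 0.05, 120.0), (1, 0.10, 230.0), (1, 0.20, 410.0)]
conn.executemany("INSERT INTO damper_curve VALUES (?, ?, ?)", samples)

# Fetch the curve ordered by velocity, shaped like the {x, y} point lists
# that a chart library such as Chart.js expects.
rows = conn.execute(
    "SELECT velocity_ms, force_n FROM damper_curve "
    "WHERE damper_id = ? ORDER BY velocity_ms", (1,)
).fetchall()
points = [{"x": v, "y": f} for v, f in rows]
print(json.dumps(points))
```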
|
32 |
Object oriented database management systems. Nassis, Antonios (11 1900)
Modern data-intensive applications, such as multimedia systems, require the ability to store and manipulate complex data. Classical Database Management Systems (DBMS), such as relational databases, cannot support these types of applications efficiently. This dissertation presents the salient features of Object Database Management Systems (ODBMS) and Persistent Programming Languages (PPL), which have been developed to address the data management needs of these demanding applications. An 'impedance mismatch' problem occurs in traditional DBMS because the data and computational aspects of an application are implemented using two different systems: a query language and a programming language. PPLs provide facilities for handling both persistent and transient data within the same language, thereby avoiding the impedance mismatch problem. This dissertation presents a method of implementing a PPL by extending the language C++ with pre-compiled classes. The classes are first developed and then used to implement object persistence in two simple applications. / Computing / M. Sc. (Information Systems)
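The dissertation's PPL is built by extending C++ with pre-compiled persistence classes; as a language-neutral sketch of the underlying idea, the example below uses Python's standard shelve module to show persistent and transient objects handled in one language, with no separate query language. The class and key names are invented for illustration.

```python
# Illustrative sketch only: the dissertation extends C++ with pre-compiled
# persistence classes, but the core PPL idea -- persistent and transient
# objects handled in one language, with no separate query language -- can be
# shown with Python's standard shelve module. Class and key names are invented.
import shelve

class Account:
    """An ordinary in-language object; no SQL or mapping layer involved."""
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

with shelve.open("bank.db") as store:
    store["acc-1"] = Account("Alice", 100.0)   # persistent object
    temp = Account("scratch", 0.0)             # transient object, never stored

with shelve.open("bank.db") as store:
    acc = store["acc-1"]           # comes back as a regular Account instance
    acc.balance += 50.0
    store["acc-1"] = acc           # write the updated state back
    print(acc.owner, acc.balance)  # Alice 150.0
```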
|
33 |
Optimized approach to decision fusion of heterogeneous data for breast cancer diagnosis. Jesneck, JL; Nolte, LW; Baker, JA; Floyd, CE; Lo, JY (08 1900)
As more diagnostic testing options become available to physicians, it becomes more difficult to combine the various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, we introduce here an approach that optimizes a decision-fusion technique for combining heterogeneous information, such as data from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), an artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p < 0.02) and achieved AUC = 0.85 +/- 0.01. DF-P surpassed the other classifiers in terms of pAUC (p < 0.01) and reached pAUC = 0.38 +/- 0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p < 0.04) and achieved AUC = 0.94 +/- 0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC = 0.57 +/- 0.07 to 0.67 +/- 0.05, p > 0.10), DF-P did significantly improve specificity over the LDA at both 98% and 100% sensitivity (p < 0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets. / Dissertation
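The two metrics the study optimizes can be sketched as follows. This is not the authors' code, and the paper's exact pAUC normalization may differ; the trapezoidal-rule version below is one common choice.

```python
# Sketch of the two performance metrics used in the study: ROC AUC and a
# partial AUC restricted to a high-sensitivity region. Not the authors' code;
# the pAUC normalization here (1.0 for a perfect classifier) is one common
# convention and may differ from the paper's.
def roc_curve_points(labels, scores):
    pairs = sorted(zip(scores, labels), reverse=True)  # descending by score
    pos = sum(labels)
    neg = len(labels) - pos
    tpr, fpr, tp, fp = [0.0], [0.0], 0, 0
    for _, y in pairs:
        tp += y
        fp += 1 - y
        tpr.append(tp / pos)
        fpr.append(fp / neg)
    return fpr, tpr

def trapezoid(ys, xs):
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))

def auc(labels, scores):
    fpr, tpr = roc_curve_points(labels, scores)
    return trapezoid(tpr, fpr)

def partial_auc(labels, scores, min_tpr=0.90):
    fpr, tpr = roc_curve_points(labels, scores)
    xs = [t for t in tpr if t >= min_tpr]
    ys = [1 - f for t, f in zip(tpr, fpr) if t >= min_tpr]
    return trapezoid(ys, xs) / (1.0 - min_tpr)  # specificity over sensitivity

labels = [1, 1, 0, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1]
# min_tpr lowered to 0.5 only because this toy sample is tiny.
print(round(auc(labels, scores), 3), round(partial_auc(labels, scores, 0.5), 3))
```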
|
34 |
Efficient Processing of Range Queries in Main Memory. Sprenger, Stefan (11 March 2019)
Database systems employ index structures to accelerate search queries. Over the last years, the research community has proposed many in-memory approaches that, as opposed to disk-based systems, optimize cache misses instead of disk I/O and make use of the grown parallel capabilities of modern CPUs. However, these techniques mainly focus on single-key lookups and neglect the equally important range queries. Range queries are a ubiquitous operator in data management, commonly used in numerous domains such as genomic analysis, sensor networks, and online analytical processing.
The main goal of this dissertation is thus to improve the capabilities of main-memory database systems with regard to executing range queries. To this end, we first propose a cache-optimized, updateable main-memory index structure, the cache-sensitive skip list, which targets the execution of range queries on single database columns. Second, we study the performance of multidimensional range queries on modern hardware, where data are stored in main memory and processors support SIMD instructions and multi-threading. We re-evaluate a previous rule of thumb suggesting that, on disk-based systems, scans outperform index structures for selectivities of approximately 15-20% or more. To increase the practical relevance of our analysis, we also contribute a novel benchmark consisting of several realistic multidimensional range queries applied to real-world genomic data. Third, based on the outcomes of our experimental analysis, we devise a novel, fast and space-efficient main-memory index structure, the BB-Tree, which supports multidimensional range and point queries and provides a parallel search operator that leverages the multi-threading capabilities of modern CPUs.
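To make the operation concrete, the sketch below answers a single-column, main-memory range query two ways: by a full scan and by binary search on a sorted copy of the column. The dissertation's cache-sensitive skip list is a far more elaborate, updateable index; this merely illustrates the query being accelerated and the scan-versus-index trade-off mentioned above.

```python
# Minimal sketch of a single-column, main-memory range query answered two
# ways: a full scan versus binary search on a sorted copy. The dissertation's
# cache-sensitive skip list is a far more elaborate (and updateable) index;
# this only illustrates the operation being accelerated.
import bisect
import random

values = [random.randint(0, 1_000_000) for _ in range(100_000)]

def range_scan(data, low, high):
    """O(n) scan; needs no index and is competitive at high selectivities."""
    return [v for v in data if low <= v <= high]

sorted_values = sorted(values)  # one-time index build

def range_index(data_sorted, low, high):
    """O(log n + k) lookup on a sorted column."""
    lo = bisect.bisect_left(data_sorted, low)
    hi = bisect.bisect_right(data_sorted, high)
    return data_sorted[lo:hi]

assert sorted(range_scan(values, 1000, 2000)) == range_index(sorted_values, 1000, 2000)
```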
|
35 |
AQuES. Stillger, Michael (21 January 2000)
Parallel query evaluation for relational database management systems (RDBMS) remains a challenging problem because of the different kinds of execution parallelism and the properties of the underlying parallel architecture. Modern systems must show near-optimal performance despite running in a heterogeneous hardware environment, exploiting different ways of parallelism and dealing with unpredictable system load; system changes at query runtime can additionally require dynamic behavior from the executing components. This thesis presents a new, flexible approach to the optimization and evaluation of complex queries in a distributed and dynamic environment, with particular attention to dynamic query optimization. In particular, this work presents: 1) the architecture of a new, distributed and cooperating component system inspired by the concepts of software agents; 2) the design and implementation of a new communication infrastructure for the identified system components; 3) the design and implementation of a flexible query optimizer with a new, randomized algorithm; and 4) the design and implementation of a parallel query evaluation engine that enables runtime optimization of queries. Beyond the specific requirements of RDBMS, the concepts were developed with particular emphasis on the configurability and extensibility of the distributed system and its components.
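The flavor of a randomized query optimizer of the general kind described above can be sketched as iterative improvement over join orders from random starting points. The cost model and the swap move below are toy stand-ins, not the actual algorithm from AQuES.

```python
# Sketch of randomized join-order optimization of the general class the thesis
# builds on (iterative improvement from random starting points). The cost
# model and moves here are toy stand-ins, not the algorithm from AQuES.
import random

CARD = {"A": 1000, "B": 50, "C": 5000, "D": 20}  # invented table cardinalities

def cost(order):
    # Toy left-deep cost: sum of intermediate-result sizes under a fake
    # uniform join selectivity of 0.001.
    total, inter = 0, CARD[order[0]]
    for t in order[1:]:
        inter = inter * CARD[t] * 0.001
        total += inter
    return total

def optimize(tables, restarts=20, steps=100):
    best = list(tables)
    for _ in range(restarts):                  # random restart
        cur = random.sample(tables, len(tables))
        for _ in range(steps):                 # local improvement
            nxt = cur[:]
            i, j = random.sample(range(len(nxt)), 2)
            nxt[i], nxt[j] = nxt[j], nxt[i]    # swap two tables in the order
            if cost(nxt) < cost(cur):
                cur = nxt
        if cost(cur) < cost(best):
            best = cur
    return best

print(optimize(["A", "B", "C", "D"]))  # small tables tend to come first
```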
|
36 |
ATTuneDB: a DBMS tuning support tool based on identifying the operating regime through a probabilistic model (ATTuneDB: uma ferramenta de apoio à sintonia de SGBDs baseada na identificação do regime de operação através de modelo probabilístico). Machado, Leonardo Ribeiro (31 March 2011)
The performance of a DBMS is a critical factor to be considered during its operation, and several techniques are currently employed in attempts to increase it. This research integrates agent technologies and data mining to build probabilistic (Bayesian) decision models able to assist in the process of improving the performance of a DBMS. This model serves as the basis of the ATTuneDB DBMS tuning tool. Using information about the real workload being submitted to a PostgreSQL DBMS, the tool applies the probabilistic model to identify the DBMS's operating regime and to find the best set of values for the DBMS's parameters, thus supporting the database administrator in the task of optimizing its performance. / Funded by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior)
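The core loop of such a tool can be sketched as: classify the current operating regime with a probabilistic model, then select parameter values tuned for that regime. In the sketch below, a small Gaussian naive Bayes classifier plays the role of the Bayesian model; the features, training data, and parameter presets are all invented, whereas the thesis mines its models from the real load of a PostgreSQL DBMS.

```python
# Sketch of the idea behind ATTuneDB: classify the DBMS's current operating
# regime with a probabilistic model, then pick tuned parameter values for that
# regime. Features, training data and presets below are invented placeholders.
import math

# (avg_rows_per_query, write_ratio) per regime -- toy training data.
TRAIN = {
    "oltp": [(10, 0.6), (8, 0.5), (15, 0.7), (12, 0.55)],
    "olap": [(90000, 0.02), (50000, 0.05), (120000, 0.01), (70000, 0.03)],
}
PRESETS = {"oltp": {"shared_buffers": "2GB"}, "olap": {"work_mem": "256MB"}}

def gaussian_params(samples):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n + 1e-9
    return mean, var

# Fit one (mean, variance) pair per feature and regime.
MODEL = {
    regime: [gaussian_params([row[i] for row in rows]) for i in range(2)]
    for regime, rows in TRAIN.items()
}

def classify(features):
    def log_pdf(x, mean, var):
        return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
    scores = {
        regime: sum(log_pdf(x, *p) for x, p in zip(features, params))
        for regime, params in MODEL.items()
    }
    return max(scores, key=scores.get)

regime = classify((60000, 0.04))   # metrics sampled from the live DBMS
print(regime, PRESETS[regime])     # -> olap {'work_mem': '256MB'}
```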
|
37 |
Dynamic monitoring, modeling and management of performance and resources for applications in cloud. Xiong, Pengcheng (06 November 2012)
Emerging trends in Cloud computing bring numerous benefits, such as higher performance, fast and flexible provisioning of applications and capacities, lower infrastructure costs, and almost unlimited scalability. However, the increasing complexity of automated performance and resource management for applications in Cloud computing presents novel challenges that demand enhancements to classical control-based approaches. An important challenge that Cloud service providers often face is a resource-sharing dilemma under workload variation. Cloud service providers pursue higher resource utilization, because the higher the utilization, the lower the hardware, operating, and maintenance costs. On the other hand, resource utilization cannot be too high, or the service provider's revenue could be jeopardized by the inability to meet application-level service-level objectives (SLOs). A crucial research question is how to generate as much revenue as possible by satisfying service-level agreements, while reducing costs as much as possible, in order to maximize the profit for Cloud service providers.
To this end, classical control-based approaches, which can be classified into three major categories (admission control, queueing and scheduling, and resource allocation), show great potential to address the resource-sharing dilemma. However, it is a challenging task to apply classical control-based approaches directly to computer systems, where first-principle models are generally not available. It becomes even more difficult due to the dynamics seen in real computer systems, including workload variations, multi-tier dependencies, and resource bottleneck shifts.
Fundamentally, the main contributions of this thesis are efforts to enhance classical control-based approaches by leveraging other techniques to address the increasing complexity of automated performance and resource management in the Cloud, through dynamic monitoring, modeling and management of performance and resources. More specifically, (1) an admission control approach is enhanced by leveraging decision theory to achieve the most profitable service-level compliance; (2) a critical-resource identification approach is enhanced by leveraging statistical machine learning to automatically and adaptively identify critical resources; and (3) a resource allocation approach is enhanced by leveraging hierarchical resource management to achieve the highest resource utilization. Concretely, the enhanced control-based approaches are implemented in a collection of real control systems: ActiveSLA, vPerfGuard and ERController. These control systems are applied to different real applications, such as OLTP and OLAP database applications and distributed multi-tier web applications, with different workload intensities, types and mixes, in different Cloud environments. All the experimental results show that the prototype control systems outperform existing classical control-based approaches.
Finally, this thesis opens new avenues to address the increasing complexity of automated performance and resource management through the enhancement of classical control-based approaches in Cloud environments. Future work will follow these avenues to address the new challenges that arise with the advent of new hardware technology, new software frameworks and new computing paradigms.
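As one concrete instance, the decision-theoretic admission control idea behind ActiveSLA can be sketched as: admit a query only if its expected profit is positive, given a predicted probability of meeting the SLO, the revenue for meeting it, and the penalty for missing it. The predictor and all constants below are invented placeholders, not values from the thesis.

```python
# Sketch of decision-theoretic admission control in the spirit of ActiveSLA:
# admit a query only if its expected profit is positive. The success
# predictor and all numbers are invented placeholders.
REVENUE = 1.0   # earned when the SLO is met
PENALTY = 2.0   # paid when the SLO is violated

def predict_slo_success(current_load, query_weight):
    """Stand-in for a learned model of P(response time <= SLO)."""
    return max(0.0, 1.0 - 0.1 * (current_load + query_weight))

def admit(current_load, query_weight):
    p = predict_slo_success(current_load, query_weight)
    expected_profit = p * REVENUE - (1.0 - p) * PENALTY
    return expected_profit > 0.0

for load in (1, 3, 5, 7):
    print(load, admit(load, query_weight=1))
# Light load: admit (expected profit positive); heavier loads: reject,
# because the expected penalty outweighs the expected revenue.
```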
|
38 |
Development of a GIS-Based Monitoring and Management System for Underground Mining Safety. Salap, Seda (01 September 2008)
Mine safety is of paramount concern to the mining industry, and a Geographic Information System (GIS) that can efficiently administer the relevant spatial data and metadata of underground mining safety is therefore vital. In an effort to achieve a balance of safety and productivity, GIS can contribute to the creation of a safe working environment in underground (U/G) mining. Such a system should support continuous risk analysis and be designed for applications in case of emergency. The safety concept requires three fundamental components, namely (i) constructive safety, (ii) surveillance and maintenance, and (iii) emergency.
The implementation was carried out as a Web-based Geographic Information System. The process first took the safety concept as the application domain model; a conceptual model was then generated in terms of Entity-Relationship diagrams. After the implementation of the logical model, a user interface was developed and the GIS was tested. Finally, the question of whether the method of resolution used can be extended to a national GIS infrastructure is addressed.
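A hypothetical fragment of such a logical model is sketched below: the surveillance component suggests entities such as sensors and their readings, which map directly from an ER diagram to relational tables. All table and column names are invented for illustration, not taken from the thesis.

```python
# Hypothetical fragment of the logical model: the surveillance/maintenance
# component suggests entities such as sensors and their readings. Table and
# column names are invented to illustrate the ER-to-relational step.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sensor (
        sensor_id INTEGER PRIMARY KEY,
        gallery   TEXT,   -- location in the underground mine
        kind      TEXT    -- e.g. gas, temperature, displacement
    );
    CREATE TABLE reading (
        sensor_id INTEGER REFERENCES sensor(sensor_id),
        taken_at  TEXT,   -- ISO timestamp
        value     REAL
    );
""")
conn.execute("INSERT INTO sensor VALUES (1, 'Gallery A', 'gas')")
conn.execute("INSERT INTO reading VALUES (1, '2008-09-01T10:00:00', 0.8)")

# Continuous risk analysis could poll for readings above a threshold.
alerts = conn.execute(
    "SELECT s.gallery, r.value FROM reading r JOIN sensor s USING (sensor_id) "
    "WHERE r.value > 0.5"
).fetchall()
print(alerts)   # -> [('Gallery A', 0.8)]
```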
|
39 |
Towards interoperable and knowledge-based electronic health records using archetype methodology. Chen, Rong (January 2009)
Doctoral dissertation (comprehensive summary), Linköping: Linköpings universitet, 2009. / With 5 appended papers.
|
40 |
XML and relational databases: a frame of reference and evaluation (XML και σχεσιακές βάσεις δεδομένων: πλαίσιο αναφοράς και αξιολόγησης). Παλιανόπουλος, Ιωάννης (16 May 2007)
The eXtensible Markup Language (XML) is clearly the prevailing standard for data representation on the World Wide Web (WWW). It is a data description language comprehensible to both humans and machines. Initially its use was limited to data exchange, but owing to its expressiveness (in contrast to the relational model) it can serve as an effective vehicle for transporting, handling and storing information. Contemporary applications make heavy use of XML technology to support communication and interoperability. Supporting XML at the infrastructure level, however, would reduce application development time, make applications almost automatically compliant with standards, and make them less error-prone. In terms of infrastructure, a database able to handle XML properly would benefit a wide range of applications, multiplying its efficiency as the database turns from a base of data into a base of information. Thus, as applications become more complex and demanding, strengthening databases with technologies that serve the semantics of the problems at hand promises a more effective response.
But how can XML documents be handled efficiently at the infrastructure level? At first glance the question seems rhetorical: since XML is a relatively new technology, new XML-aware infrastructures can be built from scratch. This is indeed a viable approach, and there is considerable activity in the database research community focused on exploiting it; special-purpose database systems, the so-called Native XML Databases, have been created for exactly this. The disadvantage of such systems, however, is that this approach does not build on the many years of research invested in relational database technology. The research question is therefore whether relational technology suffices to support XML data properly, or whether new techniques are truly required. This thesis studies the possible use of relational database management systems (RDBMSs) for handling XML documents. After a theoretical analysis of the ways in which this can be done, the performance of two of the most popular relational database management systems is assessed experimentally. The aim is to draw a frame of reference for the assessment and evaluation of relational database management systems that support XML (XML-enabled RDBMSs).
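One of the classic techniques an XML-enabled RDBMS can apply, and a likely candidate in such an evaluation, is "shredding": decomposing XML elements into rows so that plain SQL answers queries over the document. The sketch below shows the idea; real XML-enabled systems offer much richer mappings and query support, and the schema and data here are invented.

```python
# Sketch of "shredding", one common way an XML-enabled RDBMS can store XML:
# elements are decomposed into rows of a relational table, after which plain
# SQL answers queries over the document. Real XML-enabled systems offer far
# richer mappings and query support; schema and data here are invented.
import sqlite3
import xml.etree.ElementTree as ET

doc = """
<catalog>
  <book id="1"><title>XML in RDBMSs</title><price>30</price></book>
  <book id="2"><title>Native XML stores</title><price>45</price></book>
</catalog>
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER, title TEXT, price REAL)")

# Shred: one row per <book> element.
for book in ET.fromstring(doc).iter("book"):
    conn.execute(
        "INSERT INTO book VALUES (?, ?, ?)",
        (int(book.get("id")), book.findtext("title"),
         float(book.findtext("price"))),
    )

# Equivalent of the XPath query //book[price < 40]/title, now in SQL.
for (title,) in conn.execute("SELECT title FROM book WHERE price < 40"):
    print(title)   # -> XML in RDBMSs
```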
|