1101. The design and analysis of concurrency control algorithms in different network environments / Singhal, Anoop. January 1985.
No description available.
1102. A comparison of Neo4j and MySQL for a traditional information application / Naisan, Raheb. January 2013.
Graph databases and the NoSQL movement have recently gained much attention and popularity. Graph databases have a reputation for being fast and efficient for application types that contain huge amounts of data and many complex relationships. Studies examined in this report show that previously conducted experiments compared the databases for applications that favored the graph model. This report aims both to examine the two databases and to perform an experiment. The purpose of the experiment is to find out whether the graph database Neo4j can replace the relational database MySQL for a traditional information application, which is usually implemented using a relational database. The results demonstrate that Neo4j performs very well at insertion and retrieval; however, the study addresses several factors that play a role in the choice of database. The lack of security and support are factors that could make the relational database the best choice for a traditional information application.
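To make the insertion-and-retrieval comparison concrete, the following is a minimal sketch of this kind of benchmark, not the thesis's actual test harness. It times N single-row inserts against each database, assuming the official neo4j Python driver and mysql-connector-python; connection details and the schema are placeholders.

```python
# Minimal sketch of an insertion benchmark comparing Neo4j and MySQL.
# URIs, credentials, and table/label names are placeholder assumptions.
import time

from neo4j import GraphDatabase
import mysql.connector

N = 10_000  # number of rows/nodes to insert


def bench_neo4j():
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))
    start = time.perf_counter()
    with driver.session() as session:
        for i in range(N):
            # One CREATE per node, mirroring a naive single-row workload.
            session.run("CREATE (:Person {id: $id, name: $name})",
                        id=i, name=f"person-{i}")
    elapsed = time.perf_counter() - start
    driver.close()
    return elapsed


def bench_mysql():
    conn = mysql.connector.connect(host="localhost", user="root",
                                   password="password", database="test")
    cur = conn.cursor()
    start = time.perf_counter()
    for i in range(N):
        cur.execute("INSERT INTO person (id, name) VALUES (%s, %s)",
                    (i, f"person-{i}"))
    conn.commit()
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed


print(f"Neo4j: {bench_neo4j():.2f}s  MySQL: {bench_mysql():.2f}s")
```

A production benchmark would also batch statements and use explicit transactions, since per-statement round-trips and commits tend to dominate timings in both systems.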
1103. Replikation: Prestanda med MongoDB (Replication: Performance with MongoDB) / Nirfelt, Sebastian. January 2016.
The ability to store data is a major contributing factor in science's constant forward movement. In a few thousand years, humans have gone from storing information on cave walls to hard drives, and the requirements on performance, availability, and fault tolerance are increasing rapidly. To manage information in modern society, new methods are constantly being developed; one of these methods is replication. This study tests how replication affects performance in a distributed MongoDB solution. The tests in the study are automated and run against the database in different configurations to see how performance changes.
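As an illustration of how such automated tests might be driven (a sketch under assumptions, not the study's actual code), the snippet below times the same insert workload against a MongoDB replica set under two write concerns. The hosts, the replica-set name rs0, and the collection name are placeholders.

```python
# Sketch of an automated replication benchmark against a three-member
# replica set, assuming pymongo. Hosts and names are placeholder assumptions.
import time

from pymongo import MongoClient, WriteConcern

client = MongoClient(
    "mongodb://localhost:27017,localhost:27018,localhost:27019/"
    "?replicaSet=rs0")
db = client.get_database("benchmark")

# Run the same insert workload under increasingly strict write concerns:
# w=1 acknowledges on the primary only; w="majority" waits for replication.
for w in (1, "majority"):
    coll = db.get_collection("docs", write_concern=WriteConcern(w=w))
    coll.drop()  # start each configuration from an empty collection
    start = time.perf_counter()
    for i in range(5_000):
        coll.insert_one({"seq": i, "payload": "x" * 256})
    print(f"w={w}: {time.perf_counter() - start:.2f}s")
```

The gap between the two timings is precisely the replication cost the study measures: with w="majority", every write blocks until a majority of replica-set members have acknowledged it.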
1104. Lock-based concurrency control for XML / Ahmed, Namiruddin. January 2006.
No description available.
1105. COSMOS next generation - A public knowledge base leveraging chemical and biological data to support the regulatory assessment of chemicals / Yang, C., Cronin, M.T.D., Arvidson, K.B., Bienfait, B., Enoch, S.J., Heldreth, B., Hobocienski, B., Muldoon-Jacobs, K., Lan, Y., Madden, J.C., Magdziarz, T., Marusczyk, J., Mostrag, A., Nelms, M., Neagu, Daniel, Przybylak, K., Rathman, J.F., Park, J., Richarz, A.-N., Richard, A.M., Ribeiro, J.V., Sacher, O., Schwab, C., Volarath, P., Worth, A.P. 29 March 2022.
The COSMOS Database (DB) was originally established to provide reliable data for cosmetics-related chemicals within the COSMOS Project funded as part of the SEURAT-1 Research Initiative. The database has subsequently been maintained and developed further into COSMOS Next Generation (NG), a combination of database and in silico tools, essential components of a knowledge base. COSMOS DB provided a cosmetics inventory as well as other regulatory inventories, accompanied by assessment results and in vitro and in vivo toxicity data. In addition to data content curation, much effort was dedicated to data governance: data authorisation, characterisation of quality, documentation of meta information, and control of data use. Through this effort, COSMOS DB was able to merge and fuse data of various types from different sources. Building on the previous effort, the COSMOS Minimum Inclusion (MINIS) criteria for a toxicity database were further expanded to quantify the reliability of studies. COSMOS NG features multiple fingerprints for analysing structure similarity, and new tools to calculate molecular properties and screen chemicals with endpoint-related public profilers, such as DNA and protein binders, liver alerts and genotoxic alerts. The publicly available COSMOS NG enables users to compile information and execute analyses such as category formation and read-across. This paper provides a step-by-step guided workflow for a simple read-across case, starting from a target structure and culminating in an estimation of a NOAEL confidence interval. Given its strong technical foundation, inclusion of quality-reviewed data, and provision of tools designed to facilitate communication between users, COSMOS NG is a first step towards building a toxicological knowledge hub leveraging many public data systems for chemical safety evaluation. We continue to monitor the feedback from the user community at support@mn-am.com.
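COSMOS NG's own fingerprint implementations are not shown here, but fingerprint-based structure similarity screening of the kind described, the first step of a read-across case, can be illustrated with the open-source RDKit. The SMILES strings below are arbitrary examples, not COSMOS data.

```python
# Illustration of fingerprint-based similarity screening using RDKit
# (not the COSMOS NG fingerprints themselves). A target structure is
# compared against a small inventory via Tanimoto similarity.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

target = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
inventory = {
    "salicylic acid": "O=C(O)c1ccccc1O",
    "benzoic acid": "O=C(O)c1ccccc1",
    "paracetamol": "CC(=O)Nc1ccc(O)cc1",
}

# Morgan (circular) fingerprints, radius 2, folded to 2048 bits.
target_fp = AllChem.GetMorganFingerprintAsBitVect(target, 2, nBits=2048)
for name, smiles in inventory.items():
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    sim = DataStructs.TanimotoSimilarity(target_fp, fp)
    print(f"{name}: Tanimoto = {sim:.2f}")
```

Candidates above a chosen similarity threshold would then form the analogue category whose experimental data support the read-across estimate.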
1106. Query processing in heterogeneous distributed database management systems / Bhasker, Bharat. 20 September 2005.
The goal of this work is to present an advanced query processing algorithm formulated and developed in support of heterogeneous distributed database management systems. Heterogeneous distributed database management systems view the integrated data through a uniform global schema. The query processing algorithm described here produces an inexpensive strategy for a query expressed over the global schema. The research addresses the following aspects of query processing: (1) formulation of a low-level query language to express the fundamental heterogeneous database operations; (2) translation of a query expressed over the global schema to an equivalent query expressed over a conceptual schema; (3) an estimation methodology to derive the intermediate result sizes of the database operations; (4) a query decomposition algorithm to generate an efficient sequence of the basic database operations to answer the query.

This research addressed the first issue by developing an algebraic query language called cluster algebra. The cluster algebra consists of the following operations: (a) selection, union, intersection, and difference, which are extensions of their relational algebraic counterparts to heterogeneous databases; (b) normal-join and normal-projection, which replace their counterparts, join and projection, in the relational algebra; (c) two new operators, embed and unembed, to restructure the database schema. The second issue, query translation, was addressed by developing an algorithm that translates a cluster algebra query expressed over the virtual views to an equivalent cluster algebra query expressed over the conceptual databases. A non-parametric estimation methodology for the result size of a cluster algebra operation was developed to address the third issue. Finally, this research developed a query decomposition algorithm, applicable to relational and non-relational databases, that decomposes a query by computing all profitable semi-join operations, followed by the determination of the best sequence of join operations per processing site. The join optimization is performed by formulating a zero-one integer linear program that uses the non-parametric estimation technique to compute the sizes of intermediate results. The query processing algorithm was implemented in the context of DAVID, a heterogeneous distributed database management system. / Ph.D.
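As a toy illustration of the semi-join reduction step named above (relations modelled as lists of dicts; not the DAVID implementation), only the join-attribute values travel between sites, and the remote relation is filtered before the full join is computed:

```python
# Toy semi-join reduction between two sites. Site A holds EMPLOYEE,
# site B holds DEPARTMENT; relation contents are invented examples.
employee = [{"eid": 1, "name": "Ann", "dept": 10},
            {"eid": 2, "name": "Bo", "dept": 20},
            {"eid": 3, "name": "Cy", "dept": 10}]
department = [{"dept": 10, "dname": "R&D"},
              {"dept": 30, "dname": "Sales"}]

# Step 1: project the join attribute at site A and ship it to site B
# (a small message instead of the whole EMPLOYEE relation).
dept_ids = {e["dept"] for e in employee}

# Step 2: semi-join at site B -- keep only the matching DEPARTMENT tuples.
department_reduced = [d for d in department if d["dept"] in dept_ids]

# Step 3: ship the reduced relation back and complete the join at site A.
result = [{**e, **d} for e in employee for d in department_reduced
          if e["dept"] == d["dept"]]
print(result)  # Ann and Cy joined with R&D; Bo has no matching department
```

A semi-join is "profitable" in the sense used above when the cost of shipping the projected attribute values plus the reduced relation is lower than shipping the unreduced relation, which is what the estimated intermediate result sizes feed into the zero-one integer linear program to decide.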
1107. Detecting edges in noisy face database images / Qahwaji, Rami S.R. January 2003.
No abstract available.
1108. A transportable natural language front-end to data base management systems / Safigan, Steve J. 01 August 2012.
Although some success has been achieved in the design of front-end natural language processors to data base management systems, transporting the processor to various data base management systems has proven to be elusive. A transportable system must be modular; it must be able to adapt to radically different data domains; and it must be able to communicate with many different data managers. The system developed accomplishes this by maintaining its own knowledge base, distinct from the target data base management system, so that no communication is needed between the natural language processor and the data manager during the parse. The knowledge base is developed by interviewing the system administrator about the structure and meaning of the elements in the target data base. The natural language processor then converts the natural language query into an unambiguous intermediate-language query, which is easily converted to the target query language using simple syntactic methods. / Master of Science
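The final, purely syntactic stage can be sketched as follows; the intermediate form and field names here are invented for illustration, and the thesis's actual intermediate language may differ.

```python
# Hypothetical sketch: an unambiguous intermediate query (a small dict)
# is rewritten into a target query language (SQL here) by template filling,
# with no access to the data manager itself.

def to_sql(iq: dict) -> str:
    """Convert an intermediate query to SQL by simple syntactic methods."""
    cols = ", ".join(iq["project"])
    table = iq["relation"]
    preds = " AND ".join(f"{attr} {op} {val!r}"
                         for attr, op, val in iq["restrict"])
    sql = f"SELECT {cols} FROM {table}"
    return sql + (f" WHERE {preds}" if preds else "")


# e.g. the parse of "Which employees work in research?"
iq = {"project": ["name"], "relation": "employee",
      "restrict": [("dept", "=", "research")]}
print(to_sql(iq))  # SELECT name FROM employee WHERE dept = 'research'
```

Because only this last stage mentions the target language, swapping data managers means swapping one syntactic converter, which is the essence of the transportability argument.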
1109. Heuristics for laying out information graphs / Lavinus, Joseph W. 30 December 2008.
The representation of information in modern database systems is complicated by the need to represent relationships among pieces of information. A natural representation for such databases is the information graph that associates the pieces of information with vertices in the graph and the relationships with edges. Five characteristics of this representation are noteworthy. First, each vertex has a size (in bytes) sufficient to store its corresponding piece of information. Second, retrieval in an information graph may follow a number of patterns; in particular, retrieval of adjacent vertices via edge traversals must be efficient. Third, in many applications such as a dictionary or bibliographic archive, the information graph may be considered static. Fourth, the ultimate home for an information graph is likely to be a roughly linear medium such as a magnetic disk or CD-ROM. Finally, information graphs are quite large: hundreds of thousands of vertices and tens of megabytes in size. / Master of Science
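One plausible heuristic for this layout problem, offered only as an illustration and not as the thesis's algorithm, is to place vertices on the linear medium in breadth-first order, so that vertices reachable by a single edge traversal tend to land near one another; the vertex sizes then determine byte offsets.

```python
# Illustrative BFS placement heuristic for laying out an information graph
# on a linear medium. The graph and byte sizes are invented examples.
from collections import deque


def bfs_layout(adjacency: dict, sizes: dict, start):
    """Return {vertex: byte_offset} using breadth-first placement."""
    offset, placement = 0, {}
    queue, seen = deque([start]), {start}
    while queue:
        v = queue.popleft()
        placement[v] = offset      # lay this vertex down at the cursor
        offset += sizes[v]         # advance by the vertex's stored size
        for w in adjacency[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return placement


graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
sizes = {"a": 512, "b": 2048, "c": 128, "d": 1024}
print(bfs_layout(graph, sizes, "a"))
# {'a': 0, 'b': 512, 'c': 2560, 'd': 2688}
```

A quality measure for such a layout is the total (or worst-case) seek distance between endpoints of each edge, which is what a heuristic of this kind tries to keep small.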
1110. An automation and data management system for an electronic autobalance / Murphy, Bertram Wayne. 28 July 2010.
A typical application for an electronic autobalance is discussed, and the requirements for an autobalance automation system are developed. The design of six application programs that satisfy these requirements is presented, and the operation and interaction of these programs are discussed in detail. Typical weighing sessions of the autobalance while running under the automation system are described. The current status of the autobalance automation system is outlined, and recommendations for future action are made. / Master of Science