431
Performance of update algorithms for replicated data / Garcia-Molina, Hector. January 1900
Revision of Thesis (Ph. D.)--Stanford, 1979. / Includes bibliographical references (p. [313]-315) and index.
432
Geo-Demographic analysis in support of the United States Army Reserve (USAR) Unit Positioning and Quality Assessment Model (UPQUAM) / Fair, Martin Lynn. January 2004 (PDF)
Thesis (M.S. in Operations Research)--Naval Postgraduate School, June 2004. / Thesis advisor(s): David H. Olwell. Includes bibliographical references (p. 115). Also available online.
433
Adaptive scheduling algorithm selection in a streaming query system / Pielech, Bradford Charles. January 2003
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: streaming query; query processing; database. Includes bibliographical references (p. 57-59).
434
An online interactive spreadsheet approach to data analysis and manipulation / Tan, Meifang. January 2002
Thesis (M.S.)--University of Florida, 2002. / Title from title page of source document. Includes vita. Includes bibliographical references.
435
The influence of retrieval system on the outcomes of ERIC searches by graduate students / Evans, Mary Marcum. January 1995
Thesis (Ph. D.)--University of Oklahoma, 1995. / Includes bibliographical references (leaves 79-86).
436
Move my data to the cloud: an online cost-minimizing approach / Zhang, Linquan (张琳泉). January 2012
Cloud computing has rapidly emerged as a new computation paradigm, providing agile and scalable resource access in a utility-like fashion. Processing of massive amounts of data has been a primary usage of the clouds in practice. While many efforts have been devoted to designing the computation models (e.g., MapReduce), one important issue has been largely neglected: how do we efficiently move data, generated from different geographical locations over time, into a cloud for effective processing? The usual approach of shipping data on hard disks lacks flexibility and security. As the first dedicated effort, this paper tackles this massive, dynamic data migration issue. Targeting a cloud encompassing disparate data centers with different resource charges, we model the cost-minimizing data migration problem and propose efficient offline and online algorithms, which optimize the routes of data into the cloud and the choice of the data center at which to aggregate the data for processing, at any given time. Three online algorithms are proposed to practically guide data migration over time. With no need for any future information on the data generation pattern, an online lazy migration (OLM) algorithm achieves a competitive ratio as low as 2.55 under typical system settings, and a work function algorithm (WFA) has a linear competitive ratio of 2K-1, where K is the number of data centers. The third, a randomized fixed horizon control (RFHC) algorithm, achieves a competitive ratio of 1 + (1/(l+1))(κ/λ) in theory with a lookahead window of l into the future, where κ and λ are protocol parameters. We conduct extensive experiments to evaluate our online algorithms, using real-world meteorological data generation traces, under realistic cloud settings. Comparisons among online and offline algorithms show close-to-offline-optimum performance and demonstrate the effectiveness of our online algorithms in practice. / published or final version / Computer Science / Master of Philosophy
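For illustration only, here is a minimal sketch of a lazy-migration decision rule in the spirit of the OLM idea described above: keep aggregating at the current data center until the accumulated extra running cost justifies a move. The cost model, the threshold parameter beta, and all names are assumptions of this sketch, not the algorithm analyzed in the thesis.

```python
# Toy sketch of a lazy online migration rule in the spirit of the OLM idea
# above. The cost model, the threshold beta, and all names are illustrative
# assumptions, not the thesis's algorithm.

from typing import List


def lazy_migration(
    run_cost: List[List[float]],   # run_cost[t][j]: cost of aggregating at data center j in step t
    move_cost: List[List[float]],  # move_cost[i][j]: one-off cost of migrating from i to j
    beta: float = 1.0,             # laziness threshold: regret tolerated before moving
) -> List[int]:
    """Return the data center chosen at each time step."""
    current = 0        # start aggregating at data center 0
    regret = 0.0       # extra cost accumulated by not sitting at the cheapest data center
    choices: List[int] = []
    for costs in run_cost:
        best = min(range(len(costs)), key=lambda j: costs[j])
        regret += costs[current] - costs[best]   # how much we overpay by staying put
        # migrate only when the accumulated overpayment justifies the migration cost
        if current != best and regret >= beta * move_cost[current][best]:
            current, regret = best, 0.0
        choices.append(current)
    return choices


if __name__ == "__main__":
    run_cost = [[1.0, 3.0], [4.0, 1.0], [4.0, 1.0], [4.0, 1.0]]
    move_cost = [[0.0, 2.0], [2.0, 0.0]]
    print(lazy_migration(run_cost, move_cost))   # [0, 1, 1, 1]
```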
437
Enhanced classification through exploitation of hierarchical structures / Punera, Kunal Vinod Kumar. 28 August 2008
Not available / text
438
Form driven conceptual data modeling (database design, expert systems, conceptual) / Choobineh, Joobin. January 1985
A conceptual data schema is constructed from the analysis of the business forms used in an enterprise. To perform the analysis, a data model, a forms model, and heuristics to map from the forms model to the data model are developed. The data model we use is an extended version of the Entity-Relationship Model; extensions include min-max cardinalities and a generalization hierarchy. By extending the min-max cardinalities to attributes we capture a number of significant characteristics of the entities in a concise manner. We introduce a hierarchical model of forms that specifies properties of each form field, such as its origin, hierarchical structure, and cardinalities. The interconnection of the forms is expressed by specifying which form fields flow from one form to another. The Expert Database Design System creates a conceptual schema by incrementally integrating related collections of forms. The rules of the expert system are divided into six groups: (1) Form Selection, (2) Entity Identification, (3) Attribute Attachment, (4) Relationship Identification, (5) Cardinality Identification, and (6) Integrity Constraints. The rules of the first group use knowledge about the form flow to determine the order in which forms are analyzed; the rules in the other groups are used in conjunction with a designer dialogue to identify the entities, relationships, and attributes of a schema that represents the collection of forms.
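As an illustration of the kind of heuristic form-to-schema mapping the abstract describes, the following is a minimal sketch that groups form fields into candidate entities and attaches attributes. The field representation, rule logic, and all names are assumptions of this sketch, not the rules of the Expert Database Design System.

```python
# Minimal sketch of heuristic form-to-ER mapping in the spirit of the abstract.
# The field representation and rules are illustrative assumptions only.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FormField:
    name: str
    is_key_like: bool    # e.g. an identifier field such as "order_no"
    repeats: bool        # field occurs multiple times on one form (max cardinality > 1)
    parent_group: str    # hierarchical group the field belongs to on the form


def propose_schema(fields: List[FormField]) -> Dict[str, List[str]]:
    """Group form fields into candidate entities with attached attributes."""
    entities: Dict[str, List[str]] = {}
    for f in fields:
        # Rule sketch 1 (entity identification): a repeating group or a group
        # containing a key-like field suggests an entity named after the group.
        if f.repeats or f.is_key_like:
            entities.setdefault(f.parent_group, [])
        # Rule sketch 2 (attribute attachment): attach each field to the
        # entity proposed for its group, if one exists.
        if f.parent_group in entities:
            entities[f.parent_group].append(f.name)
    return entities


if __name__ == "__main__":
    order_form = [
        FormField("order_no", True, False, "order"),
        FormField("order_date", False, False, "order"),
        FormField("item_code", True, True, "line_item"),
        FormField("quantity", False, True, "line_item"),
    ]
    print(propose_schema(order_form))
    # {'order': ['order_no', 'order_date'], 'line_item': ['item_code', 'quantity']}
```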
439
Implementing Effective Biocuration Process, Training, and Quality Management Protocols on Undergraduate Biocuration of Amyotrophic Lateral Sclerosis / True, Rachel Wilcox. 18 August 2015
Biocuration is the manual collection, annotation, and validation of information from the scientific literature on biological and model organisms into a single database. Successful biocuration processes rely on an extensive collection of literature, a user-friendly database interface for entering and analyzing data from published papers, and highly regulated training and quality assurance protocols. With the rapid expansion of biomedical literature, an efficient and accurate biocuration process has become increasingly valuable given the magnitude of data available in published work. Because the biocuration process incorporates undergraduates, it is critical that the medium for data collection be simple, ergonomic, and error-proof. A reconstructed FileMaker Pro database was introduced to previously trained undergraduate students for process evaluation; streamlining the biocuration process and organizing the data structure more intuitively were the two goals of the new interface. A rigorous training program and a strict quality management protocol were needed to prepare the lab for the introduction of efficient biocuration processes. During database design, training protocols were drafted to call the biocurators' attention to important changes in the interface; upon prototyping the database, entry errors were reviewed, training protocols were adjusted, and quality protocols were drafted. When the combination of undergraduate biocurators and the reconstructed database under these new protocols was compared with statistics from the biocuration field, the results showed increases in both productivity and accuracy rates. With such efficiency at the undergraduate level, subject matter experts are no longer required to perform this type of data collection and can focus on analysis, increasing research productivity and reducing costs in the overall biocuration process. With over 12,000 papers on Amyotrophic Lateral Sclerosis published on PubMed in 2014 alone, this combination could help accelerate the search for a suitable cure for these patients.
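As a small illustration of the quality-management metrics mentioned above (accuracy against reviewed entries and productivity per curator-hour), here is a hedged sketch; the entry format and field names are assumptions, not the thesis's database schema.

```python
# Illustrative sketch of two quality-management metrics discussed above.
# Entry format and field names are assumptions for illustration only.

from typing import Dict, List


def accuracy(entries: List[Dict[str, str]], gold: List[Dict[str, str]]) -> float:
    """Fraction of curated entries that exactly match the reviewed gold entries."""
    matches = sum(1 for e, g in zip(entries, gold) if e == g)
    return matches / len(gold) if gold else 0.0


def productivity(num_entries: int, curator_hours: float) -> float:
    """Curated entries completed per curator-hour."""
    return num_entries / curator_hours if curator_hours else 0.0


if __name__ == "__main__":
    curated = [{"gene": "SOD1", "variant": "A4V"}, {"gene": "FUS", "variant": "R521C"}]
    reviewed = [{"gene": "SOD1", "variant": "A4V"}, {"gene": "FUS", "variant": "R521G"}]
    print(accuracy(curated, reviewed))       # 0.5
    print(productivity(len(curated), 0.5))   # 4.0 entries per curator-hour
```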
440
On indexing large databases for advanced data models / Samoladas, Vasilis. 04 April 2011
Not available / text