161

Move my data to the cloud: an online cost-minimizing approach

Zhang, Linquan, 张琳泉. January 2012 (has links)
Cloud computing has rapidly emerged as a new computation paradigm, providing agile and scalable resource access in a utility-like fashion. Processing massive amounts of data has been a primary use of clouds in practice. While many efforts have been devoted to designing computation models (e.g., MapReduce), one important issue has been largely neglected: how do we efficiently move data, generated at different geographical locations over time, into a cloud for effective processing? The usual approach of shipping data on hard disks lacks flexibility and security. As the first dedicated effort, this thesis tackles this massive, dynamic data migration problem. Targeting a cloud comprising disparate data centers with different resource charges, we model the cost-minimizing data migration problem and propose efficient offline and online algorithms, which optimize both the routes of data into the cloud and the choice of the data center at which to aggregate the data for processing, at any given time. Three online algorithms are proposed to guide data migration in practice. Requiring no future information on the data generation pattern, an online lazy migration (OLM) algorithm achieves a competitive ratio as low as 2.55 under typical system settings, and a work function algorithm (WFA) has a linear competitive ratio of 2K-1, where K is the number of data centers. The third, a randomized fixed horizon control (RFHC) algorithm, achieves a competitive ratio of 1 + (1/(l+1))(κ/λ) in theory, given a lookahead window of l time steps into the future, where κ and λ are protocol parameters. We conduct extensive experiments to evaluate our online algorithms under realistic cloud settings, using real-world meteorological data generation traces. Comparisons between the online and offline algorithms show close-to-offline-optimum performance, demonstrating the effectiveness of our online algorithms in practice. / Computer Science / Master of Philosophy
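The OLM approach described above is in the spirit of classic rent-or-buy online algorithms: stay at the current aggregation data center until the routing penalty accrued there justifies paying to move. As a rough illustrative sketch only (not the thesis's actual algorithm; the cost model, the threshold BETA, and all names are assumptions), a lazy-migration decision loop might look like this in Python:

    # Illustrative sketch of a lazy-migration rule in the spirit of OLM.
    # The cost model, threshold BETA, and all names are assumptions for
    # illustration, not the thesis's implementation.

    BETA = 1.0  # hypothetical laziness threshold: migrate only when the
                # accrued extra routing cost exceeds BETA times the move cost

    def should_migrate(accrued_extra_routing_cost, migration_cost):
        """Rent-or-buy style test: is staying put now costlier than moving?"""
        return accrued_extra_routing_cost > BETA * migration_cost

    def run_lazy_migration(time_steps, centers, routing_cost, migration_cost, start):
        """Simulate lazy migration over a data-generation trace.

        routing_cost(t, c): cost of routing the data generated at step t to center c.
        migration_cost(a, b): one-off cost of moving the aggregated data from a to b.
        """
        current = start
        accrued = 0.0   # extra routing cost paid since the last migration
        total = 0.0
        for t in time_steps:
            best = min(centers, key=lambda c: routing_cost(t, c))
            total += routing_cost(t, current)
            # penalty of staying put versus the instantaneous best choice
            accrued += routing_cost(t, current) - routing_cost(t, best)
            if should_migrate(accrued, migration_cost(current, best)):
                total += migration_cost(current, best)
                current, accrued = best, 0.0
        return total

The laziness threshold is what bounds the competitive ratio in such schemes: migrating too eagerly overpays migration cost, while never migrating overpays routing cost.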
162

Enhanced classification through exploitation of hierarchical structures

Punera, Kunal Vinod Kumar 28 August 2008 (has links)
No abstract available.
163

Form driven conceptual data modeling (database design, expert systems, conceptual)

Choobineh, Joobin. January 1985 (has links)
A conceptual data schema is constructed from analysis of the business forms used in an enterprise. To perform the analysis, we develop a data model, a forms model, and heuristics to map from the forms model to the data model. The data model we use is an extended version of the Entity-Relationship Model; extensions include the addition of min-max cardinalities and a generalization hierarchy. By extending the min-max cardinalities to attributes, we capture a number of significant characteristics of the entities in a concise manner. We introduce a hierarchical model of forms. The model specifies various properties of each form field, such as its origin, hierarchical structure, and cardinalities. The interconnection of the forms is expressed by specifying which form fields flow from one form to another. The Expert Database Design System creates a conceptual schema by incrementally integrating related collections of forms. The rules of the expert system are divided into six groups: (1) Form Selection, (2) Entity Identification, (3) Attribute Attachment, (4) Relationship Identification, (5) Cardinality Identification, and (6) Integrity Constraints. The rules of the first group use knowledge about the form flow to determine the order in which forms are analyzed. The rules in the other groups are used, in conjunction with a designer dialogue, to identify the entities, relationships, and attributes of a schema that represents the collection of forms.
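As a toy illustration of the forms model described above (not the system's actual representation or rules; every name and the heuristic are assumptions), one might encode form fields with their origin, nesting, and min-max cardinalities, and promote repeating field groups to candidate entities:

    # Minimal sketch of a hierarchical forms model with min-max cardinalities,
    # plus a toy "entity identification" heuristic. All names and the heuristic
    # itself are illustrative assumptions, not the system's actual rules.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FormField:
        name: str
        origin: str                  # e.g. "user-entered", "copied from Order form"
        min_card: int = 0            # min-max cardinality of the field
        max_card: int = 1            # max_card > 1 marks a repeating group
        children: List["FormField"] = field(default_factory=list)

    @dataclass
    class Form:
        name: str
        fields: List[FormField]
        flows_to: List[str] = field(default_factory=list)  # inter-form connections

    def identify_candidate_entities(form: Form) -> List[str]:
        """Toy Entity Identification rule: a repeating group of fields
        on a form suggests an entity in the ER schema."""
        candidates = []
        def walk(f: FormField):
            if f.max_card > 1 and f.children:
                candidates.append(f.name)
            for child in f.children:
                walk(child)
        for f in form.fields:
            walk(f)
        return candidates

    # Example: an order form whose line-item group repeats suggests a LineItem entity.
    order = Form("OrderForm", fields=[
        FormField("order_no", "user-entered", 1, 1),
        FormField("line_item", "user-entered", 1, 99, children=[
            FormField("product", "user-entered", 1, 1),
            FormField("qty", "user-entered", 1, 1),
        ]),
    ])
    print(identify_candidate_entities(order))  # ['line_item']

In the actual system, such structural cues are combined with the designer dialogue rather than applied mechanically.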
164

On indexing large databases for advanced data models

Samoladas, Vasilis 04 April 2011 (has links)
No abstract available.
165

Bridging data integration technology and e-commerce

Lo, Chi-lik, Eric, 盧至力. January 2003 (has links)
Computer Science and Information Systems / Master of Philosophy
166

View update and temporal correctness in real-time database systems

Cheng, Chun-kong, 鄭振剛. January 2000 (has links)
Computer Science and Information Systems / Master of Philosophy
167

Maintenance of association rules in large databases

Lee, Sau-dan, 李守敦. January 1997 (has links)
Computer Science / Master of Philosophy
168

Implementing QT-selectors and updates for a primary memory version of Aldat

Tsakalis, Maria. January 1987 (has links)
No description available.
169

Discretionary data bases as public goods: a theory and some experimental findings

Thorn, Brian K. 05 1900 (has links)
No description available.
170

Metadata view graphs: a framework for query optimization and metadata management

Pittges, Jeff 12 1900 (has links)
No description available.
