31

Role-based Access Control for the Open Grid Services Architecture – Data Access and Integration (OGSA-DAI)

Pereira, Anil L. 12 June 2007 (has links)
No description available.
32

Semantic and Role-Based Access Control for Data Grid Systems

Muppavarapu, Vineela 11 December 2009 (has links)
No description available.
33

Temporal Conjunctive Queries in Expressive DLs with Non-simple Roles

Baader, Franz, Borgwardt, Stefan, Lippmann, Marcel 20 June 2022 (has links)
In Ontology-Based Data Access (OBDA), user queries are evaluated over a set of facts under the open world assumption, while taking into account background knowledge given in the form of a Description Logic (DL) ontology. Motivated by situation awareness applications, temporal conjunctive queries (TCQs) have recently been proposed as a useful extension of traditional OBDA to support the processing of temporal information. This paper extends the existing complexity analysis of TCQ entailment to very expressive DLs underlying the OWL 2 standard, and in contrast to previous work also allows for queries containing transitive roles. / This is an extended version of the paper “Temporal Conjunctive Queries in Expressive Description Logics with Transitive Roles”, published in the Proceedings of the 28th Australasian Joint Conference on Artificial Intelligence (AI’15).
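To make the query formalism concrete, the following is an illustrative sketch (not taken from the paper) of a temporal conjunctive query in the situation-awareness spirit the abstract mentions; the predicate names are invented for illustration. TCQs combine conjunctive queries with LTL-style temporal operators and are evaluated over a sequence of time-stamped fact bases with respect to a DL ontology.

    % Illustrative TCQ: "process x is critical now and at some earlier
    % time ran on an overloaded server". If runsOn is declared transitive
    % in the ontology, it is a non-simple role -- the case the paper covers.
    \phi(x) \;=\; \mathit{Critical}(x) \;\wedge\;
      \Diamond^{-}\,\exists y.\bigl(\mathit{runsOn}(x,y) \wedge \mathit{Overloaded}(y)\bigr)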
34

Objekt-relationsmappning i datacentrerad applikation / Object-Relational Mapping in a Data-Centric Application

Öjebo, Erik January 2009 (has links)
This report presents a study of six object-relational mapping frameworks: Entity Framework, LINQ to SQL, NHibernate, Castle ActiveRecord, MyGeneration Doodads and Subsonic. The study describes the strengths and weaknesses of each framework and discusses when each is appropriate to use. The frameworks judged most interesting were NHibernate and Entity Framework, since they provide flexible mapping between the domain model and the underlying database schema as well as good availability of documentation and literature. The study served as the basis for deciding which of the frameworks should be used in a rewrite of an existing application for the IT consulting company Sogeti. The framework considered most appropriate for the application was NHibernate.
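As a generic illustration of what "mapping between the domain model and the underlying database schema" means, the sketch below uses Python's SQLAlchemy rather than the .NET frameworks compared in the thesis; the class, table and column names are invented for illustration.

    # Generic ORM sketch (SQLAlchemy, not one of the .NET frameworks studied):
    # a domain class is declared once, and the framework derives the table
    # schema and the SQL needed to persist and query instances.
    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, Session

    Base = declarative_base()

    class Consultant(Base):
        __tablename__ = "consultants"               # mapped table
        id = Column(Integer, primary_key=True)      # mapped columns
        name = Column(String(100), nullable=False)

    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)                # schema generated from the mapping

    with Session(engine) as session:
        session.add(Consultant(name="Erik"))
        session.commit()
        print(session.query(Consultant).count())    # -> 1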
36

Efficient placement design and storage cost saving for big data workflow in cloud datacenters / Conception d'algorithmes de placement efficaces et économie des coûts de stockage pour les workflows du big data dans les centres de calcul de type cloud

Ikken, Sonia 14 December 2017 (has links)
Typical cloud big data systems are workflow-based, including MapReduce, which has emerged as the paradigm of choice for developing large-scale data-intensive applications. The data generated by such systems are huge and valuable, and are stored at multiple geographical locations for reuse. Workflow systems, composed of jobs using collaborative task-based models, present new needs in terms of dependency and intermediate data exchange. This gives rise to new issues when selecting distributed data and storage resources, so that tasks or jobs finish on time and resource usage is cost-efficient. Furthermore, the performance of task processing is governed by the efficiency of intermediate data management. In this thesis we tackle the problem of intermediate data management across multiple cloud datacenters by considering the requirements of the workflow applications that generate the data. To this end, we design and develop models and algorithms for the big data placement problem in the underlying geo-distributed cloud infrastructure so that the data management cost of these applications is minimized. The first problem addressed is the study of the intermediate data access behavior of tasks running in a MapReduce-Hadoop cluster. Our approach develops and explores a Markov model that uses the spatial locality of intermediate data blocks and analyzes spill-file sequentiality through a prediction algorithm. Secondly, the thesis deals with minimizing the storage cost of intermediate data placement in federated cloud storage. Through a federation mechanism, we propose an exact ILP algorithm to assist multiple cloud datacenters in hosting the generated intermediate data dependencies between pairs of files. The proposed algorithm takes into account scientific user requirements, data dependency and data size. Finally, a more generic problem is addressed that involves two variants of the placement problem: splittable and unsplittable intermediate data dependencies. The main goal is to minimize the operational data cost according to inter- and intra-job dependencies.
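As a rough idea of what such a placement formulation can look like, here is a much-simplified sketch of a co-location ILP for the unsplittable case; it is not the thesis's actual model, and all symbols below are assumptions introduced for illustration.

    % x_{f,d} = 1 iff intermediate file f is stored in datacenter d
    % s_f: size of f, c_d: unit storage cost of d, K_d: capacity of d
    % P: dependent file pairs that must be co-located (unsplittable case)
    \min \sum_{f \in F} \sum_{d \in D} c_d\, s_f\, x_{f,d}
    \text{subject to}\quad
      \sum_{d \in D} x_{f,d} = 1                  \quad \forall f \in F
      \qquad x_{f,d} = x_{g,d}                    \quad \forall (f,g) \in P,\ \forall d \in D
      \qquad \sum_{f \in F} s_f\, x_{f,d} \le K_d \quad \forall d \in D
      \qquad x_{f,d} \in \{0,1\}

In a splittable variant, the co-location constraint would presumably be relaxed so that a dependency can be spread over several datacenters.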
