11

Mise en oeuvre de politiques de protection de données à caractère personnel : une approche reposant sur la réécriture de requêtes SPARQL / Enforcing personal data protection policies: an approach based on SPARQL query rewriting

Oulmakhzoune, Said 29 April 2013 (has links) (PDF)
With the constant proliferation of information systems around the globe, the need for decentralized and scalable data sharing mechanisms has become a major factor of integration in a wide range of applications. The literature on information integration across autonomous entities has tacitly assumed that the data of each party can be revealed and shared with other parties. A great deal of research on the management of heterogeneous sources and database integration has been proposed, for example based on centralized or distributed mediators that control access to data managed by different parties. On the other hand, real-life data sharing scenarios in many application domains, such as healthcare, e-commerce, and e-government, show that data integration and sharing are often hampered by legitimate and widespread data privacy and security concerns. Thus, protecting individual data may be a prerequisite for organizations to share their data in open environments such as the Internet. The work undertaken in this thesis aims to ensure the security and privacy requirements of software systems, which take the form of web services, using query rewriting principles. The user query (a SPARQL query) is rewritten in such a way that only authorized data are returned with respect to a given confidentiality and privacy preference policy. Moreover, the rewriting algorithm is instrumented by an access control model (OrBAC) for confidentiality constraints and a privacy-aware model (PrivOrBAC) for privacy constraints. A secure and privacy-preserving execution model for data services is then defined. Our model exploits the services' semantics to allow service providers to enforce their privacy and security policies locally without changing the implementation of their data services, i.e., data services are treated as black boxes. We integrate our model into the architecture of Axis 2.0 and evaluate its efficiency in the healthcare application domain.
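As a rough illustration of the query-rewriting idea described above (not the thesis's actual OrBAC/PrivOrBAC algorithm), the sketch below injects policy-derived FILTER constraints into a SPARQL SELECT query; the policy structure, variable names, and the `ex:` prefix are assumptions made for the example.

```python
# Minimal sketch of policy-driven SPARQL query rewriting (illustrative only;
# the policy format is hypothetical, not the OrBAC/PrivOrBAC models).

def rewrite_query(select_clause, where_patterns, policy):
    """Append FILTER constraints so that only authorized bindings are returned."""
    filters = []
    for var, allowed_values in policy.items():
        values = ", ".join(f'"{v}"' for v in allowed_values)
        filters.append(f"FILTER(?{var} IN ({values}))")
    body = " .\n  ".join(where_patterns)
    constraints = "\n  ".join(filters)
    return f"SELECT {select_clause} WHERE {{\n  {body} .\n  {constraints}\n}}"

# Example: a confidentiality policy restricting which departments a requester may see.
policy = {"dept": ["cardiology", "radiology"]}           # hypothetical policy
query = rewrite_query(
    "?patient ?dept",
    ["?patient ex:treatedIn ?dept"],                      # 'ex:' is an assumed prefix
    policy,
)
print(query)
```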
12

Linked-OWL: A new approach for dynamic linked data service workflow composition

Ahmad, Hussien, Dowaji, Salah 01 June 2013 (has links)
The shift from the Web of Documents to the Web of Data, based on the Linked Data principles defined by Tim Berners-Lee, poses a major challenge for building and developing applications that operate in the Web of Data environment. There have been several attempts to build service and application models for the Linked Data Cloud. In this paper, we propose a new service model for linked data, "Linked-OWL", which is based on RESTful services and OWL-S and complies with linked data principles. This new model shifts the service concept from functions to linked data things and opens the way for a Linked Oriented Architecture (LOA) and a Web of Services as part of, and on top of, the Web of Data. The model also provides a high level of dynamic service composition capability, enabling more accurate dynamic composition and execution of complex business processes in the Web of Data environment.
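To make the composition idea concrete, here is a hedged sketch (not the Linked-OWL implementation) of chaining RESTful data services by matching each service's declared output type to the next service's expected input type; the service descriptions and class URIs are invented for the example.

```python
# Illustrative sketch: composing services whose inputs/outputs are "linked data things"
# identified by class URIs. The service catalogue below is hypothetical.

services = [
    {"name": "geocode",  "input": "ex:Address",  "output": "ex:Location"},
    {"name": "weather",  "input": "ex:Location", "output": "ex:Forecast"},
    {"name": "alerting", "input": "ex:Forecast", "output": "ex:Alert"},
]

def compose(goal_output, available, current_input):
    """Greedily chain services until one produces the requested output type."""
    chain = []
    while True:
        step = next((s for s in available if s["input"] == current_input), None)
        if step is None:
            raise ValueError(f"no service accepts {current_input}")
        chain.append(step["name"])
        if step["output"] == goal_output:
            return chain
        current_input = step["output"]

print(compose("ex:Alert", services, "ex:Address"))   # ['geocode', 'weather', 'alerting']
```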
13

Coordination fiable de services de données à base de politiques actives / Reliable coordination of data services based on active policies

Alfonso Espinosa-Oviedo, Javier 28 October 2013 (has links) (PDF)
We propose an approach for adding non-functional properties (exception handling, atomicity, security, persistence) to service coordinations. The approach is based on an Active Policy Model (AP Model) that represents service coordinations with non-functional properties as a collection of types. In our model, a service coordination is represented as a workflow composed of an ordered set of activities. Each activity is in charge of implementing a call to a service operation. We use the Activity type to represent the workflow and its components (i.e., the workflow activities and the order among them). A non-functional property is represented as one or more active policy types; each policy is composed of a set of event-condition-action rules that implement one aspect of the property. Instances of the model entities Active Policy and Activity can be executed. We use the Execution Unit type to represent them as entities whose execution passes through different execution states. When an active policy is associated with one or more execution units, its rules check whether the execution units satisfy the non-functional property they implement by evaluating their conditions over the execution states. When a property is not satisfied, the rules execute their actions to enforce the property at runtime. We also propose an Active Policy Execution Engine for executing an active-policy-oriented workflow modelled with our AP Model. The engine implements an execution model that determines how the instances of an active policy, a rule, and an activity interact with one another to add non-functional properties (NFPs) to a running workflow. We validated the AP Model and the Active Policy Execution Engine by defining active policy types addressing exception handling, atomicity, state handling, persistence, and authentication. These active policy types were used to implement reliable service-based applications and to integrate data provided by services through mashups.
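A minimal sketch of the event-condition-action pattern the abstract describes is given below; the class names, execution states, and retry policy are assumptions for illustration, not the AP Model's actual types.

```python
# Hypothetical event-condition-action rule attached to an execution unit, showing
# how an active policy could enforce a non-functional property (retry on failure).

class ExecutionUnit:
    def __init__(self, name):
        self.name = name
        self.state = "created"      # assumed states: created / running / failed / done
        self.retries = 0

class Rule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

    def notify(self, event, unit):
        # Evaluate the condition on the unit's execution state; act only if it holds.
        if event == self.event and self.condition(unit):
            self.action(unit)

# Exception-handling policy: if an activity fails and has retried fewer than 3 times, re-run it.
retry_rule = Rule(
    event="state_changed",
    condition=lambda u: u.state == "failed" and u.retries < 3,
    action=lambda u: (setattr(u, "retries", u.retries + 1), setattr(u, "state", "running")),
)

activity = ExecutionUnit("invoke_payment_service")
activity.state = "failed"
retry_rule.notify("state_changed", activity)
print(activity.state, activity.retries)   # running 1
```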
14

Design and Development of a Dynamic Web App Library for HydroShare

Henrichsen, Alexander Hart 07 June 2022 (has links)
This paper documents the design and creation of an App Library for HydroShare, a water resources data and software discovery and sharing system. This App Library was developed to simplify the discovery process for environmental web applications and to lower the hosting requirements for such a repository. To accomplish this goal, I created the HydroShare App Library as a standalone web application using the React JavaScript framework. The App Library uses the existing HydroShare resource connectors to allow the registration of all web applications within the App Library without imposing external software requirements. This allows the HydroShare App Library to be a centralized location where web app developers register their tools and models using their preferred software, while water resources managers, engineers, scientists, and decision-makers can find these tools in a single location. The developed HydroShare App Library allows the discovery of all web applications included in the HydroShare ecosystem, not just CUAHSI-owned web apps. This is done using a dynamic table built with React that automatically updates the user interface without reloading entire pages. This approach reduces processing for the App Library by rendering only the web app entries relevant to the current user, allowing the App Library to grow and remain effective as more web applications are registered in HydroShare and become discoverable within the App Library.
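The core of the "render only relevant entries" idea is simple client-side filtering; the sketch below shows that logic in Python for brevity (the actual App Library uses React), with entry fields and search semantics that are assumptions rather than the HydroShare data model.

```python
# Illustrative filtering behind a dynamic app table: only entries matching the
# user's current query are kept for rendering. Fields below are hypothetical.

apps = [
    {"title": "Flood Mapper",       "keywords": ["flood", "raster"], "owner": "CUAHSI"},
    {"title": "Time Series Viewer", "keywords": ["timeseries"],      "owner": "univ-lab"},
]

def visible_entries(entries, query):
    """Return only the entries matching the user's current search text."""
    q = query.lower()
    return [e for e in entries
            if q in e["title"].lower() or any(q in k for k in e["keywords"])]

print([e["title"] for e in visible_entries(apps, "flood")])   # ['Flood Mapper']
```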
15

An Integrated End-User Data Service for HPC Centers

Monti, Henry Matthew 16 January 2013 (has links)
The advent of extreme-scale computing systems, e.g., Petaflop supercomputers, High Performance Computing (HPC) cyber-infrastructure, enterprise databases, and experimental facilities such as large-scale particle colliders, is pushing the envelope on dataset sizes. Supercomputing centers routinely generate and consume ever increasing amounts of data while executing high-throughput computing jobs. These are often result datasets or checkpoint snapshots from long-running simulations, but can also be input data from experimental facilities such as the Large Hadron Collider (LHC) or the Spallation Neutron Source (SNS). These growing datasets are often processed by a geographically dispersed user base across multiple HPC installations. Moreover, end-user workflows are increasingly distributed in nature, with massive input, output, and even intermediate data often being transported to and from several HPC resources or end-users for further processing or visualization. The growing data demands of applications, coupled with the distributed nature of HPC workflows, have the potential to place significant strain on both the storage and network resources at HPC centers. Despite this potential impact, rather than stringently managing HPC center resources, a common practice is to leave application-associated data management to the end-user, as the user is intimately aware of the application's workflow and data needs. This means end-users must frequently interact with the local storage in HPC centers, the scratch space, which is used for job input, output, and intermediate data. Scratch is built using a parallel file system that supports very high aggregate I/O throughput, e.g., Lustre, PVFS, and GPFS. To ensure efficient I/O and faster job turnaround, use of scratch by applications is encouraged. Consequently, job input and output data must be moved in and out of the scratch space by end-users before and after the job runs, respectively. In practice, end-users arbitrarily stage and offload data as and when they deem fit, without any consideration for the center's performance, often leaving data on the scratch long after it is needed. HPC centers resort to "purge" mechanisms that sweep the scratch space to remove files found to be no longer in use, based on not having been accessed within a preselected time threshold called the purge window, which commonly ranges from a few days to a week. This ad-hoc data management ignores the interactions between different users' data storage and transmission demands, and their impact on center serviceability, leading to suboptimal use of precious center resources. To address the issues of exponentially increasing data sizes and ad-hoc data management, we present a fresh perspective on scratch storage management by fundamentally rethinking the manner in which scratch space is employed. Our approach is twofold. First, we re-design the scratch system as a "cache" and build "retention", "population", and "eviction" policies that are tightly integrated from the start, rather than being add-on tools. Second, we aim to provide and integrate the necessary end-user data delivery services, i.e., timely offloading (eviction) and just-in-time staging (population), so that the center's scratch space usage can be optimized through coordinated data movement. Together, these two approaches create our Integrated End-User Data Service, wherein data transfer and placement on the scratch space are scheduled with job execution.
This strategy allows us to couple job scheduling with cache management, thereby bridging the gap between system software tools and scratch storage management. It enables the retention of only the relevant data for the duration it is needed. Redesigning the scratch as a cache captures the current HPC usage pattern more accurately, and better equips the scratch storage system to serve the growing datasets of workloads. This is a fundamental paradigm shift in the way scratch space has been managed in HPC centers, and outweighs providing simple purge tools to serve a caching workload. / Ph. D.
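A hedged sketch of the cache-style eviction idea follows (the policy details and data structures are assumptions, not the thesis's actual policies): files are retained while a scheduled job still references them and evicted once they have been idle longer than the purge window.

```python
import time

# Illustrative scratch-as-cache eviction: keep a file if a queued/running job still
# needs it, otherwise evict it once it has been idle longer than the purge window.
# The file and job records below are invented for the example.

PURGE_WINDOW = 7 * 24 * 3600   # one week, in seconds

def files_to_evict(scratch_files, pending_jobs, now=None):
    now = now or time.time()
    needed = {path for job in pending_jobs for path in job["inputs"]}
    return [f["path"] for f in scratch_files
            if f["path"] not in needed and now - f["last_access"] > PURGE_WINDOW]

scratch = [
    {"path": "/scratch/u1/checkpoint.h5", "last_access": time.time() - 10 * 24 * 3600},
    {"path": "/scratch/u1/input.dat",     "last_access": time.time() - 10 * 24 * 3600},
]
jobs = [{"id": 42, "inputs": ["/scratch/u1/input.dat"]}]

print(files_to_evict(scratch, jobs))   # ['/scratch/u1/checkpoint.h5']
```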
16

STANDARD USER DATA SERVICES FOR SPACECRAFT APPLICATIONS

Smith, Joseph F., Hwang, Chailan, Fowell, Stuart, Plummer, Chris 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The Consultative Committee for Space Data Systems (CCSDS) is an international organization of national space agencies that is branching out to provide new standards to enhance the reuse of spacecraft equipment and software. These Spacecraft Onboard Interface (SOIF) standards are directed towards a spacecraft architecture viewed as a distributed system of processors and buses. This paper reviews the services being proposed for SOIF, including a Command and Data Acquisition Service, a Time Distribution Service, a Message Transfer Service, a File Transfer Service, and a CCSDS Packet Service. An Instrument & Subsystem "Plug & Play" Service is currently under study, but is included in this paper for completeness.
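For intuition only, the sketch below shows a publish/subscribe abstraction of the kind a Message Transfer Service provides between onboard components; the interface, topic names, and payloads are assumptions for illustration, not the CCSDS SOIF standard.

```python
from collections import defaultdict

# Hedged sketch of a message transfer service style publish/subscribe layer
# between onboard components; names and payloads are hypothetical.

class MessageTransferService:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the message to every component subscribed to this topic.
        for handler in self._subscribers[topic]:
            handler(payload)

mts = MessageTransferService()
mts.subscribe("thermal/telemetry", lambda msg: print("AOCS received:", msg))
mts.publish("thermal/telemetry", {"sensor": "T1", "kelvin": 293.4})
```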
17

HyQoZ - Optimisation de requêtes hybrides basée sur des contrats SLA / HyQoZ – SLA-aware hybrid query optimization

Lopez-Enriquez, Carlos-Manuel 23 October 2014 (has links)
On constate aujourd’hui une explosion de la quantité de données largement distribuées et produites par différents dispositifs (e.g. capteurs, dispositifs informatiques, réseaux, processus d’analyse) à travers des services dits de données. Dans ce contexte, il s’agit d’évaluer des requêtes dites hybrides car elles intègrent des aspects de requêtes classiques, mobiles et continues fournies par des services de données, statiques ou mobiles, en mode push ou pull. L’objectif de ma thèse est de proposer une approche pour l’optimisation de ces requêtes hybrides basée sur des préférences multicritères (i.e. SLA – Service Level Agreement). Le principe consiste à combiner les services de données et de calcul pour construire un évaluateur de requêtes adapté au SLA requis par l’utilisateur, tout en considérant les conditions de QoS des services et du réseau. / Today we are witnessing an explosion in the amount of widely distributed data produced by different devices (e.g. sensors, personal computers, laptops, networks) and exposed through data services. In this context, the goal is to evaluate so-called hybrid queries, which combine aspects of classic, mobile, and continuous queries over static or nomadic data services operating in push or pull mode. The objective of my thesis is to propose an approach for optimizing hybrid queries based on multi-criteria preferences (i.e. an SLA – Service Level Agreement). The principle is to combine data and computation services to build a query evaluator adapted to the preferences expressed in the SLA, while taking the QoS of the services and of the network into account.
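As a hedged illustration of multi-criteria, SLA-aware plan selection (the criteria, weights, and plan metrics below are invented and do not come from HyQoZ), candidate query evaluation plans can be scored by a weighted combination of normalized criteria:

```python
# Illustrative weighted multi-criteria ranking of candidate query plans against an SLA.
# Criteria, weights, and plan metrics are hypothetical; lower is better for each criterion.

sla_weights = {"latency_ms": 0.5, "monetary_cost": 0.3, "freshness_s": 0.2}

candidate_plans = [
    {"name": "push-heavy", "latency_ms": 120, "monetary_cost": 0.8, "freshness_s": 2},
    {"name": "pull-heavy", "latency_ms": 300, "monetary_cost": 0.2, "freshness_s": 30},
]

def normalised(value, values):
    lo, hi = min(values), max(values)
    return 0.0 if hi == lo else (value - lo) / (hi - lo)

def sla_score(plan, plans):
    return sum(w * normalised(plan[c], [p[c] for p in plans])
               for c, w in sla_weights.items())

best = min(candidate_plans, key=lambda p: sla_score(p, candidate_plans))
print(best["name"])   # push-heavy
```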
18

Analýza uživatelské roviny mobilních sítí 4. generace / User plane analysis in 4th generation mobile networks

Velsh, Ilya January 2014 (has links)
The thesis describes the 2G, 3G and 4G mobile systems with a focus on the user plane. It discusses key performance indicators, concentrating on the characteristics of the user plane, and contains an analysis of basic data transmission services and the requirements for their quality. The thesis also describes the user plane protocol stacks.
19

Energy Modeling and Management for Data Services in Multi-Tier Mobile Cloud Architectures

Xu, Zichen 21 November 2016 (has links)
No description available.
20

Master Data Management a jeho využití v praxi / Master Data Management and its usage in practice

Kukačka, Pavel January 2011 (has links)
This thesis deals with Master Data Management (MDM), specifically its implementation. The main objectives are to analyze and capture general approaches to MDM implementation, including best practices; to describe and evaluate the implementation of an MDM project using Microsoft SQL Server 2008 R2 Master Data Services (MDS) carried out in the Czech environment; and, on the basis of this theoretical background, the experience from the implemented project, and the available technical literature, to create a general procedure for implementing the MDS tool. The following procedures are used to achieve these objectives: exploration of information resources (printed and electronic sources, and personal appointments with consultants of Clever Decision), cooperation on a project realized by Clever Decision, and analysis of the Microsoft SQL Server 2008 R2 Master Data Services tool. The contributions of this work largely mirror its goals; the main contribution is the creation of a general procedure for implementing the MDS tool. The thesis is divided into two parts. The first (theoretically oriented) part deals with basic concepts (including their delimitation against other systems), architecture, implementation styles, market trends, and best practices. The second (practically oriented) part first covers the implementation of the realized MDS project and then describes a general procedure for implementing the MDS tool.
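One building block any MDM implementation procedure has to address is golden-record consolidation. The sketch below shows a simple source-trust survivorship rule; the source rankings, records, and field names are invented for illustration and are not part of the Microsoft MDS API.

```python
# Illustrative "golden record" consolidation: for each attribute, keep the value
# from the most trusted source system. All data below is hypothetical.

source_trust = {"CRM": 1, "ERP": 2, "Webshop": 3}   # lower rank = more trusted

records = [
    {"source": "ERP",     "customer_id": 17, "name": "Jane Q. Doe", "email": None},
    {"source": "CRM",     "customer_id": 17, "name": "J. Doe",      "email": "jdoe@example.com"},
    {"source": "Webshop", "customer_id": 17, "name": None,          "email": "jane@example.org"},
]

def golden_record(duplicates):
    """Merge duplicate records, preferring values from the most trusted source."""
    ordered = sorted(duplicates, key=lambda r: source_trust[r["source"]])
    merged = {}
    for field in ("customer_id", "name", "email"):
        merged[field] = next((r[field] for r in ordered if r[field] is not None), None)
    return merged

print(golden_record(records))
```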
