1

SINS: an environment for generating applications from services

Larentis Júnior, Sérgio 10 January 2008 (has links)
Service-oriented architecture (SOA) allows services developed in different languages to be orchestrated and combined into applications, the so-called Composite Applications. Despite major advances in Web Services and development IDEs, there is still no environment that can generate these Composite Applications without coding and without the intervention of a software development professional. SINS, presented in this work, is an environment capable of generating Composite Applications in real time by consuming pre-existing services, with the advantage of requiring neither coding nor the involvement of a software professional.
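
As a purely illustrative sketch (not code from the thesis), the snippet below shows the kind of manual service composition that a Composite Application otherwise requires a developer to write by hand, and which SINS aims to generate automatically; the endpoints, parameters, and response fields are hypothetical assumptions.

# Hypothetical example: composing two pre-existing web services by hand.
# The URLs and JSON fields are illustrative assumptions, not SINS artifacts.
import requests

CUSTOMER_SVC = "https://example.org/api/customers"   # hypothetical service A
INVOICE_SVC = "https://example.org/api/invoices"     # hypothetical service B

def build_customer_report(customer_id: str) -> dict:
    """Combine the output of two independent services into one result."""
    customer = requests.get(f"{CUSTOMER_SVC}/{customer_id}", timeout=10).json()
    invoices = requests.get(INVOICE_SVC, params={"customer": customer_id},
                            timeout=10).json()
    # The "composite application" is simply the combined, reshaped result.
    return {
        "name": customer.get("name"),
        "open_invoices": [i for i in invoices if not i.get("paid", False)],
    }

if __name__ == "__main__":
    print(build_customer_report("42"))
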
2

Multi-scale hydrological information system using an OGC standards-based architecture

Dong, Jingqi 08 July 2011 (has links)
A Multi-Scale Hydrological Information System (HIS) comprises three levels: the national CUAHSI HIS, the Texas HIS, and the local Capital Area Council of Governments (CAPCOG) HIS. The CUAHSI Hydrologic Information System has succeeded in bringing water data together using a Services-Oriented Architecture (SOA), but maintaining its current metadata catalog service has proven problematic. A transformation to Open Geospatial Consortium (OGC) standards is under way to migrate the current web services to OGC-adopted services and models; it makes CUAHSI HIS compliant with international OGC standards and gives it the capacity to host very large volumes of water data. At a smaller scale, the Texas HIS has been built for Texas-specific hydrologic data, covering the variables and web services listed in this thesis. The CAPCOG emergency response system was initiated for Texas flash flood warning and includes several data services, such as the USGS NWIS, the City of Austin (COA), and the Lower Colorado River Authority (LCRA). By applying a consistent mechanism, the OGC standards-based SOA, at these three scales of HIS, three catalogs of services can be created within the architecture, and hydrologic data services in different catalogs can be searched across; each catalog serves a different scale or purpose. The thesis then describes KiWIS, a technique developed by KISTERS for publishing OGC-standard web services from the WISKI hydrologic database, and applies it to the City of Austin's water data hosted at CRWR. The review of the OGC standard transformation and the technique described provide a reference for how to build a Multi-Scale HIS within a standard mechanism.
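
For illustration only (not code from the thesis), the sketch below issues an OGC Sensor Observation Service (SOS) 2.0 GetObservation request of the kind an OGC standards-based HIS exposes; the parameter names follow the SOS KVP binding, while the endpoint URL, offering, and observed-property identifiers are hypothetical placeholders rather than actual CUAHSI, Texas, or CAPCOG services.

# Hypothetical example of querying an OGC SOS 2.0 endpoint (KVP binding).
import requests

SOS_ENDPOINT = "https://example.org/sos"  # placeholder SOS endpoint

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "streamflow_gauges",                        # assumed offering id
    "observedProperty": "urn:ogc:def:property:discharge",   # assumed property id
    "temporalFilter": "om:phenomenonTime,2011-01-01/2011-01-31",
}

response = requests.get(SOS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
# The payload is an O&M (Observations & Measurements) XML document that a
# client would parse into time-series values.
print(response.text[:500])
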
3

CURARE: curating and managing big data collections on the cloud

Kemp, Gavin 26 September 2018 (has links)
The emergence of new platforms for decentralized data creation, such as sensor and mobile platforms, and the increasing availability of open data on the Web add to the growing number of data sources inside organizations and bring unprecedented volumes of Big Data to be explored. The notion of data curation has emerged to refer to the maintenance of data collections and the preparation and integration of datasets, combining them to perform analytics.
Curation tasks include extracting explicit and implicit metadata, and matching and enriching semantic metadata to improve data quality. Next-generation data management engines should promote techniques with a new philosophy to cope with the deluge of data: they should help users understand the content of data collections and provide guidance for exploring the data. A scientist can explore data collections step by step and stop when content and quality reach a satisfactory level. Our work adopts this philosophy; its main contribution is a data-collection curation approach and exploration environment named CURARE. CURARE is a service-based system for curating and exploring Big Data with respect to variety and variability. It implements a data collection model that we propose for representing the structural content and statistical metadata of data collections, organized under the concept of a view. A view is a data structure that provides an aggregated perspective of the content of a data collection and its associated releases. CURARE provides tools for computing and extracting views using data analytics methods, as well as functions for exploring (querying) metadata. Exploiting Big Data requires data analysts to make a substantial number of decisions about how best to store, share, and process data collections to obtain the maximum benefit and knowledge from them. Instead of exploring data collections manually, CURARE provides tools, integrated in an environment, that assist data analysts in determining which collections are best suited to a given analytics objective. We implemented CURARE and explain how to deploy it on the cloud using data science services on top of which the CURARE services are plugged. We conducted experiments measuring the cost of computing views over Grand Lyon and Twitter datasets, providing insight into the interest of our data curation approach and environment.
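
As a rough sketch of the view concept described above (my own illustration, not the thesis implementation), the code below derives structural metadata (attribute names and types) and statistical metadata (counts, nulls, min/max/mean) from one release of a data collection; the chosen attribute set, statistics, and dataset name are assumptions.

# Illustrative sketch of a CURARE-style "view": an aggregated summary of one
# release of a data collection, combining structural and statistical metadata.
# The statistics and record layout below are assumptions, not the thesis model.
from dataclasses import dataclass, field
from statistics import mean
from typing import Any, Dict, List, Optional

@dataclass
class AttributeStats:
    dtype: str
    null_count: int
    minimum: Any = None
    maximum: Any = None
    average: Optional[float] = None

@dataclass
class View:
    collection: str
    release: str
    record_count: int
    attributes: Dict[str, AttributeStats] = field(default_factory=dict)

def build_view(collection: str, release: str, records: List[dict]) -> View:
    """Compute structural (attribute names, types) and statistical metadata."""
    view = View(collection, release, record_count=len(records))
    keys = {k for record in records for k in record}
    for key in keys:
        values = [record.get(key) for record in records]
        present = [v for v in values if v is not None]
        stats = AttributeStats(
            dtype=type(present[0]).__name__ if present else "unknown",
            null_count=len(values) - len(present),
        )
        numeric = [v for v in present if isinstance(v, (int, float))]
        if numeric:
            stats.minimum, stats.maximum = min(numeric), max(numeric)
            stats.average = mean(numeric)
        view.attributes[key] = stats
    return view

# Example: a tiny, made-up "release" of a sensor data collection.
release_1 = [{"station": "A", "pm10": 21.5}, {"station": "B", "pm10": None}]
print(build_view("grand_lyon_air_quality", "2018-09", release_1))
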
