About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

A Software Architecture for Client-Server Telemetry Data Analysis

Brockett, Douglas M.; Aramaki, Nancy J. October 1994
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / An increasing need among telemetry data analysts for new mechanisms for efficient access to high-speed data in distributed environments has led BBN to develop a new architecture for data analysis. The data sets of concern can be from either real-time or post-test sources. This architecture consists of an expandable suite of tools based upon a data distribution software "backbone" which allows the interchange of high volume data streams among server processes and client workstations. One benefit of this architecture is that it allows one to assemble software systems from a set of off-the-shelf, interoperable software modules. This modularity and interoperability allows these systems to be configurable and customizable, while requiring little applications programming by the system integrator.
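The record stops at the abstract; as a very rough sketch (not BBN's actual design) of the publish/subscribe style of data-distribution "backbone" the abstract describes, the following Python snippet routes named telemetry streams from server processes to subscribed client tools. All class, stream, and field names are hypothetical.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class DataBackbone:
    """Toy data-distribution backbone: routes named telemetry streams
    from server processes to any number of subscribed client tools."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, stream: str, handler: Callable[[dict], None]) -> None:
        # A client workstation registers interest in one stream.
        self._subscribers[stream].append(handler)

    def publish(self, stream: str, sample: dict) -> None:
        # A server process pushes one sample; every subscriber receives it.
        for handler in self._subscribers[stream]:
            handler(sample)

if __name__ == "__main__":
    bus = DataBackbone()
    bus.subscribe("imu", lambda s: print("plot tool got", s))
    bus.subscribe("imu", lambda s: print("archiver got", s))
    # A simulated real-time or post-test server feeding the backbone:
    bus.publish("imu", {"t": 0.01, "accel_x": 9.81})
```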
2

Understanding and Addressing Collaboration Challenges for the Effective Use of Multi-User CAD

French, David James 01 March 2016
Multi-user computer-aided design (CAD) is an emerging technology that promises to facilitate collaboration, enhance product quality, and reduce product development lead times by allowing multiple engineers to work on the same design at the same time. The BYU site of the NSF Center for e-Design has developed advanced multi-user CAD prototypes that have demonstrated the feasibility and value of this technology. Despite the possibilities that this software opens up for enhanced collaboration, it also introduces a new variety of challenges and opportunities to understand and address. For multi-user CAD to be used effectively in a modern engineering environment, it is necessary to understand and address both human and technical collaboration challenges. The purpose of this dissertation is to understand and address these challenges. Two studies were performed to better understand the human side of engineering collaboration: (1) engineers from multiple companies were interviewed to assess the collaboration challenges they experience, and (2) players of the multi-player game Minecraft were surveyed and studied to understand how a multi-user environment affects design collaboration. Methods were also developed to address two important technical challenges in multi-user CAD: (1) detecting undo conflicts, and (2) administering data access. This research addresses some of the important human and technical collaboration challenges in multi-user CAD. It enhances our understanding of collaboration challenges in engineering industry and how multi-user CAD will help address some of those challenges. It also enhances our understanding of how a multi-user design environment will affect design collaboration. The method developed for detecting conflicts that occur during local undo in multi-user CAD can be used to block conflicts from occurring and to provide users with information about the cause of a conflict so they can collaborate to resolve it. The methods developed for administering data access in multi-user CAD will help protect against unauthorized access to sensitive data.
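The dissertation's methods are only summarized here; as a loose illustration of what detecting a local-undo conflict can involve, the sketch below flags later features in a shared history that directly reference the feature being undone. All class and field names are hypothetical, and only direct (not transitive) dependencies are checked.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Feature:
    feature_id: str
    author: str
    references: Set[str] = field(default_factory=set)  # ids of parent features

def undo_conflicts(history: List[Feature], undone_id: str) -> List[Feature]:
    """Return features added after `undone_id` that directly depend on it.

    In a multi-user session, a non-empty result means a local undo would
    break someone else's downstream work and should be blocked or negotiated.
    A real implementation would also follow transitive references.
    """
    seen = False
    conflicts = []
    for feat in history:
        if feat.feature_id == undone_id:
            seen = True
            continue
        if seen and undone_id in feat.references:
            conflicts.append(feat)
    return conflicts

history = [
    Feature("sketch1", "alice"),
    Feature("extrude1", "bob", {"sketch1"}),
    Feature("fillet1", "alice", {"extrude1"}),
]
print([f.feature_id for f in undo_conflicts(history, "sketch1")])  # ['extrude1']
```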
3

Practical transparent persistence

Ibrahim, Ali Hussein, 1980- 23 March 2011
Many enterprise applications persist data beyond their lifetimes, usually in a database management system. Orthogonal persistence provides a clean programming model for communicating with databases. A program using orthogonal persistence operates over persistent and non-persistent data uniformly. However, a straightforward implementation of orthogonal persistence results in a large number of small queries, each of which incurs a large overhead when accessing a remote database. In addition, the program cannot take advantage of a database's query optimizations for large and complex queries. Instead, most programs compose smaller queries into a single large query explicitly and send the query to the database through a command-level interface. These explicit queries compromise the modularity of programs because they do not compose well and they contain information about the program's future data access patterns. Consequently, programs with explicit queries are harder to maintain and reason about. In this thesis, we first define transparent persistence, a relaxation of orthogonal persistence. We show how transparent persistence in current tools can be made more practical by developing AutoFetch. The key idea in AutoFetch is to dynamically observe a program's data access patterns and use that information to reduce the number of queries. While AutoFetch is constrained by existing Java technology and tools, Remote Batch Invocation (RBI) adds the batch statement to the Java language. The batch statement is a general purpose mechanism for optimizing distributed communication using batching. RBI-DB specializes the ideas in RBI for databases. Both of these ideas help bridge the performance gap between orthogonally persistent systems and traditional database interfaces.
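The thesis's AutoFetch and RBI are Java technologies and are not reproduced here; the following Python/sqlite3 sketch only illustrates the underlying batching idea: replacing many small per-object queries with one larger query the database can optimize. Table and column names are made up.

```python
import sqlite3

# Toy schema: contrast one query per object (the "many small queries" pattern
# of naive orthogonal persistence) with a single batched query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Ada", "R&D"), (2, "Grace", "QA"), (3, "Alan", "R&D")])

wanted_ids = [1, 2, 3]

# Naive style: one round trip per object.
naive = [conn.execute("SELECT name FROM employee WHERE id = ?", (i,)).fetchone()
         for i in wanted_ids]

# Batched style: one larger query the database can plan and optimize.
placeholders = ",".join("?" * len(wanted_ids))
batched = conn.execute(
    f"SELECT id, name FROM employee WHERE id IN ({placeholders})", wanted_ids
).fetchall()

print(naive, batched)
```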
4

PERFORMANCE EVALUATION AND OPTIMIZATION OF THE UNSTRUCTURED CFD CODE UNCLE

Gupta, Saurabh 01 January 2006
Numerous advancements in the field of computational sciences have made CFD a viable solution to modern fluid dynamics problems. Progress in computer performance allows us to solve a complex flow field in practical CPU time. Commodity clusters are also gaining popularity as a computational research platform for various CFD communities. This research focuses on evaluating and enhancing the performance of an in-house, unstructured, 3D CFD code on modern commodity clusters. The fundamental idea is to tune the code to optimize the cache behavior of the nodes of commodity clusters and thereby achieve enhanced code performance. Accordingly, this work presents a discussion of the various available techniques for data access optimization and a detailed description of those which yielded improved code performance. These techniques were tested on various steady, unsteady, laminar, and turbulent test cases, and the results are presented. The critical hardware parameters which influenced the code performance were identified. A detailed study investigating the effect of these parameters on the code performance was conducted and the results are presented. The successful single-node improvements were also tested efficiently on a parallel platform. The modified version of the code was also ported to different hardware architectures with successful results. Loop blocking is established as a predictor of code performance.
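The UNCLE code itself is not shown in the record; as a generic illustration of loop blocking (cache blocking), the data-access optimization the abstract highlights, the sketch below transposes a matrix tile by tile so each block stays cache-resident. The kernel and block size are illustrative only, not taken from UNCLE.

```python
import numpy as np

def transpose_blocked(a: np.ndarray, block: int = 64) -> np.ndarray:
    """Transpose `a` one block at a time so each tile stays cache-resident."""
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            # Work on a small tile that fits in cache before moving on.
            tile = a[i0:i0 + block, j0:j0 + block]
            out[j0:j0 + block, i0:i0 + block] = tile.T
    return out

a = np.arange(512 * 512, dtype=np.float64).reshape(512, 512)
assert np.array_equal(transpose_blocked(a), a.T)
```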
5

An APIfication Approach to Facilitate the Access and Reuse of Open Data

González Mora, César 10 September 2021
Nowadays, there is a tendency to publish data on the Web, due to the benefits it brings to society and to new legislation that encourages the opening of data. These collections of open data, also known as datasets, are typically published in open data portals by governments and institutions around the world in order to make it open -- available on the Web in a free and reusable manner. The common behaviour is that publishers expose their data as individual tabular datasets. Open data is considered highly valuable because promoting the use of public information produces transparency, innovation and other social, political and economic benefits. This importance is also considerable in situational scenarios, where a small group of consumers (developers or data scientists) with specific needs require thematic data for a short life cycle. Different mechanisms exist so that these data consumers can easily assess whether the data is adequate for their purpose. For example, SPARQL endpoints have become very useful for the consumption of open data, and particularly of Linked Open Data (LOD). Moreover, in order to access open data in a straightforward manner, Web Application Programming Interfaces (APIs) are also a highly recommended feature of open data portals. However, accessing this open data is a difficult task, since current open data platforms do not generally provide suitable strategies to access their data. On the one hand, accessing open data through SPARQL endpoints is difficult because it requires knowledge of different technologies, which is challenging especially for novice developers. Moreover, LOD is not usually available, since the most used formats in government open data portals are tabular. On the other hand, although providing Web APIs would make it easy for developers to access and reuse open data from open data portals' catalogs, there is a lack of suitable Web APIs in open data portals. In most cases, the currently available APIs only allow access to a catalog's metadata or the download of entire data resources (i.e. coarse-grained access to data), hampering the reuse of data. In addition, as open data is commonly published individually, without considering potential relationships with other datasets, reusing several open datasets together is not a trivial task, requiring mechanisms that allow data consumers to integrate and access tabular open data published on the Web. Therefore, open data is not being used to its full potential because it is not easily accessible. As access to open data is thus still limited for end users, particularly those without programming skills, we propose a model-based approach to automatically generate Web APIs from open data. This APIfication approach takes into account access to multiple integrated tabular datasets and the consumption of data in situational scenarios. Firstly, we focus on data that can be integrated by means of join and union operations. Then, we coin the term disposable Web APIs as an alternative mechanism for the consumption of open data in situational scenarios. These disposable Web APIs are created on the fly to be used temporarily by a user to consume specific open data. Accordingly, the main objective is to provide suitable mechanisms to easily access and reuse open data, on the fly and in an integrated manner, solving the problem of difficult access through SPARQL endpoints for most data consumers and the lack of suitable Web APIs with easy access to open data.
With this approach, we address both open data publishers and consumers: publishers are able to include a Web API with their data, and data consumers or reusers benefit in those cases where a Web API pointing to the open data is missing. The results of the experiments conducted led us to conclude that users consider our generated Web APIs easy to use and find that they provide the desired open data, even when it comes from different datasets and especially in situational scenarios. / This thesis was funded by the Universidad de Alicante through a predoctoral training contract, and by the Generalitat Valenciana through a grant for the recruitment of predoctoral research staff (ACIF2019).
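The thesis's generator is not included in the record; the following Flask/pandas sketch only suggests what "APIfying" a single tabular dataset can look like: a CSV file exposed as a small read-only Web API whose rows can be filtered by column values. The file name, columns, and route are hypothetical.

```python
import pandas as pd
from flask import Flask, Response, request

app = Flask(__name__)
dataset = pd.read_csv("air_quality.csv")  # hypothetical file, e.g. columns: station, date, no2

@app.route("/api/records")
def records():
    result = dataset
    # Any query parameter that matches a column name becomes a filter,
    # giving row-level access instead of a whole-file download.
    for column, value in request.args.items():
        if column in result.columns:
            result = result[result[column].astype(str) == value]
    return Response(result.to_json(orient="records"), mimetype="application/json")

if __name__ == "__main__":
    app.run(port=5000)  # e.g. GET /api/records?station=A1
```

A disposable Web API in the sense described above would generate endpoints of this kind on the fly, possibly over a join or union of several datasets, and discard them once the situational need has passed.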
6

Komunikace OPC serverů se systémem MES (COMES) / Communication of OPC servers with the MES system (COMES)

Hromek, Jiří January 2013
This master's thesis deals with the use of the CCI module of the COMES MES system, made by the company COMPAS, as an OPC client. It describes a data transfer architecture based on an OPC server and an OPC client, together with the relevant OPC specifications and standards. It further presents an analysis of OPC servers from different manufacturers. The output of the thesis is a concept and a testing methodology for communication between the CCI module in OPC client mode and OPC servers from different manufacturers.
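The thesis concerns the COMES CCI module acting as an OPC client against classic OPC servers; no code is given in the record. As a loosely related sketch only, the snippet below reads one value from an OPC UA server with the python-opcua library (OPC UA rather than necessarily the OPC variant used in the thesis); the endpoint URL and node identifier are placeholders.

```python
# Loosely related sketch: read one value from an OPC UA server with python-opcua.
# The endpoint and node id are placeholders, not values from the thesis.
from opcua import Client

client = Client("opc.tcp://localhost:4840/")  # placeholder endpoint
client.connect()
try:
    node = client.get_node("ns=2;i=2")        # placeholder node id
    print("current value:", node.get_value())
finally:
    client.disconnect()
```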
7

Regional Water Quality Data Viewer Tool: An Open-Source to Support Research Data Access

Dolder, Danisa 07 June 2021
Water quality data collection, storage, and access are difficult tasks, and significant work has gone into methods to store and disseminate these data. We present a tool that disseminates research data in a simple manner, one that does not replace but extends and leverages existing systems. In the United States, the federal government maintains two systems to fill that role for hydrological data: the U.S. Geological Survey (USGS) National Water Information System (NWIS) and the U.S. Environmental Protection Agency (EPA) Storage and Retrieval System (STORET), since superseded by the Water Quality Portal (WQP). The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has developed the Hydrologic Information System (HIS) to standardize search and discovery of these data as well as other observational time series datasets. Additionally, CUAHSI developed and maintains HydroShare.org as a web portal for researchers to store and share hydrology data in a variety of formats, including spatial geographic information system data. We present the Tethys Platform based Water Quality Data Viewer (WQDV) web application that uses these systems to provide researchers and local monitoring organizations with a simple method to archive, view, analyze, and distribute water quality data. WQDV provides an archive for non-official or preliminary research data and access to those data that have been collected but need to be distributed prior to review or inclusion in the state database. WQDV can also accept subsets of data downloaded from other sources, such as the EPA WQP. WQDV helps users understand what local data are available and how they relate to the data in larger databases. WQDV presents data in spatial (maps) and temporal (time series graphs) forms to help users analyze and potentially screen the data sources before export for additional analysis. WQDV provides a convenient method for interim data to be widely disseminated and easily accessible in the context of a subset of official data. We present WQDV using a case study of data from Utah Lake, Utah, United States of America.
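WQDV itself is a Tethys Platform web application and is not reproduced here; the pandas/matplotlib sketch below only hints at the kind of screening it supports: load a downloaded subset of water-quality results, filter one site and parameter, and plot the time series. File and column names are hypothetical.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of preliminary water-quality results.
data = pd.read_csv("utah_lake_wq.csv", parse_dates=["sample_date"])
subset = data[(data["site_id"] == "UL-01") &
              (data["parameter"] == "Total Phosphorus")].sort_values("sample_date")

# Quick temporal screening of one site/parameter before further analysis.
plt.plot(subset["sample_date"], subset["value"], marker="o")
plt.xlabel("Sample date")
plt.ylabel("Total Phosphorus (mg/L)")
plt.title("Preliminary data screening, site UL-01")
plt.tight_layout()
plt.savefig("ul01_tp.png")
```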
8

ENABLING MULTI-PARTY COLLABORATIVE DATA ACCESS

Athamnah, Malek January 2018
Cloud computing has brought availability of services at unprecedented scales, but data accessibility considerations become more complex due to the involvement of multiple parties in providing the infrastructure. In this thesis, we discuss the problem of enabling cooperative data access in a multi-cloud environment where the data is owned and managed by multiple enterprises. We consider a multi-party collaboration scheme whereby a set of parties collectively decide accessibility to data from individual parties using different data models, such as relational databases and graph databases. In order to implement desired business services, parties need to share a selected portion of information with one another. We consider a model with a set of authorization rules over the joins of basic relations, where such rules are defined by the cooperating parties and the accessible information is constrained by them. Specifically, the following critical issues were examined. First, we combine rule enforcement and query planning and devise an algorithm which simultaneously checks the enforceability of each rule and generates a minimum-cost execution plan, using a cost metric, whenever enforcement is possible. We also consider other forms of limiting access to the shared data using safety properties and selection conditions, and propose algorithms for both forms to remove any conflicts or violations between the limited accesses and model queries. We further use graph databases with our authorization rules and query planning model to conduct similarity search between tuples, representing the relational database tuples as a graph with weighted edges, which enables queries involving "similarity" across the tuples; here we propose an algorithm that exploits the correlations between attributes to create virtual attributes that capture much of the data variance and enhance the speed at which similarity search occurs. Finally, we propose a framework for defining test functionalities, their composition, and their access control, and discuss an algorithm to determine the realization of a given test via valid compositions of individual functionalities in a way that minimizes the number of parties involved. The research significance resides in solving real-world issues that arise in using cloud services for enterprises. After extensive evaluations, the results revealed that the collaborative data access model improves security during cooperative data processes; that systematically and efficiently resolving access-rule conflicts minimizes possible data leakage; and that a systematic approach to control-failure diagnosis helps reduce troubleshooting times, all of which improves availability and resiliency. The study contributes to the knowledge, literature, and practice. This research opens up the space for further studies in various aspects of secure data cooperation in large-scale cyber and cyber-physical infrastructures.
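The thesis's algorithms are only summarized here; the sketch below is a highly simplified illustration of checking whether a requested query is covered by the parties' authorization rules, where each rule authorizes a set of attributes over a join of relations. The rule format and example data are hypothetical.

```python
from dataclasses import dataclass
from typing import FrozenSet, List, Set

@dataclass(frozen=True)
class Rule:
    party: str
    join_path: FrozenSet[str]   # relations that may be joined together
    attributes: FrozenSet[str]  # attributes that may be released

def authorized(rules: List[Rule], query_relations: Set[str], query_attrs: Set[str]) -> bool:
    # A query is allowed only if some rule covers both its join path and
    # all of the attributes it asks for.
    return any(query_relations <= rule.join_path and query_attrs <= rule.attributes
               for rule in rules)

rules = [
    Rule("shipper", frozenset({"orders", "shipments"}), frozenset({"order_id", "ship_date"})),
    Rule("vendor",  frozenset({"orders"}),              frozenset({"order_id", "total"})),
]
print(authorized(rules, {"orders", "shipments"}, {"order_id", "ship_date"}))  # True
print(authorized(rules, {"orders", "shipments"}, {"total"}))                  # False
```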
9

Multi-utilisation de données complexes et hétérogènes : application au domaine du PLM pour l’imagerie biomédicale / Multi-use of complex and heterogenous data : application in the domain of PLM for biomedical imaging

Pham, Cong Cuong 15 June 2017
The emergence of information and communication technologies (ICT) in the early 1990s, and especially the Internet, made it easy to produce data and disseminate it to the rest of the world. The growth of databases, the development of application tools, and the reduction of storage costs have led to a near-exponential increase in the quantity of data within the enterprise. The larger the data, the more interrelationships there are among the data, and the large number of correlations (visible or hidden) between data makes the data more intertwined and complex. The data are also more heterogeneous, since they can come from many sources and exist in many formats (text, image, audio, video, etc.) or at different degrees of structuring (structured, semi-structured, unstructured). Today's enterprise information systems therefore contain data that are more massive, complex and heterogeneous. Increasing complexity, globalization and collaborative work mean that an industrial project (product design) requires the participation and collaboration of actors from several domains and workplaces. To ensure data quality and to avoid redundancies and malfunctions in data flows, all actors must work on a shared common repository. In this multi-use data environment, each user introduces their own point of view when adding new data and technical information. The data may either have different denominations or lack verifiable provenance. Consequently, these data are difficult for other actors to interpret and access, and they remain unexploited, or not exploited to their full potential, for sharing and/or reuse. Data access (or data querying) is, by definition, the process of extracting information from a database using queries in order to answer a specific question. Extracting information is an indispensable function for any information system; however, it is never easy, and it always represents a major bottleneck for organizations (Soylu et al. 2013). In an environment of complex, heterogeneous and multi-use data, providing all users with easy and simple access to data becomes more difficult for two reasons: (1) Lack of technical skills. To formulate a complex (conjunctive) query, the user must know the structure of the data, that is, how the data are organized and stored in the database. When the data are large and complex, it is not easy to have a thorough understanding of all the dependencies and interrelationships between data, even for information system technicians. Moreover, this understanding is not necessarily linked to domain knowledge and know-how, and it is therefore very rare that end users possess sufficient skills. (2) Different user perspectives. In a multi-use environment, each user introduces their own point of view when adding new data and technical information. Data may be named in very different ways, and data provenance is not sufficiently recorded; consequently, the data become difficult for other actors to interpret and access, since those actors lack a sufficient understanding of the data semantics. The thesis work presented in this manuscript aims to improve the multi-use of complex and heterogeneous data by expert business actors by providing them with semantic and visual access to the data. We find that, although the initial design of a database takes the logic of the domain into account (using the entity-association model, for example), it is common practice to modify this design to meet specific technical needs. As a result, the final design often diverges from the original conceptual structure, and there is a clear distinction between the technical knowledge needed to extract data and the knowledge that expert actors have in order to interpret, process and produce data (Soylu et al. 2013). Based on bibliographical studies of data management tools, knowledge representation, visualization techniques and Semantic Web technologies (Berners-Lee et al. 2001), and in order to provide easy data access to different expert actors, we propose to use a comprehensive and declarative representation of the data that is semantic and conceptual and that integrates domain knowledge close to the expert actors.
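The thesis's system is not shown in the record; the small rdflib sketch below only illustrates the kind of conjunctive (SPARQL) query the abstract says end users struggle to write by hand, over a toy semantic graph. The namespace, properties, and data are hypothetical.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/plm#")
g = Graph()
# Toy PLM/imaging facts: a device produced an image, with modality and study.
g.add((EX.scanner1, EX.producedImage, EX.image42))
g.add((EX.image42, EX.modality, Literal("MRI")))
g.add((EX.image42, EX.partOfStudy, EX.study7))

# "Which devices produced MRI images in study 7?" as a conjunctive query:
# three joined triple patterns that a non-technical user would rarely write.
q = """
PREFIX ex: <http://example.org/plm#>
SELECT ?device WHERE {
    ?device ex:producedImage ?img .
    ?img ex:modality "MRI" .
    ?img ex:partOfStudy ex:study7 .
}
"""
for row in g.query(q):
    print(row.device)
```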
10

Answering Object Queries over Knowledge Bases with Expressive Underlying Description Logics

Wu, Jiewen January 2013
Many information sources can be viewed as collections of objects and descriptions about objects. The relationship between objects is often characterized by a set of constraints that semantically encode background knowledge of some domain. The most straightforward and fundamental way to access information in these repositories is to search for objects that satisfy certain selection criteria. This work considers a description logics (DL) based representation of such information sources and object queries, which allows for automated reasoning over the constraints accompanying objects. Formally, a knowledge base K=(T, A) captures constraints in the terminology (a TBox) T, and objects with their descriptions in the assertions (an ABox) A, using some DL dialect L. In such a setting, object descriptions are L-concepts and object identifiers correspond to individual names occurring in K. Correspondingly, object queries are the well known problem of instance retrieval in the underlying DL knowledge base K, which returns the identifiers of qualifying objects. This work generalizes instance retrieval over knowledge bases to provide users with answers in which both identifiers and descriptions of qualifying objects are given. The proposed query paradigm, called assertion retrieval, is favoured over instance retrieval since it provides more informative answers to users. A more compelling reason is related to performance: assertion retrieval enables a transfer of basic relational database techniques, such as caching and query rewriting, in the context of an assertion retrieval algebra. The main contributions of this work are two-fold: one concerns optimizing the fundamental reasoning task that underlies assertion retrieval, namely, instance checking, and the other establishes a query compilation framework based on the assertion retrieval algebra. The former is necessary because an assertion retrieval query can entail a large volume of instance checking requests in the form of K|= a:C, where "a" is an individual name and "C" is a L-concept. This work thus proposes a novel absorption technique, ABox absorption, to improve instance checking. ABox absorption handles knowledge bases that have an expressive underlying dialect L, for instance, that requires disjunctive knowledge. It works particularly well when knowledge bases contain a large number of concrete domain concepts for object descriptions. This work further presents a query compilation framework based on the assertion retrieval algebra to make assertion retrieval more practical. In the framework, a suite of rewriting rules is provided to generate a variety of query plans, with a focus on plans that avoid reasoning w.r.t. the background knowledge bases when sufficient cached results of earlier requests exist. ABox absorption and the query compilation framework have been implemented in a prototypical system, dubbed CARE Assertion Retrieval Engine (CARE). CARE also defines a simple yet effective cost model to search for the best plan generated by query rewriting. Empirical studies of CARE have shown that the proposed techniques in this work make assertion retrieval a practical application over a variety of domains.
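CARE itself is not reproduced in the record; the toy sketch below only illustrates the caching idea behind assertion retrieval: memoizing instance checks K |= a:C so that later queries reuse earlier results instead of re-invoking the reasoner. The "reasoner" here is a stand-in lookup, not a DL system.

```python
from functools import lru_cache

# Stand-in ABox: pairs (individual, concept) that "hold" in the toy knowledge base.
ABOX = {("alice", "Employee"), ("alice", "Manager"), ("bob", "Employee")}

@lru_cache(maxsize=None)
def instance_check(individual: str, concept: str) -> bool:
    """Stand-in for an expensive K |= a:C entailment check."""
    return (individual, concept) in ABOX

def assertion_retrieval(individuals, concepts):
    """Return each qualifying individual together with the concepts it satisfies,
    i.e. an identifier plus (a fragment of) its description."""
    return {a: [c for c in concepts if instance_check(a, c)]
            for a in individuals
            if any(instance_check(a, c) for c in concepts)}

print(assertion_retrieval(["alice", "bob", "carol"], ["Manager", "Employee"]))
print(instance_check.cache_info())  # hits show checks reused across queries
```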
