31. Supporting Scientific Collaboration through Workflows and Provenance
Ellqvist, Tommy (January 2010)
Science is changing. Computers, fast communication, and new technologies have created new ways of conducting research. For instance, researchers from different disciplines are processing and analyzing scientific data that is increasing at an exponential rate. This kind of research requires that scientists have access to tools that can handle huge amounts of data, enable access to vast computational resources, and support collaboration among large teams of scientists. This thesis focuses on tools that support scientific collaboration. Workflows and provenance are two concepts that have proven useful here: workflows provide a formal specification of scientific experiments, and provenance offers a model for documenting data and process dependencies. Together, they enable tools that support collaboration through the whole scientific life-cycle, from the specification of experiments to the validation of results. However, existing models for workflows and provenance are often specific to particular tasks and tools, which makes it hard to analyze the history of data generated across several application areas by different tools. Moreover, workflow design is a time-consuming process that often requires extensive knowledge of the tools involved and collaboration among researchers with different expertise. This thesis addresses these problems. Our first contribution is a study of the differences between two approaches to interoperability between provenance models: direct data conversion and mediation. We perform a case study in which we integrate three different provenance models using the mediation approach, and show its advantages over data conversion. Our second contribution supports workflow design by allowing multiple users to design workflows concurrently: current workflow tools lack the ability for users to work simultaneously on the same workflow, and we propose a method that uses the provenance of workflow evolution to enable real-time collaborative design. Our third contribution supports workflow design by reusing existing workflows: workflow collections for reuse are available, but more efficient methods for generating summaries of search results are still needed, so we explore new summarization strategies that consider the workflow structure.
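
To make the mediation approach concrete, here is a minimal sketch under loud assumptions: the record formats, field names, and mediator schema are invented for illustration and are not the thesis's actual provenance models. Each tool-specific format is mapped once into a shared mediator schema, and lineage queries are written only against that schema.

```python
from dataclasses import dataclass

# Shared mediator schema: queries are written once, against this model only.
@dataclass
class MediatedRecord:
    artifact: str    # the data item produced
    process: str     # the step that produced it
    inputs: tuple    # artifacts the step consumed

# Mappers from two (invented) tool-specific provenance formats.
def from_tool_a(rec: dict) -> MediatedRecord:
    # Tool A logs {"output": ..., "step": ..., "deps": [...]}.
    return MediatedRecord(rec["output"], rec["step"], tuple(rec["deps"]))

def from_tool_b(rec: dict) -> MediatedRecord:
    # Tool B logs {"file": ..., "activity": ..., "used": [...]}.
    return MediatedRecord(rec["file"], rec["activity"], tuple(rec["used"]))

def lineage(records, artifact):
    """Trace dependencies across records that came from different tools."""
    by_artifact = {r.artifact: r for r in records}
    seen, stack = [], [artifact]
    while stack:
        a = stack.pop()
        if a in by_artifact and a not in seen:
            seen.append(a)
            stack.extend(by_artifact[a].inputs)
    return seen

records = [
    from_tool_a({"output": "plot.png", "step": "render", "deps": ["stats.csv"]}),
    from_tool_b({"file": "stats.csv", "activity": "aggregate", "used": ["raw.dat"]}),
]
print(lineage(records, "plot.png"))  # ['plot.png', 'stats.csv']
```

The payoff over direct conversion: integrating a new tool means writing one mapper into the mediator schema, rather than one converter for each existing model.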
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAADsElEQVR4nK2VTW9VVRSGn33OPgWpYLARbKWhQlCHTogoSkjEkQwclEQcNJEwlfgD/AM6NBo1xjhx5LyJ0cYEDHGkJqhtBGKUpm3SFii3vb2956wPB/t+9raEgSs52fuus89613rftdcNH8/c9q9++oe/Vzb5P+3McyNcfm2CcPj9af9w6gwjTwzvethx3Bx3x8xwd1wNM8dMcTNUHTfFLPnX6nVmZpeIYwf3cWD/PhbrvlPkblAzVFurKS6GmmGqqComaS+qmBoTI0Ncu3mXuGvWnrJ+ZSxweDgnkHf8ndVTdbiT3M7cQp2Z31dRTecHAfqydp4ejhwazh6Zezfnu98E1WIQwB3crEuJ2Y45PBTAQUVR9X4At66AppoEVO1Q8sgAOKJJjw6Am6OquDmvHskZ3R87gW+vlHz98zpmiqphkkRVbQtsfPTOC30lJKFbFTgp83bWh7Zx/uX1B6w3hI3NkkZTqEpBRDBRzG2AQHcwcYwEkOGkTERREbLQ/8HxJwuW7zdYrzfZ2iopy4qqEspKaDYravVm33k1R91Q69FA1VBRzFIVvXbx5AgXT44A8MWP81yfu0utIR2aVK3vfCnGrcUNxp8a7gKYKiLCvY2SUvo/aNtnM3e49ucK9S3p0aDdaT0UAVsKi2tVi6IWwNL9JvdqTdihaz79/l+u/rHMxmaJVMLkS2OoKKLWacdeE3IsSxctc2D5Qcl6vUlVVgNt+fkPPcFFmTw1xruvT7SCd7nuVhDQvECzJH90h0azRKoKFRkAmP5lKTWAGRdefoZL554FQNUxB92WvYeA5UN4PtSqwB2phKqsqMpBgAunRhFR3j49zuU3jnX8k6fHEQKXzh1jbmGDuYU6s4t1rt6socUeLLZHhYO2AHSHmzt19ihTZ48O8Hzl/AmunD/BjTvrvPfNX3hWsNpwJCvwYm+ngug4UilSCSq6k8YPtxDwfA+WRawIWFbgscDiULcCEaWqBFOlrLazurupOSHLqGnEKJAY8TwBEHumqUirAjNm52vEPPRV4p01XXMPAQhUBjcWm9QZwijwokgAeYHlHYA06KR1cT6ZvoV56pDUJQEjw0KeaMgj1hPEY4vz2A4eW0/e1qA7KtQdsxTYAG0H3iG4xyK1Y+xm7XmEPOJZDiENzLi2WZHngeOjj2Pe+sMg4GRYyLAsx7ME4FnsyTD9pr0PEc8zPGRAwKXBkYOPEd96cZRvf11g9MDe7e3R4Z4Q+vyEnn3P4t0XzK/W+ODN5/kPfRLewAJVEQ0AAAAASUVORK5CYII%3D" />

32. Dynamic constraint handling in mass customization systems: A database solution
Kåhlman, Johan; Spånberger, Josef (January 2020)
Purpose: The purpose of this study is to develop an architecture for a Mass Customization information system in which product customization restrictions are expressed dynamically through the database.
Method: The study evaluates an artifact built using Design Science Research; the evaluation combines a quantitative and a qualitative method.
Findings: Building on a literature review to establish a knowledge base, an artifact was created using React and Node.js for the web application, combined with a Neo4j graph database. The artifact allows products and their inherent restrictions to be added and modified dynamically, through constraints defined in their data. The web application uses this data to assemble and customize components, enforcing the constraints in real time without any modification to the application itself. The artifact can enforce all intended constraints, and it was considered a better overall solution for handling constraints than the solution currently used by Divid, a market-leading company in the use of Mass Customization systems with constraint handling in the context of ventilation systems.
Implications: The results indicate that graph database systems hold great promise in Mass Customization systems, specifically as a new way to handle constraints between components dynamically.
Limitations: The results from the expert panel reflect only the opinions of Divid and might not hold for other companies interested in this area. The artifact succeeded in its purpose of illustrating the concept of dynamic constraint handling, but it remains unclear whether the approach holds up in a professional context with more complex rules and heavy performance demands.
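
As a rough illustration of restrictions living in the data rather than in application code, the sketch below queries a Neo4j graph through the official Python driver. The Component label, the INCOMPATIBLE_WITH relationship, and the connection details are assumptions made for illustration, not the artifact's or Divid's actual schema.

```python
from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

def compatible_parts(tx, component_id):
    # The INCOMPATIBLE_WITH relationship encodes a restriction as data:
    # adding or changing a rule is a graph write, not an application change.
    query = (
        "MATCH (c:Component {id: $cid}) "
        "MATCH (p:Component) WHERE p.id <> $cid "
        "AND NOT (c)-[:INCOMPATIBLE_WITH]-(p) "
        "RETURN p.id AS id"
    )
    return [record["id"] for record in tx.run(query, cid=component_id)]

with driver.session() as session:
    parts = session.execute_read(compatible_parts, "duct-125")
    print(parts)
driver.close()
```

Because a restriction is just a relationship in the graph, adding or changing a rule is a data write; the query, and therefore the application, stays unchanged.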

33. Query Language Based Call Graph Processing
Dudka, Kamil (January 2009)
In this thesis, available tools for call graph generation, processing, and visualization are analyzed. Based on this analysis, a call-graph processing tool is designed. The tool is then implemented and tested on call graphs generated from various real-world programs, including the Linux kernel.
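
As a hint of the kind of questions such a tool answers, the sketch below runs two typical queries, transitive callees and transitive callers, over a toy call graph held in plain Python dictionaries; it illustrates the queries only and is not the thesis's query language.

```python
# A call graph as adjacency lists: caller -> functions it calls.
# (Illustrative data; a real graph would come from a generator run
# over the program's sources.)
call_graph = {
    "main":       ["parse_args", "run"],
    "run":        ["read_input", "process"],
    "process":    ["validate", "process"],  # recursion: process calls itself
    "read_input": ["validate"],
}

def transitive_callees(graph, start):
    """Everything reachable from `start` -- a typical call-graph query."""
    seen, stack = set(), [start]
    while stack:
        fn = stack.pop()
        for callee in graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                stack.append(callee)
    return seen

def callers_of(graph, target):
    """Inverse query: every function that may (directly or not) reach target."""
    return {fn for fn in graph if target in transitive_callees(graph, fn)}

print(transitive_callees(call_graph, "main"))
print(callers_of(call_graph, "validate"))  # {'main', 'run', 'process', 'read_input'}
```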

34. Data Build Tool (DBT) Jobs in Hopsworks
Chen, Zidi (January 2022)
Feature engineering at scale is critical and challenging in the machine learning pipeline. Modern data warehouses let data analysts do feature engineering by transforming, validating, and aggregating data in Structured Query Language (SQL). To support this work, Data Build Tool (DBT), an open-source tool, was proposed for building and orchestrating SQL pipelines. Hopsworks, an open-source scalable feature store, would like to add support for DBT so that data scientists can do feature engineering in Python, Spark, Flink, and SQL on a single platform. This project develops a concept for how to build this support and then implements it. The feasibility of the solution is checked with a sample DBT project. According to measurements, the working solution needs around 800 MB of space on the server and takes more time than executing DBT commands locally. However, it persistently stores the results of each execution in HopsFS, where they remain available to users. By adding this novel support for SQL through DBT, Hopsworks might be one of the most complete platforms for feature engineering to date.
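
A minimal sketch of the execute-and-persist flow measured above: run a dbt command in a server-side job, then copy dbt's run artifacts to durable storage. The project path and HopsFS mount point are hypothetical placeholders, and this is not the actual Hopsworks integration code.

```python
import shutil
import subprocess
import time
from pathlib import Path

DBT_PROJECT = Path("/srv/jobs/sample_dbt_project")    # assumed project location
PERSIST_DIR = Path("/hopsfs/Projects/demo/dbt_runs")  # hypothetical HopsFS mount

def run_dbt_job(command: str = "run") -> int:
    """Execute a dbt command server-side and persist its run artifacts."""
    result = subprocess.run(
        ["dbt", command, "--project-dir", str(DBT_PROJECT)],
        capture_output=True, text=True,
    )
    print(result.stdout)

    # dbt writes logs and artifacts (run_results.json, manifest.json)
    # under target/; copy them out so every execution stays inspectable.
    dest = PERSIST_DIR / time.strftime("run_%Y%m%d_%H%M%S")
    shutil.copytree(DBT_PROJECT / "target", dest, dirs_exist_ok=True)
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(run_dbt_job("run"))
```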

35. Implementation for a Coherent Keyword-Based XML Query Language
Potturi, Venkatakalyan (12 June 2007)
No description available.

36. A Hybrid Approach to Retrieving Web Documents and Semantic Web Data
Immaneni, Trivikram (18 January 2008)
No description available.

37. Application for data mining in manufacturing databases
Fang, Cheng-Hung (January 1996)
No description available.

38. Express query language and templates and rules: Two languages for advanced software system integrations
Huang, Lizhong (January 1999)
No description available.

39. Intelligent Data Layer: An approach to generating a data layer from a normalized database model
Buzo, Amir (January 2012)
The Model View Controller (MVC) software architecture is widespread and commonly used in application development, so generating the data layer from the database model can reduce cost and time. Research into current Object Relational Mapping (ORM) tools showed that generating tools such as Data Access Object (DAO) generators and Hibernate exist, but their usage causes problems such as inefficiency and slow performance, owing to the many database connections and the set-up time. Most of these tools try to solve specific problems rather than generate a complete data layer, which is an important component and the bottom layer of database-centered applications. The proposed solution is an engineering approach embodied in a tool named Generated Intelligent Data Layer (GIDL). The GIDL tool generates small models that make up the main data layer of the system according to the database model. The goal is to let software developers work only with objects, without deep knowledge of SQL; the tool handles transactions and commits, and constructs filter objects for filtering the database. GIDL reduces the number of connections and maintains a cache in which object lists are stored and modified. Compared with Hibernate under the same environment, the tool showed better performance in timing evaluations of the same functions. GIDL benefits software developers because it generates the entire data layer.
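
A condensed sketch of what one generated class could offer the developer: object-only access, a single reused connection with transactional writes, and a cached object list invalidated on writes. It uses sqlite3 from the Python standard library and is illustrative only; it is not GIDL's generated code.

```python
import sqlite3

class GeneratedTable:
    """What a GIDL-style generator might emit for one table: object
    access, one reused connection, and a simple list cache."""

    def __init__(self, conn: sqlite3.Connection, table: str, columns: list):
        self.conn, self.table, self.columns = conn, table, columns
        self._cache = None  # invalidated on every write

    def insert(self, **values):
        cols = ", ".join(values)
        marks = ", ".join("?" for _ in values)
        with self.conn:  # transaction: commits on success, rolls back on error
            self.conn.execute(
                f"INSERT INTO {self.table} ({cols}) VALUES ({marks})",
                tuple(values.values()),
            )
        self._cache = None

    def all(self):
        if self._cache is None:  # hit the database only when the cache is stale
            rows = self.conn.execute(f"SELECT {', '.join(self.columns)} "
                                     f"FROM {self.table}").fetchall()
            self._cache = [dict(zip(self.columns, r)) for r in rows]
        return self._cache

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
users = GeneratedTable(conn, "users", ["id", "name"])
users.insert(name="Amir")
print(users.all())  # [{'id': 1, 'name': 'Amir'}]
```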

40. Data Aggregation through Web Service Composition in Smart Camera Networks
Rajapaksage, Jayampathi S. (14 December 2010)
Distributed Smart Camera (DSC) networks are power-constrained, real-time, distributed embedded systems that perform computer vision using multiple cameras. Providing the data aggregation techniques that are critical for running complex image processing algorithms on DSCs is challenging because of the complexity of video and image data. Providing the highly desirable SQL APIs for sophisticated query processing in DSC networks is challenging for similar reasons. Research on DSCs to date has not addressed these two problems. In this thesis, we develop a novel SOA-based middleware framework on a DSC network that uses Distributed OSGi to expose DSC network services as web services. We also develop a novel web service composition scheme that aids data aggregation, and a SQL query interface for DSC networks that allows sophisticated query processing. We validate our service orchestration concept for data aggregation by providing a query primitive for face detection in a smart camera network.
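
A small sketch of the composition idea: a coordinator fans a request out to per-camera detection services and aggregates their responses, the kind of plan a SQL aggregate query over the network would be translated into. The service function and its JSON payload are stand-ins, not the thesis's Distributed OSGi interfaces.

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the per-camera web services; a real deployment would issue
# remote calls to endpoints exposed by each smart camera node.
def face_detection_service(camera_id: int) -> str:
    detections = [{"camera": camera_id, "faces": camera_id % 3}]
    return json.dumps(detections)

def aggregate_faces(camera_ids):
    """Compose the per-camera services and aggregate their results --
    roughly what a query like `SELECT SUM(faces) FROM cameras`
    would be translated into."""
    with ThreadPoolExecutor() as pool:
        responses = list(pool.map(face_detection_service, camera_ids))
    detections = [d for r in responses for d in json.loads(r)]
    return sum(d["faces"] for d in detections)

print(aggregate_faces(range(6)))  # 0+1+2+0+1+2 = 6
```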