21

Modelldriven arkitektur förbättrar hanteringen av problemet med import av data till ER-modeller / Model Driven Architecture improves managing the problem of migrating data to ER models

Freij, Urban January 2015 (has links)
In many situations it is desirable to import data from text files, Excel files and the like into a database. To do so, the data must at some stage be translated into an ER (Entity Relationship) model, i.e. a model describing the relevant parts of a database schema. What this translation looks like varies from case to case. In this thesis an application has been developed for importing data into an ER model from a modelling perspective, in line with Model Driven Architecture (MDA)™. The gain lies in using a metamodel that describes what the different models for transforming tabular data into an ER model may look like. The models in turn describe how the transformation is to be carried out, so several different models can be used without any changes to the source code. The metamodel describing the transformation has been visualised in a class diagram, which schematically describes the relationships between the tables that data is imported from and the ER model that the data is transferred to. The metamodel has been transformed into an XML schema, and the models to be used are written in XML files that conform to this schema, which enables them to be validated.
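To give a feel for the approach, the sketch below shows what one such transformation model might look like as an XML file conforming to a metamodel-derived schema. All element and attribute names are hypothetical and are not taken from the thesis.

```xml
<!-- Hypothetical transformation model: maps tabular columns onto ER entities.
     Element and attribute names are invented for illustration only. -->
<transformation source="customers.csv">
  <entity name="Customer">
    <attribute name="customerId" column="CUST_ID" key="true"/>
    <attribute name="name"       column="CUST_NAME"/>
  </entity>
  <entity name="Order">
    <attribute name="orderId" column="ORDER_ID" key="true"/>
    <relationship name="placedBy" target="Customer" column="CUST_ID"/>
  </entity>
</transformation>
```

A mapping of this kind can be added or changed without touching the application's source code, since the code only interprets models that validate against the metamodel's XML schema.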
22

La structuration sémantique des contenus des documents audiovisuels selon les points de vue de la production / Semantic structuring of the content of audiovisual documents according to production viewpoints

Bui Thi, Minh Phung 26 June 2003 (has links) (PDF)
In the context of significant advances in information technology and in the standards associated with video, this work proposes tools for flexible, semantically rich description of audiovisual content at the shot level — the PSDS (Production Shot Description Scheme) — guided by a semiotic approach and by the viewpoints of production professionals. We observe that audiovisual content has a threefold semantic dimension — technical semantics, the semantics of the narrative world, and semiotics — and that each level has its own ontological description. Semiotics complements the technical and thematic semantics of video content by explaining why the dynamic structures of the filmic text can produce these semantic interpretations. Drawing both on automatic media-analysis techniques and on existing models, terminologies, theories and discourses from the world of cinema, this approach can offer a natural and simple translation between the different semantic levels. Object-oriented, multi-viewpoint description of content involves identifying, in the ontological tree, the significant units of the domain and their characteristics. These units constitute the fundamental information for grasping the meaning of the narrative. They are represented by concepts that form a semantic network in which users can navigate in search of information; each concept is a node of the semantic network of the target domain. Represented using the MPEG-7 formalism and XML Schema, the knowledge integrated into the video shot description schemes can serve as a basis for building interactive image-editing environments. Video thus becomes an information stream whose data can be tagged, annotated, analysed and edited. The metadata analysed in this work, covering information from the three stages of production (pre-production, production and post-production), should allow applications to manage and manipulate video objects, as well as representations of their semantics, so that they can be reused in various access services such as content indexing, search, filtering, analysis and comprehension of film images.
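To illustrate the kind of target representation, the fragment below sketches an MPEG-7-style description of a single shot. It is a simplified, hypothetical example: it does not reproduce the PSDS defined in the thesis, and the annotation values are invented.

```xml
<!-- Simplified, hypothetical MPEG-7-style shot description (not the thesis's PSDS). -->
<Mpeg7 xmlns="urn:mpeg:mpeg7:schema:2001">
  <Description>
    <MultimediaContent>
      <Video>
        <TemporalDecomposition>
          <VideoSegment id="shot_012">
            <TextAnnotation>
              <FreeTextAnnotation>Close-up of the protagonist; slow zoom out.</FreeTextAnnotation>
            </TextAnnotation>
            <MediaTime>
              <MediaTimePoint>T00:05:12</MediaTimePoint>
              <MediaDuration>PT8S</MediaDuration>
            </MediaTime>
          </VideoSegment>
        </TemporalDecomposition>
      </Video>
    </MultimediaContent>
  </Description>
</Mpeg7>
```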
23

Aktualizace XML dat / Updating XML data

Mikuš, Tomáš January 2012 (has links)
Updating XML data is a very broad area that involves a number of difficult problems, from designing a language with sufficient expressive power to building an XML data repository able to apply the changes, and there are only a few ways to deal with them. From this perspective, this work is devoted solely to the XQuery language, or more precisely to its update extension, for which a candidate recommendation by the W3C was published only recently. The work is further restricted to XML data stored in an object-relational database, where the repository enforces the validity of documents against a schema described in XML Schema. This requirement, combined with the possibility of updating data in the repository, leads to partly contradictory demands. In this thesis a language based on XQuery is designed, the evaluation of that language's update queries on the store is designed and implemented, and the store itself is described and implemented in an object-relational database.
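The update extension in question is the W3C XQuery Update Facility, which adds updating expressions to XQuery. A minimal sketch, with the document name and its structure invented for illustration:

```xquery
(: Minimal XQuery Update Facility sketch; "catalogue.xml" and its structure
   are invented for illustration. :)
insert node <author>B. Novak</author>
  as last into doc("catalogue.xml")/catalogue/book[@id = "b42"],
replace value of node doc("catalogue.xml")/catalogue/book[@id = "b42"]/price
  with 249
```

In the setting of the thesis, such updates must additionally be applied so that the stored documents remain valid against their XML Schema.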
24

Inference of an XML Schema with the Knowledge of XML Operations

Mikula, Mário January 2012 (has links)
Recently, plenty of methods dealing with automatic inference of an XML schema have been developed; however, most of them use XML documents as their only input. In this thesis we focus on extending inference by incorporating XML operations, in particular XQuery queries. We discuss how XQuery queries can help to improve the inference process, and we propose an algorithm based on the chosen improvements, extending an existing key-discovery method, that can be integrated into methods inferring a so-called initial grammar. By implementing it, we created the first solution that infers an XML schema from XML documents together with XML operations.
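As a hypothetical illustration (not an example from the thesis) of how queries can inform inference, an equality join in a query suggests a key/foreign-key pair:

```xquery
(: Hypothetical query; document names and structure are invented.
   The join on @isbn hints that book/@isbn may be a key and that
   order/@isbn may be a foreign key referencing it. :)
for $o in doc("orders.xml")//order
return doc("catalogue.xml")//book[@isbn = $o/@isbn]
```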
25

XML schemų sudarymo ir normalizavimo metodika / Design and Normalization Methodology for XML Schema

Vyšniauskaitė, Ramutė 25 May 2005 (has links)
This work analyses the transition from UML class diagrams to XML Schema. The main problems are explored, such as the fact that UML does not include all the features required to describe an XML schema. The principles of XML schema normalization are also presented, together with the resemblances and differences between applying normal forms to XML documents and to relational databases.
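As a minimal sketch of one common mapping (the class and its attributes are invented, and this is not the thesis's own rule set), a UML class Customer with attributes name and email could become:

```xml
<!-- One possible mapping of a hypothetical UML class "Customer" to XML Schema. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="Customer">
    <xs:sequence>
      <xs:element name="name"  type="xs:string"/>
      <xs:element name="email" type="xs:string" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>
  <xs:element name="customer" type="Customer"/>
</xs:schema>
```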
26

Types for XML with Application to Xcerpt

Wilk, Artur January 2008 (has links)
XML data is often accompanied by type information, usually expressed in some schema language. Sometimes XML data can be related to ontologies defining classes of objects; such classes can also be interpreted as types. Type systems have proved to be extremely useful in programming languages, for instance for automatically discovering certain kinds of errors. This thesis deals with the XML query language Xcerpt, which originally had no underlying type system and no provision for taking advantage of existing type information. We provide a type system for Xcerpt; it makes type inference and checking of type correctness possible. The system is descriptive: the types associated with Xcerpt constructs are sets of data terms and approximate the semantics of the constructs. A formalism of Type Definitions is adapted to specify such sets. The formalism may be seen as a simplification and abstraction of XML schema languages. The type inference method, which is the core of this work, may be seen as abstract interpretation. A non-standard way of assuring termination of fixed-point computations is proposed, as standard approaches are too inefficient. The method is proved correct with respect to the formal semantics of Xcerpt. We also present a method for type checking of programs. A success of type checking implies that the program is correct with respect to its type specification: the program produces results of the specified type whenever it is applied to data of the given type. On the other hand, a failure of type checking suggests that the program may be incorrect. Under certain conditions (on the program and on the type specification), the program is actually incorrect whenever the proof attempt fails. A prototype implementation of the type system has been developed, and the usefulness of the approach is illustrated on example programs. In addition, the thesis outlines the possibility of employing semantic types (ontologies) in Xcerpt. Introducing ontology classes into Type Definitions makes it possible to discover some errors related to the semantics of the data queried by Xcerpt. We also extend Xcerpt with a mechanism for combining XML queries with ontology queries. The approach employs an existing Xcerpt engine and an ontology reasoner; no modifications are required.
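As a rough, hypothetical illustration of the flavour of such grammar-like Type Definitions (the notation is approximated here and is not quoted from the thesis), a type describing a set of book documents might be written along these lines:

```
Books  -> books[ Book* ]
Book   -> book[ Title, Author+ ]
Title  -> title[ Text ]
Author -> author[ Text ]
```

Each rule constrains the children allowed under a label, much as a content model does in an XML schema language.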
27

Adaptace XML dokumentů a integritní omezení v XML / XML Document Adaptation and Integrity Constraints in XML

Malý, Jakub January 2013 (has links)
This work examines XML data management and consistency -- more precisely, the problem of document adaptation and the use of integrity constraints. Changes in user requirements cause changes in the schemas used in a system, and changes in the schemas in turn make existing documents invalid. In this thesis, we introduce a formal framework for detecting the changes between two versions of a schema and for generating a transformation from the source to the target schema. Large-scale information systems depend on their integrity constraints being preserved and remaining valid. In this work, we show how OCL can be used with XML data to define constraints at the abstract level, and how such constraints can be automatically translated into XPath expressions and Schematron schemas and verified on XML documents.
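For a sense of the target formalism, the sketch below shows a Schematron pattern enforcing a simple constraint (an order's ship date must not precede its order date). The constraint and the element and attribute names are invented, not taken from the thesis.

```xml
<!-- Hypothetical constraint: shipDate must not precede orderDate. -->
<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron" queryBinding="xslt2">
  <sch:ns prefix="xs" uri="http://www.w3.org/2001/XMLSchema"/>
  <sch:pattern>
    <sch:rule context="order">
      <sch:assert test="xs:date(@shipDate) ge xs:date(@orderDate)">
        An order's ship date must not precede its order date.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>
```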
28

Sdílení dat mezi informačními systémy založené na ontologiích / Ontology-Based Data Sharing among Information Systems

Hák, Lukáš Unknown Date (has links)
This thesis describes ontology-based data sharing between information systems. The first chapter introduces the term ontology and the terminology used. The thesis then analyses the basic methods used, ontology languages, and briefly describes the Semantic Web. The third chapter surveys the tools and plugins used for working with ontologies. The remaining chapters describe the ontologies created for the car-selling domain, in particular ontologies covering cars, sellers and addresses. The thesis concludes by explaining the proposed tool for transferring existing XML advertisement records into the OWL language.
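As a hypothetical sketch (the class and property names are invented and do not reproduce the thesis's ontology), a fragment of such a car-selling ontology in OWL's RDF/XML syntax might look like this:

```xml
<!-- Hypothetical fragment of a car-selling ontology; all names are invented. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:owl="http://www.w3.org/2002/07/owl#"
         xml:base="http://example.org/carsale#">
  <owl:Class rdf:ID="Car"/>
  <owl:Class rdf:ID="Seller"/>
  <owl:ObjectProperty rdf:ID="soldBy">
    <rdfs:domain rdf:resource="#Car"/>
    <rdfs:range rdf:resource="#Seller"/>
  </owl:ObjectProperty>
  <owl:DatatypeProperty rdf:ID="manufacturer">
    <rdfs:domain rdf:resource="#Car"/>
    <rdfs:range rdf:resource="http://www.w3.org/2001/XMLSchema#string"/>
  </owl:DatatypeProperty>
</rdf:RDF>
```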
29

Vytváření OLAP modelů reportů na základě metadat / OLAP Reports Model Creating Based on Metadata

Franek, Zdenko January 2010 (has links)
An important part of a report creator's knowledge is knowledge of the database schema and of the database query language from which the data for the report are extracted. In reporting services for database systems and Business Intelligence systems, an initiative has arisen to separate the role of the database specialist from the role of the report maker. One of the solutions is to use a metadata layer between the database schema and the report; this layer is called the report model. Its use is currently unsupported in the reporting process, or supported only to a very limited extent. The aim of this thesis is to propose ways of using the report model in the process of building reports, with an emphasis on OLAP analysis.
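A report model is essentially a metadata layer that maps business-friendly names onto database objects. A hypothetical fragment (the format and all names are invented and not tied to any particular product) could look like this:

```xml
<!-- Hypothetical report-model fragment; format and names invented for illustration. -->
<reportModel name="Sales">
  <entity name="Customer" table="dbo.DimCustomer">
    <attribute name="Customer name" column="CUST_NAME" type="string"/>
    <attribute name="Country"       column="COUNTRY"   type="string"/>
  </entity>
  <measure name="Revenue" table="dbo.FactSales" column="AMOUNT" aggregate="sum"/>
</reportModel>
```

The report maker then works only with names such as "Customer name" and "Revenue", while the mapping to tables and columns stays with the database specialist.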
30

Mining XML Integrity Constraints

Fajt, Stanislav January 2011 (has links)
The most important integrity constraints in XML are primary keys and foreign keys. In general, keys are essential to understanding both the structure and the properties of data. They provide an instrument by which values from a given set of attributes uniquely identify tuples in a database. As a result, keys are important to the main database operations. Since XML has become the lingua franca for data exchange on the web, it is widely accepted as a model of real-world data. Because XML documents can in general appear in any semi-structured form, structural constraints (including keys) are often imposed on the data that are to be modified or processed. These constraints are formally defined in a schema. Unfortunately, in spite of the obvious advantages, the presence of a schema is not mandatory and many XML documents are not accompanied by one; consequently, no integrity constraints are specified for those documents either. This thesis is mainly focused on the inference of primary and foreign keys from XML documents.
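In XML Schema, such constraints are expressed with xs:key and xs:keyref. A minimal sketch with invented element names (this is not a schema inferred by the thesis):

```xml
<!-- Minimal xs:key / xs:keyref sketch; element and attribute names are invented. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="shop">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="book"  maxOccurs="unbounded"/>
        <xs:element name="order" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
    <xs:key name="bookKey">
      <xs:selector xpath="book"/>
      <xs:field xpath="@isbn"/>
    </xs:key>
    <xs:keyref name="orderBook" refer="bookKey">
      <xs:selector xpath="order"/>
      <xs:field xpath="@isbn"/>
    </xs:keyref>
  </xs:element>
</xs:schema>
```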
