161.
Integration of Information System Database Module Schemas. Luković, Ivan. 18 January 1996 (has links)
<p>The parallel and independent work of a number of designers on different information system modules (i.e. subsystems), identified by the initial functional decomposition of the real system, necessarily leads to mutually inconsistent database (db) module schemas. The thesis considers the problems of automatically detecting the collisions that can appear during the simultaneous design of different db module schemas, and of integrating db module schemas into a unique information system db schema.</p><p>All possible types of db module schema collisions have been identified. A necessary and sufficient condition for strong and intensional db module schema compatibility has been formulated and proved, which made it possible to formalize the checking of strong and intensional db module schema compatibility and to construct the appropriate algorithms. The process of integrating compatible db module schemas into a unique (strongly covering) db schema is formalized as well. A methodology for applying the algorithms for compatibility checking and unique db schema integration is also presented.</p>
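The kind of attribute-level collision detection and schema merging described above can be sketched as follows. This is a deliberately simplified illustration, not the thesis's formal compatibility condition: here two module schemas are treated as compatible only if every shared attribute is declared with the same domain, and all schema and attribute names are hypothetical.

```python
# Illustrative sketch (not the thesis's formal apparatus): two database module
# schemas are compatible here only if every attribute they share is declared
# with the same domain; compatible schemas merge into one covering schema.

def collisions(schema_a, schema_b):
    """Return attributes declared with conflicting domains in the two schemas."""
    return {attr for attr in schema_a.keys() & schema_b.keys()
            if schema_a[attr] != schema_b[attr]}

def integrate(schema_a, schema_b):
    """Merge two module schemas, refusing integration on any collision."""
    conflicting = collisions(schema_a, schema_b)
    if conflicting:
        raise ValueError(f"incompatible schemas, collisions on: {sorted(conflicting)}")
    return {**schema_a, **schema_b}

orders   = {"order_id": "INTEGER", "customer_id": "INTEGER", "total": "DECIMAL"}
billing  = {"invoice_id": "INTEGER", "customer_id": "INTEGER", "total": "DECIMAL"}
shipping = {"order_id": "VARCHAR", "address": "TEXT"}  # order_id collides with orders

merged = integrate(orders, billing)  # compatible: shared attributes agree
assert collisions(orders, shipping) == {"order_id"}
```

In this toy form, integration of all pairwise-compatible module schemas yields a single schema that covers every module, which is the role the unique db schema plays in the thesis.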
162.
Semantic Revision Control for the Evolution of Information and Data Models. Hensel, Stephan. 13 April 2021 (has links)
The increased distribution of systems in planning and production improves the agility and maintainability of individual components, while at the same time increasing their cross-linking. This places new requirements on the semantic description of components and their links, for which information and data models are indispensable. The life cycle of these models is characterized by changes that must be dealt with. However, today's revision control systems, which could provide the traceability required in industry, are not tailored to the specific requirements of information and data models, which reduces the possibilities for a consistent evolution.

Within this thesis a revision management system was developed that integrates revision control and evolution mechanisms to support the evolution of information and data models end to end. Its key feature is a technology-independent mathematical and semantic description, which allows the concept to be transferred to different technologies. As an example, the concept was implemented for the Semantic Web as an extension of the open-source project R43ples.
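The core idea of triple-level revision control can be illustrated with a minimal sketch. This is far simpler than R43ples itself (it is not its API): a revision is modeled as a set of subject-predicate-object triples, and a commit is stored as the delta between revisions.

```python
# Minimal sketch of triple-level revision control, in the spirit of (but much
# simpler than) R43ples: a revision is a set of subject-predicate-object
# triples, and a commit is stored as the delta (added/removed triples).

def diff(old, new):
    """Return the semantic delta between two revisions of a model."""
    return {"added": new - old, "removed": old - new}

def apply_delta(revision, delta):
    """Reconstruct the next revision from a base revision and its delta."""
    return (revision - delta["removed"]) | delta["added"]

rev1 = {("Sensor1", "rdf:type", "Sensor"),
        ("Sensor1", "hasUnit", "Celsius")}
rev2 = {("Sensor1", "rdf:type", "Sensor"),
        ("Sensor1", "hasUnit", "Kelvin")}

delta = diff(rev1, rev2)
assert apply_delta(rev1, delta) == rev2  # the delta reproduces the revision
```

Storing deltas rather than full copies is what lets such a system branch, merge and replay the evolution of a data model.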
163.
Automatic sensor discovery and management to implement effective mechanisms for data fusion and data aggregation. Nachabe Ismail, Lina. 06 October 2015 (has links)
The constant evolution of technology, in terms of inexpensive embedded wireless interfaces and powerful chipsets, has led to the massive usage and development of wireless sensor networks (WSNs). This potentially affects all aspects of our lives, ranging from home automation (e.g. Smart Buildings), through e-Health applications, environmental observation and broadcasting, food sustainability, energy management and Smart Grids, and military services, to many other applications. WSNs are formed of an increasing number of sensor/actuator/relay/sink devices, generally self-organized in clusters and dedicated to one domain, that are provided by an increasing number of manufacturers, which leads to interoperability problems (e.g. heterogeneous interfaces and/or grounding, heterogeneous descriptions, profiles, models …). Moreover, these networks are generally implemented as vertical solutions unable to interoperate with each other. The data provided by these WSNs are also very heterogeneous, because they come from sensing nodes with various abilities (e.g. different sensing ranges, formats, coding schemes …). To tackle these heterogeneity and interoperability problems, the WSN nodes, as well as the data sensed and/or transmitted, need to be consistently and formally represented and managed through suitable abstraction techniques and generic information models. Therefore, an explicit semantics should be assigned to every terminology, and an open data model dedicated to WSNs should be introduced. SensorML, proposed by OGC in 2010, has been considered an essential step toward data modeling specification in WSNs. Nevertheless, it is based only on XML schemas, permitting a basic hierarchical description of the data and hence neglecting any semantic representation. Furthermore, most of the research that has used semantic techniques for developing data models focuses on modeling merely sensors and actuators (this is, e.g., the case of SSN-XG). Other work deals with the data provided by WSNs, but without modeling the data type, quality and states (like, e.g., OntoSensor).

That is why the main aim of this thesis is to specify and formalize an open data model for WSNs, in order to mask the aforementioned heterogeneity and interoperability problems between different systems and applications. This model will also facilitate data fusion and aggregation through an open management architecture, for example a service-oriented one. The thesis can thus be split into two main objectives: 1) To formalize a semantic open data model generically describing a WSN, its sensors/actuators and their corresponding data. This model should be light enough to respect the low-power, and thus low-energy, limitations of such networks; generic enough to describe the wide variety of WSNs; and extensible, so that it can be modified and adapted to the application. 2) To propose an upper service model and standardized enablers for enhancing sensor/actuator discovery, data fusion, data aggregation, and WSN control and management. These service-layer enablers will be used to improve data collection in a large-scale network and will facilitate the implementation of more efficient routing protocols, as well as decision-making mechanisms, in WSNs.
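What a shared data model buys over per-vendor formats can be sketched in a few lines. This is a loose illustration of the masking idea only (the field names, vendor payloads and units are all hypothetical, not the thesis's actual model): two vendors report the same physical quantity differently, and a common representation makes fusion trivial.

```python
# Illustrative sketch: two vendors report the same temperature reading in
# heterogeneous formats; a small shared data model (field names and units are
# hypothetical) normalizes both so that fusion code sees one representation.

def from_vendor_a(payload):
    """Vendor A reports Celsius under 'temp_c'."""
    return {"quantity": "Temperature", "unit": "Celsius",
            "value": payload["temp_c"]}

def from_vendor_b(payload):
    """Vendor B reports Kelvin under 'T'; convert to the common unit."""
    return {"quantity": "Temperature", "unit": "Celsius",
            "value": payload["T"] - 273.15}

def fuse(readings):
    """Average readings that the model guarantees are unit-compatible."""
    values = [r["value"] for r in readings]
    return sum(values) / len(values)

readings = [from_vendor_a({"temp_c": 20.0}), from_vendor_b({"T": 295.15})]
assert abs(fuse(readings) - 21.0) < 1e-9
```

A semantic model goes further than this dictionary by typing the concepts and relations themselves, but the normalization step it enables is the same.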
164.
Association Rules Mining over Data Warehouses. Hlavička, Ladislav. January 2009 (has links)
This thesis deals with association rules mining over data warehouses. The first part familiarizes the reader with terms like knowledge discovery in databases and data mining. The following part of the work deals with data warehouses. Further, association analysis, association rules, their types and mining possibilities are described. The architecture of Microsoft SQL Server and its tools for working with data warehouses are presented. The rest of the thesis covers the description and analysis of the Star-miner algorithm, and the design, implementation and testing of the application.
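The two standard measures underlying association rule mining, support and confidence, can be computed directly from a set of transactions. A minimal sketch (unrelated to the Star-miner internals, which the abstract does not detail):

```python
# Minimal sketch of the two measures behind association rule mining:
# support (how often items co-occur) and confidence (how often the rule holds).

def support(transactions, itemset):
    """Fraction of transactions containing every item in the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Conditional frequency of the consequent given the antecedent."""
    return support(transactions, antecedent | consequent) / support(transactions, antecedent)

transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk"},
]

# Rule {bread} -> {butter}: bread appears in 3 of 4 baskets, bread+butter in 2.
assert support(transactions, {"bread", "butter"}) == 0.5
assert abs(confidence(transactions, {"bread"}, {"butter"}) - 2 / 3) < 1e-12
```

Mining algorithms such as Apriori enumerate itemsets whose support exceeds a threshold and then keep only rules whose confidence is high enough; a warehouse-oriented algorithm applies the same measures over fact-table data.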
165.
The Database of Moving Objects. Vališ, Jaroslav. January 2008 (has links)
This work treats the representation of moving objects and operations over these objects. It introduces the support for spatio-temporal data in Oracle Database 10g and presents two designs of a moving objects database structure. Based on these designs, a database was implemented using user-defined data types. A sample application provides graphical output of the stored spatial data and allows the implemented spatio-temporal operations to be called. Finally, the achieved results are evaluated and possible extensions of the project are discussed.
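One representative spatio-temporal operation over a stored trajectory is interpolating an object's position at an arbitrary time. A sketch (the representation is hypothetical, not the thesis's Oracle user-defined types):

```python
# Illustrative sketch of one common moving-object operation: a trajectory is
# stored as timestamped positions, and the position at an arbitrary time is
# obtained by linear interpolation between the surrounding samples.

def position_at(trajectory, t):
    """Interpolate (x, y) at time t from a list of (time, x, y) samples."""
    trajectory = sorted(trajectory)
    if not trajectory[0][0] <= t <= trajectory[-1][0]:
        raise ValueError("t outside the recorded trajectory")
    for (t0, x0, y0), (t1, x1, y1) in zip(trajectory, trajectory[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0) if t1 != t0 else 0.0
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))

track = [(0, 0.0, 0.0), (10, 10.0, 0.0), (20, 10.0, 5.0)]
assert position_at(track, 5) == (5.0, 0.0)    # halfway along the first leg
assert position_at(track, 15) == (10.0, 2.5)  # halfway along the second leg
```

In a database setting this logic typically lives inside a user-defined type's method, so that queries can ask for positions at times between stored samples.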
166.
Round-trip engineering concept for hierarchical UML models in AUTOSAR-based safety projects. Pathni, Charu. 30 September 2015 (has links)
The product development process begins at a very abstract level of understanding the requirements. The resulting data needs to be passed on to the next phase of development; this happens after every stage, until finally a product is made. This thesis deals specifically with the data exchange steps of the software development process. The problem lies in the handling of the data, in terms of redundancy and of the versions of the data to be handled. Also, once data has been passed on to the next stage, there is no evident way to exchange it in the reverse direction. The results found during this thesis address these problems by bringing all the data to the same level, in terms of its format. With this concept in place, the data can be used according to the requirements at hand. The research deals with the problems of data consistency and data verification for data that is used during development and merged from various sources. The formulated concept can be extended to a wide variety of applications in the development process: whenever the process involves an exchange of data, scalability and generalization are the foundation concepts on which it rests.
167.
Impact of the ACA's free screening policy on colorectal cancer outcomes and cost savings: Effect of removal of the out-of-pocket cancer screening fee on screening, incidence, mortality, and cost savings. Togtokhjav, Oyun. January 2023 (has links)
Colorectal cancer is the second leading cause of cancer-related deaths worldwide as of 2020. Early detection and diagnosis of colorectal cancer can greatly increase the chances of successful treatment and can also reduce the cost of care, including treatment. Recent years have shown that colorectal cancer screening rates have slowed nationwide, which affects new diagnoses of colorectal cancer (CRC) and the ability to treat it at an early stage to avoid an increase in the mortality rate. The purpose of this research is to examine the impact of the Affordable Care Act of 2010's policy of removing the colorectal cancer screening fee for adults aged 50-75 on the screening, incidence, and mortality rates of colorectal cancer, using a panel data model and a sequential recursive system of equations. Since the decision to get screened is an individual's choice, this study also explores methods to increase the colorectal cancer screening rate with the help of behavioral economics theories. The results show that the Affordable Care Act's policy of removing the colorectal cancer screening fee has a significant impact on both colorectal cancer screening and incidence rates: the policy is associated with an increase in the colorectal cancer screening rate and with a decrease in the cancer incidence rate. Regarding the colorectal cancer mortality rate, an effort was made to examine the effect of the policy on the overall cost savings resulting from lives saved; however, since this study found no significant impact of the ACA's policy on the mortality rate of colorectal cancer, this was not pursued further. On the other hand, studies conducted to increase the colorectal cancer screening rate by applying behavioral economics methods have shown that a default method with an opt-out choice and a financial incentive with loss-framed messaging are effective. Therefore, these methods can be investigated to design and implement a nationwide initiative.
168.
Measuring the Technical and Process Benefits of Test Automation based on Machine Learning in an Embedded Device. Olsson, Jakob. January 2018 (has links)
Learning-based testing (LBT) is a testing paradigm that combines model-based testing with machine learning algorithms to automate the modeling of the SUT, test case generation, test case execution and verdict construction. A tool that implements LBT, called LBTest, has been developed at the CSC school at KTH. LBTest utilizes machine learning algorithms together with off-the-shelf equivalence and model checkers, and models user requirements in propositional linear temporal logic. In this study, it is investigated whether LBT is suitable for testing a micro bus architecture within an embedded telecommunication device. Furthermore, ideas to further automate the testing process by designing a data model to automate user requirement generation are explored.
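The flavor of checking a propositional linear temporal logic requirement can be illustrated on a finite execution trace. This is only a hand-rolled sketch for intuition (LBTest itself delegates to proper model checkers, and the bus states below are hypothetical): it covers just the "always" (G) and "eventually" (F) operators.

```python
# Illustrative sketch of checking propositional LTL requirements on a finite
# execution trace (real tools like LBTest use off-the-shelf model checkers;
# this evaluator only covers G "always" and F "eventually").

def always(trace, prop):
    """G prop: the proposition holds in every observed state."""
    return all(prop(state) for state in trace)

def eventually(trace, prop):
    """F prop: the proposition holds in at least one observed state."""
    return any(prop(state) for state in trace)

# Hypothetical bus states: every message is acknowledged, and an error-free
# state is eventually reached.
trace = [
    {"acked": True, "error": True},
    {"acked": True, "error": False},
    {"acked": True, "error": False},
]

assert always(trace, lambda s: s["acked"])           # G acked holds
assert eventually(trace, lambda s: not s["error"])   # F (not error) holds
assert not always(trace, lambda s: not s["error"])   # G (not error) fails
```

In learning-based testing the learned model of the SUT, rather than a single recorded trace, is what gets checked against such formulas, so that violations generalize beyond observed runs.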
169.
Semantically-enriched and semi-autonomous collaboration framework for the Web of Things: Design, implementation and evaluation of a multi-party collaboration framework with semantic annotation and representation of sensors in the Web of Things, and a case study on disaster management. Amir, Mohammad. January 2015 (has links)
This thesis proposes a collaboration framework for the Web of Things based on the concepts of Service-Oriented Architecture and integrated with semantic web technologies, offering new possibilities for efficient asset management during operations requiring multi-actor collaboration. The motivation for the project comes from the rise in disasters, where effective cross-organisation collaboration can increase the efficiency of critical information dissemination. The organisational boundaries of the participants, as well as their IT capabilities and trust issues, hinder the deployment of a multi-party collaboration framework, thereby preventing timely dissemination of critical data. In order to tackle some of these issues, this thesis proposes a new collaboration framework consisting of a resource-based data model, a resource-oriented access control mechanism, and semantic technologies utilising the Semantic Sensor Network Ontology, which can be used simultaneously by multiple actors without impacting each other's networks, thus increasing the efficiency of disaster management and relief operations. The generic design of the framework enables future extensions, allowing its exploitation across many application domains. The performance of the framework is evaluated in two areas: the capability of the access control mechanism to scale with an increasing number of devices, and the capability of the semantic annotation process to become more efficient as more information is provided. The results demonstrate that the proposed framework is fit for purpose.
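The idea of resource-oriented access control can be sketched in a few lines. This is an illustration of the general pattern only, not the thesis's actual mechanism (resource URIs, actor names and permission labels are hypothetical): each shared resource carries its own access list, so organisations can expose individual devices to a collaboration without opening their whole networks.

```python
# Illustrative sketch of resource-oriented access control for shared sensor
# resources: each resource carries its own access list, so multiple
# organisations can share devices without exposing their whole networks.

class Resource:
    def __init__(self, uri, owner):
        self.uri = uri
        self.owner = owner
        self.grants = {owner: {"read", "write"}}  # owner has full access

    def grant(self, actor, permission):
        """Give another actor a specific permission on this resource only."""
        self.grants.setdefault(actor, set()).add(permission)

    def allowed(self, actor, permission):
        return permission in self.grants.get(actor, set())

# A fire service shares read access to one of its sensors with a medical team.
sensor = Resource("/org-fire/sensors/temp1", owner="fire-service")
sensor.grant("medical-team", "read")

assert sensor.allowed("fire-service", "write")
assert sensor.allowed("medical-team", "read")
assert not sensor.allowed("medical-team", "write")
```

Because the grant is attached to the resource rather than to a network boundary, adding a new collaborating organisation touches only the resources it needs.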
170.
Essays on methodologies in contingent valuation and the sustainable management of common pool resources. Kang, Heechan. 15 March 2006 (has links)
No description available.