101

Realisierung einer Schedulingumgebung für gemischt-parallele Anwendungen und Optimierung von layer-basierten Schedulingalgorithmen / Development of a scheduling support environment for mixed parallel applications and optimization of layer-based scheduling algorithms

Kunis, Raphael 25 January 2011 (has links) (PDF)
One challenge of parallel processing is achieving scalability of large parallel applications across different parallel systems. The central problem is that an application may run very well on one parallel system, while porting it to another system usually yields poor results. Using the programming model of parallel tasks with dependencies, the scalability of many parallel algorithms can be improved considerably. Programming with parallel tasks leads to task graphs with dependencies as the representation of a parallel application, which is also called a mixed-parallel application. The basis for the efficient execution of a mixed-parallel application is a suitable schedule, which prescribes an efficient mapping of the parallel tasks onto the processors of the parallel system. Scheduling algorithms are used to compute such a schedule. A central problem in determining a schedule for mixed-parallel applications is that scheduling is already NP-hard for single-processor tasks with dependencies on a parallel system with two processors. Therefore, only approximation algorithms and heuristics exist to compute a schedule. One way to compute a schedule are layer-based scheduling algorithms. These algorithms first build layers of independent parallel tasks and then compute a schedule for each layer separately. A weakness of these scheduling algorithms is the concatenation of the individual layer schedules into the global schedule. The presented algorithm Move-blocks offers an elegant way to improve this concatenation by merging the schedules of consecutive layers. Although a large number of scheduling algorithms for mixed-parallel applications exist, there has so far been no comprehensive support for scheduling by programming tools. In particular, there is no scheduling environment that unites a large number of scheduling algorithms. The presentation of the flexible, component-based and extensible scheduling environment SEParAT is the second focus of this dissertation. SEParAT supports different usage scenarios that go far beyond pure scheduling, e.g., the comparison of scheduling algorithms and the extension and implementation of new scheduling algorithms. Besides the usage scenarios, both the internal processing of a scheduling pass and the component-based software architecture are presented in detail.
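To make the layer-based approach concrete, the following sketch builds layers of mutually independent tasks from a task graph, which is the first phase described above; scheduling each layer and merging consecutive layer schedules (as Move-blocks does) would follow. This is a minimal Python illustration, not code from SEParAT, and the task-graph representation is assumed for the example.

```python
from collections import defaultdict, deque

def build_layers(tasks, deps):
    """Group tasks of a DAG into layers of mutually independent tasks.

    tasks: iterable of task ids; deps: dict mapping a task to the set of
    its predecessor tasks. Returns a list of layers (lists of task ids),
    where every task's predecessors lie in earlier layers -- the first
    step of a layer-based scheduling algorithm.
    """
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    successors = defaultdict(list)
    for t, preds in deps.items():
        for p in preds:
            successors[p].append(t)
    ready = deque(t for t, d in indegree.items() if d == 0)
    layers = []
    while ready:
        layer = list(ready)
        ready.clear()
        layers.append(layer)
        for t in layer:           # release tasks whose predecessors are done
            for s in successors[t]:
                indegree[s] -= 1
                if indegree[s] == 0:
                    ready.append(s)
    return layers

# Example: a diamond-shaped task graph yields three layers.
print(build_layers("ABCD", {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}))
# [['A'], ['B', 'C'], ['D']]
```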
102

A Bayesian Network methodology for railway risk, safety and decision support

Mahboob, Qamar 24 March 2014 (has links) (PDF)
For railways, risk analysis is carried out to identify hazardous situations and their consequences. Until recently, classical methods such as Fault Tree Analysis (FTA) and Event Tree Analysis (ETA) were applied to model the linear and logically deterministic aspects of railway risk, safety and reliability. However, it has been shown that modern railway systems are rather complex, involving multiple dependencies between system variables and uncertainties about these dependencies. For train derailment accidents, for instance, high train speed is a common cause of failure; slip and failure of brake applications are disjoint events; failure dependency exists between the train protection and warning system and driver errors; driver errors are time dependent; and there is functional uncertainty in derailment conditions. Failing to incorporate these aspects of a complex system leads to wrong estimates of risk and safety and, consequently, to wrong management decisions. Furthermore, a complex railway system integrates various technologies and is operated in an environment where the behaviour and failure modes of the system are difficult to model using probabilistic techniques. Modelling and quantifying railway risk and safety problems that involve dependencies and uncertainties such as those mentioned above are complex tasks. Importance measures are useful for ranking the components that are significant with respect to the risk, safety and reliability of a railway system, but their computation using FTA has limitations for complex railways. ALARP (As Low As Reasonably Practicable) risk acceptance criteria are widely accepted as "best practice" in the railways. According to the ALARP approach, a tolerable region exists between the regions of intolerable and negligible risk; in the tolerable region, risk is undertaken only if a benefit is desired. In that case, additional criteria are needed to identify the socio-economic benefits of adopting a safety measure for railway facilities. The Life Quality Index (LQI) is a rational way of relating the financial resources used to improve the safety of an engineering system to the potential fatalities that can be avoided by the safety improvement. This thesis shows the application of the LQI approach to quantifying the social benefits of a number of safety management plans for a railway facility. We apply Bayesian Networks and influence diagrams, which are extensions of Bayesian Networks, to model and assess the life safety risks associated with railways. Bayesian Networks are directed acyclic probabilistic graphical models that handle the joint distribution of random variables in a compact and flexible way. In influence diagrams, problems of probabilistic inference and decision making based on utility functions can be combined and optimized, especially for systems with many dependencies and uncertainties, and the optimal decision, which maximizes the total benefit to society, is obtained. In this thesis, the application of Bayesian Networks to the railway industry is investigated for the purpose of improving the modelling and analysis of risk, safety and reliability in railways. One example application and two real-world applications are presented to show the usefulness and suitability of Bayesian Networks for quantitative risk assessment and risk-based decision support in railways.
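As a toy illustration of the Bayesian Network machinery this work relies on, the sketch below encodes a two-cause derailment model and performs inference by brute-force enumeration. The structure loosely mirrors the derailment example above, but all probabilities are invented placeholders, not data from the thesis.

```python
from itertools import product

# Toy network: HighSpeed and BrakeFailure are root causes;
# Derailment depends on both. All numbers are illustrative only.
p_high_speed = {True: 0.1, False: 0.9}
p_brake_fail = {True: 0.02, False: 0.98}
p_derail = {  # P(Derailment=True | HighSpeed, BrakeFailure)
    (True, True): 0.30,
    (True, False): 0.05,
    (False, True): 0.08,
    (False, False): 0.001,
}

def joint(hs, bf, d):
    """Joint probability P(HighSpeed=hs, BrakeFailure=bf, Derailment=d)."""
    pd = p_derail[(hs, bf)]
    return p_high_speed[hs] * p_brake_fail[bf] * (pd if d else 1.0 - pd)

# Marginal probability of derailment: sum the joint over both parents.
p_d = sum(joint(hs, bf, True) for hs, bf in product([True, False], repeat=2))

# Diagnostic inference: P(HighSpeed | Derailment) via Bayes' rule.
p_hs_given_d = sum(joint(True, bf, True) for bf in [True, False]) / p_d
print(f"P(D) = {p_d:.4f}, P(HighSpeed | D) = {p_hs_given_d:.3f}")
```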
103

Change-effects analysis for effective testing and validation of evolving software

Santelices, Raul A. 17 May 2012 (has links)
The constant modification of software during its life cycle poses many challenges for developers and testers because changes might not behave as expected or may introduce erroneous side effects. For those reasons, it is of critical importance to analyze, test, and validate software every time it changes. The most common method for validating modified software is regression testing, which identifies differences in the behavior of software caused by changes and determines the correctness of those differences. Most research to this date has focused on the efficiency of regression testing by selecting and prioritizing existing test cases affected by changes. However, little attention has been given to finding whether the test suite adequately tests the effects of changes (i.e., behavior differences in the modified software) and which of those effects are missed during testing. In practice, it is necessary to augment the test suite to exercise the untested effects. The thesis of this research is that the effects of changes on software behavior can be computed with enough precision to help testers analyze the consequences of changes and augment test suites effectively. To demonstrate this thesis, this dissertation uses novel insights to develop a fundamental understanding of how changes affect the behavior of software. Based on these foundations, the dissertation defines and studies new techniques that detect these effects in cost-effective ways. These techniques support test-suite augmentation by (1) identifying the effects of individual changes that should be tested, (2) identifying the combined effects of multiple changes that occur during testing, and (3) optimizing the computation of these effects.
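One way to picture the core idea, reduced to its simplest form: propagate a change forward through a dependence graph to over-approximate the affected behavior, then subtract what the existing test suite already covers. The sketch below is a hypothetical simplification, not the dissertation's actual (more precise) techniques; the shapes of `dep_graph` and `coverage` are assumed for the example.

```python
def affected_by_change(dep_graph, changed):
    """Forward transitive closure over a program dependence graph.

    dep_graph: dict mapping a statement id to the statements that are
    data- or control-dependent on it; changed: set of modified statements.
    Returns every statement whose behavior may differ after the change --
    a coarse over-approximation of the change effects.
    """
    affected, frontier = set(changed), list(changed)
    while frontier:
        for succ in dep_graph.get(frontier.pop(), ()):
            if succ not in affected:
                affected.add(succ)
                frontier.append(succ)
    return affected

def untested_effects(dep_graph, changed, coverage):
    """coverage: dict mapping a test to the set of statements it executes.
    Returns the affected statements no existing test reaches, i.e. the
    candidates that test-suite augmentation should target."""
    covered = set().union(*coverage.values()) if coverage else set()
    return affected_by_change(dep_graph, changed) - covered
```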
104

Fusion d'images de télédétection hétérogènes par méthodes crédibilistes / Fusion of heterogeneous remote sensing images by credibilist methods

Hammami, Imen 08 December 2017 (has links)
With the advent of new image acquisition techniques and the emergence of high-resolution satellite systems, the remote sensing data to be exploited have become increasingly rich and varied. Combining them has thus become essential to improve the extraction of useful information related to the physical nature of the observed surfaces. However, these data are generally heterogeneous and imperfect, which poses several problems for their joint processing and requires the development of specific methods. This thesis falls within this context and aims at developing a new evidential fusion method dedicated to the processing of heterogeneous high-resolution remote sensing images. To achieve this objective, we first focus our research on a new approach for estimating belief functions, based on Kohonen's map, in order to simplify the mass assignment operation for the large volumes of data occupied by these images. The proposed method makes it possible to model not only the ignorance and imprecision of our information sources, but also their paradox. We then exploit this estimation approach to propose an original fusion technique that addresses the problems caused by the wide variety of knowledge provided by these heterogeneous sensors. Finally, we study how the dependence between these sources can be taken into account in the fusion process by means of copula theory, and a new technique for choosing the most appropriate copula is introduced. The experimental part of this work is devoted to land-use mapping of agricultural areas using SPOT-5 and RADARSAT-2 images. The experimental study carried out demonstrates the robustness and effectiveness of the approaches developed in the framework of this thesis.
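The fusion step rests on combining mass (belief) functions from several sources. As an illustration of that machinery, the sketch below implements Dempster's classical rule of combination for two sources over a small frame of discernment; the thesis's own Kohonen-map estimation and copula-aware fusion are more elaborate, and the sensor masses here are invented.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset focal elements (subsets of the frame
    of discernment) to masses summing to 1. Returns the combined mass
    function, renormalized by the conflict between the two sources.
    """
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:  # compatible evidence reinforces the intersection
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:      # incompatible evidence accumulates as conflict
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors over the frame {urban, crop, water}; masses are illustrative.
urban, crop = frozenset({"urban"}), frozenset({"crop"})
theta = frozenset({"urban", "crop", "water"})  # total ignorance
m_optical = {urban: 0.6, crop: 0.3, theta: 0.1}
m_radar = {urban: 0.5, theta: 0.5}
print(dempster_combine(m_optical, m_radar))
```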
105

Accès à l'information dans les grandes collections textuelles en langue arabe / Information access in large Arabic textual collections

El Mahdaouy, Abdelkader 16 December 2017 (has links)
Given the amount of Arabic textual information available on the web, developing effective Information Retrieval Systems (IRSs) has become essential to retrieve relevant information. Most current Arabic IRSs are based on the bag-of-words representation, where documents are indexed using surface words, roots or stems. Two main drawbacks of this representation are the ambiguity of Single Word Terms (SWTs) and term mismatch. The aim of this work is to deal with SWT ambiguity and term mismatch. Accordingly, we propose four contributions to improve Arabic content representation, indexing and retrieval. The first contribution consists of representing Arabic documents using Multi-Word Terms (MWTs), motivated by the fact that MWTs are more precise representational units and less ambiguous than isolated SWTs. Hence, we propose a hybrid method to extract Arabic MWTs that combines linguistic and statistical filtering of MWT candidates: the linguistic filter uses POS tagging to identify candidates that fit a set of syntactic patterns and handles the problem of MWT variation, and the statistical filter ranks candidates using our proposed association measure, which combines contextual information with both termhood and unithood measures. In the second contribution, we explore and evaluate several IR models for ranking documents using both SWTs and MWTs. Additionally, we investigate a wide range of proximity-based IR models for Arabic IR, and we introduce a formal condition that IR models should satisfy to deal adequately with term dependencies. The third contribution is a method based on distributed representations of words, namely word embeddings (WE), for Arabic IR. It incorporates WE semantic similarities into existing probabilistic IR models in order to deal with term mismatch; the aim is to allow distinct but semantically similar terms to contribute to document scores. The last contribution is a method to incorporate WE similarity into Pseudo-Relevance Feedback (PRF) for Arabic IR: expansion terms are selected using their distribution in the set of top pseudo-relevant documents along with their similarity to the original query terms. The experimental validation of all the proposed contributions is performed using the standard Arabic TREC 2002/2001 collection.
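A minimal sketch of the term-mismatch idea: let document terms that are merely similar to a query term (by embedding cosine similarity) contribute to a soft term frequency, instead of requiring exact matches. This is an illustrative reduction, not the thesis's extended IR models; the `embeddings` dictionary of word vectors is assumed.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def soft_tf(query_term, doc_terms, embeddings, threshold=0.7):
    """Soft term frequency: each document term whose embedding is close
    enough to the query term contributes its similarity, so distinct but
    semantically related terms can raise the document's score."""
    if query_term not in embeddings:
        return 0.0
    q = embeddings[query_term]
    score = 0.0
    for t in doc_terms:
        if t in embeddings:
            sim = cosine(q, embeddings[t])
            if sim >= threshold:
                score += sim
    return score

# Toy 2-d embeddings: "vehicle" is close to "car", "banana" is not.
emb = {"car": np.array([1.0, 0.1]),
       "vehicle": np.array([0.9, 0.2]),
       "banana": np.array([0.0, 1.0])}
print(soft_tf("car", ["vehicle", "banana", "vehicle"], emb))
```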
106

A Runtime System for Data-Flow Task Programming on Multicore Architectures with Accelerators / Vers un support exécutif avec dépendance de données pour les architectures multicoeur avec des accélérateurs / Uma Ferramenta para Programação com Dependência de Dados em Arquiteturas Multicore com Aceleradores

Lima, Joao Vicente Ferreira 05 May 2014 (has links)
In this thesis, we study the issues of task parallelism with data dependencies on multicore architectures with accelerators. We target those architectures with the XKaapi runtime system developed by the MOAIS team (INRIA Rhône-Alpes). We first studied the issues of fully asynchronous execution and scheduling on multi-GPU architectures. Work stealing with data-locality heuristics showed significant performance results, but did not consider the computing power of the different resources. Next, we designed a scheduling framework and a cost model to support scheduling strategies on top of the XKaapi runtime. Finally, we performed experimental evaluations on the Intel Xeon Phi coprocessor in native execution. Our conclusion is twofold. First, data-flow task programming can be efficient on accelerators, whether GPUs or Intel Xeon Phi coprocessors. Second, runtime support for different scheduling strategies is essential: cost models provide significant performance gains on very regular computations, while work stealing can react to load imbalance at runtime.
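Since both conclusions hinge on work stealing, here is a deliberately tiny, sequential illustration of its core discipline: owners work from the bottom of their own deque while idle workers steal from the top of a victim's. A real runtime such as XKaapi executes tasks concurrently with far more machinery; the class names and round-robin driver below are invented for the sketch.

```python
import random
from collections import deque

class Worker:
    """Minimal work-stealing discipline: the owner pushes and pops tasks
    at the bottom of its own deque, while an idle worker steals from the
    *top* of a victim's deque, which tends to move large, old subtasks
    and preserves the owner's locality."""
    def __init__(self):
        self.tasks = deque()

    def pop_local(self):
        return self.tasks.pop() if self.tasks else None          # bottom

    def steal_from(self, victim):
        return victim.tasks.popleft() if victim.tasks else None  # top

def run(workers, rng=random.Random(0)):
    """Drive all workers round-robin until every deque is empty."""
    done = 0
    while True:
        idle = True
        for w in workers:
            task = w.pop_local()
            if task is None:  # idle: pick a random victim and steal
                task = w.steal_from(rng.choice(workers))
            if task is not None:
                idle = False
                done += 1     # a real runtime would execute the task here
        if idle:
            return done

workers = [Worker() for _ in range(4)]
workers[0].tasks.extend(range(100))  # all work starts on one worker
print(run(workers))                  # 100: the load was spread by stealing
```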
107

Závislosti a práce s daty v RVP ZV / Dependencies, relations and work with data in the perspective of Framework Education Programme for Elementary Education

SRBOVÁ, Lucie January 2009 (has links)
The aim of the diploma thesis is to develop materials for teaching the educational content "Dependencies, relations and work with data" in the perspective of the Framework Education Programme for Elementary Education. The diploma thesis focuses on an analysis of which mathematical and key competences are developed when going through this topic, and on the search for interrelations with cross-curricular topics. The diploma thesis includes a collection of problems for various grades of primary school and samples of model problems.
108

Långsiktig planering i projekt med agila metoder : En fallstudie på ett IT-företag / Long-term planning in projects with agile methods: A case study at an IT company

Krohn, Lisa January 2017 (has links)
Companies are constantly challenged to maintain and expand their market position. According to one study, it is difficult to carry out successful IT projects. Dependencies, coordination and long-term planning are three factors that have proven hard to master in larger projects. The aim of this study is to investigate how these aspects work in one case, through a case study at an IT company. The goal is to contribute research on long-term planning in IT projects, where agile methods are commonly used today. Previous research and theories such as project management, agile project management, planning methods, scaling methods, coordination methods and change management form the basis of the study. An exploratory qualitative study was performed at the case company; the results are based on interviews with company personnel. The results showed how the methods and theories were used at the case company, focusing mainly on long-term planning and workflows. In general, the personnel shared the same views, but because the questions were open-ended, the respondents were encouraged to elaborate and explain their opinions, which brought out new information and nuances even where they agreed. The analysis of the results shows that a clear framework, the implementation of simple work methods and project management were all important for conducting long-term planning and handling dependencies in projects. The conclusion was to implement the scaling method SAFe and also to use coordination methods. An important aspect is to include change management processes and theories when implementing or changing something. For further research, the study could be carried out at more companies, and a follow-up study could examine whether implementing the agile scaling method SAFe has a positive effect at a company.
109

Fuzz testování REST API / Fuzz Testing of REST API

Segedy, Patrik January 2020 (has links)
This thesis deals with fuzz testing of REST APIs. After presenting an overview of techniques used in fuzz testing and reviewing current tools and research focused on REST API fuzzing, we proceed to the design and implementation of our own REST API fuzzer. The core of our solution is the derivation of dependencies from the OpenAPI description of a REST API, enabling stateful testing of the application. Our fuzzer minimizes the number of consecutive 404 responses from the application and tests the application in greater depth. The problem of exploring the application's reachable states is addressed by ordering the dependencies so as to maximize the probability of obtaining the input data needed for required parameters, combined with deciding which required parameters may also use randomly generated values. The implementation extends the Schemathesis project, which generates inputs using the Hypothesis library. The implemented fuzzer was used to test the Red Hat Insights application, where it found 32 bugs, one of which can only be reproduced using stateful testing.
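The dependency-ordering idea can be sketched compactly: treat every operation as a producer and/or consumer of resource identifiers and topologically sort operations so producers run first. The mini-spec below is hand-written for illustration; the actual fuzzer derives this information from the OpenAPI schema via Schemathesis, and the function shown is not its real API.

```python
from graphlib import TopologicalSorter

# A toy OpenAPI-like description: each operation names the resource ids
# it produces (from responses) and consumes (path/query parameters).
operations = {
    "POST /users":                {"produces": {"user_id"}, "consumes": set()},
    "POST /users/{user_id}/keys": {"produces": {"key_id"},
                                   "consumes": {"user_id"}},
    "GET /keys/{key_id}":         {"produces": set(), "consumes": {"key_id"}},
}

def fuzzing_order(ops):
    """Order operations so producers of a parameter run before consumers,
    maximizing the chance that required ids exist and cutting down on
    consecutive 404 responses."""
    producers = {}
    for name, op in ops.items():
        for res in op["produces"]:
            producers.setdefault(res, set()).add(name)
    # Edge: an operation depends on every producer of what it consumes.
    graph = {
        name: {p for res in op["consumes"] for p in producers.get(res, ())}
        for name, op in ops.items()
    }
    return list(TopologicalSorter(graph).static_order())

print(fuzzing_order(operations))
# ['POST /users', 'POST /users/{user_id}/keys', 'GET /keys/{key_id}']
```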
110

Integracija šema modula baze podataka informacionog sistema / Integration of Information System Database Module Schemas

Luković Ivan 18 January 1996 (has links)
Parallel and independent work of a number of designers on different information system modules (i.e. subsystems), identified by the initial functional decomposition of the real system, necessarily leads to mutually inconsistent database (db) module schemas. The thesis considers the problems concerning the automatic detection of collisions that can appear during the simultaneous design of different db module schemas, and the integration of db module schemas into the unique information system db schema. All possible types of db module schema collisions have been identified. A necessary and sufficient condition for strong and intensional db module schema compatibility has been formulated and proved. This made it possible to formalize the process of checking strong and intensional db module schema compatibility and to construct the appropriate algorithms. The integration process producing the unique (strongly covering) db schema from compatible db module schemas is formalized as well. The methodology of applying the algorithms for compatibility checking and unique db schema integration is also presented.
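As a drastically simplified picture of collision detection and schema integration, the sketch below merges per-module relational schemas and reports attribute-type conflicts; the thesis's compatibility conditions (strong and intensional) are formal and considerably richer. All data structures and names here are assumed for the example.

```python
def integrate_schemas(module_schemas):
    """Merge database module schemas into one schema, flagging collisions.

    Each module schema maps relation name -> dict of attribute -> type.
    Two modules collide when they declare the same attribute of the same
    relation with different types -- a toy stand-in for the thesis's
    compatibility conditions.
    """
    unified, collisions = {}, []
    for module, schema in module_schemas.items():
        for relation, attrs in schema.items():
            merged = unified.setdefault(relation, {})
            for attr, typ in attrs.items():
                if attr in merged and merged[attr] != typ:
                    collisions.append((relation, attr, merged[attr], typ))
                else:
                    merged[attr] = typ
    return unified, collisions

subsystems = {
    "sales":   {"Customer": {"id": "int", "name": "varchar"}},
    "billing": {"Customer": {"id": "int", "balance": "decimal"},
                "Invoice":  {"no": "int", "customer_id": "int"}},
}
unified, collisions = integrate_schemas(subsystems)
print(unified["Customer"])  # {'id': 'int', 'name': 'varchar', 'balance': 'decimal'}
print(collisions)           # [] -- these modules are compatible
```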
