81 |
Návrh controllingové koncepce s využitím systému "Business intelligence" / Designing a controlling concept using the business intelligence system. Hejdánek, Michal. January 2017.
The master's thesis deals with the use of the controlling concept in business management, with Business Intelligence systems, and with their interconnection. The aim is to propose this integration on the example of a particular company and, at the same time, to give management recommendations in areas that the analysis evaluates as insufficient. The theoretical part of the thesis is divided into three main chapters. The first deals with the definition of controlling, its tools and its organization. The second describes Business Intelligence, explaining not only the basic principles of the technology but also the choice of tools and the implementation process. The following chapter presents two software tools that combine controlling and Business Intelligence in practice. The practical part consists of a general description of the company, an analysis of the controlling elements already applied, a proposal for addressing the insufficient areas, and a twelve-step BI implementation process that would enable the company to put the BI concept into practice.
|
82 |
Constructing a Clinical Research Data Management System. Quintero, Michael C. 04 November 2017.
Clinical study data is usually collected without knowing in advance what kind of data will be collected. In addition, the set of all possible data points that can apply to a patient in any given clinical study is almost always a superset of the data points actually recorded for that patient. As a result, clinical data resembles sparse data with an evolving schema. To help researchers at the Moffitt Cancer Center manage clinical data better, a tool called GURU was developed that uses the Entity-Attribute-Value (EAV) model to handle sparse data and to let users manage a database entity's attributes without any changes to the database table definition. The EAV model's read performance improves as the data gets sparser, but it was observed to perform many times worse than a wide table when the attribute count is not sufficiently large. Ultimately, the design trades read performance for flexibility in the data schema.
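As an illustration of the Entity-Attribute-Value layout described above, the following is a minimal sketch using SQLite from Python; the table and column names are invented for the example and are not GURU's actual schema.

```python
import sqlite3

# Illustrative EAV schema: one narrow table holds (entity, attribute, value) triples,
# so adding a new attribute never requires ALTER TABLE on a wide patient table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient   (id INTEGER PRIMARY KEY, mrn TEXT);
CREATE TABLE attribute (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE patient_value (
    patient_id   INTEGER REFERENCES patient(id),
    attribute_id INTEGER REFERENCES attribute(id),
    value        TEXT,
    PRIMARY KEY (patient_id, attribute_id)
);
""")

conn.execute("INSERT INTO patient VALUES (1, 'MRN-001')")
conn.execute("INSERT INTO attribute VALUES (1, 'tumor_stage'), (2, 'ecog_score')")
# Only the data points actually observed for this patient are stored (sparse data).
conn.execute("INSERT INTO patient_value VALUES (1, 1, 'II')")

# Reading a patient back means pivoting the sparse triples into a single record.
rows = conn.execute("""
    SELECT a.name, v.value
    FROM patient_value v JOIN attribute a ON a.id = v.attribute_id
    WHERE v.patient_id = 1
""").fetchall()
print(dict(rows))  # {'tumor_stage': 'II'}
```

The read-side join and pivot is where the abstract's trade-off shows up: a wide table returns the same record with a single indexed lookup, while the EAV layout pays for its schema flexibility at query time.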
|
83 |
Vliv výše životní úrovně na bytovou výstavbu v krajích České republiky a další determinanty bytové výstavby / The impact of standard of living on housing construction in regions in the Czech Republic. Sochorová, Aneta. January 2017.
This thesis analyzes the determinants of housing construction in the regions of the Czech Republic. The main research question is the impact of the standard of living on housing construction. The living standard is expressed as net disposable income per capita, and housing construction is represented by the number of housing starts. Other determinants included in the model are the unemployment rate, housing prices and the number of mortgages. The analysis works with panel data from 2005 to 2015, and all variables enter the model in logarithmic form with a one-year lag. The model is estimated with a random-effects estimator. The assumed positive impact of the living standard on housing construction is not confirmed, because the net disposable income variable is statistically insignificant. For the other variables the expected effects are confirmed: increases in the unemployment rate and in housing prices have a negative impact on housing construction, while the number of mortgages has a positive impact.
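A hedged reconstruction of the estimating equation implied by this description, with illustrative variable names (the exact specification is the author's own):

\log(\mathit{Starts}_{it}) = \beta_0 + \beta_1 \log(\mathit{Income}_{i,t-1}) + \beta_2 \log(\mathit{Unemployment}_{i,t-1}) + \beta_3 \log(\mathit{Price}_{i,t-1}) + \beta_4 \log(\mathit{Mortgages}_{i,t-1}) + u_i + \varepsilon_{it}

where i indexes regions, t indexes years, u_i is the region-specific random effect and \varepsilon_{it} is the idiosyncratic error.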
|
84 |
Assisting in the reuse of existing materials to build adaptive hypermedia / Aide à la Création d’Hypermédia Adaptatifs par Réutilisation des Modèles des Créateurs. Zemirline, Nadjet. 12 July 2011.
Nowadays there is a growing demand for personalization, and the "one-size-fits-all" approach for hypermedia systems is no longer applicable. Adaptive hypermedia (AH) systems adapt their behaviour to the needs of individual users. However, due to the complexity of their authoring process and the different skills required from authors, only a few such systems have been built. In recent years numerous efforts have been made to assist authors in creating their own AH, but, as explained in this thesis, some problems remain. The thesis tackles two of them. The first problem concerns the integration of authors' materials (information and user profile) into the models of existing systems, allowing authors to directly reuse existing reasoning and execute it on their own materials. A semi-automatic merging/specialization process is proposed to integrate an author's model into a model of an existing system. The objectives are twofold: to support the definition of mappings between elements of an existing model and elements of the author's model, and to help create a consistent and relevant model that integrates the two models and takes the mappings between them into account. In this way, authors can integrate their complete model without any transformation or loss of information. The second problem concerns the adaptation specification, famously the hardest part of authoring adaptive web-based systems. The thesis proposes the EAP framework with three main contributions: a set of 22 elementary adaptation patterns for adaptive navigation, a typology organizing these patterns, and a semi-automatic process to generate adaptation strategies based on the use and combination of the patterns. The objective is to allow adaptation strategies to be defined easily, at a high level of abstraction, by combining simple ones. The expressivity of the EAP framework is compared with existing solutions for specifying adaptation, discussing the pros and cons of various decisions in terms of an ideal adaptation language. Based on this analysis, and on a study of adaptation expressivity and interoperability between the solutions, a unified vision of adaptation and adaptation languages is proposed, resulting in an adaptation typology. The unified vision and typology are not limited to the solutions analysed and can be used to compare and extend other approaches in the future. Besides these theoretical qualitative studies, the thesis also describes implementations and experimental evaluations of the contributions in an e-learning application.
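As a loose illustration of building an adaptation strategy out of elementary patterns, the sketch below composes two invented navigation rules over a toy user model; the pattern names and user-model fields are made up and are not among the 22 patterns actually defined in the EAP framework.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

UserModel = Dict[str, object]

@dataclass
class Link:
    target: str
    visible: bool = True
    annotation: str = ""

# An elementary adaptation pattern: a small rule that rewrites one navigation link.
Pattern = Callable[[UserModel, Link], Link]

def hide_if_not_ready(user: UserModel, link: Link) -> Link:
    if link.target not in user.get("prerequisites_met", set()):
        link.visible = False
    return link

def annotate_if_visited(user: UserModel, link: Link) -> Link:
    if link.target in user.get("visited", set()):
        link.annotation = "already seen"
    return link

def apply_strategy(patterns: List[Pattern], user: UserModel, links: List[Link]) -> List[Link]:
    # An adaptation strategy is simply the combination of elementary patterns.
    for pattern in patterns:
        links = [pattern(user, link) for link in links]
    return links

user = {"visited": {"intro"}, "prerequisites_met": {"intro", "basics"}}
links = [Link("intro"), Link("basics"), Link("advanced")]
print(apply_strategy([hide_if_not_ready, annotate_if_visited], user, links))
```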
|
85 |
Specifika rozvoje datového skladu v bance / Specifics of Data Warehouse enhancement in a bank. Karásek, Tomáš. January 2015.
The thesis deals with the specifics of Data Warehouse enhancement in a bank. Its aim is to define the general specifics of banks and of their Business Intelligence and Data Warehouse enhancement compared to other companies. The thesis is divided into seven parts. The first part describes the theoretical basis of banking and Business Intelligence. The second part defines the general specifics of banks and their informatics compared to other companies. Business Intelligence in a bank, its architecture and its enhancement are then explored. In the fourth part a conceptual data model of a Data Warehouse in a bank is introduced and described in detail. Afterwards the main source systems of the Data Warehouse are identified and matched to the subject areas of the data model. The sixth part identifies important application areas of Business Intelligence usage and lists the basic indicators. The last part presents a case study, a project of Data Warehouse enhancement in a bank. The result of the thesis is a clear description of the Data Warehouse in a bank, its data model, source systems, application areas and enhancement.
|
86 |
Ontološki zasnovana analiza semantičke korektnosti modela podataka primenom sistema automatskog rezonovanja / Ontology based semantic analyses of data model correctness by using automated reasoning system. Kazi, Zoltan. 09 June 2014.
The thesis presents a theoretical study and analysis of existing theories and solutions in the area of data model validation and quality checking. A theoretical model of ontology-based analysis of the semantic correctness of data models using an automated reasoning system is created; it is practically implemented and confirmed by the experimental research conducted. A software application was developed for formalizing the data model and mapping the ontology into the form of Prolog clauses. Reasoning rules are formulated in first-order predicate logic and integrated with the data model and the domain ontology. The semantic correctness of the data model is checked with queries within the Prolog system. A metric of the ontological quality of the data model is also defined, based on the answers of the automated reasoning system.
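A loose, language-shifted illustration of this kind of check: the thesis encodes the data model and the domain ontology as Prolog clauses and queries them with an automated reasoner, whereas the sketch below mimics the idea in Python with invented entity and concept names.

```python
# Invented facts: data-model elements on one side, ontology elements on the other.
data_model = {
    "entities": {"Student", "Course"},
    "attributes": {("Student", "name"), ("Course", "ects_points")},
}
ontology = {
    "concepts": {"Student", "Course", "Teacher"},
    "properties": {("Student", "name"), ("Course", "ects_points")},
}

# Rules in the spirit of "every data-model element must be grounded in the ontology".
def unmatched_entities():
    return data_model["entities"] - ontology["concepts"]

def unmatched_attributes():
    return data_model["attributes"] - ontology["properties"]

# A crude ontological-quality score based on the share of grounded elements,
# analogous in spirit to a metric built on the reasoner's answers.
total = len(data_model["entities"]) + len(data_model["attributes"])
grounded = total - len(unmatched_entities()) - len(unmatched_attributes())
print("unmatched elements:", unmatched_entities() | unmatched_attributes())
print("ontological coverage:", grounded / total)
```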
|
87 |
Analýza a návrh informačního systému pro firmu eSports.cz, s.r.o. / The Design of Information System for Company eSport.cz, s.r.o. Kobelka, Michal. January 2015.
The master's thesis deals with the analysis and design of an information system for the company eSports.cz, s.r.o. The first chapter presents the theoretical basis necessary for understanding the problem. The second chapter assesses the current situation of eSports.cz, s.r.o. and analyzes the main processes of the company. The last chapter provides a proposal for the data model of the new information system.
|
88 |
Database forensics: Investigating compromised database management systems. Beyers, Hector Quintus. January 2013.
The use of databases has become an integral part of modern human life. Often the data contained within databases has substantial value to enterprises and individuals. As databases become a greater part of people's daily lives, they become increasingly interlinked with human behaviour. Negative aspects of this behaviour might include criminal activity, negligence and malicious intent. In these scenarios a forensic investigation is required to collect evidence to determine what happened at a crime scene and who is responsible for the crime. A large amount of the available research focuses on digital forensics, database security and databases in general, but little research exists on database forensics as such. It is difficult for a forensic investigator to conduct an investigation on a DBMS due to the limited information on the subject and the absence of a standard approach to follow during a forensic investigation. Investigators therefore have to consult disparate sources of information on the topic of database forensics in order to compile a self-invented approach to investigating a database. A further effect of this lack of research is that compromised DBMSs (DBMSs that have been attacked and so behave abnormally) are not considered or understood in the database forensics field. The concept of a compromised DBMS was illustrated in an article by Olivier, who suggested that the ANSI/SPARC model can be used to assist in a forensic investigation of a compromised DBMS. Based on the ANSI/SPARC model, the DBMS is divided into four layers known as the data model, data dictionary, application schema and application data. The extensional nature of the first three layers can influence the application data layer and ultimately manipulate the results produced on it. It is therefore problematic to conduct a forensic investigation on a DBMS if the integrity of the extensional layers is in question, because the results on the application data layer cannot then be trusted. To recover the integrity of a layer of the DBMS, a clean (newly installed) layer could be used, but clean layers are not easy, or always possible, to configure on a DBMS, depending on the forensic scenario. A combination of clean and existing layers can therefore be used to conduct a forensic investigation on a DBMS.
PROBLEM STATEMENT
The problem to be addressed is how to construct the appropriate combination of clean and existing layers for a forensic investigation on a compromised DBMS, and how to ensure the integrity of the forensic results.
APPROACH
The study divides the relational DBMS into four abstract layers, illustrates how the layers can be prepared to be in either a found or a clean forensic state, and experimentally combines the prepared layers of the DBMS according to the forensic scenario. The study commences with background on databases, digital forensics and database forensics to give the reader an overview of the existing literature in these fields. It then discusses the four abstract layers of the DBMS and explains how the layers can influence one another. The clean and found environments are introduced because the DBMS differs from the technologies for which digital forensics has already been researched. The study then discusses each of the extensional abstract layers individually, and how and why an abstract layer can be converted to a clean or found state. A discussion of each extensional layer is required to understand how unique each layer of the DBMS is and how these layers can be combined in a way that enables a forensic investigator to conduct an investigation on a compromised DBMS. It is illustrated that each layer is unique and can be corrupted in various ways; therefore each layer must be studied individually in a forensic context before all four layers are considered collectively. A forensic study is conducted on each abstract layer of the DBMS that has the potential to influence other layers and deliver incorrect results. Ultimately, the DBMS is used as a forensic tool to extract evidence from its own encrypted data and data structures. The last chapter therefore illustrates how a forensic investigator can prepare a trustworthy forensic environment, in which an investigation can be conducted on an entire PostgreSQL DBMS, by constructing a combination of the appropriate forensic states of the abstract layers.
RESULTS
The result of this study is an empirically demonstrated approach to dealing with a compromised DBMS during a forensic investigation by making use of a combination of various states of the abstract layers of the DBMS. Approaches are suggested for handling a forensic query on the data model, data dictionary and application schema layers of the DBMS, and a forensic process is suggested for preparing the DBMS to extract evidence from it. The study also advises forensic investigators to consider alternative possibilities for how the DBMS could have been attacked, alternatives that might not have been considered during investigations on DBMSs to date. The methods have been tested by means of a practical example and have delivered promising results. Dissertation (MEng), Electrical, Electronic and Computer Engineering, University of Pretoria, 2013.
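As a toy illustration of the central idea, the sketch below enumerates the possible clean/found combinations of the four abstract layers and picks one configuration for a scenario; the scenario is invented, and the dissertation's actual procedure for preparing PostgreSQL layers is far more involved.

```python
from itertools import product

# The four abstract layers derived from the ANSI/SPARC model; each can be taken in a
# "found" state (as left on the suspect system) or a "clean" state (freshly installed
# and therefore trusted).
LAYERS = ["data model", "data dictionary", "application schema", "application data"]
STATES = ["found", "clean"]

def forensic_configurations():
    """Yield all 2**4 = 16 ways of combining clean and found layers."""
    for combo in product(STATES, repeat=len(LAYERS)):
        yield dict(zip(LAYERS, combo))

# Invented scenario: the application schema is suspected of hiding rows, so it is
# replaced with a clean one, while the evidence-bearing application data stays found.
scenario = {
    "data model": "found",
    "data dictionary": "found",
    "application schema": "clean",
    "application data": "found",
}
assert scenario in list(forensic_configurations())
print(sum(1 for _ in forensic_configurations()), "possible layer combinations")
```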
|
89 |
Metody analýzy longitudinálních dat / Methods of longitudinal data analysis. Jindrová, Linda. January 2015.
The thesis deals with longitudinal data, i.e. measurements taken repeatedly on the same subjects. It describes the various types of models suitable for their analysis, progressing from the simplest linear models with fixed or random effects, through linear and nonlinear mixed-effects models, to generalized linear models and generalized estimating equations (GEE). For each model, its form and the method of parameter estimation are given, and the individual models are compared with one another. The theoretical findings are complemented by applications to real data: linear models are used to analyse production data from the USA, nonlinear models to explain the dependence of a drug's blood concentration on time, and GEE are applied to data on respiratory problems in children.
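As a rough illustration of the last of these model classes, the following is a minimal GEE fit on made-up longitudinal binary data, assuming the statsmodels library; the variables are invented and are not the thesis's actual datasets.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Made-up repeated binary outcomes: 50 children observed at 4 visits each.
rng = np.random.default_rng(0)
n_children, n_visits = 50, 4
df = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_visits),
    "age": np.tile(np.arange(7, 7 + n_visits), n_children),
    "smoke": np.repeat(rng.integers(0, 2, n_children), n_visits),
})
logit = -2.0 + 0.1 * df["age"] + 0.8 * df["smoke"]
df["wheeze"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GEE with a logit link and an exchangeable working correlation within each child;
# the standard errors are robust to misspecification of that working correlation.
model = smf.gee(
    "wheeze ~ age + smoke",
    groups="child",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```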
|
90 |
Proposal and Evaluation of a Database Data Model Decision Method / Förslag och utvärdering av en beslutsmetod för databasmodeller. Hauzenberger, Sabina; Lindholm Brandt, Emil. January 2020.
A common problem when choosing a data model for a database is that there are many aspects to take into consideration, which makes the decision difficult and time-consuming. This work therefore aims to create a decision method that improves the decision by making it both better suited to the use case at hand and quicker. First, the Analytical Hierarchy Process, a multi-criteria decision method, was identified as a suitable framework, and the created decision method was based on it. The method was developed iteratively and later validated through a survey at Omegapoint. The survey had 27 respondents, but 14 answers were discarded as too unreliable, leaving 13 usable responses. The decision method was implemented in a web application to simplify the survey process: the respondents used the web application and then answered follow-up questions about its result and process. It was found that it is possible to create a decision method that makes the choice of a data model quicker and better suited to the use case. The method appears reliable among the sampled respondents, as 11 of the 13 found its result reasonable; however, the small sample size makes it impossible to draw any statistical conclusions about the method's reliability. The decision method also helps to make the decision quicker, though this too is only demonstrated among the survey respondents. Based on these results, we conclude that it is possible to create a decision method that makes the decision quicker and better suited to the use case; since this is shown only for the survey respondents, future work could repeat the validation with a larger group in order to establish the method's reliability statistically.
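As an illustration of the Analytical Hierarchy Process step at the core of such a method, the sketch below derives priority weights for a few data-model criteria from a pairwise comparison matrix; the criteria and judgements are invented and are not those used in the thesis's decision method.

```python
import numpy as np

criteria = ["schema flexibility", "query complexity", "consistency needs"]

# A[i][j] = how much more important criterion i is than criterion j (Saaty's 1-9 scale);
# the matrix is reciprocal by construction.
A = np.array([
    [1.0, 3.0, 0.5],
    [1/3, 1.0, 0.25],
    [2.0, 4.0, 1.0],
])

# Priority vector = principal eigenvector of A, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Consistency ratio guards against contradictory judgements (below 0.1 is the usual rule).
n = len(A)
ci = (np.max(np.real(eigvals)) - n) / (n - 1)
cr = ci / 0.58  # 0.58 is the random index for n = 3
for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.2f}")
print(f"consistency ratio: {cr:.2f}")
```

In a full AHP run the same pairwise step would be repeated to score each candidate data model against every criterion, and the weighted scores would then be aggregated into the final recommendation.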
|