About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Vliv výše životní úrovně na bytovou výstavbu v krajích České republiky a další determinanty bytové výstavby / The impact of standard of living on housing construction in regions in the Czech Republic

Sochorová, Aneta January 2017 (has links)
This thesis analyzes the determinants of housing construction in the regions of the Czech Republic. The main research question concerns the impact of the standard of living on housing construction. The standard of living is expressed as net disposable income per capita, and housing construction is represented by the number of housing starts. The other determinants included in the model are the unemployment rate, housing prices and the number of mortgages. The analysis works with panel data for the period 2005-2015, and all variables enter in logarithmic form with a one-year lag. The model is estimated with random effects. The assumed positive impact of the standard of living on housing construction is not confirmed, because net disposable income turns out to be statistically insignificant. For the other variables the expected effects are confirmed: increases in the unemployment rate and in housing prices have a negative impact on housing construction, whereas the number of mortgages has a positive impact.
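As a rough illustration of the specification described above (logarithmic variables, a one-year lag, and a regional random-effects estimator), the following Python sketch shows one way such a model could be set up. The file name and column names are hypothetical, and statsmodels' mixed linear model with regional random intercepts is used here as a stand-in for the panel random-effects estimator from the thesis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per region and year (names are illustrative only).
df = pd.read_csv("regional_housing.csv")
# expected columns: region, year, starts, income, unemployment, price, mortgages

# Log-transform and lag the regressors by one year within each region,
# mirroring the specification described in the abstract.
df = df.sort_values(["region", "year"])
for col in ["income", "unemployment", "price", "mortgages"]:
    df[f"l_{col}"] = np.log(df.groupby("region")[col].shift(1))
df["l_starts"] = np.log(df["starts"])
df = df.dropna()

# Random-intercept model by region as a stand-in for a panel random-effects estimator.
model = smf.mixedlm(
    "l_starts ~ l_income + l_unemployment + l_price + l_mortgages",
    data=df,
    groups=df["region"],
)
result = model.fit()
print(result.summary())
```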
82

Assisting in the reuse of existing materials to build adaptive hypermedia / Aide à la Création d’Hypermédia Adaptatifs par Réutilisation des Modèles des Créateurs

Zemirline, Nadjet 12 July 2011 (has links)
Nowadays there is a growing demand for personalization, and the "one-size-fits-all" approach for hypermedia systems is no longer applicable. Adaptive hypermedia (AH) systems adapt their behavior to the needs of individual users. However, due to the complexity of their authoring process and the range of skills required from authors, only a few such systems have been built. In recent years numerous efforts have been made to assist authors in creating their own AH, but, as explained in this thesis, some problems remain. This thesis tackles two of them. The first problem concerns the integration of authors' materials (information and user profiles) into the models of existing systems, allowing authors to directly reuse existing adaptation reasoning and execute it on their own materials. We propose a semi-automatic merging/specialization process to integrate an author's model into the model of an existing system. Our objectives are twofold: to support the definition of mappings between elements of the existing model and elements of the author's model, and to help create a consistent and relevant model that integrates the two models and takes the mappings between them into account. The second problem concerns the specification of adaptation, which is notoriously the hardest part of authoring adaptive web-based systems. We propose the EAP framework, with three main contributions: a set of 22 elementary adaptation patterns for adaptive navigation, a typology organizing these patterns, and a semi-automatic process to generate adaptation strategies by using and combining the patterns. Our objective is to make it easy to define adaptation strategies at a high level of abstraction by combining simple ones. We have also compared the expressivity of the EAP framework with existing solutions for specifying adaptation, and discussed, on the basis of this study, the pros and cons of various decisions regarding an ideal adaptation language. We propose a unified vision of adaptation and adaptation languages, based on the analysis of these solutions and of our framework, together with a study of adaptation expressivity and of the interoperability between the analysed solutions, resulting in an adaptation typology. The unified vision and adaptation typology are not limited to the solutions analysed and can be used to compare and extend other approaches in the future. Besides these theoretical and qualitative studies, the thesis also describes implementations and experimental evaluations of our contributions in an e-learning application.
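To make the notion of combining elementary adaptation patterns into a strategy more concrete, here is a minimal Python sketch. It is not the EAP framework's notation; the pattern names, fields and user-model keys are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

# A toy representation of elementary adaptation patterns: each pattern pairs a
# condition over the user model with a navigation effect, and a strategy is the
# combination of several patterns.

@dataclass
class Pattern:
    name: str
    condition: Callable[[dict], bool]  # predicate over the user model
    effect: str                        # navigation effect, e.g. "hide-link"

def apply_strategy(patterns: List[Pattern], user_model: dict) -> List[str]:
    """Combine elementary patterns: collect every effect whose condition holds."""
    return [p.effect for p in patterns if p.condition(user_model)]

prerequisite_hiding = Pattern(
    name="hide-if-prerequisite-unknown",
    condition=lambda u: not u.get("knows_prerequisite", False),
    effect="hide-link",
)
visited_annotation = Pattern(
    name="annotate-if-visited",
    condition=lambda u: u.get("visited", False),
    effect="annotate-as-visited",
)

user_model = {"knows_prerequisite": False, "visited": True}
print(apply_strategy([prerequisite_hiding, visited_annotation], user_model))
# ['hide-link', 'annotate-as-visited']
```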
83

Specifika rozvoje datového skladu v bance / Specifics of Data Warehouse enhancement in a bank

Karásek, Tomáš January 2015 (has links)
The present thesis deals with the specifics of Data Warehouse enhancement in a bank. The aim of the thesis is to define the general specifics of banks, their Business Intelligence and their Data Warehouse enhancement compared with other companies. The thesis is divided into seven parts. The first part describes the theoretical foundations of banking and Business Intelligence. The second part defines the general specifics of banks and of their informatics compared with other companies. Business Intelligence in a bank, its architecture and its enhancement are then explored. In the fourth part a conceptual data model of a Data Warehouse in a bank is introduced and described in detail. Afterwards the main source systems of the Data Warehouse are identified and matched to the subject areas of the data model. The sixth part covers the important application areas of Business Intelligence usage and lists the basic indicators. The last part presents a case study, a project of Data Warehouse enhancement in a bank. The result of this thesis is a clear description of the Data Warehouse in a bank, its data model, source systems, application areas and enhancement.
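The matching of source systems to subject areas described in the fifth part can be pictured as a simple mapping; the sketch below is a hypothetical illustration, with bank system and subject-area names that are not taken from the thesis.

```python
# Hypothetical mapping of bank source systems to Data Warehouse subject areas,
# illustrating the source-system-to-data-model matching described above.
source_to_subject_areas = {
    "core_banking":     ["Party", "Account", "Transaction"],
    "card_management":  ["Card", "Transaction", "Party"],
    "loan_origination": ["Loan", "Collateral", "Party"],
    "treasury":         ["Instrument", "Position"],
}

def subject_area_coverage(mapping: dict[str, list[str]]) -> dict[str, list[str]]:
    """Invert the mapping: which source systems feed each subject area."""
    coverage: dict[str, list[str]] = {}
    for system, areas in mapping.items():
        for area in areas:
            coverage.setdefault(area, []).append(system)
    return coverage

print(subject_area_coverage(source_to_subject_areas))
```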
84

Ontološki zasnovana analiza semantičke korektnosti modela podataka primenom sistema automatskog rezonovanja / Ontology based semantic analyses of data model correctness by using automated reasoning system

Kazi Zoltan 09 June 2014 (has links)
This work presents a theoretical study and analysis of existing theories and solutions in the area of data model validation and quality checking. A theoretical model of ontology-based analysis of data model semantic correctness using an automated reasoning system is created, practically implemented, and confirmed by the conducted experimental research. A software application is developed that formalizes the data model and maps the ontology into the form of Prolog clauses. Reasoning rules are formed in first-order predicate logic and integrated with the data model and the domain ontology. The semantic correctness of the data model is then checked with queries within the Prolog system. Metrics of the ontological quality of the data model are also defined, based on the replies of the automated reasoning system.
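The thesis formalizes the data model and the ontology as Prolog clauses and checks correctness through queries. The sketch below expresses the same idea in plain Python rather than Prolog, with invented entity and concept names: facts on both sides, one toy rule, and one toy quality metric.

```python
# Facts: entities of a (hypothetical) data model and concepts of a domain ontology.
data_model_entities = {"Customer", "Invoice", "InvoiceLine", "Warehouse"}
ontology_concepts = {"Customer", "Invoice", "InvoiceLine", "Product"}

# Rule: an entity is semantically suspect if it matches no ontology concept.
def unmatched_entities(entities: set[str], concepts: set[str]) -> set[str]:
    return {e for e in entities if e not in concepts}

# A toy "ontological quality" metric: share of entities covered by the ontology.
def ontological_coverage(entities: set[str], concepts: set[str]) -> float:
    return 1.0 - len(unmatched_entities(entities, concepts)) / len(entities)

print(unmatched_entities(data_model_entities, ontology_concepts))   # {'Warehouse'}
print(ontological_coverage(data_model_entities, ontology_concepts))  # 0.75
```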
85

Analýza a návrh informačního systému pro firmu eSports.cz, s.r.o. / The Design of Information System for Company eSport.cz, s.r.o.

Kobelka, Michal January 2015 (has links)
The master's thesis deals with the analysis and design of an information system for the company eSports.cz, s.r.o. The first chapter presents the theoretical basis necessary for understanding the problem. The second chapter assesses the current situation of eSports.cz, s.r.o. and analyzes the main processes of the company. The last chapter provides a proposal for the data model of the new information system.
86

Database forensics : Investigating compromised database management systems

Beyers, Hector Quintus January 2013 (has links)
The use of databases has become an integral part of modern life, and the data they contain often has substantial value to enterprises and individuals. As databases become a greater part of people's daily lives, they become increasingly interlinked with human behaviour, which may include criminal activity, negligence and malicious intent. In such scenarios a forensic investigation is required to collect evidence, determine what happened at a crime scene and establish who is responsible. A large amount of the available research focuses on digital forensics, database security and databases in general, but little research exists on database forensics as such. It is difficult for a forensic investigator to conduct an investigation on a DBMS owing to the limited information on the subject and the absence of a standard approach to follow; investigators therefore have to consult disparate sources of information on database forensics in order to compile their own approach to investigating a database. A consequence of this lack of research is that compromised DBMSs (DBMSs that have been attacked and therefore behave abnormally) are neither considered nor well understood in the database forensics field. The concept of a compromised DBMS was illustrated in an article by Olivier, who suggested that the ANSI/SPARC model can be used to assist in a forensic investigation on a compromised DBMS. Based on the ANSI/SPARC model, the DBMS is divided into four layers: the data model, the data dictionary, the application schema and the application data. The first three, extensional, layers can influence the application data layer and ultimately manipulate the results it produces. It therefore becomes problematic to conduct a forensic investigation on a DBMS if the integrity of the extensional layers is in question, because the results on the application data layer cannot then be trusted. To restore the integrity of a layer, a clean (newly installed) layer could be used, but clean layers are not always easy or even possible to configure, depending on the forensic scenario. A combination of clean and existing layers can therefore be used to conduct a forensic investigation on a DBMS.

PROBLEM STATEMENT: The problem addressed is how to construct the appropriate combination of clean and existing layers for a forensic investigation on a compromised DBMS, and how to ensure the integrity of the forensic results.

APPROACH: The study divides the relational DBMS into four abstract layers, illustrates how each layer can be prepared in either a found or a clean forensic state, and experimentally combines the prepared layers according to the forensic scenario. It begins with background on databases, digital forensics and database forensics to give the reader an overview of the existing literature in these fields. It then discusses the four abstract layers of the DBMS and explains how they can influence one another. The clean and found environments are introduced because the DBMS differs from technologies in which digital forensics has already been researched. Each extensional abstract layer is then discussed individually, including how and why it can be converted to a clean or found state. This layer-by-layer discussion is needed to understand how each layer is unique, how it can be corrupted in various ways, and how the layers can be combined so that a forensic investigator can investigate a compromised DBMS; each layer is therefore studied individually in a forensic context before all four layers are considered collectively. A forensic study is conducted on each abstract layer that has the potential to influence other layers and deliver incorrect results. Ultimately, the DBMS is used as a forensic tool to extract evidence from its own encrypted data and data structures, and the last chapter illustrates how a forensic investigator can prepare a trustworthy environment in which an entire PostgreSQL DBMS can be investigated by constructing the appropriate combination of forensic states of the abstract layers.

RESULTS: The study yields an empirically demonstrated approach for dealing with a compromised DBMS during a forensic investigation by using a combination of various states of the abstract layers. Approaches are suggested for handling a forensic query on the data model, data dictionary and application schema layers, and a forensic process is proposed for preparing the DBMS so that evidence can be extracted from it. The study also advises forensic investigators to consider alternative ways in which a DBMS could be attacked, possibilities that may not have been considered in investigations on DBMSs to date. The methods have been tested on a practical example and have delivered promising results. / Dissertation (MEng)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
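The central idea, assigning each abstract layer either a clean or a found state and choosing a combination that fits the forensic scenario, can be sketched briefly in Python. The selection rule below is invented for illustration and is not the procedure proposed in the dissertation.

```python
from itertools import product

# The four abstract DBMS layers, each prepared into a "clean" (freshly installed)
# or "found" (as-encountered) state; a forensic scenario selects one combination.
LAYERS = ("data_model", "data_dictionary", "application_schema", "application_data")
STATES = ("clean", "found")

def all_combinations():
    """Every possible assignment of a state to each layer (2**4 = 16)."""
    return [dict(zip(LAYERS, combo)) for combo in product(STATES, repeat=len(LAYERS))]

def choose_combination(suspect_layers: set[str]) -> dict[str, str]:
    """Toy rule: rebuild suspected layers as clean, keep the rest as found.
    Application data is always kept as found, since it is the evidence."""
    combo = {layer: ("clean" if layer in suspect_layers else "found") for layer in LAYERS}
    combo["application_data"] = "found"
    return combo

# Scenario: the data dictionary is suspected of being compromised.
print(choose_combination({"data_dictionary"}))
```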
87

Metody analýzy longitudinálních dat / Methods of longitudinal data analysis

Jindrová, Linda January 2015 (has links)
The thesis deals with longitudinal data, i.e. measurements that are taken repeatedly on the same subjects. It describes various types of models suitable for their analysis, proceeding from the simplest linear models with fixed or random effects, through linear and nonlinear mixed-effects models, to generalized linear models and generalized estimating equations (GEE). For each model, its form and the method of parameter estimation are presented, and the individual models are compared with one another. The theoretical results are complemented by applications to real data: linear models are used to analyze data on production in the USA, nonlinear models to explain the dependence of a drug's blood concentration on time, and GEE are applied to data on respiratory problems in children.
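As an illustration of the last model family mentioned, a GEE for repeated binary measurements (in the spirit of the respiratory-problems example) can be fitted with statsmodels roughly as follows; the data here are simulated and the column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated repeated binary measurements: n_visits observations per child.
rng = np.random.default_rng(0)
n_children, n_visits = 100, 4
df = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_visits),
    "visit": np.tile(np.arange(n_visits), n_children),
    "age": np.repeat(rng.integers(6, 12, n_children), n_visits),
})
df["wheeze"] = rng.binomial(1, 0.3, len(df))  # binary outcome

# GEE with an exchangeable working correlation within each child.
model = sm.GEE.from_formula(
    "wheeze ~ age + visit",
    groups="child",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```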
88

Proposal and Evaluation of a Database Data Model Decision Method / Förslag och utvärdering av en beslutsmetod för databasmodeller

Hauzenberger, Sabina, Lindholm Brandt, Emil January 2020 (has links)
A common problem when choosing a data model for a database is that there are many aspects to take into consideration, which makes the decision difficult and time-consuming. This work therefore aims to create a decision method that makes the choice quicker and better suited to the use case at hand. First, the Analytical Hierarchy Process, a multi-criteria decision method, was identified as a suitable framework on which the created decision method was based. The method was developed iteratively and then validated through a survey at Omegapoint. The survey had 27 respondents; 14 answers were discarded as too unreliable, leaving 13 usable responses. To simplify the survey process, the decision method was implemented in a web application that the respondents used before answering follow-up questions about its result and process. The results show that it is possible to create a decision method that makes the choice of a data model quicker and better suited to the use case. The method appears reliable within the survey sample, as 11 of the 13 respondents found its result reasonable; the small sample size, however, makes it impossible to draw statistical conclusions about the method's reliability in general. Likewise, the method helps make the decision quicker, but this is only demonstrated among the survey respondents. Future work could therefore repeat the validation with a larger group of participants in order to establish the reliability of the decision method statistically.
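The core of the Analytical Hierarchy Process on which the method is built can be sketched in a few lines of Python: derive criteria weights from a pairwise-comparison matrix via its principal eigenvector and check the consistency ratio. The criteria and judgments below are invented for illustration and are not those used in the thesis or its web application.

```python
import numpy as np

criteria = ["query flexibility", "scalability", "consistency", "developer familiarity"]

# Saaty-scale pairwise comparisons: A[i, j] = how much more important i is than j.
A = np.array([
    [1.0, 3.0, 0.5, 2.0],
    [1/3, 1.0, 1/4, 1.0],
    [2.0, 4.0, 1.0, 3.0],
    [0.5, 1.0, 1/3, 1.0],
])

# Criteria weights: normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio (Saaty's random index for n=4 is 0.90).
n = A.shape[0]
ci = (eigvals.real[principal] - n) / (n - 1)
cr = ci / 0.90

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}")  # below about 0.10 is conventionally acceptable
```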
89

Computer Model Emulation and Calibration using Deep Learning

Bhatnagar, Saumya January 2022 (has links)
No description available.
90

XML在地理資訊系統空間資料表達上的應用 / The Application of XML in the Representation of GIS Spatial Data

張家坤 Unknown Date (has links)
With the development of the Internet, traditional GIS has evolved from single-user systems toward Web GIS, but the file formats for geographic spatial data remain varied, with no simple and effective standard in common use. The result is a waste of financial and human resources, and it is inconvenient and difficult to integrate geographic spatial data across different GIS and organizations. Traditional electronic maps use bitmapped formats, whose large files often delay transmission and whose quality is poor at high resolution; moreover, they require specialized tools for editing. The traditional approach in GIS therefore needs to be improved. The Scalable Vector Graphics (SVG) format is a new XML grammar for defining vector-based 2D graphics for the Web and other applications. SVG was created by the World Wide Web Consortium (W3C), the non-profit, industry-wide, open-standards consortium; over twenty organizations, including Sun Microsystems, Adobe, Apple, IBM, and Kodak, have been involved in defining it. SVG is an application of XML, which is rapidly becoming the foundation of modern Web applications. Although SVG was still at the Candidate Recommendation stage at the time of writing, its features and advantages had already won the support of major companies. This motivated our investigation: we hope to use the SVG format to solve the problems GIS faces with geographic spatial data. The prototype system built in this thesis shows that SVG is not only a feasible spatial data format for the next generation of Web GIS, but also overcomes many of the bottlenecks and drawbacks of the traditional approach.
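A tiny Python sketch of the underlying idea, emitting a map feature as an SVG vector element rather than a raster image, is shown below; the coordinates and styling are invented for illustration.

```python
# Render a single polygon feature (e.g. a district boundary) as an SVG document.
def polygon_to_svg(coords, width=400, height=300, fill="#cce5cc", stroke="#336633"):
    points = " ".join(f"{x},{y}" for x, y in coords)
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">\n'
        f'  <polygon points="{points}" fill="{fill}" stroke="{stroke}" />\n'
        f'</svg>\n'
    )

district = [(50, 40), (320, 60), (300, 240), (120, 260), (60, 150)]
with open("district.svg", "w", encoding="utf-8") as f:
    f.write(polygon_to_svg(district))
```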
