141
Geographical Information Technologies – Decision Support for Road Maintenance in Uganda. Kayondo-Ndandiko, Lydia Mazzi, January 2012.
This study set out to develop a framework within which the use of Geographical Information Technologies (GITs) can be enhanced in Road Infrastructure Maintenance (RIM) in Uganda. Specifically, it was guided by three objectives: 1. to assess the gaps in the use of GITs for RIM in Uganda and the limitations to accessing these technologies; 2. to develop a methodological framework to enhance the use of GITs in RIM; and 3. to develop a Geographical Information Systems for Transportation (GIS-T) data model based on the road maintenance data requirements. A participatory approach, through a series of interviews, focus group discussions, workshops and conferences, document reviews, field observations and measurements, and GIS analysis, was employed. Based on the Spatial Data Infrastructure (SDI) concept and the principle of causality, the gaps and limitations were established to be mainly concerned with data and organisational constraints as opposed to technical issues. They were classified to include: inadequate involvement of GITs in organisational activities, inappropriate institutional arrangements, absence of data sharing frameworks, budget constraints, insufficient geospatial capacity, a digital divide in the perception, adoption and affordability of GITs among the stakeholders, and the absence of a road maintenance SDI. A methodological framework comprising six strategic components was developed to enhance the use of GITs in RIM. This included enactment of relevant policy components to guide GIT use, continuous capacity building, establishment of a road maintenance SDI, fostering collaboration and spatial data sharing frameworks, budgetary allocation based on defined activities inclusive of GIT initiatives, and adoption of a dynamic segmentation data model. Conceptual and logical data models were developed and proposed for the sector. The conceptual model, presented using an entity relationship diagram, relates the road network to the point and line events occurring on it. The logical object-relational model, developed using the ESRI-provided template, represents the road and the point and line events in a total of 19 object classes. The study concludes that, in order to ground GIT benefits in the sector, technical, data and organisational concerns involved in GIT undertakings should be accorded equal emphasis. Institutionalisation and diffusion of GITs, as aspects of the component strategies, are regarded as capacity-building mechanisms earmarked to boost success in GIT initiatives. Further research on diffusion and funding models for GIT initiatives is recommended. It is suggested that aspects of the proposed model be considered when establishing GIT standards for the sector. The RIM sector is encouraged to embrace science and technology, to participate in research and development, and particularly to adopt a culture of innovation, considering the ready availability of off-the-shelf equipment, freeware and open source software that can foster informed decision making.
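The dynamic segmentation idea mentioned in this abstract can be illustrated with a minimal sketch. The class and attribute names below are hypothetical and are not taken from the thesis's 19-class model; they only show how point and line events are referenced to a route by measure values rather than by separate geometries.

```python
from dataclasses import dataclass

@dataclass
class PointEvent:
    """An event at a single measure along a route, e.g. a pothole or culvert (hypothetical)."""
    route_id: str
    measure_km: float
    attribute: str

@dataclass
class LineEvent:
    """An event spanning an interval of a route, e.g. pavement condition (hypothetical)."""
    route_id: str
    from_km: float
    to_km: float
    attribute: str

def events_on_section(events, route_id, from_km, to_km):
    """Return events falling on a maintenance section, without re-segmenting the road geometry."""
    hits = []
    for e in events:
        if e.route_id != route_id:
            continue
        if isinstance(e, PointEvent) and from_km <= e.measure_km <= to_km:
            hits.append(e)
        elif isinstance(e, LineEvent) and e.from_km <= to_km and e.to_km >= from_km:
            hits.append(e)
    return hits

# Example: pothole and surface-condition records referenced to one route
events = [
    PointEvent("R001", 12.3, "pothole"),
    LineEvent("R001", 10.0, 15.0, "gravel surface, fair"),
]
print(events_on_section(events, "R001", 11.0, 13.0))
```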
142
Modelovanje i implementacija sistema za podršku vrednovanju publikovanih naučno-istraživačkih rezultata / Modeling and implementation of system for evaluation of published research outputs. Nikolić, Siniša, 26 April 2016.
Cilj – Prvi cilj istraživanja je kreiranje modela podataka i implementacija informacionog sistema zasnovanog na modelu za potrebe vrednovanja publikovanih naučno-istraživačkih rezultata. Model bi bio primenjen u CRIS UNS informacionom sistemu, kao podrška sistemu vrednovanja.
Drugi cilj istraživanja je utvrđivanje u kojoj meri i na koji način se može automatizovati proces evaluacije koji se zasniva na različitim pravilima i pravilnicima.
Metodologija – Kako bi se definisalo proširenje CERIF modela neophodno je bilo identifikovati različite aspekte podataka koji su prisutni u evaluaciji naučno-istraživačkih publikacija. Stoga, zarad potreba istraživanja, odabrana su i analizirana su dokumenta koja predstavljaju različite nacionalne pravilnike, okvire i smernice za evaluaciju.
Za modelovanje specifikacije arhitekture sistema za vrednovanje korišćeni su CASE alati koji su bazirani na objektno-orijentisanoj metodologiji (UML 2.0). Za implementaciju proširenja CERIF modela u CRIS UNS sistemu korišćena je Java platforma i tehnologije koji olakšavaju kreiranje veb aplikacija kao što su AJAX, RichFaces, JSF itd. Pored navedene opšte metodologije za razvoj softverskih sistema korišćeni su primeri dobre prakse u razvoju informacionih sistema. To se pre svega odnosi na principe korišćene u razvoju institucionalnih repozitorijuma, bibliotečkih informacionih sistema, informacionih sistema naučno-istraživačke delatnosti, CRIS sistema, sistema koji omogućuju evaluaciju podataka itd.
Ekspertski sistem koji bi podržao automatizaciju procesa evaluacije po različitim pravilnicima odabran je na osnovu analize postojećih rešenja za sisteme bazirane na pravilima i pregleda naučne literature.
Rezultati – Analizom nacionalnih pravilnika i smernica dobijen je skup podataka na osnovu kojeg je moguće evaluirati publikovane rezultate po odabranim pravilnicima.
Razvijen je model podataka kojim se predstavljaju svi podaci koji učestvuju u procesu evaluacije i koji je kompatibilan sa CERIF modelom podataka.
Predloženi model je moguće implementirati u CERIF kompatibilnim CRIS sistemima, što je potvrđeno implementacijom informacionog sistema za vrednovanje publikovanih naučno-istraživačkih rezultata u okviru CRIS UNS.
Ekspertski sistem baziran na pravilima može biti iskorišćen za potrebe automatizacije procesa evaluacije, što je potvrđeno predstavom i implementacijom SRB pravilnika u Jess sistemu baziranom na pravilima.
Praktična primena – Zaključci proizašli iz analize pravilnika (npr. poređenje sistema i definisanje metapodataka za vrednovanje) se mogu primeniti pri definisanju modela podataka za CERIF sisteme i za sisteme koji nisu CERIF orijentisani.
Sistem za podršku vrednovanju publikovanih naučno-istraživačkih rezultata je implementiran kao deo CRIS UNS sistema koji se koristi na Univerzitetu u Novom Sadu čime je obezbeđeno vrednovanje publikovanih naučno-istraživačkih rezultata za različite potrebe (npr. promocije u naučna i istraživačka zvanja, dodele nagrada i materijalnih sredstava, finansiranje projekata, itd.), po različitim pravilnicima i komisijama.
Vrednost – Dati su metapodaci na osnovu kojih se vrši vrednovanje publikovanih rezultat istraživanja po raznim nacionalnim pravilnicima i smernicama. Dat je model podataka i proširenje CERIF modela podataka kojim se podržava vrednovanje rezultata istraživanja u CRIS sistemima. Posebna prednost pomenutih modela je nezavisnost istih od implementacije sistema za vrednovanje rezultata istraživanja.
Primena predloženog proširenje CERIF modela u CRIS sistemima praktično je pokazana u CRIS sistemu Univerziteta u Novom Sadu. Sistem za vrednovanje koji se bazira na proširenju CERIF modela pruža i potencijalnu interoperabilnost sa sistemima koji CERIF model podržavaju. Implementacijom informacionog sistema za vrednovanje, vrednovanje naučnih publikacija je postalo olakšano i transparentnije. Potvrda koncepata da se ekspertski sistemi bazirani na pravilima mogu koristiti za automatizaciju vrednovanja, otvara totalno novi okvir za implementaciju informacionih sistema za podršku vrednovanja postignutih rezultata istraživanja.
/
Aim – The first aim of the research was the creation of a data model and the implementation of an information system based on the proposed model for the purpose of evaluation of published research outputs. The model is applied in the CRIS UNS information system to support the system for evaluation.
The second objective was determination of the manner and extent in which the evaluation process that is based on different rules and different rulebooks could be automated.
Methodology – In order to define the extension of the CERIF model, it was necessary to identify the various aspects of data which are relevant in the evaluation of scientific research publications. Therefore, documents representing different national regulations, frameworks and guidelines for evaluation were selected and analyzed.
For the modeling of the system architecture, CASE tools were used, which are based on object-oriented methodology (UML 2.0). To implement the extension of the CERIF model within the CRIS UNS system, the Java platform and technologies that facilitate the creation of web applications, such as AJAX and RichFaces, were used. In addition to this general methodology for the development of software systems, best-practice examples from information systems development were also used. This primarily refers to the principles used in the development of institutional repositories, library information systems, information systems of the scientific-research domain, CRIS systems, systems that enable evaluation of data, etc.
The expert system that supports automation of the evaluation process by different rulebooks was selected based on an analysis of the existing solutions for rule-based systems and an examination of the scientific literature.
Results – By analysis of the national rulebooks and guidelines, a pool of data was gathered, which served as a basis for evaluation of published results by any analyzed rulebook.
A data model was developed by which all data involved in the evaluation process can be represented. The proposed model is CERIF compatible.
The proposed model can be implemented in CERIF-compatible CRIS systems, which was confirmed by the implementation of an information system for evaluation of published scientific research results in CRIS UNS.
An expert system based on rules can be used for the automation of the evaluation process, which was confirmed by the representation and implementation of the Serbian rulebook in the Jess rule-based system.
Practical application – The conclusions arising from the analysis of rulebooks (e.g. comparison of systems and defining metadata for evaluation) can be applied in defining the data model for CERIF systems and for systems that are not CERIF oriented.
The system for support of evaluation of published scientific research results was implemented as part of the CRIS UNS system used at the University of Novi Sad, thus providing evaluation of published scientific research results for different purposes (e.g. promotion to scientific and research titles, assignment of awards and material resources, financing of projects, etc.), according to different rulebooks and commissions.
Value – Metadata is provided on the basis of which the evaluation of published research results by various national rulebooks and guidelines is conducted. A data model and an extension of the CERIF data model that support the evaluation of research results within CRIS systems are given. A special advantage of these models is their independence from the implementation of the system for evaluation of research results. The application of the proposed extension of the CERIF model in CRIS systems is practically demonstrated in the CRIS system of the University of Novi Sad. The system that implements the extension of the CERIF model also provides potential interoperability with systems that support the CERIF model. After the implementation of the information system for evaluation, the evaluation of scientific publications becomes easier and more transparent. The confirmation of the concept that rule-based expert systems can be used in the automation of the evaluation process opens a whole new framework for the implementation of information systems for the evaluation of research results.
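The abstract reports that a rule-based expert system (Jess) was used to automate evaluation under different rulebooks. The sketch below is not the thesis implementation and does not use Jess syntax; it is a minimal Python illustration, with invented category codes and point values, of how rulebook-specific rules can map publication metadata to an evaluation category and score.

```python
# Hypothetical rulebook: the category codes and point values are invented for
# illustration and are not the actual values of the Serbian rulebook.
RULEBOOK = [
    # (predicate over publication metadata, assigned category, points)
    (lambda p: p["type"] == "journal-article" and p["journal_rank"] == "international-top", "M21", 8.0),
    (lambda p: p["type"] == "journal-article" and p["journal_rank"] == "international",     "M23", 3.0),
    (lambda p: p["type"] == "conference-paper" and p["conference_scope"] == "international", "M33", 1.0),
]

def evaluate(publication, rulebook):
    """Return the first matching (category, points) pair, or None if no rule fires."""
    for predicate, category, points in rulebook:
        if predicate(publication):
            return category, points
    return None

paper = {"type": "journal-article", "journal_rank": "international", "conference_scope": None}
print(evaluate(paper, RULEBOOK))   # -> ('M23', 3.0)
```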
143
Order-sensitive XML Query Processing Over Relational Sources. Murphy, Brian R., 05 May 2003.
XML is an emerging standard format for data on the Web as well as in business applications. In order to store and access this information in an efficient manner, database technology must be utilized. A relational database system, the most established and mature technology for query processing and storage, creates a strong foundation for such an XML data management system. However, while relational databases are based on SQL queries, the original user queries are written in XQuery, an XML query language. This XML query language has support for order-sensitive queries, as XML is an order-sensitive markup language. A major problem has been discovered with loading XML into a relational database: the lack of native SQL support for and management of order. While XQuery has order and positional support, SQL does not have the same support. For example, individuals viewing XML information about music albums would have a hard time querying for the first three songs of a track list from a relational backend. Mapping XML documents to relational backends also proves hard, as the data models (hierarchical elements versus flat tables) are so different. For these reasons, among others, the Rainbow System is being developed at WPI as a system that bridges XML data and relational data. This thesis in particular deals with the algebra operators that affect order, order-sensitive loading and mapping of XML documents, and the pushdown of order handling into SQL-capable query engines. The contributions of the thesis are the order-sensitive rewrite rules, new XML-to-relational mappings with different order styles, order-sensitive template-driven SQL generation, and a proposed metadata table for order-sensitive information. A system that implements these proposed techniques, with XQuery as the XML query language and Oracle as the backend relational storage system, has been developed. Experiments were created to measure execution time based on various factors. First, scalability of the system as the backend data set size grows is studied. Second, scalability of the system as the number of results returned from the database grows is studied. Finally, query execution times with different loading types are explored. The experimental results are encouraging. Query execution with the relational backend proves to be much faster than native execution within the Rainbow system. These results confirm the practical utility of our proposed order-sensitive XQuery execution solution over relational data.
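The order-handling problem described above (e.g. retrieving the first three songs of a track list from a relational backend) can be sketched by shredding the XML with an explicit sibling-position column. This is only an illustrative mapping, not the Rainbow system's actual schema or its template-driven SQL generation.

```python
import sqlite3
import xml.etree.ElementTree as ET

album_xml = """
<album title="Example Album">
  <song>Opening Track</song>
  <song>Second Song</song>
  <song>Third Song</song>
  <song>Closing Track</song>
</album>
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE song (album TEXT, sibling_pos INTEGER, title TEXT)")

# Shred the document: record each song's document order in an explicit column,
# since flat relational tables carry no inherent row order.
root = ET.fromstring(album_xml)
for pos, song in enumerate(root.findall("song"), start=1):
    conn.execute("INSERT INTO song VALUES (?, ?, ?)",
                 (root.get("title"), pos, song.text))

# An XQuery positional predicate such as /album/song[position() <= 3]
# becomes an ORDER BY on the stored position plus a LIMIT.
rows = conn.execute(
    "SELECT title FROM song WHERE album = ? ORDER BY sibling_pos LIMIT 3",
    ("Example Album",)).fetchall()
print(rows)   # first three songs in document order
```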
144
Assimilation de données et inversion bathymétrique pour la modélisation de l'évolution des plages sableuses. Birrien, Florent, 14 May 2013.
Cette thèse présente une plateforme d'assimilation de données issues de l'imagerie vidéo et intégrée au modèle numérique d'évolution de profil de plage 1DBEACH. Le manque de jeux de données bathymétriques haute-fréquence est un des problèmes récurrents pour la modélisation morphodynamique littorale. Pourtant, des relevés topographiques réguliers sont nécessaires non seulement pour la validation de nos modèles hydro-sédimentaires mais aussi dans une perspective de prévision d'évolution morphologique de nos plages sableuses et d'évolution de la dynamique des courants de baïnes en temps réel. Les récents progrès dans le domaine de l'imagerie vidéo littorale ont permis d'envisager un moyen de suivi morphologique quasi-quotidien et bien moins coûteux que les traditionnelles campagnes de mesure. En effet, les images dérivées de la vidéo de type timex ou timestack rendent possible l'extraction de proxys bathymétriques qui permettent de caractériser et de reconstruire la morphologie de plage sous-jacente. Cependant, ces méthodes d'inversion bathymétrique directes sont limitées au cas linéaire et nécessitent, selon les conditions hydrodynamiques ambiantes, l'acquisition de données vidéo sur plusieurs heures voire plusieurs jours pour caractériser un état de plage. En réponse à ces différents points bloquants, ces travaux de thèse proposaient l'implémentation puis la validation de méthodes d'inversion bathymétrique basées sur l'assimilation dans notre modèle de différentes sources d'observations vidéo disponibles et complémentaires. A partir d'informations hétérogènes et non redondantes, ces méthodes permettent la reconstruction rapide et précise d'une morphologie de plage dans son intégralité pour ainsi bénéficier de relevés bathymétriques haute fréquence réguliers. / This thesis presents data-model assimilation techniques using video-derived beach information to improve the modelling of beach profile evolution. The acquisition of accurate and recurrent nearshore bathymetric data is a difficult and challenging task, which limits our understanding of nearshore morphological changes. This is particularly true in the surf zone, which exhibits the largest degree of morphological variability. In addition, surf-zone bathymetric data are crucial from many perspectives, such as numerical model validation, operational rip current prediction or real-time nearshore evolution modelling. In parallel, video imagery recently arose as a low-cost alternative to direct measurement for monitoring beach morphological changes on a daily basis. Indeed, bathymetry proxies can be extracted from video-derived images such as timex or timestacks. These data can then be used to estimate the underlying beach morphologies. However, simple linear depth inversion techniques still suffer from some restrictions and require up to a 3-day dataset to completely characterize a given beach morphology. As an alternative, this thesis presents and validates data-assimilation methods that combine multiple sources of available video-derived bathymetry proxies to provide a rapid, complete and accurate estimation of the underlying bathymetry while avoiding the need for excessively long acquisition periods.
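As background to the "simple linear depth inversion techniques" mentioned in the abstract, such methods commonly rest on linear wave theory; the notation below is generic textbook notation, not copied from the thesis.

```latex
% Linear dispersion relation of surface gravity waves: the angular frequency \omega and
% wavenumber k, both estimable from timestack imagery, constrain the local depth h.
\[
  \omega^{2} = g\,k\,\tanh(kh)
  \quad\Longrightarrow\quad
  h = \frac{1}{k}\,\operatorname{artanh}\!\left(\frac{\omega^{2}}{gk}\right)
\]
% In the shallow-water limit (kh \ll 1) the phase speed reduces to
\[
  c = \frac{\omega}{k} \approx \sqrt{gh},
\]
% which is the basis of the simplest (linear) video-based depth estimates.
```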
145
The Sea of Stuff: a model to manage shared mutable data in a distributed environment. Conte, Simone Ivan, January 2019.
Managing data is one of the main challenges in distributed systems and computer science in general. Data is created, shared, and managed across heterogeneous distributed systems of users, services, applications, and devices without a clear and comprehensive data model. This technological fragmentation and lack of a common data model result in a poor understanding of what data is, how it evolves over time, how it should be managed in a distributed system, and how it should be protected and shared. From a user perspective, for example, backing up data over multiple devices is a hard and error-prone process, or synchronising data with a cloud storage service can result in conflicts and unpredictable behaviours. This thesis identifies three challenges in data management: (1) how to extend the current data abstractions so that content, for example, is accessible irrespective of its location, versionable, and easy to distribute; (2) how to enable transparent data storage relative to locations, users, applications, and services; and (3) how to allow data owners to protect data against malicious users and automatically control content over a distributed system. These challenges are studied in detail in relation to the current state of the art and addressed throughout the rest of the thesis. The artefact of this work is the Sea of Stuff (SOS), a generic data model of immutable self-describing location-independent entities that allow the construction of a distributed system where data is accessible and organised irrespective of its location, easy to protect, and can be automatically managed according to a set of user-defined rules. The evaluation of this thesis demonstrates the viability of the SOS model for managing data in a distributed system and using user-defined rules to automatically manage data across multiple nodes.
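A minimal sketch of the core idea of immutable, self-describing, location-independent entities follows; the field names and hashing choice are assumptions for illustration and are not the SOS model's actual classes.

```python
import hashlib
import json
from typing import Optional

def make_entity(content: bytes, previous_guid: Optional[str] = None) -> dict:
    """Create an immutable, self-describing manifest whose identity derives from
    its content rather than from any storage location (hypothetical field names)."""
    content_guid = hashlib.sha256(content).hexdigest()
    manifest = {
        "type": "version",
        "content": content_guid,     # location-independent reference to the bytes
        "previous": previous_guid,   # optional link to the prior version
    }
    manifest_guid = hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()).hexdigest()
    return {"guid": manifest_guid, **manifest}

v1 = make_entity(b"notes, first draft")
v2 = make_entity(b"notes, second draft", previous_guid=v1["guid"])
# Any node holding the same bytes derives the same identifiers, so the entity can
# be located, verified and de-duplicated irrespective of where it is stored.
print(v2["previous"] == v1["guid"])   # True: versions form an explicit history
```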
146
開放型XML資料庫績效評估工作量之模型 / An Open Workload Model for XML Database Benchmark. 尤靖雅, Unknown Date.
XML (eXtensible Markup Language)是今日新興在網路上所使用的延伸性的標記語言。它具有豐富的語意表達及與展現方式獨立的資料獨立性。由於這些特性,使得XML成為新的資料交換標準並且其應用在資料庫中產生了許多新的研究議題在資料儲存和查詢處理上。在本篇研究中,將研究XML資料庫中的績效評估的議題並且發展一個可適用於不同應用領域及各種平台上的使用者導向且開放型工作量模式以評估XML資料庫績效。此XML開放型的工作量模型包含三個子模型─XML資料模型、查詢模型以及控制模型。XML資料模型將模式化XML文件中階層式的結構概念;查詢模型包含了一連串查詢模組以供測試XML資料庫的處理查詢能力以及一個開放型的查詢輸入介面以供使用者依照需求設定所需的測試查詢;控制模型中定義了一連串變數以供設定績效評估系統中的執行環境。我們發展此系統化且具開放型的工作量方法可以幫助各種不同應用領域的使用者預測及展現XML資料庫系統的績效。 / XML (eXtensible Markup Language) is the emerging data format for data processing on the Internet. XML provides rich data semantics and data independence from the presentation. Thanks to these features, XML has become a new data exchange standard and raises new storage and query processing issues for the database research community. In this paper, the performance evaluation issues of XML databases are studied and a generic, requirement-driven XML workload model that is applicable to any application scenario and portable across various platforms is developed. There are three sub-models in this generic workload model: the XML data model, the query model, and the control model. The XML data model formulates the generic hierarchical structure of XML documents and supports a flexible document structure for the test database. The XML query model contains a flexible classical query module selector and an open query input to define the requirement-driven test query model to challenge the XML query processing ability of the XML database. The control model defines variables that are used to set up the implementation of a benchmark. This open, flexible, and systematic workload method permits users in various application domains to predict or profile the performance of XML database systems.
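A compact sketch of how the three sub-models could be expressed as a requirement-driven benchmark configuration follows; the parameter names and values are illustrative assumptions, not the model's actual specification.

```python
# Hypothetical benchmark configuration built from the three sub-models.
workload = {
    # XML data model: shape of the generated test documents
    "data_model": {
        "document_count": 10_000,
        "max_tree_depth": 6,
        "mean_children_per_element": 4,
    },
    # Query model: classical query modules plus user-defined test queries
    "query_model": {
        "modules": ["exact-match", "ordered-access", "join", "aggregation"],
        "user_queries": ["//album/song[position() <= 3]"],
    },
    # Control model: variables that fix the benchmark run itself
    "control_model": {
        "concurrent_clients": 8,
        "warmup_runs": 3,
        "measured_runs": 10,
    },
}

def describe(cfg: dict) -> None:
    """Print a short summary of the configured run."""
    for part, params in cfg.items():
        print(part, "->", params)

describe(workload)
```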
147
人民幣國際化程度與前景的實證分析 / Empirical study on the degree and prospect of renminbi internationalization. 王國臣 (Wang, Guo Chen), Unknown Date.
人民幣是否可能成為另一個重要的國際貨幣,甚至挑戰美元的國際地位?此即本論文的問題意識。對此,本論文進一步提出三個研究問題:一是如何測量當前的人民幣國際化程度?二是如何測量當前的人民幣資本開放程度?三是資本開放對於人民幣國際化程度的影響為何?
為此,本研究利用主成分分析(PCA),以建構人民幣國際化程度(CIDI)與人民幣資本帳開放程度(CAOI)。其次再利用動態追蹤資料模型──系統一般動差估計法(SGMM),以檢證各項人民幣綜合競爭力對於貨幣國際化程度的影響。最後,本研究進一步梳理人民幣資本帳開放的進程,並結合上述所有實證分析的結果,進而預估漸進資本開放下人民幣國際化的前景。研究對象包括人民幣在內的33種國際貨幣,研究時間則起自1999年歐元成立,迄於2009年。
本論文的發現三:一是,當前人民幣國際化程度進展相當快速。但截至2009年年底,人民幣國際化程度還很低,遠落後於美元、歐元、日圓,以及英鎊等主要國際貨幣。不僅如此,人民幣國際化程度也遜於俄羅斯盧布、巴西里拉,以及印度盧比等開發中國家所發行的貨幣。
二是,過去10年來,人民幣資本帳開放程度不升反降,截至2009年年底,人民幣的資本帳開放程度維持在零,這表示:人民幣是世界上管制最為嚴格的貨幣。相對而言,美元、歐元、日圓,以及英鎊的資本帳開放程度至少都在70%以上,特別是英鎊的資本帳開放程度更趨近於完全開放。
三是,根據SGMM的實證結果顯示,網路外部性、經濟規模、金融市場規模、貨幣穩定度,以及資本開放程度都是影響貨幣國際化程度的關鍵因素。在此基礎上,本研究利用發生機率(odds ratio),以計算不同資本開放情境下,人民幣成為前10大國際貨幣的可能性。結果顯示,如果人民幣的資本帳開放到73%左右,人民幣便可擠進前10大國際貨幣(發生機率為65.6%)。
不過,這只是最為保守的估計。原因有二:一是,隨者中國經濟實力的崛起,以及人民幣預期升值的脈絡下,國際市場對於人民幣的需求原本就很高。此時,人民幣資本帳如果能適時開放,則人民幣的國際持有將大幅增加。換言之,本研究沒有考量到,各貨幣競爭力因素與資本開放程度之間的加乘效果。
二是,資本開放不僅直接對貨幣國際化程度產生影響,也會透過擴大金融市場規模與網路外部性等其他貨幣競爭力因素,間接對貨幣國際化程度造成影響。這間接效果,本研究也沒有考量到。因此,可以預期的是,只要人民幣資本帳能夠漸進開放,人民幣國際化的前景將比本研究所預估的高出許多。 / This paper discusses whether the Renminbi (RMB) will become an international currency, even challenging to the U.S. dollar. In order to examine above question, this paper take the following three steps:
1. By using principal component analyses (PCA), this paper constructs two indices: currency internationalization degree index (CIDI) and capital account liberalization degree index (CAOI);
2. By using dynamic panel data model-system generalized method of moment (SGMM), this paper analyzes factors affect the CIDI, including economic and trade size, financial system, network externalities, confidence in the currency’s value, and CAOI;
3. According to the PCA and SGMM results, this paper calculates the odds ratio of RMB becoming important international currency.
The reserch achieved the following results. First, the degree of internationalization of the RMB progress very fast, but the RMB CIDI is still very low, its CIDI far behinds the dollar, euro, Japanese yen, and pounds.
Second, over the past 10 years, RMB CAOI is not increased but decreased. Its CAOI is at zero in 2009, this means that: the RMB is the most stringent controls in the world currency. In contrast, U.S. dollars, euros, yen, and pound CAOI are at least in more than 70%.
Third, according to the SGMM results, economic size, financial system, network externalities, confidence in the currency’s value, and CAOI are key factors affect the CIDI. Based on this output, this paper forecasted that if the RMB CAOI is open to about 73%, RMB could be squeezed into the top 10 of the international currency. (The odds ratio is 65.6%)
It is noteworthy that this is only the lowest estimates. This is because that this paper did not consider the interaction effects of each currency competitiveness factors and CAOI. Therefore, if RMB CAOI continues open, the prospect of RMB CIDI is much higher than estimated by this paper.
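A minimal sketch of constructing a composite currency internationalization index with PCA follows; the indicator names and the toy figures are assumptions for illustration and are not the thesis data or its actual index values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Toy indicators per currency (rows): share in reserves, FX turnover, trade invoicing.
# The figures are made up purely to show the mechanics of the index construction.
indicators = np.array([
    [62.0, 44.0, 40.0],   # USD
    [20.0, 16.0, 30.0],   # EUR
    [ 4.0,  8.0,  3.0],   # JPY
    [ 0.1,  0.5,  0.3],   # RMB
])

scores = PCA(n_components=1).fit_transform(
    StandardScaler().fit_transform(indicators)).ravel()

# PCA component signs are arbitrary; orient the score so that larger raw indicators
# mean a larger index, then rescale to a 0-100 range, analogous in spirit to a CIDI.
if np.corrcoef(scores, indicators.sum(axis=1))[0, 1] < 0:
    scores = -scores
index = 100 * (scores - scores.min()) / (scores.max() - scores.min())

for name, value in zip(["USD", "EUR", "JPY", "RMB"], index):
    print(f"{name}: {value:.1f}")
```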
148
Modelling the Cross-Country Trafficability with Geographical Information Systems. Gumos, Aleksander Karol, January 2005.
The main objectives of this work were to investigate Geographical Information Systems techniques for modelling cross-country trafficability. To accomplish the stated tasks, reciprocal relationships between the soil deposits, local hydrology, geology and geomorphology were studied in relation to the study area in South-Eastern Sweden.
Awareness is growing among today's GIS users of the importance of soil conditions changed by cross-country traffic. Therefore, in this thesis, the construction of the Soil Knowledge Database added to the original geological soil textural classes a new, modified geotechnical division with measurable factors desirable for off-road ground reasoning, such as soil permeability, capillarity or Atterberg's consistency limits.
The Digital Elevation Model, the driving force for the landscape studies in the thesis, was carefully examined together with the complementary datasets of the investigated area. Testing of the elevation data was done in association with the hydrological modelling, which resulted in the Wetness Index map. Three distinguishable soil wetness conditions (dry, moist and wet) were obtained and subsequently used to create the static ground conditions map, a visible medium of soil susceptibility to, for example, machine compaction.
The work resulted in a conceptual scheme for cross-country trafficability modelling, which was put into effect while modelling in GIS. As a final outcome, by combining all the processed data, the derivatives were incorporated and draped over a rendered, animated 3D scene. A visually aided simulation made it possible to consolidate theoretical, hypothetical and experimental outcomes into one coherent model of standardized factor maps for ground vehicle manoeuvrability, appraised under Multicriteria Evaluation techniques. Further steps of research were also proposed.
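The wetness-index step can be sketched as follows; the thesis does not reproduce its exact formulation here, so the sketch uses the common topographic wetness index TWI = ln(a / tan b), with assumed thresholds for the dry, moist and wet classes.

```python
import numpy as np

def wetness_index(specific_catchment_area, slope_radians):
    """Topographic wetness index TWI = ln(a / tan(beta)); small floors avoid division by zero."""
    a = np.maximum(specific_catchment_area, 0.01)
    tan_beta = np.maximum(np.tan(slope_radians), 0.001)
    return np.log(a / tan_beta)

def classify(twi, moist_threshold=6.0, wet_threshold=9.0):
    """Assumed thresholds splitting the index into dry / moist / wet ground classes."""
    classes = np.full(twi.shape, "dry", dtype=object)
    classes[twi >= moist_threshold] = "moist"
    classes[twi >= wet_threshold] = "wet"
    return classes

# Toy DEM derivatives: upslope contributing area per unit contour length (m) and slope (rad)
area = np.array([[5.0, 50.0], [500.0, 5000.0]])
slope = np.array([[0.20, 0.10], [0.05, 0.01]])
twi = wetness_index(area, slope)
print(twi.round(2))
print(classify(twi))
```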
149
Supporting flexible workflow processes with a progression model. Stavness, Nicole Ann, 02 March 2005.
Users require flexibility when interacting with information systems to contend with changing business processes and to support diverse workflows. Model-based user interface design can accommodate flexible business processes by integrating workflow modelling with other modelling approaches. We present a workflow model, the progression model, to help in developing systems that support flexible business processes.
The progression model tracks a user's interaction with an application as a set of data elements we refer to as a workflow transaction. The steps a user takes to create a workflow transaction and the state of the workflow transaction at each step are made explicit. By making the workflow status and workflow transaction state explicit, the user can change the order of the steps in a process, manage multiple workflow transactions, keep track of data as it is accumulated, and so on. The intent is to provide the user with a mechanism to deal with partial information, interrupted and concurrent workflow transaction entry, and the processing of multiple workflow transactions.
This thesis describes the progression model, an XML-compliant notation to specify the progression model, and a prototype system.
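A minimal sketch of recording a workflow transaction's steps and state, and emitting an XML-compliant form, follows; the element and attribute names are assumptions for illustration and are not the thesis's actual notation.

```python
import xml.etree.ElementTree as ET

class WorkflowTransaction:
    """Hypothetical progression-model record: the data gathered so far plus each step's state."""
    def __init__(self, name):
        self.name = name
        self.data = {}       # data elements accumulated across steps
        self.steps = []      # (step name, state) in the order the user performed them

    def record(self, step, state, **data):
        self.data.update(data)
        self.steps.append((step, state))

    def to_xml(self):
        root = ET.Element("transaction", name=self.name)
        for step, state in self.steps:
            ET.SubElement(root, "step", name=step, state=state)
        data_el = ET.SubElement(root, "data")
        for key, value in self.data.items():
            ET.SubElement(data_el, "element", name=key).text = str(value)
        return ET.tostring(root, encoding="unicode")

# Steps may be entered out of order, interrupted, and resumed; the explicit state keeps
# the partially completed transaction inspectable.
order = WorkflowTransaction("purchase-order")
order.record("select-customer", state="complete", customer="ACME Ltd")
order.record("enter-items", state="partial", item_count=2)
print(order.to_xml())
```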