  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

An XML-based Multidimensional Data Exchange Study / 以XML為基礎之多維度資料交換之研究

王容, Wang, Jung Unknown Date (has links)
Motivated by globalization and the speed of competition on the Internet, enterprises today often distribute their departments across different regions, or merge and ally with companies located elsewhere, to improve their competitiveness and responsiveness. As a result, a geographically distributed enterprise typically operates a number of data warehouse systems. To meet distributed decision-making requirements, the data in these different warehouses must be exchanged and integrated, so an open, vendor-independent, and efficient standard for transferring data between data warehouses over the Internet is an important need. Current solutions for cross-warehouse data exchange, however, rely either on record-by-record conversion or on transferring plain-text files, approaches that are neither systematic nor efficient. This research studies the issues of multidimensional data exchange and develops an XML-based Multidimensional Data Exchange Model. In addition, a generic-construct-based approach is proposed that introduces a single, consistent standard exchange format and enables systematic many-to-many mapping between distributed data warehouses.
Based on the transformation model developed between the multidimensional data model and the XML data model, and supported by the multidimensional metadata management functions proposed in this research, a general-purpose XML-based multidimensional data exchange process over the web can be carried out efficiently and with improved quality. Moreover, an XML-based prototype system for exchanging multidimensional data was developed; it shows that the proposed model is feasible and that metadata support makes the exchange process more systematic and efficient.
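As an illustration of the exchange-format idea, the sketch below serializes a few multidimensional facts (dimension members plus a measure) as XML with Python's standard library. The element and attribute names are invented for illustration; they are not the schema proposed in the thesis.

```python
import xml.etree.ElementTree as ET

def cube_to_xml(facts):
    """Serialize (dimension-members, measure) facts as an XML fragment.
    Element and attribute names here are illustrative placeholders."""
    root = ET.Element("cube")
    for dims, value in facts:
        fact = ET.SubElement(root, "fact")
        for name, member in dims.items():
            dim = ET.SubElement(fact, "dim", name=name)
            dim.text = member
        ET.SubElement(fact, "measure").text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = cube_to_xml([({"region": "North", "quarter": "Q1"}, 120.5)])
```

A standard textual format of this kind is what lets heterogeneous warehouses exchange slices of a cube without agreeing on each other's internal storage layout.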
132

都市蔓延與氣候暖化關係之研究-以台北都會區為例 / The Study of relationship between urban sprawl and climate warming - An example of Taipei metropolitan area

賴玫錡, Lai, Mei Chi Unknown Date (has links)
This study examines the relationship between urban sprawl and climate warming in the Taipei metropolitan area, testing empirically whether the sprawling pattern of urban development raises temperatures. Previous studies attribute Taiwan's rising temperatures to rapid industrial and commercial development, population growth, changes in building types, and surging traffic volume; many studies at home and abroad also find that urbanization correlates positively with temperature while green space correlates negatively. The empirical analysis uses GIS interpolation and spatial analysis together with a fixed-effects panel-data regression. The interpolation results show that the yearly average temperature in the Taipei metropolitan area rose about 1°C from 1996 to 2006, and about 2°C in some districts; the area of rising temperature has been expanding in a radial pattern that resembles the radial form of urban sprawl. The spatial analysis confirms that an area's temperature rises as its population grows, and that recent population growth is concentrated in peri-urban areas, consistent with both the temperature distribution and the radial sprawl pattern.
The regression results show a strong positive relationship between population and temperature and a negative relationship between farmland area and temperature, indicating that extensive green space can lower regional temperatures and mitigate warming; the government should therefore review current farmland policy and align it with environmental protection. The district fixed effects further suggest that large parks, green spaces, and river corridors noticeably lower local temperatures, while the time-trend fixed effects indicate that temperatures in the Taipei metropolitan area will keep rising over time. Urban planners should therefore understand the characteristics of each area and use the natural environment to moderate warming: provide more parks and green space, plant more vegetation and street trees, and design wind corridors between buildings. In this way the contribution of urban sprawl to climate warming can be reduced.
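The fixed-effects (within) panel estimator behind the regression analysis can be sketched in a few lines with numpy: demean the outcome and the regressor within each cross-sectional unit, then run OLS on the demeaned data. The panel below is synthetic, constructed so the population coefficient is exactly 0.5; it is not the Taipei data.

```python
import numpy as np

def within_estimator(y, x, groups):
    """One-regressor fixed-effects (within) estimator: demean y and x
    within each group, then run OLS on the demeaned data."""
    y = np.asarray(y, dtype=float).copy()
    x = np.asarray(x, dtype=float).copy()
    groups = np.asarray(groups)
    for g in np.unique(groups):
        m = groups == g
        y[m] -= y[m].mean()
        x[m] -= x[m].mean()
    return float(x @ y / (x @ x))

# Synthetic panel: two districts with different baseline temperatures,
# temperature = district effect + 0.5 * population (no noise)
pop = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
district = np.array([0, 0, 0, 1, 1, 1])
temp = np.where(district == 0, 10.0, 20.0) + 0.5 * pop
beta = within_estimator(temp, pop, district)  # recovers 0.5 exactly
```

The demeaning step is what absorbs each district's time-invariant characteristics, so the slope reflects only within-district variation.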
133

以企業流程模型導向實施資料庫重構之研究-以S公司為例 / The study of database reverse engineering based on business process module-with S company as an example

林于新, Lin, Yu-hsin Unknown Date (has links)
Information technology has been used to support organizational operations since the 1960s, but many enterprises, lacking an IT background, simply purchased software packages to run their businesses. Such packages rarely fit an organization's processes, and as the environment and technology change, the mismatch grows worse. From the viewpoint of database design, the packages' complex data structures, years of ad-hoc modification and data access, and disregard for relational-database principles lead to poor data quality. How to redesign a database to fit actual needs, and how to migrate data from the old system to the new one while improving its quality, are major challenges for enterprises. This research therefore designs a database-restructuring process that builds a customized database based on business processes and migrates the data from the software package into it. The process has three phases. In phase 1, database reverse engineering recovers the data semantics and model structure of the current database. In phase 2, combining the concepts of process models and data models, an ideal database grounded in the business processes is established.
In phase 3, ETL (extract, transform, load) and data-integration techniques move the data from the current database of the software package into the ideal database, completing the restructuring. The process was applied in a case company to analyze the design gaps between the current data model of the software package and the ideal process-based data model. The analysis found gaps in three areas: database structure design, data semantics, and normalization. When migrating data to resolve these gaps, a company must clarify ambiguous semantics in the source data, maintain the consistency and referential integrity of the destination data, and cleanse erroneous source data. In summary, legacy enterprise databases commonly suffer from complex structures, poor fit with business processes, and the lack of sound database-management mechanisms; the restructuring process designed in this research can give an enterprise an ideal, process-oriented database.
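The phase-3 extract-transform-load step can be sketched with Python's built-in sqlite3 module. The legacy and target schemas below are invented placeholders, not the case company's actual tables; the point is the shape of the pipeline, including the cleansing and constraint checks the analysis calls for.

```python
import sqlite3

# Two in-memory databases stand in for the legacy package database (source)
# and the redesigned, process-based database (target); schemas are invented.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE cust (id INTEGER, nm TEXT)")  # cryptic legacy names
src.executemany("INSERT INTO cust VALUES (?, ?)", [(1, " Alice "), (2, None)])

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, "
            "name TEXT NOT NULL)")

# Extract from the source
rows = src.execute("SELECT id, nm FROM cust").fetchall()
# Transform: clarify semantics, trim whitespace, drop rows that would
# violate the target's integrity constraints
clean = [(cid, name.strip()) for cid, name in rows if name is not None]
# Load into the target
dst.executemany("INSERT INTO customer VALUES (?, ?)", clean)
loaded = dst.execute("SELECT customer_id, name FROM customer").fetchall()
```

Even in this toy form, the transform step is where the three gap categories surface: renaming clarifies semantics, trimming cleans dirty source data, and the NOT NULL filter enforces target integrity.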
134

Essays on economic and econometric applications of Bayesian estimation and model comparison

Li, Guangjie January 2009 (has links)
This thesis consists of three chapters on economic and econometric applications of Bayesian parameter estimation and model comparison. The first two chapters study the incidental parameter problem, mainly under a linear autoregressive (AR) panel data model with fixed effects. The first chapter investigates the problem from a model comparison perspective. Its major finding is that consistency in parameter estimation and consistency in model selection are interrelated: the reparameterization of the fixed effect parameter proposed by Lancaster (2002) may not provide a valid solution to the incidental parameter problem if the wrong set of exogenous regressors is included. To estimate the model consistently and to measure its goodness of fit, the Bayes factor is found to be preferable for model comparison to the Bayesian information criterion based on the biased maximum likelihood estimates. When model uncertainty is substantial, Bayesian model averaging is recommended. The method is applied to study the relationship between financial development and economic growth. The second chapter proposes a correction function approach to solve the incidental parameter problem. It is discovered that the correction function exists for the linear AR panel model of order p when the model is stationary with strictly exogenous regressors. MCMC algorithms are developed for parameter estimation and to calculate the Bayes factor for model comparison. The last chapter studies how stock return predictability and model uncertainty affect a rational buy-and-hold investor's decision to allocate her wealth over different investment horizons in the UK market. The FTSE All-Share Index is treated as the risky asset, and the UK Treasury bill as the riskless asset, in forming the investor's portfolio. Bayesian methods are employed to identify the most powerful predictors while accounting for model uncertainty.
It is found that though stock return predictability is weak, it can still affect the investor's optimal portfolio decisions over different investment horizons.
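For readers unfamiliar with the Bayes factor used for model comparison above, here is a toy case with an exact closed form: comparing a point-null coin model against a uniform-prior alternative. This only illustrates the quantity itself; it is not the thesis's panel-model computation.

```python
from math import comb

def bayes_factor_coin(k, n):
    """Exact Bayes factor for a toy comparison: M0 fixes theta = 0.5,
    M1 puts a Uniform(0, 1) prior on theta. Under M1 the marginal
    likelihood of k successes in n trials is exactly 1 / (n + 1)."""
    m0 = comb(n, k) * 0.5 ** n   # marginal likelihood under the point null
    m1 = 1.0 / (n + 1)           # Beta-Binomial marginal under the uniform prior
    return m0 / m1

bf = bayes_factor_coin(k=6, n=10)  # about 2.26: data mildly favour M0
```

Unlike an information criterion evaluated at biased point estimates, the Bayes factor compares marginal likelihoods, integrating the parameter out under each model's prior.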
135

Modelovanje i implementacija sistema za podršku vrednovanju publikovanih naučno-istraživačkih rezultata / Modeling and implementation of system for evaluation of published research outputs

Nikolić Siniša 26 April 2016 (has links)
Aim – The first aim of the research was to create a data model and implement an information system, based on that model, for the evaluation of published research outputs. The model is applied in the CRIS UNS information system to support evaluation. The second objective was to determine how, and to what extent, an evaluation process based on different rules and rulebooks can be automated.
Methodology – To define the extension of the CERIF model, it was necessary to identify the various aspects of data relevant to the evaluation of scientific publications. Documents representing different national regulations, frameworks, and guidelines for evaluation were therefore selected and analyzed. The system architecture was modeled with CASE tools based on object-oriented methodology (UML 2.0). The extension of the CERIF model within CRIS UNS was implemented on the Java platform with technologies that facilitate web applications, such as AJAX, RichFaces, and JSF. Beyond this general software-development methodology, best practices from information-systems development were applied, primarily principles used in institutional repositories, library information systems, research information systems, CRIS systems, and systems for data evaluation. The expert system supporting automation of the evaluation process under different rulebooks was selected based on an analysis of existing rule-based systems and a review of the scientific literature.
Results – Analysis of the national rulebooks and guidelines yielded the set of data on which published results can be evaluated under any of the analyzed rulebooks. A data model was developed that represents all data involved in the evaluation process and is compatible with the CERIF data model. The proposed model can be implemented in CERIF-compatible CRIS systems, which was confirmed by implementing an information system for evaluating published research results within CRIS UNS. A rule-based expert system can be used to automate the evaluation process, which was confirmed by representing and implementing the Serbian rulebook in the Jess rule-based system.
Practical application – The conclusions drawn from the analysis of the rulebooks (e.g., the comparison of systems and the definition of evaluation metadata) can be applied when defining data models both for CERIF systems and for systems that are not CERIF-oriented. The evaluation-support system was implemented as part of the CRIS UNS system used at the University of Novi Sad, providing evaluation of published research results for various purposes (e.g., promotion to scientific and research titles, assignment of awards and material resources, project financing) under different rulebooks and commissions.
Value – Metadata is provided on the basis of which published research results are evaluated under various national rulebooks and guidelines. A data model and an extension of the CERIF data model are given that support the evaluation of research results within CRIS systems; a particular advantage of these models is their independence from any specific implementation of the evaluation system. The application of the proposed CERIF extension in CRIS systems is demonstrated practically in the CRIS system of the University of Novi Sad, and a system built on the extension also offers potential interoperability with other CERIF-compliant systems. With the implemented system, the evaluation of scientific publications has become easier and more transparent, and the confirmation that rule-based expert systems can automate evaluation opens a whole new framework for implementing information systems that support the evaluation of research results.
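The idea of encoding a rulebook as declarative rules, done in the thesis with the Jess rule engine, can be sketched in plain Python. The publication categories and point values below are invented placeholders, not the Serbian rulebook's actual categories or values.

```python
# Invented placeholder rules; the thesis encodes the real Serbian rulebook
# in the Jess rule engine, with far richer conditions than shown here.
RULES = [
    (lambda p: p["type"] == "journal" and p["rank"] == "M21", 8.0),
    (lambda p: p["type"] == "journal" and p["rank"] == "M23", 3.0),
    (lambda p: p["type"] == "conference", 1.0),
]

def evaluate(publication):
    """Return the points awarded by the first matching rule, else 0."""
    for condition, points in RULES:
        if condition(publication):
            return points
    return 0.0

score = evaluate({"type": "journal", "rank": "M21"})
```

Keeping the rules as data rather than hard-coded logic is what allows a different national rulebook to be swapped in without changing the evaluation engine.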
136

Order-sensitive XML Query Processing Over Relational Sources

Murphy, Brian R 05 May 2003 (has links)
XML is an emerging standard format for data on the Web as well as in business applications. To store and access this information efficiently, database technology must be utilized. A relational database system, the most established and mature technology for query processing and storage, provides a strong foundation for such an XML data management system. However, while relational databases are queried with SQL, the original user queries are written in XQuery, an XML query language that supports order-sensitive queries because XML is an order-sensitive markup language. A major problem arises when loading XML into a relational database: SQL has no native support for, or management of, document order. While XQuery has order and positional support, SQL does not. For example, someone viewing XML information about music albums would have a hard time querying a relational backend for the first three songs of a track list. Mapping XML documents to relational backends is also difficult because the data models (hierarchical elements versus flat tables) are so different. For these reasons, among others, the Rainbow System is being developed at WPI as a system that bridges XML data and relational data. This thesis in particular deals with the algebra operators that affect order, order-sensitive loading and mapping of XML documents, and the pushdown of order handling into SQL-capable query engines. The contributions of the thesis are order-sensitive rewrite rules, new XML-to-relational mappings with different order styles, order-sensitive template-driven SQL generation, and a proposed metadata table for order-sensitive information. A system that implements these techniques, with XQuery as the XML query language and Oracle as the backend relational storage system, has been developed.
Experiments measure execution time along several factors: first, scalability as the backend data set grows; second, scalability as the result set returned from the database grows; and finally, query execution times under different loading types. The experimental results are encouraging: query execution with the relational backend proves much faster than native execution within the Rainbow system. These results confirm the practical utility of our proposed order-sensitive XQuery execution solution over relational data.
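The album example above can be made concrete: one common mapping stores each XML child with an explicit position column so that SQL can answer order-sensitive queries. This sketch uses sqlite3 rather than Oracle and is a heavily simplified stand-in for the Rainbow mappings, not their actual implementation.

```python
import sqlite3
import xml.etree.ElementTree as ET

album = ET.fromstring(
    "<album><song>Intro</song><song>Storm</song>"
    "<song>Calm</song><song>Outro</song></album>"
)

db = sqlite3.connect(":memory:")
# An explicit `pos` column preserves XML document order, which a plain
# relational table does not guarantee on its own.
db.execute("CREATE TABLE song (pos INTEGER, title TEXT)")
db.executemany(
    "INSERT INTO song VALUES (?, ?)",
    [(i, s.text) for i, s in enumerate(album, start=1)],
)

# "The first three songs of a track list" becomes an ordered positional query:
first_three = [t for (t,) in db.execute(
    "SELECT title FROM song WHERE pos <= 3 ORDER BY pos"
)]
```

The position column is the bridge: XQuery's positional predicates can then be pushed down into ordinary `WHERE pos <= n ORDER BY pos` clauses that the relational engine executes natively.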
137

Assimilation de données et inversion bathymétrique pour la modélisation de l'évolution des plages sableuses / Data assimilation and bathymetric inversion for modelling the evolution of sandy beaches

Birrien, Florent 14 May 2013 (has links)
This thesis presents data-assimilation techniques that use video-derived beach observations to improve the modelling of beach-profile evolution with the 1DBEACH numerical model. The acquisition of accurate, recurrent nearshore bathymetric data is difficult and costly, which limits our understanding of nearshore morphological change. This is particularly true in the surf zone, which exhibits the largest degree of morphological variability. Surf-zone bathymetric data are nevertheless crucial for numerical model validation, operational rip-current prediction, and real-time modelling of nearshore evolution. Video imagery has recently emerged as a low-cost alternative to direct measurement for near-daily monitoring of beach morphology: bathymetry proxies can be extracted from video-derived images such as timex images or timestacks and used to estimate the underlying beach morphology. However, simple linear depth-inversion techniques are limited to the linear regime and, depending on the ambient hydrodynamic conditions, may require from several hours up to several days of video data to characterize a given beach state. As an alternative, this thesis implements and validates data-assimilation methods that combine the available, complementary sources of video-derived bathymetry proxies within the model. From heterogeneous, non-redundant information, these methods reconstruct a complete beach morphology rapidly and accurately, providing regular high-frequency bathymetric estimates.
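The "simple linear depth inversion" mentioned above rests on the linear dispersion relation omega^2 = g k tanh(k h): given a wave period and wavelength estimated from video, depth follows in closed form. The sketch below is a generic illustration of that relation, not the thesis's assimilation scheme, and the input values are made up.

```python
import math

def depth_from_waves(period_s, wavelength_m, g=9.81):
    """Invert water depth from the linear dispersion relation
    omega^2 = g * k * tanh(k * h). Period and wavelength are assumed
    to come from timestack imagery; the values used here are made up."""
    omega = 2 * math.pi / period_s          # angular frequency
    k = 2 * math.pi / wavelength_m          # wavenumber
    ratio = omega ** 2 / (g * k)
    if ratio >= 1.0:
        raise ValueError("deep-water waves: depth is not recoverable")
    return math.atanh(ratio) / k

h = depth_from_waves(period_s=10.0, wavelength_m=80.0)  # a few metres depth
```

The deep-water guard reflects the restriction the abstract notes: once waves stop feeling the bottom, the linear inversion carries no depth information, which is one motivation for assimilating additional observation sources.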
138

The Sea of Stuff: a model to manage shared mutable data in a distributed environment

Conte, Simone Ivan January 2019 (has links)
Managing data is one of the main challenges in distributed systems and computer science in general. Data is created, shared, and managed across heterogeneous distributed systems of users, services, applications, and devices without a clear and comprehensive data model. This technological fragmentation and lack of a common data model result in a poor understanding of what data is, how it evolves over time, how it should be managed in a distributed system, and how it should be protected and shared. From a user perspective, for example, backing up data over multiple devices is a hard and error-prone process, or synchronising data with a cloud storage service can result in conflicts and unpredictable behaviours. This thesis identifies three challenges in data management: (1) how to extend the current data abstractions so that content, for example, is accessible irrespective of its location, versionable, and easy to distribute; (2) how to enable transparent data storage relative to locations, users, applications, and services; and (3) how to allow data owners to protect data against malicious users and automatically control content over a distributed system. These challenges are studied in detail in relation to the current state of the art and addressed throughout the rest of the thesis. The artefact of this work is the Sea of Stuff (SOS), a generic data model of immutable self-describing location-independent entities that allow the construction of a distributed system where data is accessible and organised irrespective of its location, easy to protect, and can be automatically managed according to a set of user-defined rules. The evaluation of this thesis demonstrates the viability of the SOS model for managing data in a distributed system and using user-defined rules to automatically manage data across multiple nodes.
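The core of an immutable, self-describing, location-independent entity can be sketched as content addressing: derive the identifier from a hash of the content itself. The field names below are illustrative placeholders, not the SOS specification.

```python
import hashlib

def make_entity(content: bytes, metadata: dict) -> dict:
    """Sketch of an immutable, location-independent entity: the identifier
    (GUID) is derived from the content, so the entity can be addressed and
    verified irrespective of where it is stored. Field names are invented."""
    guid = hashlib.sha256(content).hexdigest()
    return {"guid": guid, "metadata": metadata, "size": len(content)}

e1 = make_entity(b"hello", {"type": "note"})
e2 = make_entity(b"hello", {"type": "note"})
# Same content yields the same identity on any node, which is what makes
# the entity verifiable and distributable without a central authority.
```

Because the identifier never changes for given content, mutation is modelled as new versions pointing at old ones rather than in-place updates, which is what makes automatic rule-driven management across nodes tractable.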
139

開放型XML資料庫績效評估工作量之模型 / An Open Workload Model for XML Database Benchmark

尤靖雅 Unknown Date (has links)
XML (eXtensible Markup Language) is the emerging data format for data processing on the Internet. XML offers rich data semantics and independence of data from its presentation. Thanks to these features, XML has become a new data-exchange standard and raises new storage and query-processing issues for the database research community. This research studies performance-evaluation issues for XML databases and develops a generic, requirement-driven XML workload model that is applicable to any application scenario and portable across platforms. The workload model comprises three sub-models: the XML data model, the query model, and the control model. The XML data model formulates the generic hierarchical structure of XML documents and supports flexible document structures for the test database. The query model contains a flexible selector of classical query modules and an open query input, so users can define requirement-driven test queries that exercise the XML query-processing ability of the database. The control model defines the variables used to set up a benchmark run. This open, flexible, and systematic workload method lets users in various application domains predict or profile the performance of XML database systems.
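The data model's "generic hierarchical structure" suggests a test-database generator parameterized by structural settings. The sketch below uses just two parameters, depth and fan-out, as stand-ins; the parameter set of the actual control model is richer.

```python
import xml.etree.ElementTree as ET

def gen_test_doc(depth, fanout):
    """Generate a synthetic hierarchical XML document whose shape is
    controlled by two parameters, standing in for the workload model's
    structure settings (the thesis's control model defines more)."""
    def build(parent, level):
        if level == 0:
            return
        for i in range(fanout):
            child = ET.SubElement(parent, f"node{level}", id=str(i))
            build(child, level - 1)
    root = ET.Element("root")
    build(root, depth)
    return root

doc = gen_test_doc(depth=2, fanout=3)
n_elements = len(doc.findall(".//*"))  # 3 children + 9 grandchildren
```

Varying such parameters is what lets a benchmark probe how an XML database's storage and query processing scale with document shape rather than with one fixed test corpus.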
140

人民幣國際化程度與前景的實證分析 / Empirical study on the degree and prospect of renminbi internationalization

王國臣, Wang, Guo Chen Unknown Date (has links)
This thesis asks whether the renminbi (RMB) can become another major international currency, even challenging the international position of the U.S. dollar. To examine this question, the thesis takes three steps. First, using principal component analysis (PCA), it constructs two indices: a currency internationalization degree index (CIDI) and a capital account openness index (CAOI). Second, using a dynamic panel data model estimated by the system generalized method of moments (SGMM), it analyzes the factors affecting the CIDI, including economic and trade size, the financial system, network externalities, confidence in the currency's value, and the CAOI. Third, combining the PCA and SGMM results, it calculates the odds of the RMB becoming a major international currency. The sample covers 33 international currencies, including the RMB, from the creation of the euro in 1999 through 2009.
Three findings emerge. First, the internationalization of the RMB has progressed very quickly, yet its degree remains very low: as of the end of 2009 the RMB lagged far behind the dollar, euro, yen, and pound, and even behind currencies issued by developing countries such as the Russian ruble, Brazilian real, and Indian rupee. Second, over the past ten years the openness of the RMB capital account has fallen rather than risen; its CAOI stood at zero at the end of 2009, making the RMB the most strictly controlled currency in the sample, whereas the CAOI of the dollar, euro, yen, and pound all exceed 70%, with the pound approaching full openness. Third, according to the SGMM results, network externalities, economic size, financial-market size, currency stability, and capital-account openness are the key determinants of currency internationalization. On this basis, the thesis uses odds ratios to compute, under different capital-opening scenarios, the probability that the RMB enters the top ten international currencies: if the RMB capital account opens to about 73%, the RMB can squeeze into the top ten (with probability 65.6%). This is, however, a conservative estimate, for two reasons. First, given China's economic rise and expectations of RMB appreciation, international demand for the RMB is already high, so timely capital-account opening would greatly increase international holdings of the RMB; the analysis does not capture such interaction effects between the competitiveness factors and capital openness. Second, capital opening affects internationalization not only directly but also indirectly, by enlarging financial markets and strengthening network externalities, and these indirect effects are likewise not modelled. If the RMB capital account is opened gradually, the prospects for RMB internationalization should therefore be considerably better than estimated here.
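The PCA-based index construction can be sketched with numpy: standardize the indicators, then project onto the leading principal component. The indicator matrix below is synthetic, not the thesis's 33-currency data set, and the column meanings are assumed for illustration.

```python
import numpy as np

def pca_index(X):
    """Composite index from the first principal component: standardize
    the indicators, then project onto the leading right singular vector."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[0]
    # Fix the arbitrary SVD sign so higher indicator values mean a higher index
    if Vt[0].sum() < 0:
        scores = -scores
    return scores

# Rows: currencies; columns: indicators (synthetic, e.g. reserve share
# and FX turnover share)
X = np.array([[60.0, 44.0],
              [20.0, 16.0],
              [ 5.0,  4.0],
              [ 0.1,  0.2]])
index = pca_index(X)  # the dominant currency gets the highest score
```

On the odds ratio used in the final step: a probability of 65.6% corresponds to odds of 0.656 / 0.344, roughly 1.9 to 1.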
