131

Multimodální databáze / Multimodal Databases

Kuncl, Jiří January 2009 (has links)
This master's thesis is dedicated to the topic of multimodal databases, especially multimedia databases. The first part gives an overview of today's most widely used data models. The next part summarizes content-based search in multimedia content and the indexing of this type of data. The final part covers the implementation of a system for storing and managing multimedia content, based on the Helix streaming system and the PostgreSQL database system.
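As a minimal illustration of content-based search over multimedia content (the feature vectors and identifiers below are invented for the sketch, not taken from the thesis's Helix/PostgreSQL system), retrieval ranks stored objects by distance between extracted feature vectors rather than by keywords:

```python
import numpy as np

# Hypothetical feature vectors (e.g. color histograms) extracted from
# stored multimedia objects; ids and values are made up for the sketch.
features = {
    "clip_a": np.array([0.9, 0.1, 0.0]),
    "clip_b": np.array([0.2, 0.7, 0.1]),
    "clip_c": np.array([0.8, 0.2, 0.0]),
}

def content_based_search(query, k=2):
    """Rank stored objects by Euclidean distance to the query vector."""
    ranked = sorted(features,
                    key=lambda mid: float(np.linalg.norm(features[mid] - query)))
    return ranked[:k]

# A query clip whose features resemble clip_a most closely.
nearest = content_based_search(np.array([1.0, 0.0, 0.0]))
```

Real systems index such vectors (e.g. with tree or hash structures) so the ranking does not require a linear scan, which is the indexing problem the abstract refers to.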
132

Att främja aktiva och medvetna val gällande hållbarhet hos e-handelskunder / How to promote active and conscious choices regarding sustainability amongst e-commerce customers

Nilsson, Zandra, Lindberg, Jesper January 2021 (has links)
This study investigates how to work with sustainability in e-commerce, focusing on how sustainability information can be made available to the consumer and how this could be conceptualized. This requires a multidisciplinary approach, combining domain knowledge from environmental science with information and computer science. The need to think and act sustainably in society is growing, and consumption of products and services, much of it through e-commerce today, has a great impact on it; consumers often find themselves confused about how to make a sustainable choice. If we continue to use our planet's resources without letting nature recover, we will probably end up in an unsustainable situation. Previous research shows the complexity of sustainability, both in its definition and in how to assess it: a lack of regulated standards complicates any unified picture of what is actually sustainable. Creating an index or some form of standard for sustainability was not possible within the time frame and scope of this study; instead, we investigate what can be done with a focus on the consumer, and how he or she can access the information that is, or could be, available. An extensive literature study provided the domain knowledge required to understand what to focus on, and gave good insights and a solid foundation as a starting point.
By following a work process based on Design Science Research Methodology, and through a collaboration with an established company that guides consumers in e-commerce, we have been able to work in a concrete and relevant context and to test our concepts against the literature study and data collected through questionnaires and interviews. Key findings were that consumers wanted to know more about sustainability, and that it is important to involve the consumer in the process. Based on the literature study and the collected data, a basic data model for sustainability was developed. Companies in e-commerce should take the initiative to support sustainable online consumption; the task is difficult, but doing so could provide a competitive advantage while contributing to a good cause.
133

Semantic Data Integration in Manufacturing Design with a Case Study of Structural Analysis

Sarkar, Arkopaul 24 September 2014 (has links)
No description available.
134

Teaching hydrological modelling: illustrating model structure uncertainty with a ready-to-use computational exercise

Knoben, Wouter J. M., Spieler, Diana 06 June 2024 (has links)
Estimating the impact of different sources of uncertainty along the modelling chain is an important skill graduates are expected to have. Broadly speaking, educators can cover uncertainty in hydrological modelling by differentiating uncertainty in data, model parameters and model structure. This provides students with insights on the impact of uncertainties on modelling results and thus on the usability of the acquired model simulations for decision making. A survey among teachers in the Earth and environmental sciences showed that model structural uncertainty is the least represented uncertainty group in teaching. This paper introduces a computational exercise that introduces students to the basics of model structure uncertainty through two ready-to-use modelling experiments. These experiments require either Matlab or Octave, and use the open-source Modular Assessment of Rainfall-Runoff Models Toolbox (MARRMoT) and the open-source Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) data set. The exercise is short and can easily be integrated into an existing hydrological curriculum, with only a limited time investment needed to introduce the topic of model structure uncertainty and run the exercise. Two trial applications at the Technische Universität Dresden (Germany) showed that the exercise can be completed in two afternoons or four 90 min sessions and that the provided setup effectively transfers the intended insights about model structure uncertainty.
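The structural-uncertainty idea in the abstract above can be illustrated outside MARRMoT with a toy sketch (this is not the toolbox itself, and both model structures are invented for the example): two simple lumped rainfall-runoff structures are driven by the same synthetic forcing, and the spread between their simulations is the model structure uncertainty students are meant to see.

```python
import numpy as np

def model_a(precip, k):
    """Single linear reservoir: storage drains at fractional rate k."""
    s, q = 0.0, []
    for p in precip:
        s += p
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

def model_b(precip, k, frac=0.5):
    """Two parallel reservoirs: a fast store (rate k) and a slow one (k/10)."""
    s_fast, s_slow, q = 0.0, 0.0, []
    for p in precip:
        s_fast += frac * p
        s_slow += (1 - frac) * p
        out = k * s_fast + (k / 10) * s_slow
        s_fast -= k * s_fast
        s_slow -= (k / 10) * s_slow
        q.append(out)
    return np.array(q)

# Same synthetic forcing, same parameter value, two structures.
rng = np.random.default_rng(0)
precip = rng.exponential(2.0, size=100)
qa = model_a(precip, k=0.3)
qb = model_b(precip, k=0.3)

# The disagreement between the simulations is structural uncertainty:
# neither data nor parameters differ, only the model structure does.
spread = np.abs(qa - qb).mean()
```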
135

Analyse des Straßenverkehrs mit verteilten opto-elektronischen Sensoren

Schischmanow, Adrian 14 November 2005 (has links)
Aufgrund der steigenden Verkehrsnachfrage und der begrenzten Ressourcen zum Ausbau der Straßenverkehrsnetze werden zukünftig größere Anforderungen an die Effektivität von Telematikanwendungen gestellt. Die Erhebung und Bereitstellung aktueller Verkehrsdaten durch geeignete Sensoren ist dazu eine entscheidende Voraussetzung. Gegenstand dieser Arbeit ist die großflächige Analyse des Straßenverkehrs auf der Basis bodengebundener und verteilter opto-elektronischer Sensoren. Es wird ein Konzept vorgestellt, das eine von der Bilddatenerhebung bis zur Bereitstellung der Daten für Verkehrsanwendungen durchgehende Verarbeitungskette enthält. Der interdisziplinäre Ansatz bildet die Basis zur Verknüpfung eines solchen Sensorsystems mit Verkehrstelematik. Die Abbildung des Verkehrsgeschehens erfolgt im Gegensatz zu herkömmlichen bodengebundenen Messsystemen innerhalb größerer zusammenhängender Ausschnitte des Verkehrsraums. Dadurch können streckenbezogene Verkehrskenngrößen direkt bestimmt werden. Die Georeferenzierung der Verkehrsobjekte ist die Grundlage für eine optimale Verkehrsanalyse und Verkehrssteuerung. Die generierten Daten sind die Basis zur Findung und Verifizierung von Theorien und Modellen sowie zur Entwicklung verkehrsadaptiver Steuerungsverfahren auf mikroskopischer Ebene. Es wird gezeigt, wie aus der Fusion gleichzeitig erhaltener Daten mehrerer Sensoren, die im Bereich des Sichtbaren und im thermalen Infrarot sensitiv sind, ein zusammengesetztes Abbildungsmosaik eines vergrößerten Verkehrsraums erzeugt werden kann. In diesem Abbildungsmosaik werden Verkehrsdatenmodelle unterschiedlicher räumlicher Kategorien abgeleitet. Die Darstellung des Abbildungsmosaiks mit seinen Daten erfolgt auf unterschiedlichen Informationsebenen in geokodierten Karten. Die Bewertung mikroskopischer Verkehrsprozesse wird durch die besondere Berücksichtigung der Zeitkomponente bei der Visualisierung möglich. 
Die vorgestellte Verarbeitungskette beinhaltet neue Anwendungsbereiche für geografische Informationssysteme (GIS). Der beschriebene Ansatz wurde konzeptionell bearbeitet, in der Programmiersprache IDL realisiert und erfolgreich getestet. / The growing demand for urban and interregional road traffic requires an improvement in the effectiveness of telematics systems. The use of appropriate sensor systems for traffic data acquisition is a decisive prerequisite for efficient traffic control. This thesis focuses on analyzing road traffic based on stationary, distributed, ground-based opto-electronic matrix sensors. A concept is presented that covers the whole chain from image data acquisition to traffic data provision. This interdisciplinary approach establishes a basis for the integration of such a sensor system into telematics systems. Unlike with conventional stationary ground sensors, traffic data acquisition here spans larger contiguous areas, so road-segment-specific traffic parameters can be measured directly. Georeferencing of traffic objects is the basis for optimal road traffic analysis and control. The thesis demonstrates how a spatial mosaic of an enlarged traffic area can be generated by fusing data acquired simultaneously by several sensors sensitive in the visible and thermal-infrared ranges. For traffic flow analysis, the realisation of dedicated 4D data visualisation methods on different information levels was essential. The data processing chain introduces new areas of application for geographical information systems (GIS). The approach was worked out conceptually, implemented in the programming language IDL, and successfully tested.
136

Formale Semantik des Datentypmodells von SDL-2000

Menar, Martin von Löwis of 18 December 2003 (has links)
Mit der aktuellen Überarbeitung der Sprache SDL (Specification and Description Language) der ITU-T wurde die semantische Fundierung der formalen Definition dieser Sprache vollständig überarbeitet; die formale Definition basiert nun auf dem Kalkül der Abstract State Machines (ASMs). Ebenfalls neu definiert wurde das um objekt-orientierte Konzepte erweiterte Datentypsystem. Damit musste eine formale semantische Fundierung für diese neuen Konzepte gefunden werden. Der bisher verwendete Kalkül ACT.ONE sollte nicht mehr verwendet werden, da er schwer verwendbar, nicht implementierbar und nicht auf Objektsysteme erweiterbar ist. In der vorliegenden Arbeit werden die Prinzipien einer formalen Sprachdefinition dargelegt und die Umsetzung dieser Prinzipien für die Sprache SDL-2000 vorgestellt. Dabei wird erläutert, dass eine konsistente Sprachdefinition nur dadurch erreicht werden konnte, dass die Definition der formalen Semantik der Sprache parallel mit der Entwicklung der informalen Definition erfolgte. Dabei deckt die formale Sprachdefinition alle Aspekte der Sprache ab: Syntax, statische Semantik und dynamische Semantik. Am Beispiel der Datentypsemantik wird erläutert, wie jeder dieser Aspekte informal beschrieben und dann formalisiert wurde. Von zentraler Bedeutung für die Anwendbarkeit der formalen Semantikdefinition in der Praxis ist der Einsatz von Werkzeugen. Die Arbeit erläutert, wie aus der formalen Sprachdefinition vollautomatisch ein Werkzeug generiert wurde, das die Sprache SDL implementiert, und wie durch die Umsetzung der formalen Semantikdefinition in ein Werkzeug Fehler in dieser Definition aufgedeckt und behoben werden konnten. / With the latest revision of ITU-T SDL (Specification and Description Language), the semantic foundations of the formal language definition were completely revised; the formal definition is now based on the calculus of Abstract State Machines (ASMs). 
In addition, the data type system of SDL was revised, as object-oriented concepts were added. As a result, a new semantic foundation for these new concepts had to be defined. The ACT.ONE calculus used so far was no longer suitable as a foundation, as it is hard to use, unimplementable, and not extensible to the object-oriented features. In this thesis, we set out the principles of a formal language definition and the realisation of these principles in SDL-2000. We explain that a consistent language definition can only be achieved by developing the formal semantics definition in parallel with the development of the informal definition. The formal language definition covers all aspects of the language: syntax, static semantics, and dynamic semantics. Using the data type semantics as an example, we show how each of these aspects is informally described and then formalized. Tool support is central to the practical applicability of the formal semantics definition. We explain how the formal language definition is transformed fully automatically into a tool that implements the language SDL, and how creating the tool allowed us to uncover and correct errors in the informal definition.
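The ASM execution model the thesis builds on can be illustrated with a minimal sketch (the state and rule names are invented, and this is not SDL's actual semantics definition): in each step, all rules fire simultaneously to produce an update set, the set is checked for consistency, and it is then applied atomically to the state.

```python
def asm_step(state, rules):
    """One ASM step: collect a consistent update set, then apply it."""
    updates = {}
    for rule in rules:
        for loc, val in rule(state):
            # Two rules writing different values to one location
            # make the update set inconsistent -- the step fails.
            if loc in updates and updates[loc] != val:
                raise ValueError(f"inconsistent update set at {loc}")
            updates[loc] = val
    new_state = dict(state)
    new_state.update(updates)  # applied atomically, after all rules ran
    return new_state

# Toy rule for a counter machine (hypothetical example).
def tick(state):
    if state["mode"] == "run":
        yield ("count", state["count"] + 1)

state = {"mode": "run", "count": 0}
for _ in range(3):
    state = asm_step(state, [tick])
```

The separation of "compute all updates" from "apply them" is what gives ASMs their synchronous-parallel semantics, in contrast to sequential assignment.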
137

An XML-based Multidimensional Data Exchange Study / 以XML為基礎之多維度資料交換之研究

王容, Wang, Jung Unknown Date (has links)
在全球化趨勢與Internet帶動速度競爭的影響下,現今的企業經常採取將旗下部門分散佈署於各地,或者和位於不同地區的公司進行合併結盟的策略,藉以提昇其競爭力與市場反應能力。由於地理位置分散的結果,這類企業當中通常存在著許多不同的資料倉儲系統;為了充分支援管理決策的需求,這些不同的資料倉儲當中的資料必須能夠進行交換與整合,因此需要有一套開放且獨立的資料交換標準,俾能經由Internet在不同的資料倉儲間交換多維度資料。然而目前所知的跨資料倉儲之資料交換解決方案多侷限於逐列資料轉換或是以純文字檔案格式進行資料轉移的方式,這些方式除缺乏效率外亦不夠系統化。在本篇研究中,將探討多維度資料交換的議題,並發展一個以XML為基礎的多維度資料交換模式。本研究並提出一個基於學名結構的方法,以此方法發展一套單一的標準交換格式,並促成分散各地的資料倉儲間形成多對多的系統化映對模式。以本研究所發展之多維度資料模式與XML資料模式間的轉換模式為基礎,並輔以本研究所提出之多維度中介資料管理功能,可形成在網路上通用且以XML為基礎的多維度資料交換過程,並能兼顧效率與品質。本研究並開發一套雛型系統,以XML為基礎來實作多維度資料交換,藉資證明此多維度資料交換模式之可行性,並顯示經由中介資料之輔助可促使多維度資料交換過程更加系統化且更富效率。 / Motivated by globalization and the speed of competition driven by the Internet, enterprises today often distribute their departments across regions, or merge and ally with companies located elsewhere, to improve their competitiveness and responsiveness. As a result, a geographically distributed enterprise typically operates a number of data warehouse systems. To meet distributed decision-making requirements, the data in these different warehouses must be exchanged and integrated, so an open, vendor-independent, and efficient standard for transferring data between data warehouses over the Internet is an important issue. However, current solutions for cross-warehouse data exchange are limited to record-by-record conversion or plain-text file transfer, which are neither adequate nor efficient. This research studies multidimensional data exchange and develops an XML-based Multidimensional Data Exchange Model. In addition, a generic-construct-based approach is proposed that introduces a single, consistent standard exchange format and enables many-to-many systematic mapping between distributed data warehouses. 
Based on the transformation model developed between the multidimensional data model and the XML data model, and supported by the multidimensional metadata management function proposed in this research, a general-purpose XML-based multidimensional data exchange process over the Web becomes both efficient and of higher quality. Moreover, an XML-based prototype system for exchanging multidimensional data is developed, which shows that the proposed multidimensional data exchange model is feasible, and that with the help of metadata the exchange process becomes more systematic and efficient.
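As a rough illustration of XML-based multidimensional data exchange (the element names, dimensions, and values below are invented for the sketch, not the thesis's actual exchange format), a small set of cube cells can be serialized to an XML document and parsed back by a receiving warehouse:

```python
import xml.etree.ElementTree as ET

# A tiny cube: a sales measure over (region, quarter) dimensions.
cells = [("north", "Q1", 120.0), ("north", "Q2", 95.5), ("south", "Q1", 80.0)]

root = ET.Element("cube", name="sales")
for region, quarter, value in cells:
    cell = ET.SubElement(root, "cell")
    ET.SubElement(cell, "dim", name="region").text = region
    ET.SubElement(cell, "dim", name="quarter").text = quarter
    ET.SubElement(cell, "measure", name="amount").text = str(value)

# The document that would travel between warehouses over the Web.
xml_doc = ET.tostring(root, encoding="unicode")

# Round-trip: a receiving warehouse parses the document back into cells.
parsed = ET.fromstring(xml_doc)
recovered = [(c.find("dim[@name='region']").text,
              c.find("dim[@name='quarter']").text,
              float(c.find("measure[@name='amount']").text))
             for c in parsed.findall("cell")]
```

Because dimensions and measures are explicitly tagged, the receiver can map them onto its own schema (the many-to-many mapping the abstract describes) instead of relying on positional plain-text formats.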
138

都市蔓延與氣候暖化關係之研究-以台北都會區為例 / The Study of relationship between urban sprawl and climate warming - An example of Taipei metropolitan area

賴玫錡, Lai, Mei Chi Unknown Date (has links)
本研究主要探討台北都會區都市蔓延與氣候暖化之關係,實證分析是否都市蔓延的發展形態會造成氣溫的上升。有研究指出台灣的歷年氣溫上升是因為近年來工商業急速發展,人口增加,建築物型態改變,交通運輸量激增等所致。國內外許多研究也發現都市化與氣溫是呈現正相關,而綠地與氣溫呈現負相關。 本研究實證分析部分使用地理資訊系統之內差法和空間分析方法,以及迴歸分析使用panel data之固定效果模型等工具,內插法之結果得到台北都會區年平均氣溫自1996年至2006年約上升1℃,有些地區甚至上升約2℃,且上升之溫度範圍有擴大的趨勢,呈現放射狀的溫度分布,此與都市蔓延之放射狀發展形態類似。使用空間分析方法則證實了一地人口數的增加會造成該地氣溫上升,並且也發現近來人口數多增加在都市外圍地區,這與上述氣溫分布和都市蔓延之放射狀發展形態也相符合。 迴歸分析結果顯示人口數對於氣溫有相當大之正相關,耕地面積對氣溫則呈現負相關,可見得擁有廣大綠地可以降低區域之氣溫,減緩氣候暖化,因此建議政府需檢討當前農地政策,配合環境保護,適合時宜的提出正確之政策。另外在各鄉鎮市區固定效果估計量方面,可以歸納出若一地區有廣大的公園、綠地、或是有河川流域的經過,對於降低當地氣溫有明顯的幫助;時間趨勢之固定效果估計量顯示台北都會區隨著時間的經過,氣溫將持續上升。因此在未來都市規劃方面,規劃者必須了解各地區特性,善加利用其自然環境以調和氣候暖化之影響、多設置公園綠地、多種植綠色植物、在道路周邊行道樹的設置、建築物間風場之設計等。如此將可以降低都市蔓延對氣候暖化的影響,以及防止氣候暖化的發生。 / This study examines the relationship between urban sprawl and climate warming in the Taipei metropolitan area, analyzing empirically whether the sprawling pattern of urban development causes temperatures to rise. Some studies attribute Taiwan's rising temperatures to the rapid development of industry and commerce, population growth, changes in building types, and the huge increase in traffic volume. Many studies at home and abroad also find a positive correlation between urbanization and temperature, and a negative correlation between green space and temperature. The empirical analysis uses the interpolation and spatial analysis methods of GIS, and the regression analysis uses a fixed-effect model on panel data. The interpolation results show that the yearly average temperature in the Taipei metropolitan area rose by about 1℃, and in some areas about 2℃, from 1996 to 2006. Furthermore, the area of increasing temperature has been expanding in a radial distribution, similar to the radial pattern of urban sprawl. Using spatial analysis, we show that an area's temperature increases as its population rises, and that recent population growth is concentrated in peri-urban areas. 
This also matches the radial temperature distribution and the radial pattern of urban sprawl described above. The regression analysis shows a strong positive correlation between population and temperature, and a negative correlation between farmland area and temperature; large green spaces can thus lower an area's temperature and mitigate climate warming. For this reason, we suggest that the government review the current farmland policy, align it with environmental protection, and implement it at the right time and place. From the fixed-effect estimates, we conclude that a large park, a large green space, or a river passing through an area clearly helps lower the local temperature, while the time-trend estimate indicates that the Taipei metropolitan area will keep getting warmer over time. Future urban planners should therefore understand the characteristics of each area and use the natural environment to moderate the influence of climate warming: provide more parks and green spaces, plant more vegetation, line roads with street trees, and design wind corridors between buildings. Each of these measures also cuts carbon emissions. In this way, we can reduce the contribution of urban sprawl to climate warming and help prevent further warming.
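The fixed-effect panel estimation the study relies on can be sketched with synthetic data (the variable names and numbers are illustrative, not the study's): the within (demeaning) transformation removes each district's time-invariant effect before estimating the slope of temperature on population.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_periods = 30, 10                        # e.g. districts x years

unit_effect = rng.normal(0, 2, n_units)             # time-invariant district effects
x = rng.normal(size=(n_units, n_periods))           # e.g. standardized population
beta_true = 0.8                                     # true slope (synthetic)
y = beta_true * x + unit_effect[:, None] + rng.normal(0, 0.1, (n_units, n_periods))

# Within transformation: subtracting each unit's time mean cancels
# the fixed effect, since it is constant within a unit.
x_dm = x - x.mean(axis=1, keepdims=True)
y_dm = y - y.mean(axis=1, keepdims=True)

# OLS on the demeaned data recovers the slope without estimating
# one intercept per district.
beta_hat = (x_dm * y_dm).sum() / (x_dm ** 2).sum()
```

Pooled OLS on the raw data would be biased here if population were correlated with the district effects; the within estimator avoids that by construction.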
139

以企業流程模型導向實施資料庫重構之研究-以S公司為例 / The study of database reverse engineering based on business process module-with S company as an example

林于新, Lin, Yu-hsin Unknown Date (has links)
1960年代起資訊科技應用興起以協助組織運行,多數企業因缺乏資訊知識背景,紛紛購入套裝軟體協助業務營運。但套裝軟體無法切合企業的流程,且隨環境變遷和科技演進,不敷使用的問題日益嚴重。從資料庫設計的角度出發,套裝軟體複雜的資料架構、長期修改和存取資料而欠缺管理、無關連式資料庫的概念,導致組織的資料品質低落。當今組織如何將資料庫重新設計以符合所需、新舊系統資料該如何轉換以提升品質,是企業面臨的一大挑戰。   有鑑於此,本研究設計一套資料庫重構流程,以企業流程為基礎為企業設計客製化的資料庫,並將資料從套裝軟體移轉至該理想的資料庫。流程分三階段,階段1是運用資料庫反向工程(Database Reverse Engineering)的方法,還原企業現行資料庫的資料語意和模型架構;階段2則結合流程模型(Process Model)和資料模型(Data Model)的概念,建立以企業流程為基礎的理想資料庫;階段3利用ETL(Extract、Transform、Load)和資料整合的技術,將企業資料從現行資料庫中萃取、轉換和載入至理想資料庫,便完成資料庫重構的作業。   本研究亦將資料庫重構流程實做於個案公司,探討企業早期導入之套裝軟體和以流程為基礎的理想資料模型間的設計落差。實做分析結果,二者在資料庫架構設計、資料語意建立和正規化設計等三部分存有落差設計,因此在執行資料庫重構之資料移轉解決落差時,需釐清來源端資料的含糊語意、考量目的端資料的一致性和參考完整性、以及清潔錯誤的來源資料。   最後,總結目前企業老舊資料庫普遍面臨資料庫架構複雜、無法吻合作業流程所需、未制訂完善資料庫管理機制等問題,而本研究之資料庫重構流程的設計概念,能為企業建立以流程為導向的理想資料庫。 / Information technology has been used to support organizational operations since the 1960s, but because they lacked an information technology background, many organizations simply bought software packages to support their business processes. Those packages could not fit the organizations' processes, and the mismatch has grown worse as the environment and technology have changed. From the viewpoint of database design, the complexity of package data structures, years of ad hoc modification and data access, and the absence of relational database concepts have resulted in low data quality. Redesigning the database to fit current needs, and migrating data from the old system to the new one while improving its quality, are therefore great challenges for enterprises. Based on the above, this research designed a database restructuring process that establishes a customized database based on business processes. The process has three phases. In phase 1, the company recovers the data structure and semantics of its software package by database reverse engineering. In phase 2, using the concepts of the process model and the data model, the company establishes its ideal database based on business processes. 
In phase 3, it extracts, transforms, and loads data from the current package database into the ideal database using ETL and data integration techniques. After these three phases, the database restructuring is complete. The restructuring process was applied in a case company to analyze the design gaps between the current data model of the software package and the ideal, process-based data model. The analysis found three gaps between the as-is and to-be data models: the design of the database structure, the definition of data semantics, and the design of database normalization. Because of these gaps, when migrating data to resolve them, a company should clarify the ambiguous semantics of the source data, consider the consistency and referential integrity of the destination data, and clean dirty data from the source database. Finally, legacy corporate databases commonly suffer from complex structures, a poor fit with operational processes, and the lack of a sound database management mechanism; the database restructuring process designed in this research can help a company establish an ideal, process-oriented database.
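Phase 3's extract-transform-load step can be sketched with a pair of in-memory SQLite databases (the table names, columns, and rows are hypothetical, not the case company's): dirty rows from the legacy package schema are cleaned during transformation before loading into the process-based target schema.

```python
import sqlite3

# Source: a legacy package table with loose typing and missing values
# (the kind of schema phase 1 reverse-engineers).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders_raw (id INTEGER, amt TEXT, cust TEXT)")
src.executemany("INSERT INTO orders_raw VALUES (?, ?, ?)",
                [(1, "100.5", "A"), (2, "", "B"), (3, "20", "A")])

# Destination: the ideal process-based schema (phase 2 output),
# with proper types and a primary key.
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE orders "
            "(order_id INTEGER PRIMARY KEY, amount REAL, customer TEXT)")

# Phase 3: extract, transform (clean dirty data), load.
rows = src.execute("SELECT id, amt, cust FROM orders_raw").fetchall()
clean = [(i, float(a), c) for i, a, c in rows if a.strip()]  # drop missing amounts
dst.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)
dst.commit()
```

In a real migration, the transform step would also resolve the semantic ambiguities and referential-integrity issues the abstract mentions, not just missing values.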
140

Essays on economic and econometric applications of Bayesian estimation and model comparison

Li, Guangjie January 2009 (has links)
This thesis consists of three chapters on economic and econometric applications of Bayesian parameter estimation and model comparison. The first two chapters study the incidental parameter problem, mainly under a linear autoregressive (AR) panel data model with fixed effects. The first chapter investigates the problem from a model comparison perspective; its major finding is that consistency in parameter estimation and model selection are interrelated. The reparameterization of the fixed-effect parameter proposed by Lancaster (2002) may not provide a valid solution to the incidental parameter problem if the wrong set of exogenous regressors is included. To estimate the model consistently and to measure its goodness of fit, the Bayes factor is found to be preferable for model comparison to the Bayesian information criterion based on the biased maximum likelihood estimates. When model uncertainty is substantial, Bayesian model averaging is recommended. The method is applied to study the relationship between financial development and economic growth. The second chapter proposes a correction function approach to solve the incidental parameter problem; the correction function is shown to exist for the linear AR panel model of order p when the model is stationary with strictly exogenous regressors. MCMC algorithms are developed for parameter estimation and for calculating the Bayes factor for model comparison. The last chapter studies how stock return predictability and model uncertainty affect a rational buy-and-hold investor's decision to allocate her wealth over different investment horizons in the UK market. The FTSE All-Share Index is treated as the risky asset, and the UK Treasury bill as the riskless asset, in forming the investor's portfolio. Bayesian methods are employed to identify the most powerful predictors while accounting for model uncertainty. 
It is found that though stock return predictability is weak, it can still affect the investor's optimal portfolio decisions over different investment horizons.
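The Bayes factor used for model comparison in the first two chapters can be illustrated on a deliberately simple pair of Bernoulli models (not the thesis's AR panel models), chosen because both marginal likelihoods have closed forms:

```python
from math import comb

# Data: 15 successes in 20 Bernoulli trials (a specific sequence).
n, h = 20, 15

# M1: success probability fixed at 0.5 -> likelihood of this sequence.
ml_m1 = 0.5 ** n

# M2: success probability ~ Uniform(0,1), i.e. a Beta(1,1) prior.
# The marginal likelihood integrates the likelihood over the prior;
# for Beta(1,1) it is B(h+1, n-h+1) = 1 / ((n+1) * C(n, h)).
ml_m2 = 1 / ((n + 1) * comb(n, h))

# The Bayes factor is the ratio of marginal likelihoods: values above 1
# favour M2, penalizing M1 for its inability to adapt to the data.
bayes_factor = ml_m2 / ml_m1
```

Unlike a comparison of maximized likelihoods, each marginal likelihood averages over the prior, so flexibility is penalized automatically; this built-in Occam factor is what makes the Bayes factor attractive relative to criteria built on biased point estimates.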
