
Bayesian data fusion in environmental sciences : theory and applications

Fasbender, Dominique 17 November 2008 (has links)
During the last thirty years, new technologies have contributed to a drastic increase in the amount of data in environmental sciences. Monitoring networks, remote sensors, archived maps and large databases are just a few examples of the information sources responsible for this growing amount of information. For obvious reasons, it is interesting to account for all this information in a space-time prediction/estimation context. In environmental sciences, measurements are very often sampled scarcely over space and time. Geostatistics is the field that investigates variables in a space-time context. It includes a large number of methods and approaches that all aim at providing space-time predictions (or interpolations) for variables scarcely known in space and time, by accounting for the space-time dependence between these variables. As a consequence, geostatistical methods are relevant for the processing and analysis of environmental variables in which space and time play an important role. As a direct consequence of the increasing amount of data, there is an important diversity in the information (e.g. different nature, different uncertainty). These issues have recently motivated the emergence of the concept of data fusion. Broadly speaking, the main objective of data fusion methods is to deal with various information sources in such a way that the final result is a single prediction that accounts for all the sources at once. This makes it possible to reconcile several, potentially contradictory, sources instead of having to select only one of them for lack of an appropriate methodology. For most existing geostatistical methods, it is quite difficult to account for a potentially large number of different information sources at once. As a consequence, one often has to opt for a single information source among all the available sources, which leads to a dramatic loss of information. To avoid such choices, it is relevant to bring together the concepts of data fusion and geostatistics in the context of environmental sciences. The objectives of this thesis are (i) to develop the theory of a Bayesian data fusion (BDF) framework in a space-time prediction context and (ii) to illustrate how the proposed BDF framework can account for a diversity of information sources in a space-time context. The method is applied to a few environmental science applications for which (i) crucial available information sources are typically difficult to account for or (ii) the number of secondary information sources is a limitation when using existing methods. Reproduced by permission of Springer. P. Bogaert and D. Fasbender (2007). Bayesian data fusion in a spatial prediction context: a general formulation. Stoch. Env. Res. Risk. A., vol. 21, 695-709. (Chap. 1). © 2008 IEEE. Reprinted, with permission, from D. Fasbender, J. Radoux and P. Bogaert (2008). Bayesian data fusion for adaptable image pansharpening. IEEE Trans. Geosci. Rem. Sens., vol. 46, 1847-1857. (Chap. 3). © 2008 IEEE. Reprinted, with permission, from D. Fasbender, D. Tuia, P. Bogaert and M. Kanevski (2008). Support-based implementation of Bayesian data fusion for spatial enhancement: applications to ASTER thermal images. IEEE Geosci. Rem. Sens. Letters, vol. 6, 598-602. (Chap. 4). Reproduced by permission of American Geophysical Union. D. Fasbender, L. Peeters, P. Bogaert and A. Dassargues (2008). Bayesian data fusion applied to water table spatial mapping. Accepted for publication in Water Resour. Res. (Chap. 5).
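
As a rough illustration of the fusion idea, not the thesis's full space-time formulation, here is a minimal Python sketch of the textbook special case: fusing independent Gaussian estimates of a single quantity by precision weighting. All names and numbers are hypothetical.

```python
import numpy as np

def fuse_gaussian_sources(means, variances):
    """Fuse independent Gaussian estimates of the same quantity.

    Assuming the sources are conditionally independent given the
    true value, the fused posterior is Gaussian with precision
    equal to the sum of the source precisions.
    """
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# Three hypothetical sources predicting the same value at one location,
# e.g. a kriging prediction, a remote-sensing estimate, an archived map.
mean, var = fuse_gaussian_sources([3.1, 2.7, 3.4], [0.5, 1.0, 2.0])
print(mean, var)
```

The fused variance (about 0.29) is smaller than that of the best single source (0.5): accounting for all sources at once, rather than selecting one, reduces uncertainty.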

Behavioural Model Fusion

Nejati, Shiva 19 January 2009 (has links)
In large-scale model-based development, developers periodically need to combine collections of interrelated models. These models may capture different features of a system, describe alternative perspectives on a single feature, or express ways in which different features alter one another's structure or behaviour. We refer to the process of combining a set of interrelated models as "model fusion". A number of factors make model fusion complicated. Models may overlap, in that they refer to the same concepts, but these concepts may be presented differently in each model, and the models may contradict one another. Models may describe independent system components, but the components may interact, potentially causing undesirable side effects. Finally, models may cross-cut, modifying one another in ways that violate their syntactic or semantic properties. In this thesis, we study three instances of the fusion problem for "behavioural models", motivated by real-world applications. The first problem is combining "partial" models of a single feature with the goal of creating a more complete description of that feature. The second problem is maintenance of "variant" specifications of individual features. The goal here is to combine the variants while preserving their points of difference (i.e., variabilities). The third problem is analysis of interactions between models describing "different" features. Specifically, given a set of features, the goal is to construct a composition such that undesirable interactions are absent. We provide an automated tool-supported solution to each of these problems and evaluate our solutions. The main novelties of the techniques presented in this thesis are (1) preservation of semantics during the fusion process, and (2) applicability to large and evolving collections of models. These are made possible by explicit modelling of partiality, variability and regularity in behavioural models, and providing semantic-preserving notions for relating these models.
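
As a much-simplified sketch of the variant-merging problem, not the semantics-preserving algorithms developed in the thesis, the following Python fragment merges two labelled transition systems given a correspondence between their shared states, keeping variant-specific states and transitions (the variabilities) intact. All names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LTS:
    """A labelled transition system: states plus labelled transitions."""
    states: frozenset
    transitions: frozenset  # of (source, label, target) triples

def merge_lts(a: LTS, b: LTS, correspondence: dict) -> LTS:
    """Union-merge two LTS variants.

    States of `b` listed in `correspondence` are identified with
    states of `a`; everything else is kept as-is, so the variants'
    points of difference survive the merge.
    """
    rename = lambda s: correspondence.get(s, s)
    states = a.states | frozenset(rename(s) for s in b.states)
    transitions = a.transitions | frozenset(
        (rename(s), label, rename(t)) for (s, label, t) in b.transitions)
    return LTS(states, transitions)

# Two hypothetical variants of a call-logging feature.
v1 = LTS(frozenset({"idle", "logging"}),
         frozenset({("idle", "start", "logging")}))
v2 = LTS(frozenset({"idle", "logging", "encrypting"}),
         frozenset({("idle", "start", "logging"),
                    ("logging", "encrypt", "encrypting")}))
merged = merge_lts(v1, v2, {"idle": "idle", "logging": "logging"})
print(sorted(merged.transitions))
```

A plain union like this ignores the syntactic and semantic conflicts the abstract warns about; the thesis's contribution is precisely to make such merges semantics-preserving.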

Large-scale and high-quality multi-view stereo

Vu, Hoang Hiep 05 December 2011 (has links) (PDF)
Acquiring 3D models of real objects and scenes is indispensable in many practical applications, such as digital archiving, the game and entertainment industries, engineering, and advertising. There are two main methods for 3D acquisition: laser-based reconstruction (active) and image-based reconstruction from multiple images of the scene taken from different viewpoints (passive). While laser-based reconstruction achieves high accuracy, it is complex, expensive and difficult to set up for large-scale outdoor reconstruction. Image-based, or multi-view stereo, methods are more versatile, faster and cheaper. When this thesis began, most multi-view methods could handle only low-resolution images taken in controlled environments. This thesis addresses multi-view stereo at both large scale and high accuracy. We significantly improve several previous methods and combine them into a remarkably effective multi-view pipeline with GPU acceleration. From high-resolution images, we produce highly complete and accurate meshes that achieve the best scores in many internationally recognized benchmarks. Aiming at even larger scale, we develop divide-and-conquer approaches to reconstruct many small parts of a big scene, and a new merging method that automatically and quickly combines hundreds of partial meshes. With all these components, we successfully reconstruct highly accurate watertight meshes of cities and historical monuments from large collections of high-resolution images (around 1,600 five-megapixel images).
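
As one concrete ingredient of such a pipeline, here is a hedged Python sketch of zero-mean normalized cross-correlation (ZNCC), a standard photo-consistency measure in multi-view stereo; the thesis's actual pipeline is far more elaborate, and the data below are synthetic.

```python
import numpy as np

def zncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation of two image patches.

    Scores lie in [-1, 1]; values near 1 suggest the patches are
    projections of the same surface point, which is how candidate
    depths are scored when matching across views.
    """
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# Synthetic example: the same surface patch seen in two views,
# with slight photometric noise.
rng = np.random.default_rng(0)
p1 = rng.random((11, 11))
p2 = p1 + 0.05 * rng.random((11, 11))
print(zncc(p1, p2))  # close to 1.0
```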

New data structure and process model for automated watershed delineation

Mudgal, Naveen 19 April 2005
Delineating the stream network and its associated subwatersheds from a DEM is a primary step in the raster-based parameterization of watersheds. There are two widely used methods for delineating subwatersheds. One is the Upstream Catchment Area (UCA) method, which employs a user-specified threshold on upstream catchment area to delineate subwatersheds from the extracted stream network. The other common technique is the nodal method, in which subwatersheds are initiated at stream network nodes, where nodes occur at the upstream starting points of streams and at the intersections of streams in the network. Neither the UCA approach nor the nodal approach permits watershed initiation at points of specific interest. They also fail to explicitly recognize drainage features other than single channel reaches; that is, they exclude water bodies, wetlands, braided channels and other important hydrologic features. TOPAZ (TOpographic PArameteriZation) [Garbrecht and Martz, 1992] is a typical program for raster-based, automated drainage analysis. It initiates subwatersheds at source points and at intersections of drainage channels. TOPAZ treats lakes as spurious depressions arising from DEM errors, and removes them. This research addresses one important limitation of currently used modeling techniques and tools: it adds the capability to initiate watershed delineation at points of specific interest other than junction and source points in the networks delineated from Digital Elevation Models (DEMs). The research project evaluates qualitative and quantitative aspects of a new object-oriented data structure and process model for raster-format data analysis in spatial hydrology. The concept of incorporating user-specified analysis in the extraction and parameterization of watersheds is based on the need for a tool that allows studies specific to certain points in the stream network, including gauging stations, and on the need for improved delineation of hydrologic features (water bodies) in hydrologic modeling. The project developed an interface module for TOPAZ [Garbrecht and Martz, 1992] to offset the aforementioned disadvantages of existing subwatershed delineation techniques. It developed an internal hybrid, raster-based, object-oriented data structure and processing model similar to that of the vector data type. The new internal data structure permits further augmentation of the software tool, and together with its algorithms provides an improved framework for discretizing important hydrologic entities (water bodies) and extracting homogeneous hydrological subwatersheds.
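
For illustration, here is a minimal Python sketch of the UCA thresholding step described above: marking stream cells where flow accumulation exceeds a user-specified threshold. The surrounding steps (flow-direction computation, subwatershed labelling, the thesis's object-oriented structures) are omitted, and the grid values are hypothetical.

```python
import numpy as np

def extract_streams(flow_accumulation: np.ndarray,
                    uca_threshold: float) -> np.ndarray:
    """Mark stream cells: those whose upstream catchment area
    (here, a flow-accumulation cell count) meets the threshold."""
    return flow_accumulation >= uca_threshold

# Toy flow-accumulation grid: each value counts the cells
# draining through that cell.
acc = np.array([[1, 1,  2, 1],
                [1, 4,  8, 1],
                [1, 2, 14, 1],
                [1, 1, 18, 2]])
print(extract_streams(acc, uca_threshold=8).astype(int))
```

The choice of threshold controls drainage density: lowering it yields a denser extracted network and, in turn, more and smaller subwatersheds.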

Supporting Integration Activities in Object-Oriented Applications

Uquillas-Gomez, Verónica 10 October 2012 (has links) (PDF)
Modern software is increasingly developed by teams of developers working collaboratively and in parallel. Developers can alter a shared set of artifacts and inspect and integrate code changes made by other developers. For example, bug fixes, improvements and new features must be integrated into a release of a software system at various points in the development cycle. At the technical level, this collaborative development process is supported by version control systems (e.g. git, SVN). Because these tools let developers work in their own development branches, merging and integrating branches have become an integral part of the development process. Version control systems use merge algorithms to help developers merge the changes from their branch into the common code base, but these techniques work at a lexical level and do not guarantee that the resulting system is functional. While branching offers many advantages, merging and integrating changes from one branch into another remains difficult because developers get little support for understanding a change and its impact. For example, integrating a change can have an unexpected effect on the design or behaviour of the system, leading to subtle bugs. Moreover, developers get no assistance in assessing the impact of a set of changes, in finding dependencies between changes, or in selecting changes to integrate from one branch into another (cherry picking), especially when the branches have diverged. In this dissertation, we present an approach that addresses these problems by semi-automatically assisting developers, and integrators in particular, in understanding and integrating changes, both within a single branch and across branches. We focus on satisfying integrators' information needs when they have to understand and integrate changes. To this end, we characterize changes and streams of changes (sequences of changes within a branch), together with the dependencies between them. These characterizations are based on a first-class representation of the system's history and of the changes made within it, defined in terms of the actual program entities (e.g. classes and methods) and their relationships, rather than the files and text that version control tools offer. We propose a family of meta-models (Ring, RingH, RingS and RingC) that represent the program entities, the system's history, the changes made in the different branches, and the dependencies between changes, and that provide the analyses for comparing versions of a system and computing changes and dependencies. On top of instances of these meta-models we build tools that let integrators analyze the characterizations of changes: Torch, a visual tool that characterizes changes, and JET, a set of tools for navigating streams of changes, supporting integration within one branch and across branches respectively. Keywords: object-oriented programming; meta-models; program history and versions; program visualization; semantic merging; program analysis.
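
As a loose, hypothetical Python sketch of the underlying idea (representing changes as first-class program entities rather than files and text), and not the actual Ring meta-models, consider:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Change:
    """A first-class change to a program entity (a class or method),
    rather than a textual diff hunk."""
    kind: str    # "add", "modify" or "remove"
    entity: str  # e.g. "Account.withdraw()"

@dataclass(frozen=True)
class Dependency:
    """`dependent` cannot be integrated without `prerequisite`."""
    prerequisite: Change
    dependent: Change

# Hypothetical changes in a branch: a method is added, and another
# method is modified to call it.
add_audit = Change("add", "Account.audit()")
call_audit = Change("modify", "Account.withdraw()")
deps = [Dependency(prerequisite=add_audit, dependent=call_audit)]

# Cherry-picking call_audit alone would break the system; the
# explicit dependency tells the integrator to bring add_audit along.
for d in deps:
    print(f"{d.dependent.entity} requires {d.prerequisite.entity}")
```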

The Merge Policy for the Vocational and Senior High Schools-In Tainan City and County

Hsiao, Su-chuan 14 July 2010 (has links)
The declining birth rate and the marketization of education have been focal issues for the education system in recent years, leading to a rearrangement of social resources and of current educational policy. The aim of this study is to explore the merger policy for vocational and senior high schools in modern society and the education system. The research draws its conclusions from interviews with vocational and senior high school principals and from a SWOT analysis. The resulting suggestions follow four steps: planning, the operation process, results, and tracing the effect. Depending on the crisis at hand, organizational growth policy and public strategic decisions, school-merging policy can be divided into an alliance model, an efficiency-merging model and a cooperative-merging model. To merge schools successfully while preserving local cultures, it is necessary to combine external resources and build cooperative systems. Further research could develop concrete evaluation indexes for preserving the specific characteristics of remote schools, as a tool to evaluate merging effects.

A Study of the Efficiency of the Merging Program of the Urban and Rural Townships in Pingtung County

KUO, CHIN-MAN 24 August 2010 (has links)
Since local governments in Taiwan were given the power of self-governance in 1950, the administrative divisions have not been readjusted. After sixty years of development, the population distribution and urban patterns have changed completely. Without readjustment of the administrative divisions, human resources cannot be reasonably deployed, resources are wasted, regional development gaps widen, and the development of the entire country is severely affected. Under the impact of globalization, and in response to new developments in politics, economics, society and territory, the governmental system and its functions have to be redefined, and administrative divisions and organizational structures readjusted, to build an idealized and highly effective government. In recent years, local self-governing bodies around the world have also moved toward merging to cope with local fiscal predicaments and to promote the empowerment of local governing bodies and enhance their administrative capability. Academics likewise recognize the urgent importance of merging urban and rural townships and have proposed various resolution projects, strategies and directions for merging and adjustment. The author sorted the literature into theories supporting mergers and foreign cases of merging local self-governing bodies. Based on these theories and cases, and using data envelopment analysis (DEA), the author simulated different merging programs for urban and rural townships, compared efficiency before and after merging, and proposed concrete suggestions for the inefficient self-governing bodies. The results show that merging urban and rural townships can increase efficiency in every respect. The conclusions can serve as a reference for future implementation of such merging programs.
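
For readers unfamiliar with DEA, here is a minimal Python sketch of the input-oriented CCR efficiency score, computed by linear programming in its envelopment form; the units and figures below are hypothetical, not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit `o`.

    X is an (m, n) input matrix and Y an (s, n) output matrix for n
    units. Minimizes theta subject to: a composite unit built from
    weights lambda uses at most theta times unit o's inputs while
    producing at least unit o's outputs. theta == 1 means unit o
    lies on the efficient frontier.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # variables: [theta, lambda]
    A_in = np.hstack([-X[:, [o]], X])          # sum(lam*x) <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # sum(lam*y) >= y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Three hypothetical townships: inputs are staff and budget,
# output is services delivered.
X = np.array([[20.0, 30.0, 40.0], [5.0, 4.0, 10.0]])
Y = np.array([[100.0, 120.0, 130.0]])
for o in range(3):
    print(f"unit {o}: efficiency = {dea_ccr_efficiency(X, Y, o):.3f}")
```

An efficiency below 1 for a simulated merged unit relative to its components would argue against that particular merger; the study compares pre- and post-merger efficiency in this spirit, on real township data.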
