881
Representations of Boundary Layer Cloudiness and Surface Wind Probability Distributions in Subtropical Marine Stratus and Stratocumulus Regions
He, Yanping (16 January 2007)
153 pages
Directed by Dr. Robert E. Dickinson
A simple low cloud cover scheme is developed for the subtropical marine stratus and stratocumulus (MSC) regions. It is based on a modified CIN concept named the Lower Troposphere Available Dry Inhibition Energy (ADIN). The e-folding time for the local change of ADIN is found to be approximately 6 to 7 hours. On monthly and longer timescales, local production of ADIN is balanced by local destruction of ADIN within the lower troposphere. Dynamical transport of environmental dry static energy and surface evaporation drive the variations of cloud-top radiative cooling, which is a linear function of low cloud cover. Data analysis suggests that total ADIN dynamical transport plays the most important role in determining the seasonal and spatial variations of low cloud amount.
The new scheme reproduces realistic seasonal and spatial variations in both EECRA ship observations and satellite observations in all MSC regions. It explains 25% more covariance than the Klein-Hartmann (KH) scheme for monthly ISCCP low cloud amount near the Peruvian and Canarian regions during the period from 1985 to 1997, and it better represents the relationship between the ENSO index and low cloud cover variations near the Peruvian region. When implemented in NCAR CAM3.1, it systematically reduces the model biases in the summertime spatial variations of low cloud amount and downward solar radiation in the Peruvian, Californian, and Canarian regions. Model-simulated summertime cloud liquid water path, large-scale precipitation, and surface fluxes are also significantly changed.
A single predictor named the Lower Troposphere Available Thermal Inhibition Energy (ATIN) is also shown to be more skillful than the lower tropospheric stability in diagnosing low stratiform cloud amount on monthly and seasonal timescales. On synoptic timescales, dynamical transport of available dry inhibition energy and surface evaporation are better correlated with marine low cloud amount variations than either ATIN or lower tropospheric stability.
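For concreteness, the lower-tropospheric-stability (LTS) baseline that ATIN is compared against can be sketched as a simple linear diagnostic in the spirit of the Klein-Hartmann scheme. The regression coefficients below are illustrative placeholders, not the thesis's (or Klein and Hartmann's) fitted values.

```python
# Illustrative Klein-Hartmann-style diagnostic: low cloud amount regressed
# linearly on lower-tropospheric stability, LTS = theta(700 hPa) - theta(sfc).
# Slope/intercept are placeholder values for the sketch.

def potential_temperature(T_K, p_hPa, p0_hPa=1000.0, kappa=0.286):
    """Potential temperature theta = T * (p0/p)^kappa."""
    return T_K * (p0_hPa / p_hPa) ** kappa

def lts(T700_K, Tsfc_K, p_sfc_hPa=1000.0):
    """Lower-tropospheric stability (K)."""
    return potential_temperature(T700_K, 700.0) - potential_temperature(Tsfc_K, p_sfc_hPa)

def kh_low_cloud_fraction(lts_K, slope=0.057, intercept=-0.56):
    """Linear LTS diagnostic, clipped to [0, 1]. Coefficients are illustrative."""
    return min(1.0, max(0.0, slope * lts_K + intercept))
```

A typical stratocumulus sounding (warm surface, much warmer potential temperature at 700 hPa) yields a large LTS and hence a large diagnosed cloud fraction.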
The influence of boundary layer clouds, sea surface temperature (SST), and large-scale divergence on the stochastic dynamics of local ocean surface winds is addressed using QuikSCAT and AIRS satellite observations and a simple conceptual model in the southeast Pacific. The ocean surface pressure gradient depends on both the boundary layer height and the temperature inversion strength. Marine boundary layer clouds are diagnosed using the cloud cover scheme developed in Chapter 2. The model successfully reproduces the observed mean state, standard deviation, and skewness of local surface wind speeds in the southeast Pacific.
882
Performance-directed design of asynchronous VLSI systems / Samuel Scott Appleton.
Appleton, Samuel Scott (January 1997)
Bibliography: p. 269-285. / xxii, 285 p. : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Describes a new method for describing asynchronous systems (free-flow asynchronism). The method is demonstrated through two applications: a channel signalling system and amedo. / Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 1998
883
A parallel processing architecture for CAD of integrated circuits / Bruce A. Tonkin
Tonkin, Bruce A. (Bruce Archibald), January 1990
Bibliography: leaves 233-259 / xii, 259 leaves : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Thesis (Ph.D.)--University of Adelaide, 1991
884
Numerical analysis of shallow circular foundations on sands
Yamamoto, Nobutaka (January 2006)
This thesis describes a numerical investigation of shallow circular foundations resting on various types of soil, mainly siliceous and calcareous sands. An elasto-plastic constitutive model, namely the MIT-S1 model (Pestana, 1994), which can predict the rate-independent behaviour of different types of soils ranging across uncemented sands, silts and clays, is used to simulate the compression, drained triaxial shear and shallow circular foundation responses. It is found that this model provides a reasonable fit to measured behaviour, particularly for highly compressible calcareous sands, because of its superior modelling of volumetric compression. The features of the MIT-S1 model have been used to investigate the effects of density, stress level (or foundation size), inherent anisotropy and material type on the response of shallow foundations. It was found that the MIT-S1 model is able to distinguish responses on dilatant siliceous and compressible calcareous sands through relatively minor adjustment of the model parameters. Kinematic mechanisms extracted from finite element calculations show different deformation patterns typical of these sands, with a bulb of compressed material and punching shear for calcareous sand, and a classical rupture failure pattern accompanied by surface heave for siliceous sand. Moreover, it was observed that the classical failure pattern transforms gradually into a punching shear failure pattern as the foundation size increases. From this evidence, a dimensional transition between these failure mechanisms can be defined, referred to as the critical size. The critical size is also the limiting foundation size up to which conventional bearing capacity analyses apply. Alternative approaches, focusing mainly on soil compressibility, are needed for shallow foundations larger than the critical size. Two such approaches, 1-D compression and bearing modulus analyses, have been proposed for these foundation conditions.
Validation shows that the former is applicable to extremely large foundations, very loose soil conditions and highly compressible calcareous materials, while the latter is suitable for moderate levels of compressibility or foundation size. It is suggested that appropriate assessment of compression characteristics is of great importance for the analysis of shallow foundations on sand.
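As a point of reference for the "conventional bearing capacity analyses" mentioned above, the classical Terzaghi-style estimate for a circular footing on cohesionless sand can be sketched as follows. This is textbook material (using Vesic's N_gamma factor), not the thesis's numerical model.

```python
import math

# Terzaghi-style ultimate bearing capacity for a circular footing on sand
# (cohesion c = 0): qu = gamma*D*Nq + 0.3*gamma*B*Ng, with the classical
# 0.3 shape coefficient for circular footings and Vesic's N_gamma.

def bearing_factors(phi_deg):
    """Bearing capacity factors Nq and N_gamma (Vesic) for friction angle phi."""
    phi = math.radians(phi_deg)
    Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    Ng = 2.0 * (Nq + 1.0) * math.tan(phi)  # Vesic (1973)
    return Nq, Ng

def qu_circular_on_sand(gamma, B, D, phi_deg):
    """Ultimate bearing pressure (kPa) for a circular footing of diameter B (m),
    embedment D (m), on sand of unit weight gamma (kN/m^3)."""
    Nq, Ng = bearing_factors(phi_deg)
    return gamma * D * Nq + 0.3 * gamma * B * Ng
```

Note that this estimate grows without bound with footing size B, which is precisely why, beyond the critical size identified in the thesis, compressibility-based approaches must take over.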
885
Ανάπτυξη αρχιτεκτονικών διπλού φίλτρου και FPGA υλοποιήσεις για το H.264/AVC deblocking filter / Development of dual-filter architectures and FPGA implementations for the H.264/AVC deblocking filter
Καβρουλάκης, Νικόλαος (07 June 2013)
Αντικείμενο της παρούσας διπλωματικής εργασίας είναι η παρουσίαση και η μελέτη ενός εναλλακτικού σχεδιασμού του deblocking φίλτρου του προτύπου κωδικοποίησης βίντεο Η.264. Αρχικά επεξηγείται αναλυτικά ο τρόπος λειτουργίας του φίλτρου και στη συνέχεια προτείνεται ένας πρωτοποριακός σχεδιασμός με χρήση pipeline πέντε σταδίων. Ο σχεδιασμός παρουσιάζει σημαντικά πλεονεκτήματα στον τομέα της ταχύτητας (ενδεικτικά εμφανίζεται βελτιωμένη απόδοση στην συχνότητα λειτουργίας και στο throughput). Αυτό πιστοποιήθηκε από μετρήσεις που έγιναν σε συγκεκριμένα FPGA και επαλήθευσαν τα θεωρητικά συμπεράσματα που είχαν εξαχθεί. / The standard H.264 (also known as MPEG-4 Part 10) is nowadays the most widely used standard in the area of video coding, as it is supported by the largest Internet companies (including Google, Apple and YouTube). Its most important advantage over previous standards is that it achieves a better bitrate without sacrificing quality.
A crucial part of the standard is the deblocking filter, which is applied to each macroblock of a frame to reduce blocking distortion. The filter accounts for about one third of the computational requirements of the standard, which makes it a particularly important component of the process.
The current diploma thesis presents an alternative design of the filter which achieves better performance than the existing ones. The design is based on the use of two filters (instead of the single filter used in current designs) and on the application of a pipelined architecture in each filter. The double filter exploits the independence that exists between many parts of the macroblock: different parts can be filtered at the same time without conflict. Furthermore, the pipelining technique substantially increases the throughput. Naturally, for the desired result to be achieved, the design has to be made very carefully so that the restrictions imposed by the standard are not violated. The use of this alternative filter design results in a significant increase in performance: the operating frequency, the throughput and the quality of the produced video are all considerably improved. The inevitable increase in area (since two filters are used instead of one) is of minor importance in terms of cost.
The structure of the thesis is as follows. Chapter 1 gives a brief description of the H.264 standard and clarifies the exact position of the deblocking filter in the overall design. Chapter 2 provides the algorithmic description of the filter: all the parameters involved in the filtering are presented in full detail, as well as the equations used during the process. Chapter 3 presents the architecture chosen for the design: the block diagram is presented and explained, together with the timing table that describes exactly how the filter operates; the pipelining technique applied to the filter is also analyzed and justified there. Chapter 4 analyzes every structural unit used in the architecture and its role in the whole structure. Finally, Chapter 5 presents the results of measurements made on typical Altera and Xilinx FPGAs. The results are shown in table format, with diagrams for selected parameters, making evident the improved performance of the proposed design compared to the older, widely used ones.
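As background to the filtering rules the thesis details in Chapter 2, the standard's per-edge decision starts from a boundary-strength (bS) value computed for each 4x4-block edge. The sketch below is a simplified rendering of those rules (single reference picture per block and frame coding assumed; it is an illustration, not the thesis's hardware design).

```python
# Simplified H.264 boundary-strength (bS) decision for the edge between
# two neighbouring 4x4 blocks p and q. Motion vectors are in quarter-pel
# units, so a component difference >= 4 means >= 1 integer pel.

def boundary_strength(p_intra, q_intra, mb_edge,
                      p_nonzero, q_nonzero,
                      p_ref, q_ref, p_mv, q_mv):
    if p_intra or q_intra:
        return 4 if mb_edge else 3        # strongest filtering at intra MB edges
    if p_nonzero or q_nonzero:
        return 2                          # nonzero residual coefficients
    if p_ref != q_ref:
        return 1                          # different reference pictures
    if abs(p_mv[0] - q_mv[0]) >= 4 or abs(p_mv[1] - q_mv[1]) >= 4:
        return 1                          # large motion vector difference
    return 0                              # edge is not filtered
```

Edges with bS = 0 are skipped entirely, which is one source of the intra-macroblock independence that the dual-filter design exploits.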
886
Contributions à l’observation par commande d’observabilité et à la surveillance de pipelines par observateurs / Contributions to the observation by observability control and pipeline monitoring using observers
Rubio Scola, Ignacio Eduardo (30 January 2015)
Ce travail se compose de deux parties. Dans la première, deux types de méthodes sont proposés pour garantir l'observabilité de systèmes non uniformément observables. Sont d'abord présentées les méthodes basées sur le grammien d'observabilité, puis les méthodes basées directement sur l'équation de l'observateur. Dans la deuxième partie, diverses techniques sont détaillées pour la détection de défauts (fuites et obstructions) dans les canalisations sous pression. Pour cela, on construit plusieurs modèles en discrétisant les équations du coup de bélier par différences finies, implicites et explicites en temps. Sur ces modèles, des techniques sont développées en utilisant des observateurs et des algorithmes d'optimisation. Les modèles discrets ainsi que certains observateurs ont été validés par une série d'expériences effectuées sur des canalisations d'essai. Des résultats de convergence, expérimentaux et de simulation sont exposés dans ce mémoire. / This work consists of two parts. In the first one, two types of methods are proposed to ensure the observability of non-uniformly observable systems. Methods based on the observability Gramian are presented first, followed by methods based directly on the observer equation. In the second part, various techniques are detailed for the detection of faults (leaks and obstructions) in pipelines under pressure. For that purpose, several models are built by discretizing the water hammer equations using finite differences, explicit and implicit in time. On these models, techniques are developed using observers and optimization algorithms. The discrete models and some of the observers were validated through a series of experiments on test pipelines. Convergence, experimental and simulation results are presented in this manuscript.
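To make the discretization step concrete, here is a minimal sketch of an explicit finite-difference update for the classical water hammer equations (piezometric head H and flow rate Q along a pipe). The scheme, grid handling and parameter values are illustrative assumptions, not the thesis's models; a real model would impose reservoir/valve boundary conditions and respect the CFL limit dt <= dx/a.

```python
import numpy as np

# Water hammer equations for a pipe of area A, diameter D, wave speed a,
# friction factor f:
#   dH/dt + (a^2 / (g A)) dQ/dx = 0
#   dQ/dt + g A dH/dx + f Q|Q| / (2 D A) = 0
# discretized with central differences in space, forward Euler in time.
# Boundary nodes are simply held fixed in this sketch.

def step(H, Q, dx, dt, a=1000.0, g=9.81, A=0.05, D=0.25, f=0.02):
    Hn, Qn = H.copy(), Q.copy()
    dQdx = (Q[2:] - Q[:-2]) / (2 * dx)
    dHdx = (H[2:] - H[:-2]) / (2 * dx)
    fric = f * Q[1:-1] * np.abs(Q[1:-1]) / (2 * D * A)
    Hn[1:-1] = H[1:-1] - dt * (a ** 2 / (g * A)) * dQdx
    Qn[1:-1] = Q[1:-1] - dt * (g * A * dHdx + fric)
    return Hn, Qn
```

A leak would enter such a model as an extra outflow term at a node; leak detection then amounts to estimating that term with an observer, as described in the abstract.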
887
Méthodes de simulation du comportement mécanique non linéaire des grandes structures en béton armé et précontraint : condensation adaptative en contexte aléatoire et représentation des hétérogénéités / Simulation methods for the nonlinear mechanical behavior of large reinforced and prestressed concrete structures: adaptive condensation in the probabilistic context and modelling of the heterogeneities
Llau, Antoine (26 September 2016)
Les structures en béton et béton armé de grandes dimensions, en particulier les enceintes de confinement, peuvent être sujettes à de la fissuration localisée suite à leur vieillissement ou dans le cas d’une forte sollicitation (APRP, par exemple). Afin d’optimiser les actions de maintenance, il est nécessaire de disposer d’un modèle prédictif de l’endommagement du béton. Ce phénomène se produit à une échelle matériau relativement petite et un modèle prédictif nécessite un maillage fin et une loi de comportement non linéaire. Or, ce type de modélisation ne peut être directement appliqué à une structure de génie civil de grande échelle, le calcul étant trop lourd pour les machines actuelles. Une méthode de calcul est proposée, qui concentre l’effort de calcul sur les zones d’intérêt (parties endommagées) de la structure en éliminant les zones non endommagées. L’objectif est ainsi d’utiliser la puissance de calcul disponible pour la caractérisation des propriétés des fissures notamment. Cette approche utilise la méthode de condensation statique de Guyan pour ramener les zones élastiques à un ensemble de conditions aux limites appliquées aux bornes des zones d’intérêt. Lorsque le système évolue, un système de critères permet de promouvoir à la volée des zones élastiques en zones d’intérêt si de l’endommagement y apparaît. Cette méthode de condensation adaptative permet de réduire la dimension du problème non linéaire sans altérer la qualité des résultats par rapport à un calcul complet de référence. Cependant, une modélisation classique ne permet pas de prendre en compte les divers aléas impactant le comportement de la structure : propriétés mécaniques, géométrie, chargement… Afin de mieux caractériser ce comportement en tenant compte des incertitudes, la méthode de condensation adaptative proposée est couplée avec une approche de collocation stochastique.
Chaque calcul déterministe nécessaire pour caractériser les incertitudes sur les grandeurs d’intérêt de la structure est ainsi réduit et les étapes de prétraitement nécessaires à la condensation sont elles-mêmes mutualisées via une deuxième collocation. L’approche proposée permet ainsi de produire pour un coût de calcul limité des densités de probabilités des grandeurs d’intérêt d’une grande structure. Les stratégies de résolution proposées rendent accessibles à l’échelle locale une modélisation plus fine que celle qui pourrait s’appliquer sur l’ensemble de la structure. Afin de bénéficier d’une meilleure représentativité à cette échelle, il est nécessaire de représenter les effets tridimensionnels des hétérogénéités. Dans le domaine du génie civil et nucléaire, cela concerne au premier chef les câbles de précontrainte, traditionnellement représentés en unidimensionnel. Une approche est donc proposée, qui s’appuie sur un maillage et une modélisation 1D pour reconstruire un volume équivalent au câble et retransmettre les efforts et rigidités dans le volume de béton. Elle combine la représentativité d’un modèle 3D complet et conforme des câbles lorsque le maillage s’affine et la facilité d’utilisation et paramétrage d’un modèle 1D. L’applicabilité des méthodes proposées à une structure de génie civil de grande échelle est évaluée sur un modèle numérique d’une maquette à l’échelle 1/3 de l’enceinte de confinement interne d’un réacteur de type REP 1300 MWe à double paroi. / Large-scale concrete and reinforced concrete structures, and in particular containment buildings, may undergo localized cracking when they age or endure strong loadings (LOCA for instance). In order to optimize the maintenance actions, a predictive model of concrete damage is required. This phenomenon takes place at a rather small material scale and a predictive model requires a refined mesh and a nonlinear constitutive law. 
This type of modelling cannot be applied directly to a large-scale civil engineering structure, as the computational load would be too heavy for existing machines. A simulation method is proposed to focus the computational effort on the areas of interest (damaged parts) of the structure while eliminating the undamaged areas. It aims at using the available computing power for the characterization of crack properties in particular. This approach uses Guyan’s static condensation technique to reduce the elastic areas to a set of boundary conditions applied to the areas of interest. When the system evolves, a set of criteria makes it possible to promote on the fly the elastic areas to areas of interest if damage appears. This adaptive condensation technique reduces the dimension of a nonlinear problem without degrading the quality of the results when compared to a full reference simulation. However, a classical model cannot take into account the various uncertainties that impact the structural behaviour: mechanical properties, geometry, loading… In order to better characterize this behaviour while taking these uncertainties into account, the proposed adaptive condensation method is coupled with a stochastic collocation approach. Each deterministic simulation required for the characterization of the uncertainties on the structural quantities of interest is therefore reduced, and the pre-processing steps necessary to the condensation technique are themselves pooled via a second collocation. The proposed approach makes it possible to produce, at a reduced computational cost, the probability density functions of the quantities of interest of a large structure. The proposed calculation strategies give access, at the local scale, to a finer modelling than could be applied to the full structure. In order to improve the representativeness at this scale, the tridimensional effects of the heterogeneities must be taken into account.
In the civil and nuclear engineering field, one of the main issues is the modelling of prestressing tendons, usually modelled in one dimension. A new approach is proposed, which uses a 1D mesh and model to build a volume equivalent to the tendon and redistribute the forces and stiffnesses in the concrete. It combines the representativeness of a fully conforming 3D model of the tendon when the mesh is refined with the ease of use of 1D approaches. The applicability of the proposed methodologies to a large-scale civil engineering structure is evaluated using a numerical model of a 1/3 mock-up of a double-wall containment building of a PWR 1300 MWe nuclear reactor.
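Guyan's static condensation, the core of the reduction strategy described above, can be sketched in a few lines of linear algebra. This is a generic NumPy illustration of the technique (exact for linear statics), not the thesis's implementation: slave DOFs s are eliminated in favour of the retained master DOFs m.

```python
import numpy as np

# Guyan static condensation: partition K into master (m) and slave (s) DOFs,
#   K_red = K_mm - K_ms K_ss^{-1} K_sm,   f_red = f_m - K_ms K_ss^{-1} f_s.
# In linear statics the reduced solution matches the full one exactly.

def guyan_condense(K, f, masters):
    """Return (K_red, f_red, expand) where expand(u_m) recovers the full u."""
    n = K.shape[0]
    m = np.asarray(masters)
    s = np.setdiff1d(np.arange(n), m)
    Kmm, Kms = K[np.ix_(m, m)], K[np.ix_(m, s)]
    Ksm, Kss = K[np.ix_(s, m)], K[np.ix_(s, s)]
    Kss_inv_Ksm = np.linalg.solve(Kss, Ksm)
    K_red = Kmm - Kms @ Kss_inv_Ksm
    f_red = f[m] - Kms @ np.linalg.solve(Kss, f[s])

    def expand(u_m):
        # Recover slave DOFs from K_sm u_m + K_ss u_s = f_s.
        u = np.empty(n)
        u[m] = u_m
        u[s] = np.linalg.solve(Kss, f[s]) - Kss_inv_Ksm @ u_m
        return u

    return K_red, f_red, expand
```

The adaptive scheme in the thesis amounts to re-deciding, as damage spreads, which DOFs belong to the master (nonlinear) set and which can stay condensed.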
888
Méthodes pour l'analyse des champs profonds extragalactiques MUSE : démélange et fusion de données hyperspectrales ; détection de sources étendues par inférence à grande échelle / Methods for the analysis of extragalactic MUSE deep fields: hyperspectral unmixing and data fusion; detection of extended sources with large-scale inference
Bacher, Raphael (08 November 2017)
Ces travaux se placent dans le contexte de l'étude des champs profonds hyperspectraux produits par l'instrument d'observation céleste MUSE. Ces données permettent de sonder l'Univers lointain et d'étudier les propriétés physiques et chimiques des premières structures galactiques et extra-galactiques. La première problématique abordée dans cette thèse est l'attribution d'une signature spectrale pour chaque source galactique. MUSE étant un instrument au sol, la turbulence atmosphérique dégrade fortement le pouvoir de résolution spatiale de l'instrument, ce qui génère des situations de mélange spectral pour un grand nombre de sources. Pour lever cette limitation, des approches de fusion de données, s'appuyant sur les données complémentaires du télescope spatial Hubble et d'un modèle de mélange linéaire, sont proposées, permettant la séparation spectrale des sources du champ. Le second objectif de cette thèse est la détection du Circum-Galactic Medium (CGM). Le CGM, milieu gazeux s'étendant autour de certaines galaxies, se caractérise par une signature spatialement diffuse et de faible intensité spectrale. Une méthode de détection de cette signature par test d'hypothèses est développée, basée sur une stratégie de max-test sur un dictionnaire et un apprentissage des statistiques de test sur les données. Cette méthode est ensuite étendue pour prendre en compte la structure spatiale des sources et ainsi améliorer la puissance de détection tout en conservant un contrôle global des erreurs. Les codes développés sont intégrés dans la bibliothèque logicielle du consortium MUSE afin d'être utilisables par l'ensemble de la communauté. De plus, si ces travaux sont particulièrement adaptés aux données MUSE, ils peuvent être étendus à d'autres applications dans les domaines de la séparation de sources et de la détection de sources faibles et étendues. / This work takes place in the context of the study of hyperspectral deep fields produced by the European 3D spectrograph MUSE. 
These fields make it possible to explore the young remote Universe and to study the physical and chemical properties of the first galactic and extragalactic structures. The first part of the thesis deals with the estimation of a spectral signature for each galaxy. As MUSE is a ground-based instrument, atmospheric turbulence strongly degrades its spatial resolution, generating spectral mixing of multiple sources. To overcome this issue, data fusion approaches are proposed, based on a linear mixing model and complementary data from the Hubble Space Telescope, allowing the spectral separation of the sources. The second goal of this thesis is to detect the Circum-Galactic Medium (CGM). The CGM, which is formed of clouds of gas surrounding some galaxies, is characterized by a spatially extended, faint spectral signature. To detect this kind of signal, a hypothesis-testing approach is proposed, based on a max-test strategy over a dictionary; the test statistic is learned from the data. This method is then extended to better take into account the spatial structure of the targets, thus improving the detection power while still ensuring global error control. All these developments are integrated into the software library of the MUSE consortium so that they can be used by the astrophysical community. Moreover, these methods can easily be extended beyond MUSE data to other application fields that require faint extended source detection or source separation.
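The max-test idea described above can be sketched in a few lines: the detection statistic for a spectrum is its maximum correlation with a dictionary of candidate signatures, and the threshold is calibrated empirically from source-free data rather than from a closed-form null distribution. This is a generic illustration under those assumptions, not the thesis's calibrated pipeline.

```python
import numpy as np

# Max-test over a dictionary with an empirically learned threshold.
# Dictionary atoms are assumed unit-norm so correlations are comparable.

def max_test_stat(x, dictionary):
    """max_k <x, d_k> over the dictionary atoms d_k (rows of `dictionary`)."""
    return np.max(dictionary @ x)

def learn_threshold(null_spectra, dictionary, alpha=0.01):
    """Empirical (1 - alpha) quantile of the statistic under the null,
    estimated from source-free spectra."""
    stats = [max_test_stat(x, dictionary) for x in null_spectra]
    return np.quantile(stats, 1.0 - alpha)
```

A spectrum is flagged as a detection when its statistic exceeds the learned threshold; the spatial extension mentioned in the abstract then aggregates such decisions over neighbouring spaxels.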
889
Cu-catalyzed chemical vapour deposition of graphene : synthesis, characterization and growth kinetics
Wu, Xingyi (January 2017)
Graphene is a two-dimensional carbon material whose outstanding properties have been envisaged for a variety of applications. Cu-catalyzed chemical vapour deposition (Cu-CVD) is promising for large-scale production of high-quality monolayer graphene, but the existing Cu-CVD technology is not ready for industry-level production. It still needs to be improved in several respects, three of which are: synthesizing industrially usable graphene films under safe conditions, visualizing the domain boundaries of continuous graphene, and understanding the kinetic features of the Cu-CVD process. This thesis presents research aimed at these three objectives. By optimizing the Cu pre-treatments and the CVD process parameters, continuous graphene monolayers with millimetre-scale domain sizes have been synthesized; process safety has been ensured by carefully diluting the flammable gases. Through a novel optical microscope setup, the spatial distributions of the domains in continuous Cu-CVD graphene films have been directly imaged and the domain boundaries visualised. This technique is non-destructive to the graphene and hence could help manage the domain boundaries of large-area graphene. By establishing novel rate equations for graphene nucleation and growth, this study has revealed the essential kinetic characteristics of general Cu-CVD processes. For both edge-attachment-controlled and surface-diffusion-controlled growth, the rate equations for the time evolution of the domain size, the nucleation density, and the coverage are solved, interpreted, and used to explain various Cu-CVD experimental results. Continuous nucleation and inter-domain competition prove to have non-trivial influences on the growth process. This work further examines the temperature dependence of the graphene formation kinetics, leading to a discovery of the internal correlations of the associated energy barriers.
The complicated effects of temperature on the nucleation density are explored, and criteria for identifying the rate-limiting step are proposed. The model also elucidates the kinetics-dependent formation of the characteristic domain outlines. By accomplishing these three objectives, this research has brought current Cu-CVD technology a large step closer to practical implementation at the industrial level and hence made high-quality graphene closer to being commercially viable.
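The interplay of continuous nucleation and domain growth described above is classically captured by Kolmogorov-Johnson-Mehl-Avrami (KJMA) kinetics; the sketch below is that generic textbook result, not the thesis's novel rate equations. For a constant nucleation rate J (domains per area per time) and constant radial growth rate G in 2D, the coverage is X(t) = 1 - exp(-(pi/3) J G^2 t^3), where the exponential accounts for domain impingement.

```python
import math

# KJMA surface coverage in 2D with continuous nucleation:
# extended (overlap-ignoring) area J * pi * G^2 * t^3 / 3, corrected for
# impingement by X = 1 - exp(-X_extended).

def coverage(t, J=1.0, G=1.0):
    """Fraction of the substrate covered at time t (J, G in consistent units)."""
    return 1.0 - math.exp(-(math.pi / 3.0) * J * G * G * t ** 3)
```

The inter-domain competition noted in the abstract is exactly what the impingement correction models: late-nucleating domains find less free substrate to grow into.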
890
Optimization routes of planning and implementation in large-scale urban development projects : some learning from France for China / Conduites d’optimisation en aménagement et mise-en-œuvre dans projets de développement urbain en large échelle : des leçons françaises pour la Chine
Chen, Tingting (25 May 2012)
L’idée majeure de cette thèse de Doctorat part d’un regard sur les réalités en Chine. L’objectif majeur de cette thèse est celui d’essayer d’offrir des suggestions pour les villes chinoises à partir de l’expérience française en matière d’aménagement et de mise en œuvre de projets de développement urbain à large échelle. Comprendre le contexte, le mécanisme et la politique d’aménagement urbain au sein des pays développés peut être utile pour mieux répondre aux problèmes dans notre pays et, ainsi, à mieux construire les villes en Chine.D’abord, la dissertation fait une définition sur les concepts de base, le contenu et la structure de travail des projets de développement urbain à large échelle.L’origine des problèmes et les difficultés trouvées dans ce type d’opération seront alors débattues. Ancrée sur une étude empirique, la dissertation analyse quelques projets de développement urbain à large échelle en référence.Dans les sociétés modernes, les aménageurs doivent considérer plusieurs indicateurs d’incertitude dans le développement urbain. Ils doivent également être opérationnels en ce qui concerne la régulation de l’espace et coordonner les enjeux d’intérêt propres au développement urbain à large échelle. La thèse repère un cadre théorique de base selon ces trois aspects et travaille sur les moyens que les projets de développement urbain à large échelle en France mènent vis-à-vis de tous ces enjeux. Au travers de trois conduites qui sont la reforme du zoning et les droits du sol, une régulation plus stricte de l’espace urbain et une nette amélioration du cadre de coordination, l’aménagement et la mise en œuvre de ce type d’opération en France ont été optimisés. Au travers de réformes, de quelques modifications et améliorations, plusieurs projets français ont bien réussi à aboutir aux objectifs du plan. Les conséquences sociales, économiques et environnementales ont été également bien cadrées. 
Les Plans Locaux d’Urbanisme (PLU) ont été étudiés pour la reforme de la politique d’usage des sols. Le cahier des charges et le rôle de l'architecte Coordinateur ont été analysés pour la régulation des espaces urbains. Les ZAC, les SEM (Société d’Economie Mixte) et la participation institutionnelle ont été étudiées en terme de coordination d’institution. En partant d’une étude comparative entre la situation en Chine et l’expérience réussie en France, des suggestions ont été proposées pour optimiser les projets de développement urbain de large échelle en Chine. Les trois conduites d’optimisation sont bien connectées, ce qui signifie qu’ils ont une influence directe sur la construction urbaine. Une conduite isolée ne peut pas résoudre les défis. Ainsi, la dissertation suggère un ensemble de conduites d’optimisation pour les grands projets de développement urbain en Chine. La réforme de l’aménagement urbain et la mise-en-place institutionnelle devraient perfectionner les plans d’usage des sols, la régulation de l’espace urbain et les mécanismes de coordination dans son ensemble. L’aménagement urbain et le système de management devraient être orientés vers un perfectionnement intégré. / The main ideas of this thesis come from realities in China. The main objective ofthis thesis is trying to offer some suggestions for cities in China by learning fromFrance, on the topic of planning and implementation of large-scale urbandevelopment projects. Understanding the background, mechanism and policy ofurban planning in developed countries may help to cope with problems in ourcountry and then to better construct Chinese cities. Firstly, the dissertation defines the basic concepts, contents and framework of large-scale urban development projects. The origin of problems and difficulties in large-scale urban development projects is then discussed. 
Based on an empirical study, the dissertation analyzes some typical large-scale urban development projects in France and then evaluates them. In modern society, planners should consider many uncertain indicators in urban development. They should also be effective in the regulation of space and coordinate conflicts of interest in large-scale urban development. The dissertation raises a basic theoretical framework along these three aspects and explores how French large-scale urban development projects cope with these challenges. Through three optimizing routes, namely the reform of land-use planning, strengthened regulation of urban space and an improved coordination mechanism, the planning and implementation of large-scale urban development projects in France have been optimized. By continuous reform, adjustment and improvement, many French large-scale urban development projects have successfully achieved their planning objectives, and their social, economic and environmental effects have been well embodied. The PLU (Plans Locaux d’Urbanisme) has been studied for the reform of land-use planning. The "cahier des charges" and the role of the "Architecte Coordinateur" have been analyzed for the regulation of urban space. The ZAC (Zone d’Aménagement Concerté), the SEM (Société d’Économie Mixte) and the public participation institution have been studied with respect to the coordination institution. Based on a comparative study of the situation in China and successful experience in France, suggestions have been made for optimizing Chinese large-scale urban development projects. The three optimizing routes are tightly connected, which means they influence the course of urban construction together; an isolated route could not solve the problems. Therefore, the dissertation suggests an ensemble optimizing route for Chinese large-scale urban development projects. The reform of urban planning and implementation institutions should improve land-use planning, the regulation of urban space and the coordination mechanism all together.
The urban planning and management system should move in the direction of integrative improvement.