  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
751

Ageing assessment of transformer insulation through oil test database analysis

Tee, Sheng Ji January 2016
Transformer ageing is inevitable, and managing a large fleet of ageing transformers is a challenge for utilities, which makes monitoring of transformer condition essential. One of the most widely used monitoring methods is oil sampling and testing, so databases of oil test records are a rich source of information for transformer ageing assessment and asset management. In this work, databases from three UK utilities, covering about 4,600 transformers and 65,000 oil test entries, were processed, cleaned and analysed. The procedures used can guide asset managers in approaching such databases, for example the need to address oil contamination, changes in measurement procedure and discontinuities caused by oil treatment. An early degradation phenomenon was detected in multiple databases/utilities and was traced to the adoption of the hydrotreatment oil refining technique in the late 1980s; asset managers may need to monitor the affected units more frequently and restructure long-term plans. Population analyses then indicated that higher-voltage transformers (275 kV and 400 kV) are tested more frequently and for more parameters than lower-voltage units (33 kV and 132 kV), and that acidity is the parameter most strongly correlated with transformer in-service age. The influence of the length of oil test records on population ageing trends was also studied: a representative population ageing trend can be obtained even from a short period (e.g. two years) of oil test results, provided the transformer age profile is representative of the whole population. Building on the population analyses, the seasonal influence on moisture was investigated, which underlines the importance of recording oil sampling temperature for better interpretation of moisture and, indirectly, of breakdown voltage records.
A condition mismatch between dielectric dissipation factor and resistivity was also discovered, which may call for a revision of the current IEC 60422 oil maintenance guide. Finally, insulation condition ranking was performed using principal component analysis (PCA) and the analytic hierarchy process (AHP). The two techniques proved to be not only capable alternatives to traditional empirical formulae: PCA allows fast, objective interpretation, while AHP enables flexible and comprehensive analysis that incorporates both objective and subjective information.
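The PCA step used for condition ranking can be sketched in a few lines; the oil test parameters and sample values below are invented for illustration and do not come from the utility databases analysed in the thesis.

```python
import numpy as np

# Illustrative oil test data for five transformers (rows):
# columns = acidity (mg KOH/g), moisture (ppm), dielectric dissipation factor.
# In all three columns, higher values indicate worse insulation condition.
X = np.array([
    [0.03, 10.0, 0.002],
    [0.12, 25.0, 0.010],
    [0.25, 40.0, 0.030],
    [0.06, 15.0, 0.004],
    [0.30, 55.0, 0.045],
])

# Standardise each parameter so no single unit system dominates.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# First principal component via SVD of the standardised data.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
pc1 = Vt[0]
# Orient the component so that larger scores mean worse condition.
if pc1.sum() < 0:
    pc1 = -pc1

scores = Z @ pc1                      # condition score per transformer
ranking = np.argsort(scores)[::-1]    # worst condition first
print(ranking)
```

Because the parameters all degrade together, the first component carries most of the variance and a single score per unit gives an objective ordering, which is what makes the PCA interpretation fast.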
752

Séparation de sources dans des mélanges non-linéaires / Blind Source Separation in Nonlinear Mixtures

Ehsandoust, Bahram 30 April 2018
Blind Source Separation (BSS) is a technique for estimating individual source signals from their mixtures observed at multiple sensors, when the mixing model is unknown. Although it has been shown mathematically that for linear mixtures, under mild conditions, mutually independent sources can be reconstructed up to accepted ambiguities, no such general theoretical basis exists for nonlinear models. This is why results in the literature over recent decades are relatively few and focus on specific structured nonlinearities. In the present study, the problem is tackled with a novel approach that exploits the temporal information of the signals. The original idea is to study a linear but time-varying source separation problem deduced from the initial nonlinear problem by differentiation. It is shown that previously proposed counter-examples demonstrating the inefficiency of Independent Component Analysis (ICA) for nonlinear mixtures lose their validity when independence is considered in the sense of stochastic processes rather than simple random variables. Based on this approach, both theoretical results and algorithmic developments are provided. Even though these achievements are not claimed to constitute a mathematical proof of the separability of nonlinear mixtures, it is shown that, under a few assumptions satisfied in most practical applications, such mixtures are separable. Moreover, nonlinear BSS is also addressed for two useful classes of source signals: (1) spatially sparse sources and (2) Gaussian processes. Distinct BSS methods are proposed for these two cases, each of which has been widely studied in the literature and models many practical applications. Concerning Gaussian processes, it is demonstrated that not all nonlinear mappings can preserve the Gaussianity of the input; restricted to polynomial functions, the only Gaussianity-preserving function is linear. This idea is used to propose a linearizing algorithm which, cascaded with a conventional linear BSS method, separates polynomial mixtures of Gaussian processes. Concerning spatially sparse sources, it is shown that they form manifolds in the observation space and can be separated once the manifolds are clustered and learned. For this purpose, the multiple-manifold learning problem is studied in general terms; its results are not limited to the proposed BSS framework and can be employed in other topics posing a similar problem.
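The core derivative idea can be checked numerically: differentiating a nonlinear mixture x(t) = f(s(t)) gives ẋ(t) = J_f(s(t)) ṡ(t), a linear but time-varying mixture of the source derivatives. The polynomial mixture below is a toy example chosen for the sketch, not one taken from the thesis.

```python
import numpy as np

# Two smooth source signals sampled on a fine grid.
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
s1 = np.sin(2 * np.pi * 3 * t)
s2 = np.cos(2 * np.pi * 5 * t)

# A toy polynomial (nonlinear) mixture x = f(s1, s2).
x1 = s1 + 0.5 * s2 + 0.3 * s1 * s2
x2 = s2 - 0.4 * s1 + 0.2 * s1 ** 2

# Time derivatives by central differences.
dx1 = np.gradient(x1, dt)
dx2 = np.gradient(x2, dt)
ds1 = np.gradient(s1, dt)
ds2 = np.gradient(s2, dt)

# Jacobian of f evaluated along the trajectory: at each instant
# dx = J(s(t)) ds, i.e. a linear but time-varying mixture of ds1, ds2.
J11 = 1.0 + 0.3 * s2           # d x1 / d s1
J12 = 0.5 + 0.3 * s1           # d x1 / d s2
J21 = -0.4 + 0.4 * s1          # d x2 / d s1
J22 = np.ones_like(t)          # d x2 / d s2

pred_dx1 = J11 * ds1 + J12 * ds2
pred_dx2 = J21 * ds1 + J22 * ds2

# Check the match away from the boundary points of the grid.
err1 = np.max(np.abs(pred_dx1 - dx1)[5:-5])
err2 = np.max(np.abs(pred_dx2 - dx2)[5:-5])
print(err1, err2)
```

At each time instant the mixture of the derivatives is linear, which is what lets linear, time-varying separation machinery be applied to an originally nonlinear problem.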
753

Aide à la conception de chaînes logistiques humanitaires efficientes et résilientes : application au cas des crises récurrentes péruviennes / Resilient and efficient humanitarian supply chain design approach : application to recurrent peruvian disasters

Vargas Florez, Jorge 15 October 2014
Every year, more than 400 natural disasters strike around the world. To assist the affected populations, humanitarian organizations store emergency aid in warehouses in advance. This thesis provides decision-support tools for locating and sizing such humanitarian warehouses. Our approach is based on the construction of representative and realistic scenarios. A scenario expresses the occurrence of a disaster whose epicentre, severity and probability of occurrence are known; this step relies on the exploitation and analysis of databases of past disasters. The second step addresses the geographical propagation of the disaster and determines its impact on the population of each affected area. This impact depends on the vulnerability and resilience of the territory: vulnerability measures the expected value of the damage, while resilience estimates the capacity to withstand the shock and recover quickly. Both are largely determined by economic and social factors, whether structural (geography, GDP, etc.) or political (existence of relief infrastructure, enforcement of construction standards, etc.). Using principal component analysis (PCA), we identify, for each territory, the influential factors of resilience and vulnerability and then estimate the number of victims from these factors. Infrastructure (water, telecommunications, electricity, transport routes) is often destroyed or damaged by the disaster (e.g. Haiti in 2010), so the last step assesses the logistical impacts, specifically restrictions on existing transport capacity and the destruction of all or part of the emergency stocks. The remainder of the study concerns the location and sizing of the warehouse network. Our models are original in accounting for the degradation of resources and infrastructure caused by the disaster (resilience dimension) and in optimizing the ratio between the costs incurred and the result obtained (efficiency dimension). We first consider a single scenario, where the problem is an extension of a classical facility location problem, and then a set of probabilistic scenarios, an approach essential to capturing the highly uncertain character of humanitarian disasters. These contributions were confronted with reality through an application to the recurrent crises of Peru. These crises, mainly due to earthquakes and floods (El Niño), require a first-aid logistics network that is both resilient and efficient.
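A minimal sketch of a scenario-based location problem of this kind, assuming invented sites, distances, demands and probabilities (none of the figures come from the thesis): each scenario can knock out warehouses (the resilience dimension), and the expected cost over scenarios is minimised by brute force.

```python
import itertools

# Hypothetical candidate warehouse sites and demand zones (distances in km).
sites = ["Lima", "Ica", "Cusco"]
open_cost = {"Lima": 100.0, "Ica": 60.0, "Cusco": 80.0}
dist = {  # dist[site][zone]
    "Lima":  {"A": 50, "B": 300, "C": 600},
    "Ica":   {"A": 250, "B": 40, "C": 500},
    "Cusco": {"A": 550, "B": 450, "C": 60},
}

# Probabilistic disaster scenarios: probability, demand per zone, and the
# set of sites knocked out (resilience: a hit warehouse is unusable).
scenarios = [
    {"p": 0.5, "demand": {"A": 10, "B": 2, "C": 1}, "down": set()},
    {"p": 0.3, "demand": {"A": 1, "B": 8, "C": 2}, "down": {"Ica"}},
    {"p": 0.2, "demand": {"A": 2, "B": 1, "C": 9}, "down": {"Cusco"}},
]

def expected_cost(opened):
    """Fixed opening cost plus expected transport cost over scenarios."""
    cost = sum(open_cost[s] for s in opened)
    for sc in scenarios:
        usable = [s for s in opened if s not in sc["down"]]
        if not usable:          # no surviving warehouse: heavy penalty
            cost += sc["p"] * 1e6
            continue
        for zone, dem in sc["demand"].items():
            cost += sc["p"] * dem * min(dist[s][zone] for s in usable)
    return cost

# Brute force over all non-empty subsets of candidate sites.
best = min(
    (set(c) for r in range(1, len(sites) + 1)
     for c in itertools.combinations(sites, r)),
    key=expected_cost,
)
print(sorted(best))
```

Real instances are far too large for enumeration and call for the optimization models developed in the thesis, but the toy example shows why scenario probabilities and infrastructure loss change which network is best.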
754

Analyse par ToF-SIMS de matériaux fragiles pour les micro/nanotechnologies : évaluation et amplification de l'information chimique / ToF-SIMS characterisation of fragile materials used in microelectronic and microsystem devices : validation and enhancement of the chemical information

Scarazzini, Riccardo 04 July 2016
Today, a wide range of materials that can be described as "fragile" because of their shape, dimensions or density are integrated into micro- and nanotechnology devices. In this work, three such materials, at different levels of technological and industrial maturity, are studied by time-of-flight secondary ion mass spectrometry (ToF-SIMS): mesoporous silicon, thin polymethacrylate films deposited by initiated chemical vapour deposition (iCVD), and hybrid organosilicate (SiOCH) low-permittivity dielectric materials (low-k). The objective is to verify and validate ToF-SIMS as a reliable technique for characterising the chemical properties of these materials: because of their intrinsic fragility, the consistency of the chemical information depends on an appropriate interpretation of the specific ion/matter interactions taking place during analysis. For mesoporous silicon, a systematic analysis is carried out with various sputtering ion sources (caesium, xenon and oxygen); both sputtering and ionisation behaviours are examined relative to non-porous silicon, taking into account the energy of the sputtering beam and the porosity of the target material. Significantly different morphological modifications are also observed depending on the ion source and are correlated with different sputtering regimes, mainly induced by the porosity of the target. For nanometre-thick polymer films, low-damage analysis conditions, notably argon cluster primary ion beams, are applied in order to obtain molecular secondary ion information rich in high masses. Under these conditions, polymethacrylate films with quasi-identical chemical structures can be discriminated, and a quantification protocol for copolymers is proposed. In addition, principal component analysis (PCA) applied to the spectra reveals a clear correlation between the main principal components and the molecular weight of the polymer films. Finally, the effect of integration processes such as etching and wet cleaning, which are necessary for the industrial use of low-k materials but detrimental to their dielectric properties, is studied. To obtain depth-resolved chemical information, low-energy caesium sputtering is identified as the most sensitive and best-adapted strategy. PCA again proves essential for amplifying the chemical differences between samples, making it possible to relate variations in dielectric constant to chemical composition.
755

Mesoscale Turbulence on the Ocean Surface from Satellite Altimetry

Khatri, Hemant January 2015
The dynamics captured in the ocean surface current data provided by satellite altimetry have been a subject of debate over the past decade. In particular, the relative contributions of surface and interior dynamics to altimetry remain unclear. One avenue to settling this issue is to compare the turbulence captured by altimetry (for example, the nature of spectra and interscale fluxes) with theories of two-dimensional, surface and interior quasigeostrophic turbulence. In this thesis, we focus on mesoscales (i.e., scales of the order of a few hundred kilometres), which are well resolved by altimetry data. Aspects of two-dimensional, three-dimensional, geostrophic and surface quasigeostrophic turbulence are revisited and compared with the observations. Specifically, we compute kinetic energy (KE) spectra and fluxes in five geographical regions (spread over the globe) using 21 years of 0.25° resolution daily data provided by the AVISO project. We report a strong forward cascade of KE at small scales (accompanied by a spectral scaling of the form k^-3) and a robust inverse cascade at larger scales. Further, we show that the small divergent part of the horizontal velocity data drives the strong forward flux of KE. Indeed, on considering only the non-divergent part of the flow, in accord with incompressible two-dimensional turbulence, the inverse cascade is unaffected, but the forward transfer becomes very weak and the spectral slopes over this range of scales steepen to approximately k^-3.5. We note that our results agree neither with interior first-baroclinic-mode quasigeostrophic turbulence (incorrect strength of the forward flux) nor with surface quasigeostrophic turbulence (incorrect spectral slopes). Rather, they are compatible with rotating shallow water and rotating stratified Boussinesq models, in which geostrophic balance is dominant but the divergence of the horizontal velocity field is not exactly zero.
Having seen the "mean" picture of fluxes and spectra from altimetry, in the second part of the thesis we investigate the variability of these quantities. In particular, we employ Empirical Orthogonal Function (EOF) analysis and focus on the variability of the spectral flux. Remarkably, over the entire globe, irrespective of the region under consideration, the first two EOFs explain a large part of the variability in the flux anomalies. The geometry of these modes is distinct: the first represents a single-signed transfer across scales (i.e., large to small or small to large depending on the sign of the associated principal component), while the second is a mixed mode in that it exhibits a forward/inverse transfer at large/small scales.
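EOF analysis of this kind amounts to a singular value decomposition of the anomaly matrix. The sketch below applies it to synthetic flux-anomaly profiles built from two known modes; it does not use AVISO data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectral flux anomaly" data: 120 time samples of a flux
# profile over 40 wavenumber bins, built from two known spatial modes.
k = np.linspace(0, 1, 40)
mode1 = np.sin(np.pi * k)          # single-signed transfer across scales
mode2 = np.sin(2 * np.pi * k)      # mixed mode: sign change mid-range
pcs = rng.normal(size=(120, 2)) * [3.0, 1.5]
data = (pcs[:, :1] * mode1 + pcs[:, 1:] * mode2
        + 0.1 * rng.normal(size=(120, 40)))

# EOFs are the right singular vectors of the time-anomaly matrix.
anom = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The first two EOFs should recover the two generating modes and
# capture almost all the variance, mirroring the altimetry result.
align = abs(Vt[0] @ mode1) / np.linalg.norm(mode1)
print(explained[:2].sum(), align)
```

In the real analysis the data matrix holds flux anomalies from altimetry rather than synthetic modes, but the decomposition and the variance-explained diagnostic are the same.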
756

Análise de Sinais Eletrocardiográficos Atriais Utilizando Componentes Principais e Mapas Auto-Organizáveis. / Atrial Electrocardiographic Signal Analysis Using Principal Components and Self-Organizing Maps.

Coutinho, Paulo Silva 21 November 2008
Signals from an electrocardiogram (ECG) can be of great value in assessing a patient's cardiac behaviour. ECG signals have specific characteristics according to the type of arrhythmia, and their classification depends on the morphology of the signal. This work considers a hybrid approach using principal component analysis (PCA) and self-organizing maps (SOM) to classify clusters arising from arrhythmias such as sinus tachycardia and, above all, atrial fibrillation. PCA is used as a preprocessor to suppress signals of ventricular activity, so that the atrial activity present in the ECG is brought out in the form of f waves. The SOM neural network is then used to classify atrial fibrillation patterns and their clusters.
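The PCA preprocessing idea, suppressing the dominant ventricular activity so the small atrial component emerges, can be sketched on synthetic multi-lead data; the waveforms and the 8-lead geometry below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 8-lead recording: a large "ventricular" waveform projected
# onto all leads, plus a small "atrial" f-wave on an orthogonal direction.
t = np.linspace(0, 2, 1000)
ventricular = np.exp(-((t % 0.8 - 0.4) ** 2) / 0.002)  # spiky QRS-like train
atrial = 0.05 * np.sin(2 * np.pi * 7 * t)              # small f-wave (~7 Hz)

v_dir = rng.normal(size=8)
v_dir /= np.linalg.norm(v_dir)
a_dir = rng.normal(size=8)
a_dir -= (a_dir @ v_dir) * v_dir       # make atrial direction orthogonal
a_dir /= np.linalg.norm(a_dir)

leads = np.outer(ventricular, v_dir) + np.outer(atrial, a_dir)
leads += 0.002 * rng.normal(size=leads.shape)

# PCA across leads: the first component captures ventricular activity;
# removing it leaves a residual dominated by the atrial signal.
X = leads - leads.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
ventricular_estimate = np.outer(U[:, 0] * s[0], Vt[0])
residual = X - ventricular_estimate

# Project the residual onto the (here known) atrial direction.
recovered = residual @ a_dir
corr = np.corrcoef(recovered, atrial)[0, 1]
print(abs(corr))
```

On real ECG, the atrial direction is of course not known in advance; the point of the sketch is only that removing the leading principal component strips the high-energy ventricular activity while leaving the f-wave content for the SOM stage.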
757

Modelo HJM com jumps: o caso brasileiro

Suzuki, Fernando Kenji 22 August 2015
Using market data from BM&F Bovespa, this work proposes a possible variation of the Heath, Jarrow and Morton model in its discrete, multifactor form, with the insertion of jumps as a way to account for the effect of the meetings of the Brazilian Monetary Policy Committee (Copom). The model parameters are calibrated through principal component analysis (PCA), allowing the evolution of the term structure of the Brazilian fixed-rate (PRE) interest rate curve to be simulated via Monte Carlo simulation. The scenarios generated by simulating the curve at fixed (synthetic) vertices are then compared with the data observed in the market.
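A hedged sketch of the PCA calibration and Monte Carlo step: synthetic curve data stand in for the BM&F Bovespa records, and a crude parallel jump on a known meeting date stands in for the Copom effect described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic history of daily changes in a zero curve at 6 fixed maturities
# (years); in practice these would be PRE-curve vertices from market data.
maturities = np.array([0.25, 0.5, 1, 2, 3, 5])
n_days = 750
level = rng.normal(0, 0.0010, n_days)        # parallel moves
slope = rng.normal(0, 0.0004, n_days)        # steepening moves
dcurve = (level[:, None] * np.ones(6)
          + slope[:, None] * (maturities - maturities.mean())
          + rng.normal(0, 0.0001, (n_days, 6)))

# PCA calibration: eigendecomposition of the covariance of curve changes.
cov = np.cov(dcurve, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
n_factors = 2                                  # keep the dominant factors
loadings = eigvec[:, :n_factors] * np.sqrt(eigval[:n_factors])

# Monte Carlo: diffuse the curve with the PCA factors plus a jump at a
# known policy-meeting date.
n_paths, horizon, meeting_day, jump_size = 5000, 21, 10, -0.0025
curve0 = 0.12 + 0.005 * np.log1p(maturities)   # illustrative starting curve
shocks = rng.normal(size=(n_paths, horizon, n_factors)) @ loadings.T
paths = curve0 + shocks.cumsum(axis=1)
paths[:, meeting_day:, :] += jump_size         # parallel jump after meeting
mean_final = paths[:, -1, :].mean(axis=0)
print(mean_final - curve0)
```

Because the diffusion has zero mean, the average simulated curve shifts by exactly the jump size after the meeting date, which is the mechanism the model uses to encode the expected Copom decision.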
758

Evaluating garment size and fit for petite women using 3D body scanned anthropometric data

Phasha, Masejeng Marion 05 1900
Research suggests that there is a plethora of information on the size and shape of average and plus-size women in South Africa (Winks, 1990; Pandarum, 2009; Muthambi, 2012; Afolayan & Mastamet-Mason, 2013; Makhanya, 2015). However, there is very little information, especially in South Africa, on petite women's body shapes, body measurements and shopping behaviour for the purpose of manufacturing ready-to-wear garments. The purpose of this study was to investigate the shapes and sizes of a sample of petite South African women and to develop size charts for the upper and lower body dimensions. The study used a mixed-methods approach with a purposive, non-probability sampling method. A (TC)² NX16 3D full body scanner and an Adam's® medical scale were used to collect body measurement data from 200 petite South African women, aged between 20 and 54 years with an average height of 157 cm, residing in Gauteng (Pretoria and Johannesburg). Other data collection instruments included a demographic questionnaire covering information such as age, height and weight, and a psychographic questionnaire gathering the subjects' demographics as well as their perceptions and preferences regarding currently available ready-to-wear shirt and trouser garments. Of the 200 subjects initially recruited on the basis of a body height of 5'4" (163 cm) or below, the most prevalent body shape profile that emerged from the dataset was the pear shape, evident in 180 of the 3D full body scanned subjects. The anthropometric data for these 180 subjects was therefore used in developing the experimental upper and lower body dimension size charts and as the basis for the fit test garments developed in this study.
The collected data was analysed and interpreted in Microsoft Excel and the IBM SPSS Statistics 24 (2016) software package, using principal component analysis (PCA) to produce the experimental size charts for the upper and lower body dimensions needed to create prototype shirt and trouser garments. Regression analysis was used to establish the primary and secondary body dimensions for the size charts and to determine the size ranges. The experimental upper and lower body dimension size charts cover sizes 6/30 to 26/50. The accuracy of the size charts was subsequently evaluated by a panel of experts, who analysed, on a sample of the petite subjects, the fit of prototype shirt and trouser garments manufactured to the size 10/34 measurements from the size chart. The fit of these garments was also compared with that of garments manufactured using the 3D full body scanned measurements of a size 10/34 petite tailoring mannequin currently commercially available for producing petite women's garments in South Africa. The prototype shirt and trousers developed from the size 10/34 size chart measurements had, overall, a better quality of fit than the garments made to fit the commercially available size 10/34 mannequin. These findings confirm that the data extracted from the (TC)² NX16 3D full body scanner, and the size charts developed from it, can provide better fit in garments for petite South African women than data published hitherto. On the evidence of this study, it is recommended that the South African garment manufacturing industry revise the current sizing system for petite women to accommodate the body dimensions and shape variations that currently prevail amongst consumers.
South African garment manufacturers and retailers also need to familiarise themselves with the needs, challenges and preferences of the petite consumer target market that purchases ready-to-wear shirt and trouser garments in South Africa. / Life and Consumer Sciences / M.ConSci. (Department of Life and Consumer Science)
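The regression step that links primary and secondary body dimensions in a size chart can be sketched as follows; the sample and the measurement relations are simulated for illustration, not drawn from the study's scan data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical anthropometric sample (cm): a size-chart workflow predicts
# secondary dimensions from a primary dimension (here bust girth).
n = 180
bust = rng.normal(88, 6, n)                     # primary dimension
waist = 0.85 * bust - 8 + rng.normal(0, 2, n)   # secondary dimensions with
hip = 1.05 * bust + 2 + rng.normal(0, 2, n)     # plausible-looking relations

# Least-squares regression of each secondary dimension on the primary.
A = np.column_stack([bust, np.ones(n)])
coef_waist, *_ = np.linalg.lstsq(A, waist, rcond=None)
coef_hip, *_ = np.linalg.lstsq(A, hip, rcond=None)

# Build a size chart over a grid of primary-dimension size intervals.
grid = np.arange(80, 104, 4)                    # bust value for each size
chart = {
    f"bust {b}": {
        "waist": round(coef_waist[0] * b + coef_waist[1], 1),
        "hip": round(coef_hip[0] * b + coef_hip[1], 1),
    }
    for b in grid
}
print(chart["bust 88"])
```

Grading a chart this way keeps the secondary dimensions consistent with how they actually co-vary in the measured population, instead of adding fixed increments per size.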
759

The role of climate and land use change in Lake Urmia desiccation

Fazel Modares, N. (Nasim) 16 November 2018
Abstract Wetlands in arid and semi-arid regions are complex fragile ecosystems that are critical in maintaining and controlling environmental quality and biodiversity. These wetlands and specially closed lake systems depend on support processes in upstream parts of the basin or recharge zone, as small changes in river flow regime can cause significant changes in lake level, salinity and productivity. Recent strong alterations in river flow regimes due to climate and land use change have resulted in ecosystem degradation and desiccation of many saline lakes in arid and semi-arid regions. Because of the low economic value of these lakes, their hydrology has not been monitored accurately, making it difficult to determine water balance and assess the role of water use and climate in lake desiccation. Furthermore, available data are usually of coarse resolution on both spatial and temporal scale. New frameworks using all available data and refining existing information on lake basins were developed in this thesis to assess regional differences in water resource availability, impacts of human activities on river flow regime alteration and agricultural land use change. The frameworks were applied to study causes and impacts of desiccation of a major lake, Lake Urmia, one of the largest saltwater lakes on Earth. This highly endangered ecosystem is on the brink of a major environmental disaster resembling that around the Aral Sea. The spatial pattern of precipitation across the Lake Urmia basin was investigated, to shed light on regional differences in water availability. Using large numbers of rainfall records and a wide array of statistical descriptors, precipitation across space and time was evaluated. Another important research component involved examining streamflow records for headwaters and lowland reaches of the Lake Urmia basin, in order to determine whether observed changes are mainly due to climate change or anthropogenic activities (e.g. 
water withdrawal for domestic and irrigation purposes). Principal component and clustering analyses of all available precipitation data for the lake basin revealed a heterogeneous precipitation pattern, but also permitted delineation of three homogeneous precipitation areas within the region. Further analysis identified variation in seasonal precipitation as the most important factor controlling the spatial precipitation pattern in the basin. The results showed that the impact of climate change on the headwaters is insignificant and that irrigation is the main driving force for river flow regime alterations in the basin. This is supported by evidence that the headwaters have remained relatively unaffected by agriculture and by the lack of significant changes in the historical records. The approach presented, involving clear interpretation of existing information, can be useful in communicating land use and climate change information to decision makers and lake restoration planners. / Abstract: Wetlands located in arid and semi-arid regions are fragile ecosystems. They are also exceptionally important, because they maintain and regulate environmental quality and biodiversity. These wetlands, like most other wetlands, depend on actions taken in the upper parts of the basin, such as river regulation. Even small changes in river flows can cause significant changes in lake water level, salinity and productivity. Recent strong alterations in river flows caused by climate change and land use change have led to ecosystem degradation and to the desiccation of many saline lakes in arid and semi-arid regions. The hydrology of saline lakes in arid regions has not been monitored sufficiently because of their lower economic value, which complicates determining the water balance. In the absence of accurate data, it is also difficult to assess how water use and climate have contributed to lake desiccation. Furthermore, the available data are usually imprecise both temporally and spatially. At worst, the lack of data and tools needed for analysis can lead to contradictory assumptions. The main aim of this thesis is to provide frameworks that improve understanding of regional differences in water resources, of the impacts of human activity on changes in river flow regimes, and of changes in agricultural land use, using all available data while refining existing knowledge of lake basins. The thesis investigates the causes and consequences of the desiccation of one major lake. Lake Urmia is one of the largest saltwater lakes on Earth and a highly endangered ecosystem. The lake is on the brink of an environmental catastrophe similar to the one that caused the desiccation of the Aral Sea. The thesis provides information on regional differences in water availability by studying the spatial distribution of precipitation across the Lake Urmia basin. The temporal and spatial variation of precipitation is evaluated using a range of statistical methods. The second major part of the study focuses on runoff records from the headwaters and lowland areas of the basin. The main aim of this part is to determine whether the observed changes in the lake are due mainly to climate change or to human activity such as irrigation. The results of principal component and clustering analyses of the precipitation data show that the Lake Urmia basin is heterogeneous in its precipitation, but the analysis also allowed the region to be divided into three homogeneous precipitation areas. It identified seasonal variation in precipitation as the most significant factor influencing the spatial precipitation pattern of the basin. The results indicate that the impacts of climate change on the headwaters were not significant and that irrigation is by far the most significant driver of alterations in the river flow regimes of the basin. This conclusion is supported by the fact that agriculture has had little effect on the headwaters and that no significant changes appear in their historical records. The value of the study lies in its clear interpretation of the available information, which helps when communicating land use and climate change information to decision makers and lake restoration planners.
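The principal component and clustering workflow described in the abstract can be sketched roughly as follows. This is an illustrative assumption, not the thesis's actual data or code: the station count, the three seasonal profiles, the noise level, k = 3, and the plain k-means routine are all made up for the example.

```python
# Rough sketch (not the thesis's code): delineating homogeneous
# precipitation regions via PCA + k-means on station climatologies.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly precipitation climatology for 30 stations:
# three regions with distinct seasonal cycles, plus noise.
months = np.arange(12)
profiles = np.stack([
    40 + 30 * np.cos(2 * np.pi * (months - 3) / 12),   # spring-peaked
    40 + 30 * np.cos(2 * np.pi * months / 12),         # winter-peaked
    40 + 30 * np.cos(2 * np.pi * (months - 10) / 12),  # autumn-peaked
])
labels_true = np.repeat([0, 1, 2], 10)
X = profiles[labels_true] + rng.normal(0, 3, (30, 12))

# PCA: centre the data, then project onto the leading eigenvectors of
# the covariance matrix; two components capture the seasonal contrast.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
eigval, eigvec = np.linalg.eigh(cov)          # eigenvalues ascending
scores = Xc @ eigvec[:, ::-1][:, :2]          # leading two PC scores

def kmeans(pts, k, iters=25):
    """Plain Lloyd's k-means with deterministic farthest-point init."""
    centres = [pts[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pts - c, axis=1) for c in centres], axis=0)
        centres.append(pts[d.argmax()])       # farthest remaining point
    centres = np.array(centres)
    for _ in range(iters):
        dists = np.linalg.norm(pts[:, None, :] - centres[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        centres = np.stack([pts[assign == j].mean(axis=0) for j in range(k)])
    return assign

# Cluster the PC scores into three homogeneous precipitation areas.
clusters = kmeans(scores, 3)
```

On well-separated seasonal regimes like these, clustering in the reduced PC space rather than on the raw 12-month vectors is the usual reason for pairing PCA with k-means: it suppresses station-level noise before regionalization.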
760

Operational research on an urban planning tool : application in the urban development of Strasbourg 1982 / An empirical approach to urban restructuring: application of a Multi-Agent System (MAS) to Strasbourg 1982

Kaboli, Mohammad Hadi 28 June 2013 (has links)
The impact of spatial characteristics on the dynamics of urban development is of great interest in urban studies. The interaction between residents and spatial characteristics is of particular interest in the context of urban models, because many urban models are founded on the process by which individuals settle in specific parts of cities. This is a study of the dynamics of urban development using Cellular Automata and a Multi-Agent System. Urban development in this study covers urban renewal and residential mobility. It corresponds to the residential mobility of households attracted by residential comfort and centrality comfort; these comforts are located in a few districts of Strasbourg. The diversity and quality of these comforts become criteria for residential choice, such that each household seeks proximity to them. In this study, the Cellular Automaton models the technical characteristics of the spatial units, which are identified by inherent attributes equal to the comforts of residences and urban areas. In the Multi-Agent System, the population of the city of Strasbourg interacts among itself and with the city. The agents represent the socio-professional classes of households. Through spatio-temporal change, the aspirations of households shape the socio-spatio-temporal development of the city. / The impact of spatial characteristics on the dynamics of urban development is a topic of great interest in urban studies. 
The interaction between the residents and the spatial characteristics is of particular interest in the context of urban models, where some of the most prominent urban models are based on the process of individual settlement in specific parts of cities. This research investigates the dynamics of urban development modelled by Cellular Automata and a Multi-Agent System. Urban development in this study embraces urban renewal and residential mobility. It corresponds to the residential mobility of households attracted by residential and centrality comfort; these comforts are crystallized in certain areas and residences of Strasbourg. The diversity and quality of these comforts become criteria for residential choice, in that households seek proximity to them. The Cellular Automata in this study model the spatial characteristics of urban spatial units, which are identified by inherent attributes equal to the comfort of residences and urban areas. The Multi-Agent System represents a system in which the population of the city interacts with itself and with the city; the agents represent the socio-professional classes of households. Through spatio-temporal change, the aspirations of households form the socio-spatio-temporal development of the city.
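The coupling described above, a cellular layer carrying fixed comfort attributes and an agent layer of households relocating toward it, can be sketched minimally as follows. Everything here is a hypothetical stand-in for the thesis's model: the grid size, the centre-peaked comfort gradient, and the greedy one-cell move rule are illustrative assumptions, not the actual implementation.

```python
# Minimal sketch (not the thesis's model): cells carry a comfort
# attribute, household agents relocate toward higher-comfort cells.
import random

random.seed(0)
SIZE = 10  # 10x10 grid of spatial units

# Cellular layer: comfort (residential + centrality amenities),
# highest near the centre, mimicking a city-centre gradient.
def comfort(x, y):
    return 1.0 / (1 + abs(x - SIZE // 2) + abs(y - SIZE // 2))

# Agent layer: 20 households start on random, distinct cells.
households = random.sample(
    [(x, y) for x in range(SIZE) for y in range(SIZE)], 20)

def step(households):
    """Each household moves to its best vacant neighbouring cell,
    but only if that improves on its current comfort."""
    occupied = set(households)
    moved = []
    for (x, y) in households:
        options = [(x + dx, y + dy)
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE
                   and (x + dx, y + dy) not in occupied]
        best = max(options, key=lambda c: comfort(*c), default=(x, y))
        if comfort(*best) > comfort(x, y):
            occupied.discard((x, y))   # vacate the old cell
            occupied.add(best)
            moved.append(best)
        else:
            moved.append((x, y))
    return moved

# Mean comfort can only rise, since every move is an improvement.
before = sum(comfort(x, y) for x, y in households) / len(households)
for _ in range(30):
    households = step(households)
after = sum(comfort(x, y) for x, y in households) / len(households)
```

Because each individual move strictly increases that household's comfort and the occupancy set is updated in place, the population drifts toward the high-comfort core without two agents ever occupying the same cell, a simple analogue of comfort-driven residential mobility.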
