51

Quantitative Spatial Upscaling of Categorical Data in the Context of Landscape Ecology: A New Scaling Algorithm

Gann, Daniel 28 June 2018 (has links)
Spatially explicit ecological models rely on spatially exhaustive data layers that have scales appropriate to the ecological processes of interest. Such data layers are often categorical raster maps derived from high-resolution, remotely sensed data that must be scaled to a lower spatial resolution to make them compatible with the scale of ecological analysis. Statistical functions commonly used to aggregate categorical data are majority-, nearest-neighbor- and random-rule. For heterogeneous landscapes and large scaling factors, however, use of these functions results in two critical issues: (1) ignoring large portions of the information present in the high-resolution grid cells leads to high and uncontrolled loss of information in the scaled dataset; and (2) maintaining classes from the high-resolution dataset at the lower spatial resolution assumes validity of the classification scheme at the low-resolution scale, failing to represent recurring mixes of heterogeneous classes present in the low-resolution grid cells. The proposed new scaling algorithm resolves these issues, aggregating categorical data while simultaneously controlling for information loss by generating a non-hierarchical, representative classification system valid at the aggregated scale. Implementing scaling parameters that control class-label precision effectively reduced information loss in the scaled landscapes as class-label precision increased. In a neutral-landscape simulation study, the algorithm consistently preserved information at a significantly higher level than the other commonly used algorithms. When applied to maps of real landscapes, the same increase in information retention was observed, and the scaled classes were detectable from lower-resolution, remotely sensed, multi-spectral reflectance data with high accuracy. The framework developed in this research facilitates scaling-parameter selection to address trade-offs among information retention, label fidelity, and spectral detectability of scaled classes. When generating high-spatial-resolution land-cover maps, quantifying the effects of sampling intensity, feature-space dimensionality, and classifier method on overall accuracy, confidence estimates, and classifier efficiency allowed optimization of the mapping method. Increases in sampling intensity boosted accuracies in a reasonably predictable fashion. However, adding a second image acquired when ground conditions and vegetation phenology differed from those of the first image had a much greater impact, increasing classification accuracy even at low sampling intensities to levels not reached with a single-season image.
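To make the information-loss problem concrete, the following is a minimal sketch of the conventional majority-rule aggregation that the abstract critiques (not the dissertation's proposed algorithm); the block size and class codes are illustrative assumptions:

```python
import numpy as np

def majority_rule_upscale(raster: np.ndarray, factor: int) -> np.ndarray:
    """Assign each low-resolution cell the most frequent class among its
    factor x factor block of high-resolution cells. Minority classes
    inside each block are discarded, which is exactly the uncontrolled
    loss the proposed scaling algorithm is designed to manage."""
    h, w = raster.shape
    assert h % factor == 0 and w % factor == 0, "dimensions must divide evenly"
    # Group cells into (blocks_y, blocks_x, factor*factor) blocks.
    blocks = raster.reshape(h // factor, factor, w // factor, factor)
    blocks = blocks.swapaxes(1, 2).reshape(h // factor, w // factor, factor * factor)
    out = np.empty(blocks.shape[:2], dtype=raster.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Majority vote over the block's class codes.
            out[i, j] = np.bincount(blocks[i, j]).argmax()
    return out

# Example: a heterogeneous 4x4 landscape with classes 0-2, scaled by 2.
landscape = np.array([[0, 1, 2, 2],
                      [1, 1, 2, 0],
                      [0, 0, 1, 1],
                      [0, 2, 1, 1]])
print(majority_rule_upscale(landscape, 2))  # minority classes vanish
```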
52

Efficient treatment of cross-scale interactions in a land-use model

Dietrich, Jan Philipp 01 November 2011 (has links)
Computer models have become a standard tool in many scientific disciplines. A major challenge in modeling is the linking of processes across different scales. Neglecting cross-scale interactions leads to biases in model projections, while a 1:1 representation of reality is computationally infeasible. A good balance between accuracy and abstraction is therefore essential. I investigate efficient implementations of cross-scale interactions in agricultural land-use models, focusing on two dominant aspects: first, the inclusion of spatially explicit, high-resolution data in a global optimization model; second, the proper representation of technological change as a driver of land-use change. Because the complexity of global optimization models is limited, high-resolution data often cannot be used directly as model input. Typically, the spatially explicit data are upscaled using a static aggregation rule. As an alternative, I discuss the use of clustering methods for upscaling and provide a general framework covering the creation of clusters, the upscaling of inputs, and the downscaling of outputs. My investigations show that the information loss due to upscaling is significantly lower with clustering methods than with static aggregation rules. Another important process in agriculture is technological change. Whereas in the past increases in agricultural production were achieved mainly through land expansion, nowadays most increases in total production result from intensification driven by technological change. I present a model implementation of this process, including the feedback of land-use intensity on the effectiveness of the associated investments, built on a newly developed measure of agricultural land-use intensity. Using this measure, I show that the effectiveness of investments in technological change decreases with increasing agricultural land-use intensity. My findings imply that, beyond a model's level of detail, the structure of the chosen implementation has a significant impact on overall simulation quality and should receive more attention in model development.
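As an illustration of the clustering idea (a hedged sketch under invented attributes and unit counts, not the thesis's actual implementation), grid cells with similar characteristics can be grouped with k-means and each cluster treated as one model unit; because k-means minimizes within-cluster variance over the same number of units, it loses less information than fixed spatial blocks:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical per-cell attributes (e.g., yield potential, water
# availability, transport cost) for 10,000 high-resolution grid cells.
cells = rng.random((10_000, 3))

# Static upscaling: average over 100 fixed blocks of 100 cells each.
blocks = cells.reshape(100, 100, 3)
static_units = blocks.mean(axis=1)

# Cluster-based upscaling: 100 units grouped by similarity rather than
# by fixed location, so dissimilar cells are not averaged together.
km = KMeans(n_clusters=100, n_init=10, random_state=0).fit(cells)

# Information loss = squared deviation of each cell from its unit value.
static_loss = ((blocks - static_units[:, None, :]) ** 2).sum()
cluster_loss = ((cells - km.cluster_centers_[km.labels_]) ** 2).sum()
print(f"static loss: {static_loss:.1f}  cluster loss: {cluster_loss:.1f}")
```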
53

Data Collection and Capacity Analysis in Large-scale Wireless Sensor Networks

Ji, Shouling 01 August 2013 (has links)
In this dissertation, we study data collection and its achievable network capacity in Wireless Sensor Networks (WSNs). Firstly, we investigate the data collection issue in dual-radio multi-channel WSNs under the protocol interference model. We propose a multi-path scheduling algorithm for snapshot data collection, which has a tighter capacity bound than the existing best result, and a novel continuous data collection algorithm with comprehensive capacity analysis. Secondly, considering that most existing work on the capacity issue assumes an ideal deterministic network model, we study the data collection problem for practical probabilistic WSNs. We design a cell-based path scheduling algorithm and a zone-based pipeline scheduling algorithm for snapshot and continuous data collection in probabilistic WSNs, respectively. By analysis, we show that the proposed algorithms have competitive capacity performance compared with existing work. Thirdly, most existing work on data collection capacity targets centralized synchronous WSNs, whereas real wireless networks are more likely to be distributed asynchronous systems. We therefore investigate the achievable data collection capacity of realistic distributed asynchronous WSNs and propose a data collection algorithm with fairness considerations. Theoretical analysis shows that its achievable network capacity is order-optimal, matching that of centralized and synchronous algorithms, and is independent of network size. Finally, for completeness, we study the data aggregation issue for realistic probabilistic WSNs and propose order-optimal scheduling algorithms for snapshot and continuous data aggregation under the physical interference model.
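For intuition, here is a minimal sketch of the protocol interference model mentioned above (coordinates and ranges are illustrative assumptions, not the dissertation's parameters): two links may share a time slot only if each receiver lies outside the interference range of the other link's sender.

```python
from math import dist

def conflict_free(link_a, link_b, interference_range: float) -> bool:
    """Protocol interference model: links are (sender, receiver) pairs of
    2-D points; they can be scheduled concurrently iff neither receiver
    is within the interference range of the other link's sender."""
    (sa, ra), (sb, rb) = link_a, link_b
    return dist(sb, ra) > interference_range and dist(sa, rb) > interference_range

# Two links far apart can transmit in the same slot; nearby ones cannot.
link1 = ((0.0, 0.0), (1.0, 0.0))
link2 = ((10.0, 0.0), (11.0, 0.0))
print(conflict_free(link1, link2, interference_range=2.0))          # True
print(conflict_free(link1, ((1.5, 0.0), (2.5, 0.0)), 2.0))          # False
```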
54

Energy-Efficient Routing to Maximize Network Lifetime in Wireless Sensor Networks

Zengin, Asli 01 July 2007 (has links) (PDF)
With the variety of new low-cost sensor devices, there is strong demand for large-scale wireless sensor networks (WSNs). Energy efficiency in routing is crucial for achieving the desired levels of longevity in these networks. Existing routing algorithms that do not combine information on transmission energies on links, residual energies at nodes, and the identity of the data itself cannot reach network capacity. A proof-of-concept routing algorithm that combines data aggregation with minimum-weight path routing is studied in this thesis. This new algorithm can achieve a much longer network lifetime when there is redundancy in the messages carried by the network, a practical reality in sensor network applications.
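A hedged sketch of the kind of minimum-weight path routing the thesis builds on (the weight function combining transmission energy and residual energy is an illustrative assumption, not the thesis's exact formulation):

```python
import heapq

def min_energy_path(graph, source, sink, residual, alpha=1.0):
    """Dijkstra over link weights that grow with transmission energy and
    shrink with the receiver's residual energy, steering traffic away
    from nearly depleted nodes. graph: {u: [(v, tx_energy), ...]}."""
    best = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u == sink:
            break
        if cost > best.get(u, float("inf")):
            continue  # stale heap entry
        for v, tx in graph[u]:
            w = tx + alpha / residual[v]  # penalize low-energy relays
            if cost + w < best.get(v, float("inf")):
                best[v] = cost + w
                prev[v] = u
                heapq.heappush(heap, (cost + w, v))
    # Walk predecessors back from the sink to recover the route.
    path, node = [sink], sink
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1]

graph = {"s": [("a", 1.0), ("b", 1.5)], "a": [("t", 1.0)],
         "b": [("t", 0.5)], "t": []}
residual = {"s": 5.0, "a": 0.2, "b": 4.0, "t": 5.0}
print(min_energy_path(graph, "s", "t", residual))  # routes around depleted 'a'
```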
55

Macroscopic Analysis of Large-scale Systems: Epistemic Emergence and Spatiotemporal Aggregation

Lamarche-Perrin, Robin 14 October 2013 (has links)
The analysis of large-scale systems faces syntactic and semantic difficulties: How does one observe millions of distributed and asynchronous entities? How does one interpret the disorder that results from the microscopic observation of such entities? How does one produce and handle abstractions relevant to the systems' macroscopic analysis? Faced with the failure of the analytic approach, the concept of epistemic emergence, related to the nature of knowledge, allows us to define an alternative strategy. This strategy is motivated by the observation that scientific activity relies on abstraction processes that provide macroscopic descriptions for dealing with the systems' complexity. This thesis is specifically interested in the production of spatial and temporal abstractions through data aggregation. In order to generate representations that remain usable at scale, two essential aspects of the aggregation process must be controlled. First, the complexity and the information content of macroscopic representations should be jointly optimized in order to preserve the details relevant to the observer while minimizing the cost of the analysis. We propose several measures of quality (internal criteria) to evaluate, compare, and select representations depending on the context and the objectives of the analysis. Second, in order to preserve their explanatory power, the generated abstractions should be consistent with the background knowledge the observer brings to the analysis. We propose to exploit the system's organisational, structural, and topological properties (external criteria) to constrain the aggregation process and to generate syntactically and semantically consistent representations. Consequently, automating the aggregation process requires solving a constrained optimization problem. We propose a generic resolution algorithm that adapts to the criteria expressed by the observer, and we show that the complexity of this optimization problem depends directly on those criteria. The macroscopic approach supported by this thesis is evaluated on two classes of systems. First, the aggregation process is applied to the visualisation of large-scale parallel applications for performance analysis. It enables the detection of anomalies at several levels of granularity in the execution traces and the explanation of these anomalies from the system's syntactic properties. Second, the process is applied to the aggregation of media data for the analysis of international relations. The geographical and temporal aggregation of media attention defines macroscopic events that are semantically relevant for the analysis of the international system. Moreover, we believe that the approach and the tools presented in this thesis can be generalized to many other application domains.
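As a concrete illustration of the internal criteria described above (a sketch under assumed definitions; the thesis's actual measures may differ), a candidate aggregation can be scored by the information it retains, measured with Shannon entropy, against the complexity reduction it buys:

```python
from math import log2

def entropy(values):
    """Shannon entropy (bits) of a non-negative value distribution."""
    total = sum(values)
    return -sum(v / total * log2(v / total) for v in values if v > 0)

def aggregation_score(values, partition, trade_off=0.5):
    """Score a partition of microscopic values into macroscopic groups.
    Higher information retention and higher complexity reduction are
    both good; trade_off weights the two internal criteria.
    partition: list of index lists covering range(len(values))."""
    aggregated = [sum(values[i] for i in group) for group in partition]
    info_retained = entropy(aggregated) / entropy(values)   # in [0, 1]
    complexity_gain = 1 - len(partition) / len(values)      # in [0, 1)
    return trade_off * info_retained + (1 - trade_off) * complexity_gain

# Media attention over 6 time steps, aggregated into 3 periods vs 2.
attention = [10, 12, 11, 50, 48, 9]
print(aggregation_score(attention, [[0, 1, 2], [3, 4], [5]]))
print(aggregation_score(attention, [[0, 1, 2], [3, 4, 5]]))
```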
56

A scalable visual metaphor for tabular data and its application to clustering analysis

Evinton Antonio Cordoba Mosquera 19 September 2017 (has links)
The rapid evolution of computing resources has enabled large datasets to be stored and retrieved. However, exploring, understanding, and extracting useful information from them is still a challenge. Among the computational tools that address this problem, information visualization enables the analysis of datasets through graphical representations that leverage human visual ability, and data mining provides automatic processes for the discovery and interpretation of patterns. Despite the recent popularity of information visualization methods, a recurring problem is low visual scalability when analyzing large datasets, resulting in loss of context and visual clutter. To represent large datasets while reducing the loss of relevant information, visual data aggregation has been employed. Aggregation decreases the amount of data to be represented while preserving the distribution and trends of the original dataset. Regarding data mining, information visualization has become an essential tool for interpreting the computational models and results it generates, especially for unsupervised techniques such as clustering, because in these techniques the only way the user can interact with the mining process is through parameterization, which limits the insertion of domain knowledge into the analysis. In this thesis, we propose and develop a visual metaphor based on the TableLens that employs aggregation-based approaches to create more scalable representations for the interpretation of tabular data. As an application, we employ the developed metaphor in the analysis of the results of clustering techniques. The resulting framework not only supports the analysis of large datasets with reduced context loss, but also provides insights into how data attributes contribute to cluster formation in terms of the cohesion and separation of the resulting groups.
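A minimal sketch of the row-aggregation idea behind the metaphor (column names and bin count are illustrative assumptions, not the dissertation's implementation): rows are sorted, grouped into fixed-size bins, and each bin is drawn once, so the table's distribution and trends survive at a fraction of the visual cost.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({"height": rng.normal(170, 10, 100_000),
                   "weight": rng.normal(70, 12, 100_000)})

def aggregate_rows(frame: pd.DataFrame, sort_by: str, n_bins: int) -> pd.DataFrame:
    """Sort rows by one attribute, split them into n_bins equal groups,
    and summarize each group, so a 100k-row table is rendered as n_bins
    aggregate rows while preserving the overall distribution."""
    ordered = frame.sort_values(sort_by).reset_index(drop=True)
    bins = ordered.index // (len(ordered) // n_bins)
    # Mean per bin; min/max could also be kept to show within-bin spread.
    return ordered.groupby(bins).mean()

summary = aggregate_rows(df, "height", n_bins=200)
print(summary.head())  # 200 aggregate rows stand in for 100,000
```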
57

Decentralized Architecture for Load Balancing in District Heating Systems

Rodriguez, German Darío Rivas January 2011 (has links)
Context. In the forthcoming years, sustainability will lead the development of society, and implementing innovative systems to make the world more sustainable is becoming one of the key goals for science. Load-balancing strategies aim to reduce the economic and ecological cost of heat production in district heating systems, and a decentralized solution would make load balancing more accessible and attractive for the companies providing district heating services. Objectives. This master's thesis aims to find a new alternative for implementing decentralized load balancing in district heating systems. Methods. The work involved a review of the state of the art on demand-side management in district heating systems and power networks, the design of the architecture, the creation of a software prototype, and a simulation of the system to measure performance in terms of response time. Results. A decentralized demand-side management algorithm and communication framework, a software architecture description, and an analysis of the prototype's simulation performance. Conclusions. The main conclusion is that it is possible to create a decentralized algorithm that performs load balancing without compromising individuals' privacy. The algorithm shows good performance not only in the system's aggregate response time but also at the individual level, in terms of memory and CPU consumption.
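A hedged sketch of one decentralized balancing round of the kind described above (the proportional-shedding rule and the numbers are illustrative assumptions, not the thesis's algorithm): only an aggregate overload signal is shared, so no substation reveals its individual consumption profile.

```python
def balancing_round(demands, flexible, capacity):
    """One round: a coordinator broadcasts only the aggregate overload;
    each node locally defers a proportional share of its flexible load.
    demands/flexible: per-substation heat load in kW."""
    total = sum(demands)
    if total <= capacity:
        return demands                      # nothing to balance
    overload = total - capacity
    ratio = min(1.0, overload / sum(flexible))
    # Each node sheds flexible load using only the broadcast ratio
    # and its own private data.
    return [d - f * ratio for d, f in zip(demands, flexible)]

demands = [120.0, 80.0, 100.0]   # current demand per substation (kW)
flexible = [30.0, 10.0, 20.0]    # deferrable share per substation (kW)
print(balancing_round(demands, flexible, capacity=270.0))  # sums to 270
```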
58

Smart Grid security : protecting users' privacy in smart grid applications

Mustafa, Mustafa Asan January 2015 (has links)
Smart Grid (SG) is an electrical grid enhanced with information and communication technology capabilities, so it can support two-way electricity and communication flows among various entities in the grid. The aim of SG is to make the electricity industry operate more efficiently and to provide electricity in a more secure, reliable and sustainable manner. Automated Meter Reading (AMR) and Smart Electric Vehicle (SEV) charging are two SG applications tipped to play a major role in achieving this aim. The AMR application allows different SG entities to collect users' fine-grained metering data measured by users' Smart Meters (SMs). The SEV charging application allows EVs' charging parameters to be changed depending on the grid's state in return for incentives for the EV owners. However, both applications impose risks on users' privacy. Entities having access to users' fine-grained metering data may use such data to infer individual users' personal habits. In addition, users' private information such as users'/EVs' identities and charging locations could be exposed when EVs are charged. Entities may use such information to learn users' whereabouts, thus breaching their privacy. This thesis proposes secure and user privacy-preserving protocols to support AMR and SEV charging in an efficient, scalable and cost-effective manner. First, it investigates both applications. For AMR, (1) it specifies an extensive set of functional requirements taking into account the way liberalised electricity markets work and the interests of all SG entities, (2) it performs a comprehensive threat analysis, based on which, (3) it specifies security and privacy requirements, and (4) it proposes to divide users' data into two types: operational data (used for grid management) and accountable data (used for billing). For SEV charging, (1) it specifies two modes of charging: price-driven mode and price-control-driven mode, and (2) it analyses two use cases: price-driven roaming SEV charging at home location and price-control-driven roaming SEV charging at home location, by performing threat analysis and specifying sets of functional, security and privacy requirements for each of the two cases. Second, it proposes a novel Decentralized, Efficient, Privacy-preserving and Selective Aggregation (DEP2SA) protocol to allow SG entities to collect users' fine-grained operational metering data while preserving users' privacy. DEP2SA uses the homomorphic Paillier cryptosystem to ensure the confidentiality of the metering data during their transit and the data aggregation process. To preserve users' privacy with minimum performance penalty, users' metering data are classified and aggregated accordingly by their respective local gateways based on the users' locations and their contracted suppliers. In this way, authorised SG entities can only receive the aggregated data of users they have contracts with. DEP2SA has been analysed in terms of security, computational and communication overheads, and the results show that it is more secure, efficient and scalable compared with related work. Third, it proposes a novel suite of five protocols to allow (1) suppliers to collect users' accountable metering data, and (2) users (i) to access, manage and control their own metering data and (ii) to switch between electricity tariffs and suppliers, in an efficient and scalable manner.
The main ideas are: (i) each SM has a register, named the accounting register, dedicated only to storing the user's accountable data, (ii) this register is updated by design at a low frequency, (iii) the user's supplier has unlimited access to this register, and (iv) the user can customise how often this register is updated with new data. The suite has been analysed in terms of security, computational and communication overheads. Fourth, it proposes a novel protocol, known as Roaming Electric Vehicle Charging and Billing, an Anonymous Multi-User (REVCBAMU) protocol, to support price-driven roaming SEV charging at home location. During a charging session, a roaming EV user uses a pseudonym of the EV (known only to the user's contracted supplier) which is anonymously signed by the user's private key. This protocol protects the user's identity privacy from other suppliers as well as the user's privacy of location from its own supplier. Further, it allows the user's contracted supplier to authenticate the EV and the user. Using a two-factor authentication approach, multi-user EV charging is supported, and different legitimate EV users (e.g., family members) can be held accountable for their charging sessions. With each charging session, the EV uses a different pseudonym, which prevents adversaries from linking the different charging sessions of the same EV. On an application level, REVCBAMU supports fair user billing, i.e., each user pays only for his/her own energy consumption, and an open EV marketplace in which EV users can safely choose among different remote host suppliers. The protocol has been analysed in terms of security and computational overheads.
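To illustrate the homomorphic aggregation at the heart of DEP2SA (a minimal sketch using the python-paillier library; the gateway/supplier roles and the readings are illustrative), a local gateway can sum encrypted readings without ever seeing an individual measurement:

```python
from functools import reduce
from phe import paillier  # python-paillier: pip install phe

# The contracted supplier generates the keypair; smart meters and the
# local gateway only ever hold the public key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each smart meter encrypts its fine-grained reading (kWh) locally.
readings = [0.42, 1.37, 0.95, 2.10]
ciphertexts = [public_key.encrypt(r) for r in readings]

# The gateway aggregates ciphertexts: Paillier is additively
# homomorphic, so the plaintext sum is computed without decryption.
encrypted_total = reduce(lambda a, b: a + b, ciphertexts)

# Only the supplier, holding the private key, learns the aggregate.
print(private_key.decrypt(encrypted_total))  # ~4.84
```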
59

Model-based Interoperability IoT Hub for the aggregation of data from heterogeneous systems: application to Smart Gas Distribution Networks

Ahmed, Ahmed 17 December 2018 (has links)
Over the last decade, information technology and industrial infrastructures have evolved from monolithic systems to heterogeneous, autonomous, and widely distributed systems. Most systems cannot operate in complete isolation and need to share their data in order to increase business productivity. In fact, we are moving towards larger complex systems in which millions of systems and applications need to be integrated, so the requirement for an inexpensive and fast interoperability solution becomes an essential need. Existing solutions impose standards or middleware to handle this issue, but they are not sufficient and often require specific ad hoc developments. This work therefore proposes the study and development of a generic, modular, agnostic, and extensible interoperability architecture based on model-driven engineering principles and the separation of concerns. It aims to promote interoperability and data exchange between heterogeneous systems in real time without requiring the systems to comply with specific standards or technologies. The industrial use cases for this work take place in the context of the French gas distribution network. The theoretical and empirical validation of our proposal corroborates the hypothesis that interoperability between heterogeneous systems can be achieved using separation of concerns and model-driven engineering, and that the cost and time to achieve interoperability are reduced by promoting reusability and extensibility.
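As a toy illustration of the model-driven, separation-of-concerns idea (field names and source formats are invented for the example; the thesis targets a far richer metamodel), each heterogeneous system only needs one adapter to and from a shared canonical model, rather than one bridge per pair of systems:

```python
from dataclasses import dataclass

@dataclass
class CanonicalReading:
    """Pivot model: every system maps to/from this, so integrating a new
    system costs one adapter instead of one bridge per system pair."""
    sensor_id: str
    pressure_bar: float

def from_scada(record: dict) -> CanonicalReading:
    # Hypothetical SCADA payload reporting pressure in millibar.
    return CanonicalReading(record["tag"], record["p_mbar"] / 1000.0)

def from_iot(record: dict) -> CanonicalReading:
    # Hypothetical IoT payload already reporting in bar.
    return CanonicalReading(record["device"], record["pressure"])

def to_dashboard(r: CanonicalReading) -> dict:
    return {"id": r.sensor_id, "bar": round(r.pressure_bar, 3)}

# Two heterogeneous sources flow through one interoperable pipeline.
for raw, adapter in [({"tag": "GD-12", "p_mbar": 4210.0}, from_scada),
                     ({"device": "GD-13", "pressure": 4.10}, from_iot)]:
    print(to_dashboard(adapter(raw)))
```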
60

The nature and extent of intra-industry trade in South Africa

Parr, Richard Geoffrey 06 1900 (has links)
Intra-industry trade occurs when goods from the same industry category are both exported and imported. Types of intra-industry trade are identified, and theoretical models of intra-industry trade under conditions of imperfect competition are examined. The results of thirty-seven empirical studies on the determinants of intra-industry trade are analysed. Methods of measuring intra-industry trade and marginal intra-industry trade are discussed, and various measurement problems are dealt with. The extent of intra-industry trade in South Africa in 1992 and 1997 is measured, using the Grubel-Lloyd and Michaely indices. The Brülhart indices are applied to measure marginal intra-industry trade. South Africa has a relatively low and stable level of intra-industry trade in manufactured goods: the Grubel-Lloyd index for 1997 is calculated to be 37 per cent. / Economics and Management Sciences / M.Com. (Economics)
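For reference, the Grubel-Lloyd index used in the thesis is standard: for industry i with exports X_i and imports M_i, GL_i = 1 - |X_i - M_i| / (X_i + M_i), and the aggregate index weights industries by their share of total trade. A short sketch (the trade figures are invented for illustration):

```python
def grubel_lloyd(exports, imports):
    """Aggregate Grubel-Lloyd index: the share of total trade that is
    intra-industry. 0 = purely inter-industry, 1 = fully intra-industry."""
    total_trade = sum(x + m for x, m in zip(exports, imports))
    # Per-industry overlap (X + M) - |X - M| equals twice min(X, M).
    overlap = sum((x + m) - abs(x - m) for x, m in zip(exports, imports))
    return overlap / total_trade

# Invented example: three industries' exports and imports (R millions).
X = [100.0, 40.0, 5.0]
M = [90.0, 10.0, 60.0]
print(f"GL = {grubel_lloyd(X, M):.2f}")  # 0.72 here
```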
