61

A new link lifetime estimation method for greedy and contention-based routing in mobile ad hoc networks

Noureddine, H., Ni, Q., Min, Geyong, Al-Raweshidy, H. January 2014
Greedy and contention-based forwarding schemes were proposed for mobile ad hoc networks (MANETs) to perform data routing hop-by-hop, without prior discovery of the end-to-end route to the destination. Accordingly, the neighboring node that satisfies specific criteria is selected as the next forwarder of the packet. Both schemes require the nodes participating in the selection process to lie within the area facing the location of the destination. Therefore, the lifetime of links for such schemes depends not only on the transmission range, but also on the location parameters (position, speed, and direction) of the sending node and the neighboring node, as well as of the destination. In this paper, we propose a new link lifetime prediction method for greedy and contention-based routing, which can also be utilized as a new stability metric. The proposed method is evaluated using a stability-based greedy routing algorithm, which selects as next hop the node having the highest link stability.
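To make the link-lifetime idea concrete, the standard constant-velocity calculation that such predictors build on can be stated as follows (an illustrative textbook derivation, not necessarily the authors' exact method). With relative position vector \mathbf{p} and relative velocity vector \mathbf{v} between a sender and a neighbor, and transmission range r, the link expires when \lVert\mathbf{p} + t\,\mathbf{v}\rVert = r, whose positive root is

\[
t_{\mathrm{expire}} \;=\; \frac{-\,\mathbf{p}\cdot\mathbf{v} \;+\; \sqrt{(\mathbf{p}\cdot\mathbf{v})^{2} \;+\; \lVert\mathbf{v}\rVert^{2}\left(r^{2} - \lVert\mathbf{p}\rVert^{2}\right)}}{\lVert\mathbf{v}\rVert^{2}},
\]

valid while the nodes are in range (\lVert\mathbf{p}\rVert \le r) and in relative motion (\mathbf{v} \ne \mathbf{0}). Greedy and contention-based variants additionally restrict the candidate neighbors to the region facing the destination, which is why the destination's location enters the prediction.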
62

Supporting Software Transactional Memory in Distributed Systems: Protocols for Cache-Coherence, Conflict Resolution and Replication

Zhang, Bo 05 December 2011
Lock-based synchronization on multiprocessors is inherently non-scalable, non-composable, and error-prone. These problems are exacerbated in distributed systems due to an additional layer of complexity: multinode concurrency. Transactional memory (TM) is an emerging, alternative synchronization abstraction that promises to alleviate these difficulties. In the TM model, code that accesses shared memory objects is organized as transactions, which execute speculatively while logging changes. If a transactional conflict is detected, one of the conflicting transactions is aborted and re-executed, while the other is allowed to commit, yielding the illusion of atomicity. TM for multiprocessors has been proposed in software (STM), in hardware (HTM), and in a combination (HyTM). This dissertation focuses on supporting the TM abstraction in distributed systems, i.e., distributed STM (or D-STM). We focus on three problem spaces: cache-coherence (CC), conflict resolution, and replication. We evaluate the performance of D-STM by measuring the competitive ratio of its makespan --- i.e., the ratio of its makespan (the last completion time for a given set of transactions) to the makespan of an optimal offline clairvoyant scheduler. We show that the competitive ratio of D-STM for metric-space networks is O(N^2) for N transactions requesting an object under the Greedy contention manager and an arbitrary CC protocol. To improve the performance, we propose a class of location-aware CC protocols, called LAC protocols. We show that the combination of the Greedy manager and a LAC protocol yields an O(N log N s) competitive ratio for s shared objects. We then formalize two classes of CC protocols: distributed queuing cache-coherence (DQCC) protocols and distributed priority queuing cache-coherence (DPQCC) protocols, both of which can be implemented using distributed queuing protocols. We show that a DQCC protocol is O(N log D)-competitive and a DPQCC protocol is O(log D_delta)-competitive for N dynamically generated transactions requesting an object, where D_delta is the normalized diameter of the underlying distributed queuing protocol. Additionally, we propose a novel CC protocol, called Relay, which reduces the total number of aborts to O(N) for N conflicting transactions requesting an object, a significant improvement over past CC protocols, which incur O(N^2) total aborts. We also analyze Relay's dynamic competitive ratio in terms of communication cost (for dynamically generated transactions), and show that it is O(log D_0), where D_0 is the normalized diameter of the underlying network spanning tree. To reduce unnecessary aborts and increase concurrency for D-STM based on globally-consistent contention management policies, we propose the distributed dependency-aware (DDA) conflict resolution model, which adopts different conflict resolution strategies based on transaction types. In the DDA model, read-only transactions never abort, because a set of versions is kept for each object; each transaction keeps only the precedence relations known from its local knowledge. We show that the DDA model ensures that 1) read-only transactions never abort, 2) every transaction eventually commits, 3) reads are invisible, and 4) useless object versions are efficiently garbage-collected. To establish competitive ratio bounds for contention managers in D-STM, we model the distributed transactional contention management problem as the traveling salesman problem (TSP).
We prove that for D-STM, any online, work-conserving, deterministic contention manager provides an Omega(max[s, s^2/D]) competitive ratio in a network with normalized diameter D and s shared objects. Compared with the Omega(s) competitive ratio for multiprocessor STM, the performance guarantee for D-STM degrades by a factor proportional to s/D. We present a randomized algorithm, called Randomized, with a competitive ratio of O(s C log n log^2 n) for s objects shared by n transactions with a maximum conflicting degree C. To break this lower bound, we present a randomized algorithm, Cutting, which needs partial information about transactions and an approximate TSP algorithm A with approximation ratio phi_A. We show that the average-case competitive ratio of Cutting is O(s phi_A log^2 m log^2 n), which is close to O(s). Single-copy (SC) D-STM keeps only one writable copy of each object, and thus cannot tolerate node failures. We propose a quorum-based replication (QR) D-STM model, which provides provable fault tolerance without incurring high communication overhead when compared with the SC model. The QR model stores object replicas in a tree quorum system, where two quorums intersect if one of them is a write quorum, and ensures consistency among replicas at commit time. The communication cost of an operation in the QR model is proportional to the communication cost from the requesting node to its closest read or write quorum. In the presence of node failures, the QR model exhibits high availability and degrades gracefully as the number of failed nodes increases, at a reasonably higher communication cost. We developed a prototype implementation of the dissertation's proposed solutions, including the DQCC and DPQCC protocols, the Relay protocol, and the DDA model, in the HyFlow Java D-STM framework. We experimentally evaluated these solutions against competitor solutions on a set of microbenchmarks (e.g., data structures including distributed linked list, binary search tree, and red-black tree) and macrobenchmarks (e.g., distributed versions of the applications in the STAMP STM benchmark suite for multiprocessors). Our experimental studies revealed that: 1) based on the same distributed queuing protocol (i.e., the Ballistic CC protocol), DPQCC yields better transactional throughput than DQCC, by a factor of 50%-100%, on a range of transactional workloads; 2) Relay outperforms competitor protocols (including Arrow, Ballistic, and Home) by more than 200% when network size and contention increase, as it efficiently reduces the average number of aborts per transaction (to less than 0.5); 3) the DDA model outperforms existing contention management policies (including the Greedy, Karma, and Kindergarten managers) by up to 30%-40% in high-contention environments; for read/write-balanced workloads, the DDA model outperforms these policies by 30%-60% on average, and for read-dominated workloads by over 200%. / Ph. D.
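For reference, the makespan-based competitive ratio used throughout this abstract can be written compactly (restating the definition given in the text):

\[
\mathrm{CR}(\mathcal{A}) \;=\; \max_{\Gamma}\; \frac{\mathrm{makespan}_{\mathcal{A}}(\Gamma)}{\mathrm{makespan}_{\mathrm{OPT}}(\Gamma)},
\]

where \Gamma ranges over sets of transactions, \mathrm{makespan}_{\mathcal{A}}(\Gamma) is the completion time of the last transaction in \Gamma under algorithm \mathcal{A}, and \mathrm{OPT} is the optimal offline clairvoyant scheduler. The O(N^2), O(N log N s), and Omega(max[s, s^2/D]) results above are bounds on this quantity.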
63

Scalability Analysis and Optimization for Large-Scale Deep Learning

Pumma, Sarunya 03 February 2020
Despite its growing importance, scalable deep learning (DL) remains a difficult challenge. Scalability of large-scale DL is constrained by many factors, including those deriving from data movement and data processing. DL frameworks rely on large volumes of data being fed to the computation engines for processing. However, current hardware trends show that data movement is already one of the slowest components in modern high-performance computing systems, and this gap is only going to increase in the future. This includes data movement from the filesystem, within the network subsystem, and even within the node itself, all of which limit the scalability of DL frameworks on large systems. Even after data is moved to the computational units, managing this data is not easy. Modern DL frameworks use multiple components---such as graph scheduling, neural network training, gradient synchronization, and input pipeline processing---to process this data in an asynchronous, uncoordinated manner, which results in straggler processes and consequently computational imbalance, further limiting scalability. This thesis studies a subset of the large body of data movement and data processing challenges that exist in modern DL frameworks. For the first study, we investigate file I/O constraints that limit the scalability of large-scale DL. We first analyze the Caffe DL framework with the Lightning Memory-Mapped Database (LMDB), one of the most widely used file I/O subsystems in DL frameworks, to understand the causes of file I/O inefficiencies. Based on our analysis, we propose LMDBIO---an optimized I/O plugin for scalable DL that addresses the various shortcomings in existing file I/O for DL. Our experimental results show that LMDBIO significantly outperforms LMDB in all cases and improves overall application performance by up to 65-fold on 9,216 CPUs of the Blues and Bebop supercomputers at Argonne National Laboratory. Our second study deals with the computational imbalance problem in data processing. For most DL systems, the simultaneous and asynchronous execution of multiple data-processing components on shared hardware resources causes these components to contend with one another, leading to severe computational imbalance and degraded scalability. We propose various novel optimizations that minimize resource contention and improve performance by up to 35% for training various neural networks on 24,576 GPUs of the Summit supercomputer at Oak Ridge National Laboratory---the world's largest supercomputer at the time of writing of this thesis. / Doctor of Philosophy / Deep learning is a method for computers to automatically extract complex patterns and trends from large volumes of data. It is a popular methodology that we use every day when we talk to Apple Siri or Google Assistant, when we use self-driving cars, or even when we watched IBM Watson be crowned champion of Jeopardy! While deep learning is integrated into our everyday life, it is a complex problem that has gotten the attention of many researchers. Executing deep learning is highly computationally intensive. On traditional computers, such as a generic laptop or desktop machine, the computation for large deep learning problems can take years or decades to complete. Consequently, supercomputers, which are machines with massive computational capability, are leveraged for deep learning workloads.
The world's fastest supercomputer today, for example, is capable of performing almost 200 quadrillion floating-point operations every second. While that is impressive, for large problems, unfortunately, even the fastest supercomputers today are not fast enough. The problem is not that they lack computational capability, but that deep learning problems inherently rely on a lot of data---the entire concept of deep learning centers around the fact that the computer studies a huge volume of data and draws trends from it. Moving and processing this data, unfortunately, is much slower than the computation itself, and with current hardware trends it is not expected to get much faster in the future. This thesis aims at making deep learning executions on large supercomputers faster. Specifically, it looks at two pieces associated with managing data: (1) data reading---how to quickly read large amounts of data from storage; and (2) computational imbalance---how to ensure that the different processors on the supercomputer are not waiting for each other and thus wasting time. We first analyze each performance problem to identify its root cause. Then, based on the analysis, we propose several novel techniques to solve the problem. With our optimizations, we are able to significantly improve the performance of deep learning execution on a number of supercomputers, including Blues and Bebop at Argonne National Laboratory, and Summit---the world's fastest supercomputer---at Oak Ridge National Laboratory.
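The data-movement bottleneck described above is commonly attacked by overlapping I/O with computation. The following sketch illustrates only the general double-buffering idea (it is not LMDBIO's actual design; the class and method names are hypothetical): a daemon thread prefetches batches from storage into a bounded queue while the training loop consumes them.

import java.util.Iterator;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BatchPrefetcher {
    private final BlockingQueue<byte[]> queue;

    public BatchPrefetcher(int capacity, Iterator<byte[]> storageReader) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        Thread reader = new Thread(() -> {
            try {
                while (storageReader.hasNext()) {
                    // Blocks when the queue is full, applying back-pressure to I/O.
                    queue.put(storageReader.next());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "batch-prefetcher");
        reader.setDaemon(true);
        reader.start();
    }

    /** Returns the next batch, blocking only if the prefetcher has fallen behind. */
    public byte[] nextBatch() throws InterruptedException {
        return queue.take();
    }
}

When reading keeps pace with computation, the take() call rarely blocks and the file I/O cost is effectively hidden behind training.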
64

On the Interaction of High-Performance Network Protocol Stacks with Multicore Architectures

Chunangad Narayanaswamy, Ganesh 20 May 2008
Multicore architectures have been one of the primary driving forces behind the recent rapid growth in high-end computing systems, contributing to their growing scales and capabilities. With significant enhancements in the high-speed networking technologies and protocol stacks that support these systems, there is a growing need to understand the interaction between the two closely. Since these two components have been designed mostly independently, they often interact in serious and surprising ways, resulting in heavy asymmetry in the effective capability of the different cores and thereby degrading performance for various applications. Similarly, depending on the communication pattern of the application and the layout of processes across nodes, these interactions could potentially introduce network scalability issues, which is also an important concern for system designers. In this thesis, we analyze these asymmetric interactions and propose and design a novel systems-level management framework called SIMMer (Systems Interaction Mapping Manager) that automatically monitors these interactions and dynamically manages the mapping of processes on processor cores to transparently maximize application performance. Performance analysis of SIMMer shows that it can improve the communication performance of applications by more than twofold and overall application performance by 18%. We further analyze the impact of contention in network and processor resources and relate it to the communication pattern of the application. Insights learnt from these analyses can lead to efficient runtime configurations for scientific applications on multicore architectures. / Master of Science
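As a rough illustration of the kind of dynamic process-to-core remapping that SIMMer automates (the thesis's actual monitoring and heuristics are its own; all names below are hypothetical), one simple policy pairs the heaviest communicators with the cores currently showing the best effective network capability:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class CoreRemapper {
    public record ProcStats(int rank, long bytesCommunicated) {}
    public record CoreStats(int coreId, double effectiveBandwidth) {}

    /** Returns (rank, coreId) pairs: heaviest communicators get the most capable cores. */
    public static List<int[]> remap(List<ProcStats> procs, List<CoreStats> cores) {
        List<ProcStats> p = new ArrayList<>(procs);
        List<CoreStats> c = new ArrayList<>(cores);
        p.sort(Comparator.comparingLong(ProcStats::bytesCommunicated).reversed());
        c.sort(Comparator.comparingDouble(CoreStats::effectiveBandwidth).reversed());
        List<int[]> mapping = new ArrayList<>();
        for (int i = 0; i < p.size(); i++) {
            mapping.add(new int[] { p.get(i).rank(), c.get(i % c.size()).coreId() });
        }
        return mapping;
    }
}

A framework like SIMMer additionally keeps monitoring these interactions at runtime and applies the remapping transparently to the application.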
65

Scalable Data Management for Object-based Storage Systems

Wadhwa, Bharti 19 August 2020
Parallel I/O performance is crucial to sustaining scientific applications on large-scale High-Performance Computing (HPC) systems. Large-scale distributed storage systems, in particular object-based storage systems, face severe challenges in managing data efficiently. Inefficient data management leads to poor I/O and storage performance in HPC applications and scientific workflows. Some of the main challenges for efficient data management arise from poor resource allocation, load imbalance across object storage targets, and inflexible data sharing between applications in a workflow. In addition, parallel I/O makes it challenging to shoehorn in new interfaces, such as taking advantage of multiple layers of storage and supporting analysis in the data path. Solving these challenges to improve the performance and efficiency of object-based storage systems is crucial, especially for the upcoming era of exascale systems. This dissertation is focused on solving these major challenges in object-based storage systems by providing scalable data management strategies. In the first part of the dissertation (Chapter 3), we present a resource-contention-aware load balancing tool (iez) for large-scale distributed object-based storage systems. In Chapter 4, we extend iez to support Progressive File Layout for the Lustre object-based storage system. In the second part (Chapter 5), we present a technique to facilitate data sharing in scientific workflows using object-based storage, with our proposed tool Workflow Data Communicator. In the last part of this dissertation, we present a solution for transparent data management in the multi-layer storage hierarchy of present and next-generation HPC systems. This dissertation shows that by intelligently employing scalable data management techniques, the flexibility and performance of scientific applications and workflows in object-based storage systems can be enhanced manyfold. Our proposed data management strategies can guide the software design of next-generation HPC storage systems to efficiently support data for scientific applications and workflows. / Doctor of Philosophy / Large-scale object-based storage systems face severe challenges in managing data efficiently for HPC applications and workflows. These storage systems often manage and share data inflexibly, without considering the load imbalance and resource contention in the underlying multi-layer storage hierarchy. This dissertation first studies how resource contention and inflexible data sharing mechanisms impact the storage and I/O performance of HPC applications, and then presents a series of techniques, tools, and algorithms that provide efficient and scalable data management for current and next-generation HPC storage systems.
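As a simplified illustration of contention-aware placement of the kind a load balancer such as iez performs (the real tool's algorithm is more sophisticated; the scoring weights and names here are hypothetical), each new object can be directed to the object storage target with the lowest combined utilization and pending-I/O score:

import java.util.List;

public class TargetSelector {
    public record Target(int id, double usedCapacityFraction, double pendingIoLoad) {}

    // Lower score = less contended; equal weights are purely illustrative.
    private static double score(Target t) {
        return 0.5 * t.usedCapacityFraction() + 0.5 * t.pendingIoLoad();
    }

    /** Picks the least-contended object storage target for a new object. */
    public static int pickTarget(List<Target> targets) {
        Target best = targets.get(0);
        for (Target t : targets) {
            if (score(t) < score(best)) {
                best = t;
            }
        }
        return best.id();
    }
}

Placing objects by such a score, rather than round-robin, is one way to counter the load imbalance across object storage targets that the abstract identifies.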
66

HyFlow: A High Performance Distributed Software Transactional Memory Framework

Saad Ibrahim, Mohamed Mohamed 14 June 2011
We present HyFlow - a distributed software transactional memory (D-STM) framework for distributed concurrency control. Lock-based concurrency control suffers from drawbacks including deadlocks, livelocks, and scalability and composability challenges. These problems are exacerbated in distributed systems, where their distributed versions (e.g., distributed deadlocks) are more complex to cope with. STM and D-STM are promising alternatives to lock-based and distributed lock-based concurrency control for centralized and distributed systems, respectively, that overcome these difficulties. HyFlow is a Java framework for D-STM, with pluggable support for directory lookup protocols, transactional synchronization and recovery mechanisms, contention management policies, cache coherence protocols, and network communication protocols. HyFlow exports a simple distributed programming model that excludes locks: using (Java 5) annotations, atomic sections are defined as transactions, in which reads and writes to shared, local and remote objects appear to take effect instantaneously. No changes are needed to the underlying virtual machine or compiler. We describe HyFlow's architecture and implementation, and report on experimental studies comparing HyFlow against competing models including Java remote method invocation (RMI) with mutual exclusion and read/write locks, distributed shared memory (DSM), and directory-based D-STM. / Master of Science
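To illustrate the annotation-based programming model described above (a sketch in the spirit of HyFlow's API; the exact annotation and framework entry points may differ from what is shown), a transfer between two accounts can be expressed as a single atomic method, with the framework transparently retrying on conflict and resolving remote objects:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical stand-in for the framework's transaction annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Atomic {}

class Account {
    private long balance;
    Account(long initialBalance) { this.balance = initialBalance; }
    void withdraw(long amount) { balance -= amount; }
    void deposit(long amount) { balance += amount; }
    long balance() { return balance; }
}

public class Bank {
    // Reads and writes inside an @Atomic method appear to take effect
    // instantaneously; on conflict, the framework aborts and re-executes
    // the transaction. No explicit locks appear in application code.
    @Atomic
    public static void transfer(Account from, Account to, long amount) {
        from.withdraw(amount);
        to.deposit(amount);
    }

    public static void main(String[] args) {
        Account a = new Account(100), b = new Account(0);
        transfer(a, b, 40);
        System.out.println(a.balance() + " " + b.balance()); // prints: 60 40
    }
}

In the real framework, the annotated method body would be instrumented so that the accounts may live on remote nodes, with directory lookup and cache-coherence protocols fetching them as needed.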
67

Resistance to School Consolidation in a Rural Appalachian Community

Kelly, Amanda 02 November 2007
School consolidation, which involves closing one or more schools and combining them into a single school, is a common phenomenon in rural Appalachian communities due to out-migration and lack of funding for public schools. When school consolidation occurs, the local school may be closed, or students from other communities may be bused to the school. Community residents, however, do not always agree with the decision to consolidate their local schools. When this disagreement occurs, residents may choose to participate in organized resistance activities to show their opposition, make their voices heard to local politicians and the media, and seek an alternative to the proposed consolidation. This case study of school consolidation in one rural Appalachian county seeks to document and analyze the struggle in which community residents engaged in an effort to prevent local schools from being consolidated. Data was collected in the form of semi-structured interviews conducted with members and sympathizers of a resistance organization called TOPS. TOPS was formed in 2001 to oppose school consolidation, but its members were not successful in keeping their local schools open. Many schools in McDowell County have been consolidated or are scheduled to be consolidated in the near future. For example, Big Creek High School, which was at the center of many consolidation debates, will be closed in 2010. Its students will be bused to a new, consolidated high school. I conducted interviews during fall 2006 and spring 2007 to determine community members' grievances concerning consolidation, to establish a narrative of their struggle against state government officials, and to provide a basis for analyzing the movement's failure to achieve its goals. I used these interviews, along with TOPS' documents, local newspaper articles, and literature from other anti-consolidation efforts, to examine possible reasons why TOPS was not successful. Social movements literature, particularly the concepts of framing and repertoires of contention, formed the theoretical basis of this analysis. / Master of Science
68

Le rôle de la contention physique dans le développement du delirium chez les aînés atteints de déficits cognitifs hébergés dans les milieux de soins de longue durée / The role of physical restraint in the development of delirium among older adults with cognitive impairments living in long-term care settings

Doucet, Lise 12 April 2018
This prospective study aimed to analyze the relationship between physical restraint and delirium among older adults with cognitive impairments living in long-term care facilities (MSLD) in the greater Quebec City metropolitan area. Systematic data collection took place from May 2004 to June 2006 with 138 participants. The prospective element of the study consisted of following the cohort twice over seven days: on day one (time one, T1) and on day seven (time two, T2). The student researcher spent more than six hours in the care units in order to observe the target population directly. The results show significant associations between four types of physical restraint (bed rails during the day at T1, abdominal restraint during the day at T1 and T2, and abdominal restraint during the evening at T2) and delirium. In light of these results, nurses working in long-term care facilities are encouraged to use more alternatives to physical restraint, in order to ensure quality care for residents and to preserve the cognitive health of older adults already weakened by illness.
69

Les facteurs environnementaux associés à la réduction de l’utilisation des mesures de contrôle chez les patients atteints de troubles mentaux : une revue de la portée. / Environmental factors associated with reducing the use of coercive measures in patients with mental disorders: a scoping review

Nabil, Samira 05 1900
The use of coercive measures (seclusion and restraints) to manage violent behaviors is a major preoccupation for nurses practicing in adult mental health units.
The use of these measures results in physical consequences and psychological trauma for patients and caregivers alike. Preventing and reducing their use is therefore a priority. Because the problem is multifactorial, understanding the factors that influence the use of these measures is essential in order to target interventions that prevent or reduce it. Factors related to the clinical characteristics of patients and to caregivers are well described in the literature. However, the factors related to the patient's environment are not attributed to all of the dimensions that constitute it. This is due to the scarcity of conceptual models that provide a structured, global representation of this environment. The absence of such a representation leaves the factors associated with the environment circumscribed to its physical dimension alone, while other factors related to its other dimensions are reported in the literature without being defined as environmental factors. The aim of this scoping review was therefore to explore the extent of knowledge and identify the environmental factors associated with the use of coercive measures in patients with mental disorders. In order to incorporate a holistic representation of the environment, the Optimal Healing Environment (OHE) model (Jonas et al., 2014) was selected as the frame of reference. The scoping review steps described by Peters et al. (2020) were followed, resulting in the inclusion of 35 publications. Thematic analysis of the extracted data identified two dimensions, namely the patient's interpersonal environment and external environment. The interpersonal environment describes the development and maintenance of a therapeutic relationship through the improvement of caregivers' communication skills, the use of strategies for preventing aggressive crises, patient involvement, post-event debriefing following seclusion and/or restraint, and the sense of belonging to the care unit's community. It also describes the creation of therapeutic organizations through the exercise of organizational leadership, initiatives to improve the organization of care, and the management of human and technological resources. For its part, the external environment describes the architectural and interior design of the care units where the patient is hospitalized. Finally, we may conclude that of the three dimensions of the OHE model included in this work, the interpersonal environment and the external environment are the most represented in the literature of the last five years. In addition, interventions for preventing aggressive crises and organizational leadership have been shown to be key factors in a therapeutic environment conducive to reducing the use of coercive measures.
70

Ações coletivas e movimento ambiental na Cantareira : 25 anos de resistência / Collective actions and environmental movement in Cantareira, 25 years of contention

Ferreira, Ivini Vaneska Rodrigues Ferraz 05 August 2013
In the last decades of the twentieth century, more precisely from the end of the 1980s, a key issue began to be discussed across multiple sectors in the Metropolitan Region of São Paulo (RMSP): how to deal with questions related to urban infrastructure and the limits of growth, considering the need to preserve the RMSP's Green Belt? The main objective of this master's thesis is to describe and analyze the collective actions and the socio-environmental movement, taking as a case study the movement led by residents of the surroundings of the Cantareira, which culminated in the international recognition of the Green Belt of the City of São Paulo as a Biosphere Reserve by UNESCO in 1994. After more than 20 years of resistance, this movement still persists today in the form of petitions, marches, and lawsuits, which makes it one of the most expressive forms of environmental activism in favor of preserving an urban forest. By tracing a historical overview, up to the present day, of the collective actions and the environmental movement on behalf of the Cantareira, we aim to investigate the reasons why urban populations participate in the political arenas that decide the future and the preservation of a great forest within a city.
