61 |
A new link lifetime estimation method for greedy and contention-based routing in mobile ad hoc networks
Noureddine, H., Ni, Q., Min, Geyong, Al-Raweshidy, H. January 2014
Greedy and contention-based forwarding schemes were proposed for mobile ad hoc networks (MANETs) to perform data routing hop by hop, without prior discovery of the end-to-end route to the destination. Accordingly, the neighboring node that satisfies specific criteria is selected as the next forwarder of the packet. Both schemes require the nodes participating in the selection process to be within the area that faces the location of the destination. Therefore, the lifetime of links for such schemes depends not only on the transmission range, but also on the location parameters (position, speed and direction) of the sending node and the neighboring node, as well as the destination. In this paper, we propose a new link lifetime prediction method for greedy and contention-based routing, which can also be utilized as a new stability metric. The proposed method is evaluated using a stability-based greedy routing algorithm, which selects the next-hop node with the highest link stability.
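As an illustration (not the paper's method), the baseline geometric estimate that most link-lifetime work builds on is the Link Expiration Time (LET) of Su and Gerla, which assumes straight-line, constant-velocity motion and computes when two nodes drift out of transmission range. A minimal sketch, with positions in meters, speeds in m/s, and headings in radians:

```python
import math

def link_expiration_time(p1, v1, theta1, p2, v2, theta2, r):
    """Classic LET estimate (Su & Gerla): time until two nodes moving in
    straight lines at constant speed drift out of transmission range r."""
    a = v1 * math.cos(theta1) - v2 * math.cos(theta2)
    b = p1[0] - p2[0]
    c = v1 * math.sin(theta1) - v2 * math.sin(theta2)
    d = p1[1] - p2[1]
    if a == 0 and c == 0:
        return math.inf  # identical velocities: the distance never changes
    disc = (a * a + c * c) * r * r - (a * d - b * c) ** 2
    if disc < 0:
        return 0.0  # the nodes never come (back) within range
    return (-(a * b + c * d) + math.sqrt(disc)) / (a * a + c * c)

# Two nodes 100 m apart with a 250 m range, driving toward each other:
t = link_expiration_time((0, 0), 10, 0.0, (100, 0), 5, math.pi, 250)
print(f"predicted link lifetime: {t:.1f} s")  # ~23.3 s
```

The paper's contribution refines this kind of estimate for greedy and contention-based forwarding, where the destination's location also constrains which neighbors remain usable.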
|
62 |
Supporting Software Transactional Memory in Distributed Systems: Protocols for Cache-Coherence, Conflict Resolution and Replication
Zhang, Bo 05 December 2011
Lock-based synchronization on multiprocessors is inherently non-scalable, non-composable, and error-prone. These problems are exacerbated in distributed systems due to an additional layer of complexity: multinode concurrency. Transactional memory (TM) is an emerging, alternative synchronization abstraction that promises to alleviate these difficulties. With the TM model, code that accesses shared memory objects is organized as transactions, which execute speculatively while logging changes. If transactional conflicts are detected, one of the conflicting transactions is aborted and re-executed, while the other is allowed to commit, yielding the illusion of atomicity. TM for multiprocessors has been proposed in software (STM), in hardware (HTM), and in a combination (HyTM).
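A minimal, illustrative sketch (not the dissertation's code) of the TM execution model just described --- speculative execution, commit-time conflict detection, and re-execution on abort:

```python
import threading

class TMObject:
    """A shared object whose version number is bumped on every committed write."""
    def __init__(self, value):
        self.value, self.version = value, 0

_commit_lock = threading.Lock()  # serializes commit-time validation only

def atomic(tx_body):
    """Run tx_body(read, write) speculatively; retry until validation succeeds."""
    while True:
        reads, writes = {}, {}
        def read(obj):
            if obj in writes:
                return writes[obj]              # read-your-own-writes
            reads.setdefault(obj, obj.version)  # log the version observed
            return obj.value
        def write(obj, value):
            writes[obj] = value                 # buffered; invisible until commit
        result = tx_body(read, write)
        with _commit_lock:
            if all(obj.version == v for obj, v in reads.items()):
                for obj, value in writes.items():
                    obj.value, obj.version = value, obj.version + 1
                return result                   # commit appears atomic
        # validation failed: a conflicting transaction committed first; retry

# A transfer between two accounts is atomic with no explicit locks:
a, b = TMObject(100), TMObject(0)
atomic(lambda read, write: (write(a, read(a) - 10), write(b, read(b) + 10)))
print(a.value, b.value)  # 90 10
```

Real STMs differ in many details (eager vs. lazy conflict detection, contention management, read visibility), but this buffer-validate-commit loop is the common core.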
This dissertation focuses on supporting the TM abstraction in distributed systems, i.e., distributed STM (or D-STM). We focus on three problem spaces: cache-coherence (CC), conflict resolution, and replication. We evaluate the performance of D-STM by measuring the competitive ratio of its makespan --- i.e., the ratio of its makespan (the last completion time for a given set of transactions) to the makespan of an optimal off-line clairvoyant scheduler. We show that the competitive ratio of D-STM for metric-space networks is O(N^2) for N transactions requesting an object under the Greedy contention manager and an arbitrary CC protocol. To improve this performance, we propose a class of location-aware CC protocols, called LAC protocols.
We show that the combination of the Greedy manager and a LAC protocol yields an O(N log N s) competitive ratio for s shared objects.
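In symbols, with makespan as defined above, the metric and the two results read as follows (a restatement, assuming the O(N log N s) product form):

```latex
\[
\mathrm{CR}(\mathcal{A}) \;=\; \frac{\mathrm{makespan}_{\mathcal{A}}(T_1,\dots,T_N)}
                                     {\mathrm{makespan}_{\mathrm{OPT}}(T_1,\dots,T_N)},
\qquad
\mathrm{CR}_{\text{Greedy + any CC}} = O(N^2),
\qquad
\mathrm{CR}_{\text{Greedy + LAC}} = O(N \log N \, s).
\]
```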
We then formalize two classes of CC protocols: distributed queuing cache-coherence (DQCC) protocols and distributed priority queuing cache-coherence (DPQCC) protocols, both of which can be implemented using distributed queuing protocols. We show that a DQCC protocol is O(N log D)-competitive and a DPQCC protocol is O(log D_delta)-competitive for N dynamically generated transactions requesting an object, where D_delta is the normalized diameter of the underlying distributed queuing protocol. Additionally, we propose a novel CC protocol, called Relay, which reduces the total number of aborts to O(N) for N conflicting transactions requesting an object, a significant improvement over past CC protocols, which incur O(N^2) total aborts. We also analyze Relay's dynamic competitive ratio in terms of communication cost (for dynamically generated transactions), and show that Relay's dynamic competitive ratio is O(log D_0), where D_0 is the normalized diameter of the underlying network spanning tree.
To reduce unnecessary aborts and increase concurrency for D-STM based on globally-consistent contention management policies, we propose the distributed dependency-aware (DDA) conflict resolution model, which adopts different conflict resolution strategies based on transaction types. In the DDA model, read-only transactions never abort, as a set of versions is kept for each object. Each transaction keeps only the precedence relations derived from its local knowledge. We show that the DDA model ensures that 1) read-only transactions never abort, 2) every transaction eventually commits, 3) reads are invisible, and 4) useless object versions are efficiently garbage collected.
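A small sketch of the multi-versioning mechanism that lets read-only transactions proceed without ever aborting: each object keeps committed versions tagged with commit timestamps, and a read-only transaction reads the latest version no newer than its snapshot time. (Names and structure are illustrative; the DDA model's precedence tracking is simplified away.)

```python
import bisect, itertools

_clock = itertools.count(1)  # global commit-timestamp generator

class VersionedObject:
    """Committed (timestamp, value) pairs, kept sorted by timestamp."""
    def __init__(self, value):
        self.versions = [(0, value)]

    def commit(self, value):
        self.versions.append((next(_clock), value))

    def read_at(self, ts):
        """Latest version with commit timestamp <= ts. A read-only transaction
        that always reads at its fixed snapshot ts never conflicts."""
        i = bisect.bisect_right(self.versions, (ts, float("inf"))) - 1
        return self.versions[i][1]

    def gc(self, oldest_active_ts):
        """Garbage-collect versions no active reader can still need."""
        i = bisect.bisect_right(self.versions, (oldest_active_ts, float("inf"))) - 1
        self.versions = self.versions[max(i, 0):]

x = VersionedObject(10)
snapshot = next(_clock)     # a read-only transaction takes its snapshot here
x.commit(20)                # a later writer commits a new version
print(x.read_at(snapshot))  # 10 -- the reader still sees its snapshot
```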
To establish competitive ratio bounds for contention managers in D-STM, we model the distributed transactional contention management problem as the traveling salesman problem (TSP). We prove that for D-STM, any online, work-conserving, deterministic contention manager provides an Omega(max{s, s^2/D}) competitive ratio in a network with normalized diameter D and s shared objects. Compared with the Omega(s) competitive ratio for multiprocessor STM, the performance guarantee for D-STM degrades by a factor proportional to s/D. We present a randomized algorithm, called Randomized, with a competitive ratio of O(sC log n log^2 n) for s objects shared by n transactions with a maximum conflicting degree C. To break the lower bound, we present a randomized algorithm, Cutting, which needs partial information about transactions and an approximate TSP algorithm A with approximation ratio phi_A. We show that the average-case competitive ratio of Cutting is O(s phi_A log^2 m log^2 n), which is close to O(s).
Single copy (SC) D-STM keeps only one writable copy of each object, and thus cannot tolerate node failures. We propose a quorum-based replication (QR) D-STM model, which provides provable fault tolerance without incurring high communication overhead compared with the SC model. The QR model stores object replicas in a tree quorum system, where two quorums intersect if one of them is a write quorum, and ensures consistency among replicas at commit time. The communication cost of an operation in the QR model is proportional to the communication cost from the requesting node to its closest read or write quorum. In the presence of node failures, the QR model exhibits high availability and degrades gracefully as the number of failed nodes increases, at a reasonably higher communication cost.
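To make the intersection property concrete, here is a toy two-level tree quorum in the spirit of Agrawal and El Abbadi's protocol (a simplification, not the dissertation's exact construction): a write quorum is the root plus a majority of its children; a read quorum is the root alone or, if the root is down, a majority of children.

```python
class TreeQuorum:
    """Two-level tree quorum over one root replica and its children."""
    def __init__(self, root, children):
        self.root, self.children = root, children

    def write_quorum(self, up):
        """Root plus a majority of live children, or None if unavailable."""
        majority = len(self.children) // 2 + 1
        alive = [c for c in self.children if c in up]
        if self.root in up and len(alive) >= majority:
            return {self.root, *alive[:majority]}
        return None

    def read_quorum(self, up):
        """Cheapest available read quorum; degrades gracefully past root failure."""
        if self.root in up:
            return {self.root}
        majority = len(self.children) // 2 + 1
        alive = [c for c in self.children if c in up]
        return set(alive[:majority]) if len(alive) >= majority else None

tq = TreeQuorum("r", ["a", "b", "c"])
print(tq.write_quorum(up={"r", "a", "b", "c"}))  # root + 2 children
print(tq.read_quorum(up={"a", "b", "c"}))        # root down: 2 children
```

Every write quorum contains the root and a child majority, so it intersects a root-only read quorum and any child-majority read quorum alike, which is why a read always sees the latest committed write.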
We develop a prototype implementation of the dissertation's proposed solutions, including the DQCC and DPQCC protocols, the Relay protocol, and the DDA model, in the HyFlow Java D-STM framework. We experimentally evaluated these solutions against their respective competitor solutions on a set of microbenchmarks (e.g., data structures including distributed linked list, binary search tree and red-black tree) and macrobenchmarks (e.g., distributed versions of the applications in the STAMP STM benchmark suite for multiprocessors). Our experimental studies revealed that: 1) based on the same distributed queuing protocol (i.e., the Ballistic CC protocol), DPQCC yields better transactional throughput than DQCC, by a factor of 50%-100%, on a range of transactional workloads; 2) Relay outperforms competitor protocols (including Arrow, Ballistic and Home) by more than 200% when the network size and contention increase, as it efficiently reduces the average aborts per transaction (to less than 0.5); 3) the DDA model outperforms existing contention management policies (including the Greedy, Karma and Kindergarten managers) by up to 30%-40% in high-contention environments; for read/write-balanced workloads, the DDA model outperforms these policies by 30%-60% on average, and for read-dominated workloads by over 200%. / Ph. D.
|
63 |
Scalability Analysis and Optimization for Large-Scale Deep Learning
Pumma, Sarunya 03 February 2020
Despite its growing importance, scalable deep learning (DL) remains a difficult challenge. Scalability of large-scale DL is constrained by many factors, including those deriving from data movement and data processing. DL frameworks rely on large volumes of data being fed to the computation engines for processing. However, current hardware trends show that data movement is already one of the slowest components in modern high performance computing systems, and this gap is only going to increase in the future. This includes data movement needed from the filesystem, within the network subsystem, and even within the node itself, all of which limit the scalability of DL frameworks on large systems. Even after data is moved to the computational units, managing this data is not easy. Modern DL frameworks use multiple components---such as graph scheduling, neural network training, gradient synchronization, and input pipeline processing---to process this data in an asynchronous, uncoordinated manner, which results in straggler processes and consequently computational imbalance, further limiting scalability. This thesis studies a subset of the large body of data movement and data processing challenges that exist in modern DL frameworks.
For the first study, we investigate file I/O constraints that limit the scalability of large-scale DL. We first analyze the Caffe DL framework with Lightning Memory-Mapped Database (LMDB), one of the most widely used file I/O subsystems in DL frameworks, to understand the causes of file I/O inefficiencies. Based on our analysis, we propose LMDBIO---an optimized I/O plugin for scalable DL that addresses the various shortcomings in existing file I/O for DL. Our experimental results show that LMDBIO significantly outperforms LMDB in all cases and improves overall application performance by up to 65-fold on 9,216 CPUs of the Blues and Bebop supercomputers at Argonne National Laboratory.
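For context, the read path being analyzed looks roughly like the following sequential cursor scan (using the py-lmdb bindings; the database name is illustrative). LMDB memory-maps the file, so each record access turns into page-fault-driven I/O --- the behavior LMDBIO restructures:

```python
import lmdb  # py-lmdb bindings

env = lmdb.open("train_lmdb", readonly=True, lock=False)
records, total_bytes = 0, 0
with env.begin() as txn:
    for key, raw_datum in txn.cursor():  # sequential scan of (key, value) pairs
        # A DL framework would deserialize raw_datum here (e.g., a
        # protobuf-encoded image); we only count bytes to stay self-contained.
        records += 1
        total_bytes += len(raw_datum)
env.close()
print(records, "records,", total_bytes, "bytes")
```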
Our second study deals with the computational imbalance problem in data processing. For most DL systems, the simultaneous and asynchronous execution of multiple data-processing components on shared hardware resources causes these components to contend with one another, leading to severe computational imbalance and degraded scalability. We propose various novel optimizations that minimize resource contention and improve performance by up to 35% for training various neural networks on 24,576 GPUs of the Summit supercomputer at Oak Ridge National Laboratory---the world's largest supercomputer at the time of writing of this thesis. / Doctor of Philosophy / Deep learning is a method for computers to automatically extract complex patterns and trends from large volumes of data. It is a popular methodology that we use every day when we talk to Apple Siri or Google Assistant, when we use self-driving cars, or even when we witnessed IBM Watson be crowned as the champion of Jeopardy! While deep learning is integrated into our everyday life, it is a complex problem that has gotten the attention of many researchers.
Executing deep learning is a highly computationally intensive problem. On traditional computers, such as a generic laptop or desktop machine, the computation for large deep learning problems can take years or decades to complete. Consequently, supercomputers, which are machines with massive computational capability, are leveraged for deep learning workloads. The world's fastest supercomputer today, for example, is capable of performing almost 200 quadrillion floating point operations every second. While that is impressive, for large problems, unfortunately, even the fastest supercomputers today are not fast enough. The problem is not that they do not have enough computational capability, but that deep learning problems inherently rely on a lot of data---the entire concept of deep learning centers around the fact that the computer would study a huge volume of data and draw trends from it. Moving and processing this data, unfortunately, is much slower than the computation itself and with the current hardware trends it is not expected to get much faster in the future.
This thesis aims at making deep learning executions on large supercomputers faster. Specifically, it looks at two pieces associated with managing data: (1) data reading---how to quickly read large amounts of data from storage, and (2) computational imbalance---how to ensure that the different processors on the supercomputer are not waiting for each other and thus wasting time. We first analyze each performance problem to identify the root cause of it. Then, based on the analysis, we propose several novel techniques to solve the problem. With our optimizations, we are able to significantly improve the performance of deep learning execution on a number of supercomputers, including Blues and Bebop at Argonne National Laboratory, and Summit---the world's fastest supercomputer---at Oak Ridge National Laboratory.
|
64 |
On the Interaction of High-Performance Network Protocol Stacks with Multicore Architectures
Chunangad Narayanaswamy, Ganesh 20 May 2008
Multicore architectures have been one of the primary driving forces behind the recent rapid growth in high-end computing systems, contributing to their growing scale and capabilities. With significant enhancements in the high-speed networking technologies and protocol stacks that support these high-end systems, there is a growing need to understand the interaction between them closely. Since these two components have been designed mostly independently, there tend to be serious and surprising interactions that result in heavy asymmetry in the effective capability of the different cores, thereby degrading performance for various applications. Similarly, depending on the communication pattern of the application and the layout of processes across nodes, these interactions could potentially introduce network scalability issues, which is also an important concern for system designers.
In this thesis, we analyze these asymmetric interactions and propose and design a novel systems-level management framework called SIMMer (Systems Interaction Mapping Manager) that automatically monitors these interactions and dynamically manages the mapping of processes on processor cores to transparently maximize application performance. Performance analysis of SIMMer shows that it can improve the communication performance of applications by more than twofold and overall application performance by 18%. We further analyze the impact of contention in network and processor resources and relate it to the communication pattern of the application. Insights learned from these analyses can lead to efficient runtime configurations for scientific applications on multicore architectures. / Master of Science
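A toy version of the remapping idea (hypothetical policy and names, not SIMMer's actual implementation): rank processes by a measured communication-load proxy and pin the heaviest communicator to the core that services network interrupts.

```python
import os  # os.sched_setaffinity is Linux-only

def remap(comm_load_by_pid, nic_core, compute_cores):
    """Pin the most communication-intensive process to the interrupt-servicing
    core and spread the rest over the remaining cores. comm_load_by_pid maps
    pid -> measured message volume (how it is measured is outside this sketch)."""
    ranked = sorted(comm_load_by_pid, key=comm_load_by_pid.get, reverse=True)
    os.sched_setaffinity(ranked[0], {nic_core})
    for i, pid in enumerate(ranked[1:]):
        os.sched_setaffinity(pid, {compute_cores[i % len(compute_cores)]})

# Hypothetical pids and loads; core 0 handles NIC interrupts:
# remap({1201: 9.5e6, 1202: 1.2e6, 1203: 0.8e6}, nic_core=0, compute_cores=[1, 2, 3])
```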
|
65 |
Scalable Data Management for Object-based Storage Systems
Wadhwa, Bharti 19 August 2020
Parallel I/O performance is crucial to sustaining scientific applications on large-scale High-Performance Computing (HPC) systems. Large-scale distributed storage systems, in particular object-based storage systems, face severe challenges in managing data efficiently. Inefficient data management leads to poor I/O and storage performance in HPC applications and scientific workflows. Some of the main challenges for efficient data management arise from poor resource allocation, load imbalance in object storage targets, and inflexible data sharing between applications in a workflow. In addition, parallel I/O makes it challenging to shoehorn in new interfaces, such as taking advantage of multiple layers of storage and supporting analysis in the data path. Solving these challenges to improve the performance and efficiency of object-based storage systems is crucial, especially for the upcoming era of exascale systems.
This dissertation is focused on solving these major challenges in object-based storage systems by providing scalable data management strategies. In the first part of the dissertation (Chapter 3), we present a resource-contention-aware load balancing tool (iez) for large-scale distributed object-based storage systems. In Chapter 4, we extend iez to support Progressive File Layout for the Lustre object-based storage system. In the second part (Chapter 5), we present a technique to facilitate data sharing in scientific workflows using object-based storage, with our proposed tool Workflow Data Communicator. In the last part of this dissertation, we present a solution for transparent data management in the multi-layer storage hierarchy of present and next-generation HPC systems. This dissertation shows that by intelligently employing scalable data management techniques, the flexibility and performance of scientific applications and workflows in object-based storage systems can be enhanced many-fold. Our proposed data management strategies can guide next-generation HPC storage systems' software design to efficiently support data for scientific applications and workflows. / Doctor of Philosophy / Large-scale object-based storage systems face severe challenges in managing data efficiently for HPC applications and workflows. These storage systems often manage and share data inflexibly, without considering the load imbalance and resource contention in the underlying multi-layer storage hierarchy. This dissertation first studies how resource contention and inflexible data sharing mechanisms impact HPC applications' storage and I/O performance, and then presents a series of efficient techniques, tools and algorithms to provide efficient and scalable data management for current and next-generation HPC storage systems.
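A minimal sketch of the contention-aware placement idea behind a tool like iez (names and the load metric are illustrative): when striping a new file across object storage targets (OSTs), choose the least-loaded targets instead of placing round-robin.

```python
import heapq

def pick_osts(ost_load, stripe_count, file_size):
    """Choose the stripe_count least-loaded OSTs, then charge each an equal
    share of the new file. ost_load maps OST id -> outstanding load in bytes
    (obtained from monitoring in a real system)."""
    chosen = heapq.nsmallest(stripe_count, ost_load, key=ost_load.get)
    for ost in chosen:
        ost_load[ost] += file_size // stripe_count
    return chosen

load = {"ost0": 4_000, "ost1": 500, "ost2": 2_500, "ost3": 900}
print(pick_osts(load, stripe_count=2, file_size=1_000))  # ['ost1', 'ost3']
```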
|
66 |
HyFlow: A High Performance Distributed Software Transactional Memory Framework
Saad Ibrahim, Mohamed Mohamed 14 June 2011
We present HyFlow - a distributed software transactional memory (D-STM) framework for distributed concurrency control. Lock-based concurrency control suffers from drawbacks including deadlocks, livelocks, and scalability and composability challenges. These problems are exacerbated in distributed systems, because the distributed versions of these problems are more complex to cope with (e.g., distributed deadlocks). STM and D-STM are promising alternatives to lock-based and distributed lock-based concurrency control for centralized and distributed systems, respectively, that overcome these difficulties. HyFlow is a Java framework for D-STM, with pluggable support for directory lookup protocols, transactional synchronization and recovery mechanisms, contention management policies, cache coherence protocols, and network communication protocols. HyFlow exports a simple distributed programming model that excludes locks: using (Java 5) annotations, atomic sections are defined as transactions, in which reads and writes to shared, local and remote objects appear to take effect instantaneously. No changes are needed to the underlying virtual machine or compiler. We describe HyFlow's architecture and implementation, and report on experimental studies comparing HyFlow against competing models including Java remote method invocation (RMI) with mutual exclusion and read/write locks, distributed shared memory (DSM), and directory-based D-STM. / Master of Science
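HyFlow's actual interface is Java annotations; as a rough, language-shifted illustration of the same "atomic sections, no locks" idea, here is a hypothetical Python analogue, with tm_execute standing in for the framework's transaction runtime:

```python
def tm_execute(func, args, kwargs):
    # Stand-in for the D-STM runtime: a real implementation would execute
    # speculatively, resolve conflicts via contention management, and retry
    # on abort. Here the body is simply invoked once.
    return func(*args, **kwargs)

def Atomic(func):
    """Hypothetical analogue of HyFlow's @Atomic annotation: the decorated
    body runs as a transaction, with no explicit locks in user code."""
    def wrapper(*args, **kwargs):
        return tm_execute(func, args, kwargs)
    return wrapper

class Account:
    def __init__(self, balance):
        self.balance = balance

@Atomic
def transfer(src, dst, amount):
    # Reads/writes to shared (possibly remote) objects inside an atomic
    # section appear to take effect instantaneously at commit time.
    src.balance -= amount
    dst.balance += amount

a, b = Account(100), Account(0)
transfer(a, b, 25)
print(a.balance, b.balance)  # 75 25
```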
|
67 |
Contention-Aware Scheduling for SMT Multicore Processors
Feliu Pérez, Josué 27 March 2017
The recent multicore era and the incoming manycore/manythread era generate many challenges for computer scientists, ranging from productive parallel programming and network congestion avoidance to intelligent power management and circuit design issues. The ultimate goal is to squeeze out as much performance as possible while limiting power and energy consumption and guaranteeing reliable execution. The increasing number of hardware contexts in current and future systems makes the scheduler an important component for achieving this goal, as there is often a combinatorial number of different ways to schedule the distinct threads or applications, each with different performance due to inter-application interference. Picking an optimal schedule can result in substantial performance gains.
This thesis deals with inter-application interference, covering the problems it causes for performance and fairness on real machines. The study starts with single-threaded multicore processors (Intel Xeon X3320), continues with simultaneous multithreading (SMT) multicores supporting up to two threads per core (Intel Xeon E5645), and ends with the most highly threaded per-core processor built to date (IBM POWER8). The dissertation analyzes the main contention points of each experimental platform and proposes scheduling algorithms that tackle the interference arising at each contention point to improve system throughput and fairness.
First, we analyze contention throughout the memory hierarchy of current multicore processors. The studies reveal high performance degradation due to contention in main memory and in any shared cache the processors implement. To mitigate this contention, we propose different bandwidth-aware scheduling algorithms whose key idea is to balance memory accesses across the workload's execution time and cache requests among the different caches at each cache level.
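A sketch of the core balancing idea (a greedy heuristic; application names and bandwidth figures are illustrative, and real demands would come from performance counters): at each quantum, co-schedule a set of applications whose combined memory-bandwidth demand stays under the saturation point, so heavy consumers are spread over the workload's execution time.

```python
def pick_quantum(demands, cores, bw_budget):
    """Greedily select up to `cores` applications whose combined bandwidth
    demand (GB/s) fits under bw_budget, taking the heaviest consumers first
    so they are not all deferred to the same quantum."""
    chosen, used = [], 0.0
    for app, bw in sorted(demands.items(), key=lambda kv: -kv[1]):
        if len(chosen) < cores and used + bw <= bw_budget:
            chosen.append(app)
            used += bw
    return chosen

demands = {"mcf": 9.0, "lbm": 8.5, "gcc": 1.2, "namd": 0.4, "milc": 6.0}
print(pick_quantum(demands, cores=4, bw_budget=12.0))  # ['mcf', 'gcc', 'namd']
```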
The high interference that applications suffer when running simultaneously on the same SMT core, however, not only affects performance but can also compromise system fairness. In this dissertation, we also analyze fairness in current SMT multicores. To improve system fairness, we design progress-aware scheduling algorithms that estimate, at runtime, how the processes progress, which allows system fairness to be improved by prioritizing the processes with lower accumulated progress.
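A compact sketch of the progress-accounting policy (an illustration; the real algorithms estimate isolated performance online from hardware counters): accumulate each process's per-quantum progress as observed IPC over isolated IPC, and give the next priority boost to the process lagging furthest behind.

```python
class ProgressTracker:
    """Accumulated progress per process: observed IPC relative to the IPC
    the process would achieve running alone on the machine."""
    def __init__(self):
        self.progress = {}

    def update(self, pid, observed_ipc, isolated_ipc):
        # One quantum of progress: 1.0 means "ran as fast as if alone".
        self.progress[pid] = self.progress.get(pid, 0.0) + observed_ipc / isolated_ipc

    def next_prioritized(self):
        # Fairness policy: boost the process with the least accumulated progress.
        return min(self.progress, key=self.progress.get)

t = ProgressTracker()
t.update("A", observed_ipc=1.6, isolated_ipc=2.0)  # progress 0.80
t.update("B", observed_ipc=0.9, isolated_ipc=1.8)  # progress 0.50
print(t.next_prioritized())                        # B
```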
Finally, this dissertation tackles inter-application contention in the IBM POWER8 system with a symbiotic scheduler that addresses overall SMT interference. The symbiotic scheduler uses an SMT interference model, based on CPI stacks, that estimates the slowdown of any combination of applications scheduled on the same SMT core. The number of possible schedules, however, grows too fast with the number of applications, making it infeasible to explore all combinations. To overcome this issue, the symbiotic scheduler models the scheduling problem as a graph problem, which allows the optimal schedule to be found in reasonable time.
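For 2-way SMT cores the graph formulation reduces to minimum-weight perfect matching: applications are vertices and an edge's weight is the model-predicted slowdown of co-running its two endpoints. A brute-force sketch for small, even-sized workloads (the slowdown table is illustrative, standing in for the CPI-stack model; a polynomial matching algorithm replaces the recursion in practice):

```python
def best_pairing(apps, slowdown):
    """Exhaustive minimum-weight perfect matching: partition apps into pairs
    minimizing total predicted slowdown. Assumes an even number of apps."""
    if not apps:
        return 0.0, []
    first, rest = apps[0], apps[1:]
    best = (float("inf"), [])
    for partner in rest:
        remaining = [a for a in rest if a != partner]
        cost, pairs = best_pairing(remaining, slowdown)
        cost += slowdown[tuple(sorted((first, partner)))]
        if cost < best[0]:
            best = (cost, [(first, partner)] + pairs)
    return best

slowdown = {("A", "B"): 1.9, ("A", "C"): 1.2, ("A", "D"): 1.5,
            ("B", "C"): 1.4, ("B", "D"): 1.1, ("C", "D"): 1.8}
print(best_pairing(["A", "B", "C", "D"], slowdown))
# minimum total slowdown 2.3: pair ('A', 'C') with ('B', 'D')
```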
In summary, this thesis addresses contention in the shared resources of the memory hierarchy and SMT cores of multicore processors. We identify the main contention points of three systems with different architectures and propose scheduling algorithms to tackle contention at these points. The evaluation on the real systems shows the benefits of the proposed algorithms. The symbiotic scheduler improves system throughput by 6.7% over Linux. Regarding fairness, the proposed progress-aware scheduler reduces Linux unfairness to a third. Moreover, since the proposed algorithms are completely software-based, they could be incorporated as scheduling policies in Linux and used in small-scale servers to achieve the mentioned benefits. / Feliu Pérez, J. (2017). Contention-Aware Scheduling for SMT Multicore Processors [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/79081 / Premios Extraordinarios de tesis doctorales
|
68 |
Resistance to School Consolidation in a Rural Appalachian Community
Kelly, Amanda 02 November 2007
School consolidation, which involves closing one or more schools and combining them into a single school, is a common phenomenon in rural Appalachian communities due to out-migration and lack of funding for public schools. When school consolidation occurs, the local school may be closed, or students from other communities may be bused to the school. Community residents, however, do not always agree with the decision to consolidate their local schools. When this disagreement occurs, residents may choose to participate in organized resistance activities to show their opposition, make their voices heard to local politicians and the media, and seek an alternative to the proposed consolidation.
This case study of school consolidation in one rural Appalachian county seeks to document and analyze the struggle in which community residents engaged in an effort to prevent local schools from being consolidated. Data was collected in the form of semi-structured interviews conducted with members and sympathizers of a resistance organization called TOPS. TOPS was formed in 2001 to oppose school consolidation, but its members were not successful in keeping their local schools open. Many schools in McDowell County have been consolidated or are scheduled to be consolidated in the near future. For example, Big Creek High School, which was at the center of many consolidation debates, will be closed in 2010. Its students will be bused to a new, consolidated high school.
I conducted interviews during fall 2006 and spring 2007 to determine community members' grievances concerning consolidation, to establish a narrative of their struggle against state government officials, and to provide a basis for analyzing the movement's failure to achieve its goals. I used these interviews, along with TOPS' documents, local newspaper articles, and literature from other anti-consolidation efforts, to examine possible reasons why TOPS was not successful. Social movements literature, particularly the concepts of framing and repertoires of contention, formed the theoretical basis of this analysis. / Master of Science
|
69 |
The Role of Physical Restraint in the Development of Delirium among Older Adults with Cognitive Impairment Living in Long-Term Care Facilities
Doucet, Lise 12 April 2018
This prospective study aimed to analyze the relationship between physical restraint and delirium among older adults with cognitive impairment living in long-term care facilities (LTCFs) in the greater Québec City metropolitan area. Systematic data collection took place from May 2004 to June 2006 with 138 participants. The prospective element of the study consisted of following the cohort twice over seven days: on day one (time one, T1) and on the seventh day (time two, T2). The student researcher spent more than six hours in the care units directly observing the target population. The results show significant associations between four types of physical restraint (bed rails during the day at T1, abdominal restraint during the day at T1 and T2, and abdominal restraint in the evening at T2) and delirium. In light of these results, nurses working in LTCFs are encouraged to use more alternatives to physical restraint, in order to ensure quality care for residents and preserve the cognitive health of older adults already weakened by illness.
|
70 |
Environmental Factors Associated with Reducing the Use of Control Measures in Patients with Mental Disorders: A Scoping Review
Nabil, Samira 05 1900
The use of coercive measures (seclusion and restraint) to manage violent behaviors is a major preoccupation for nurses working in adult mental health units. The use of these measures results in physical consequences and psychological trauma for patients and caregivers alike, so preventing and reducing their use becomes a priority. Given the multifactorial nature of this problem, understanding the factors that influence the use of these measures is essential for targeting interventions to prevent or reduce them. Factors related to the clinical characteristics of patients and to caregivers are well described in the literature. However, factors related to the patient's environment are not attributed to all of the dimensions that constitute it, owing to the scarcity of conceptual models that provide a structured, global representation of this environment. The absence of such a representation leaves the factors associated with the environment circumscribed within its physical dimension only, while other factors related to its other dimensions are reported in the literature without being defined as environmental factors. The aim of this scoping review was therefore to explore the extent of knowledge and identify the environmental factors associated with reduced use of coercive measures in patients with mental disorders. To incorporate a holistic representation of the environment, the Optimal Healing Environment (OHE) model was selected as the framework (Jonas et al., 2014). The scoping review steps of Peters et al. (2020) were followed, resulting in the inclusion of 35 publications. Thematic analysis of the extracted data identified two dimensions: the patient's interpersonal environment and external environment. The interpersonal environment describes the development and maintenance of a therapeutic relationship through the improvement of caregivers' communication skills, the use of aggression crisis prevention strategies, patient involvement, post-event debriefing after seclusion and/or restraint, and the sense of belonging to the care unit's community. It also describes the creation of therapeutic organizations through organizational leadership, initiatives to improve the organization of care, and the management of human and technological resources. For its part, the external environment describes the architectural and interior design of the care units where the patient is hospitalized. Finally, we may conclude that of the three OHE dimensions included in this work, the interpersonal environment and the external environment are the most represented in the literature of the last five years. In addition, aggression crisis prevention interventions and organizational leadership have been shown to be key factors in a therapeutic environment conducive to reducing the use of coercive measures.
|