31 |
Implication du stress oxydatif et modélisation du risque suicidaire dans la schizophrénie : développement de deux modèles murins à double atteinte / Involvement of oxidative stress and development of suicide-related behaviors in schizophrenia: development of two double-hit murine models Deslauriers, Jessica January 2014 (has links)
The neurodevelopmental hypothesis of schizophrenia suggests, among other things, that prenatal inflammation sensitizes the developing brain to a subsequent insult early in life, increasing the risk of developing the illness. To better understand the pathophysiology of the illness, we developed a double-hit mouse model based on gestational immune activation with polyIC (PIC) followed by restraint stress at juvenile age. An additive effect on the deficit in prepulse inhibition of the acoustic startle reflex (PPI) is observed, accompanied by oxidative, dopaminergic and GABAergic abnormalities in the prefrontal cortex (PFC) and striatum, in mice exposed to both hits, from juvenile age onward. To our knowledge, this is the first double-hit model showing a PPI deficit as early as pubertal age. Moreover, lipoic acid, an antioxidant, prevents the PPI deficits and the neurochemical abnormalities of the PFC, supporting the involvement of oxidative stress in schizophrenia and suggesting that PPI deficits are associated with PFC abnormalities. This model will help investigate new therapeutic alternatives for the treatment of schizophrenia, especially during the prodromal phase. On another front, a high rate of death by suicide exists among patients with schizophrenia, and the pathophysiological mechanisms of this phenomenon remain poorly understood. A second double-hit model was developed, displaying behaviors associated with suicide risk (aggression, impulsivity, anxiety and hopelessness), via prenatal polyIC followed by social isolation from weaning of the pups. The two hits interact to induce several schizophrenia-like and suicide-risk-related behaviors in males. Lithium chloride, known for its "suicide-prevention" effects in the general population, improves the behavioral components only in isolated animals, whereas clozapine, the antipsychotic with the strongest "suicide-prevention" effect, prevents the behavioral abnormalities mainly in the double-hit model. The distinct effects of the two molecules suggest that mice exposed to both hits model a phenotype distinct from that of isolated-only mice. Since diagnosing suicide risk in patients with schizophrenia remains a challenge for psychiatrists, our in vivo model will help to better understand the mechanisms involved in suicidal behavior in these patients, thereby fostering the development of new treatments for this vulnerable population.
|
32 |
Cost- and Performance-Aware Resource Management in Cloud Infrastructures Nasim, Robayet January 2017 (has links)
High availability, cost effectiveness and ease of application deployment have accelerated the adoption rate of cloud computing. This fast proliferation of cloud computing promotes the rapid development of large-scale infrastructures. However, large cloud datacenters (DCs) pose challenges in infrastructure design, deployment, scalability and reliability, and need better management techniques to achieve sustainable design benefits. Resources inside cloud infrastructures often operate at low utilization, rarely exceeding 20-30%, which increases the operational cost significantly, especially due to energy consumption. To reduce operational cost without affecting quality of service (QoS) requirements, cloud applications should be allocated just enough resources to minimize their completion time or to maximize utilization. The focus of this thesis is to enable resource-efficient and performance-aware cloud infrastructures by addressing the above-mentioned cost- and performance-related challenges. In particular, we propose algorithms, techniques, and deployment strategies for improving the dynamic allocation of virtual machines (VMs) onto physical machines (PMs). To minimize the operational cost, we mainly focus on optimizing the energy consumption of PMs by applying dynamic VM consolidation methods. To make VM consolidation techniques more efficient, we propose to utilize multiple paths to spread traffic and to deploy recent queue management schemes, which can maximize network resource utilization and reduce both downtime and migration time for live migration. In addition, a dynamic resource allocation scheme is presented to distribute workloads among geographically dispersed DCs, considering their location-based, time-varying costs due to, e.g., carbon emissions or bandwidth provisioning. To optimize performance-level objectives, we focus on interference among applications contending for shared resources and propose a novel VM consolidation scheme that considers the sensitivity of the VMs to their demanded resources. Further, to investigate the impact of uncertain parameters, such as unpredictable variations in demand, on cloud resource allocation and applications' QoS, we develop an optimization model based on the theory of robust optimization. Furthermore, to handle scalability issues in the context of large-scale infrastructures, a robust and fast Tabu Search algorithm is designed and evaluated.
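To make the consolidation idea concrete, the sketch below shows a minimal first-fit-decreasing packing of VMs onto PMs, a common baseline for dynamic VM consolidation; it is only an illustration under assumed, normalized CPU demands and is not the thesis's actual method (which includes Tabu Search and robust optimization).

```python
# Hypothetical sketch of first-fit-decreasing VM consolidation: pack VMs onto as few
# PMs as possible so idle PMs can be powered down to save energy. Illustrative baseline
# only; demands and capacities are made-up, normalized numbers.

def consolidate(vm_demands, pm_capacity):
    """Return a mapping PM index -> list of VM ids, using first-fit decreasing."""
    pms = []         # remaining capacity of each active PM
    placement = {}   # PM index -> [VM ids]
    # Sort VMs by demand, largest first, to reduce fragmentation.
    for vm_id, demand in sorted(enumerate(vm_demands), key=lambda x: -x[1]):
        for i, free in enumerate(pms):
            if demand <= free:
                pms[i] -= demand
                placement[i].append(vm_id)
                break
        else:
            # No active PM fits this VM: power on a new one.
            pms.append(pm_capacity - demand)
            placement[len(pms) - 1] = [vm_id]
    return placement

if __name__ == "__main__":
    demands = [0.6, 0.3, 0.5, 0.2, 0.4, 0.1]            # hypothetical CPU demands
    print(consolidate(demands, pm_capacity=1.0))        # packs 6 VMs onto 3 PMs
```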
|
33 |
Routage par déflexion dans les réseaux tout optique à commutation de bursts / Deflection routing in all-optical burst-switched networks Metnani, Ammar January 2004 (has links)
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
|
34 |
Métodos para contenção de poluição em Redes P2P / Pollution containment methods in P2P networks Silva, Juliano Freitas da 13 January 2007 (links)
Despite being one of the main Internet applications today, P2P file sharing has been severely hampered by content-pollution attacks. This dissertation proposes and analyzes a class of pollution containment methods whose basic principle is to limit the instantaneous number of downloads of a version according to its reputation. The method is first proposed and evaluated in an idealized environment, showing its efficiency in containing pollution and the low overhead induced when the content is not polluted. Then, building on classic P2P network designs, distributed containment methods are proposed and compared.
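As an illustration of the containment principle, the sketch below caps the number of simultaneous downloads of a version in proportion to its reputation; the linear cap, parameter values and function names are assumptions made for illustration, since the dissertation's exact limiting rule is not given in the abstract.

```python
# Hypothetical sketch of reputation-based download throttling for a P2P version:
# the better a version's reputation, the more simultaneous downloads are allowed.
# The linear cap below is an illustrative assumption, not the dissertation's rule.
import math

def allowed_downloads(reputation, max_parallel=50, floor=1):
    """Map a reputation in [0, 1] to a cap on concurrent downloads of a version."""
    reputation = min(max(reputation, 0.0), 1.0)
    return max(floor, math.floor(reputation * max_parallel))

def admit_download(active_downloads, reputation):
    """Admit a new download request only while the version is under its cap."""
    return active_downloads < allowed_downloads(reputation)

if __name__ == "__main__":
    print(admit_download(active_downloads=3, reputation=0.05))  # False: suspect version
    print(admit_download(active_downloads=3, reputation=0.9))   # True: well-reputed version
```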
|
35 |
IMPROVING PERFORMANCE AND ENERGY EFFICIENCY FOR THE INTEGRATED CPU-GPU HETEROGENEOUS SYSTEMS Wen, Hao 01 January 2018 (links)
Current heterogeneous CPU-GPU architectures integrate general-purpose CPUs and highly parallel, heavily multithreaded GPUs (Graphics Processing Units) on the same die. This dissertation focuses on improving the energy efficiency and performance of such heterogeneous CPU-GPU systems.
Leakage energy has become an increasingly large fraction of total energy consumption, making it important to reduce leakage energy to improve overall energy efficiency. Caches occupy a large on-chip area, which makes them good targets for leakage energy reduction. For the CPU cache, we study how to reduce cache leakage energy efficiently in a hybrid SPM (Scratch-Pad Memory) and cache architecture. For the GPU cache, the access pattern differs from that of the CPU, typically exhibiting little locality and a high miss rate. In addition, the GPU can hide memory latency more effectively due to multithreading. For these reasons, we find it possible to place the cache lines of the GPU data caches into the low-power mode more aggressively than traditional leakage management does for CPU caches, which reduces more leakage energy without significant performance degradation.
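A rough behavioral sketch of the aggressive low-power policy described above, assuming a simple decay rule (any line idle for a fixed number of cycles is switched to a state-preserving low-power mode and pays a small wake-up penalty on its next access); the decay window, penalty and class names are illustrative assumptions, not the dissertation's actual scheme.

```python
# Hypothetical behavioral model of aggressive leakage management for a GPU data cache.
# Policy and constants are illustrative assumptions.

class DrowsyCacheLine:
    def __init__(self):
        self.last_access = 0
        self.low_power = False

class DrowsyCacheModel:
    def __init__(self, num_lines, decay_cycles=500, wakeup_penalty=1):
        self.lines = [DrowsyCacheLine() for _ in range(num_lines)]
        self.decay_cycles = decay_cycles
        self.wakeup_penalty = wakeup_penalty
        self.extra_cycles = 0

    def tick(self, now):
        # Demote idle lines; a short decay window is tolerable on GPUs because
        # multithreading hides the extra wake-up latency.
        for line in self.lines:
            if not line.low_power and now - line.last_access >= self.decay_cycles:
                line.low_power = True

    def access(self, index, now):
        line = self.lines[index]
        if line.low_power:
            self.extra_cycles += self.wakeup_penalty   # wake-up cost on first touch
            line.low_power = False
        line.last_access = now

if __name__ == "__main__":
    cache = DrowsyCacheModel(num_lines=4, decay_cycles=100)
    cache.access(0, now=0)
    cache.tick(now=150)         # line 0 idle for 150 >= 100 cycles -> low-power mode
    cache.access(0, now=160)    # wake-up penalty charged here
    print(cache.extra_cycles)   # 1
```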
The contention in shared resources between the CPU and GPU, such as the last-level cache (LLC), the interconnection network and DRAM, may degrade both CPU and GPU performance. We propose a simple yet effective probability-based method to control the LLC replacement policy, reducing the CPU's inter-core conflict misses caused by the GPU without significantly impacting GPU performance. In addition, we develop two strategies that combine the probability-based method for the LLC with an existing technique called virtual channel partition (VCP) for the interconnection network to further improve CPU performance.
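A hedged sketch of what a probability-controlled LLC policy could look like: with a small probability a GPU fill is inserted normally, otherwise it is demoted toward the LRU position so it disturbs CPU-resident lines less. Whether the probability gates insertion depth, bypass, or something else in the dissertation is not stated in the abstract, so everything below is illustrative.

```python
# Hypothetical sketch of probability-controlled LLC insertion for a CPU-GPU system.
# The gating policy and the probability value are illustrative assumptions.
import random
from collections import deque

class ProbabilisticLLCSet:
    def __init__(self, ways=16, gpu_insert_prob=0.1):
        self.ways = ways
        self.gpu_insert_prob = gpu_insert_prob
        self.lines = deque()   # leftmost = LRU, rightmost = MRU

    def insert(self, tag, from_gpu):
        if len(self.lines) >= self.ways:
            self.lines.popleft()              # evict the LRU line
        if from_gpu and random.random() > self.gpu_insert_prob:
            self.lines.appendleft(tag)        # demoted insertion: next in line for eviction
        else:
            self.lines.append(tag)            # normal MRU insertion

    def touch(self, tag):
        if tag in self.lines:
            self.lines.remove(tag)
            self.lines.append(tag)            # promote to MRU on a hit
            return True
        return False

if __name__ == "__main__":
    random.seed(0)
    llc_set = ProbabilisticLLCSet(ways=4, gpu_insert_prob=0.1)
    for tag in ["cpu1", "cpu2", "cpu3"]:
        llc_set.insert(tag, from_gpu=False)
    llc_set.insert("gpu1", from_gpu=True)     # with this seed, demoted to the LRU slot
    print(list(llc_set.lines))
```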
Breadth-first search (BFS), a basis for graph search and a core building block for many higher-level graph analysis applications, is a typical example of a parallel computation that is inefficient on GPU architectures. In a graph, a small portion of nodes may have a large number of neighbors, which leads to irregular tasks on GPUs. These irregularities limit the parallelism of BFS executing on GPUs. Unlike previous work focusing on fine-grained task management to address the irregularity, we propose Virtual-BFS (VBFS) to virtually change the graph itself. By adding virtual vertices, the high-degree nodes in the graph are divided into groups that have an equal number of neighbors, which increases the parallelism so that more GPU threads can work concurrently. This approach preserves correctness and can significantly improve both performance and energy efficiency on GPUs.
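The graph transformation described above can be sketched directly; the group size, adjacency representation and function name below are illustrative choices, not taken from the dissertation.

```python
# Hypothetical sketch of VBFS-style preprocessing: each node with more than `group_size`
# neighbors is split into virtual vertices that each own an equal-sized slice of its
# adjacency list, so GPU threads see more uniform work.

def add_virtual_vertices(adj, group_size=32):
    """adj: dict node -> list of neighbors. Returns a new adjacency dict in which
    high-degree nodes delegate their edges to virtual children."""
    all_ids = set(adj)
    for neighbors in adj.values():
        all_ids.update(neighbors)
    next_virtual = max(all_ids) + 1 if all_ids else 0   # fresh ids for virtual vertices

    new_adj = {}
    for node, neighbors in adj.items():
        if len(neighbors) <= group_size:
            new_adj[node] = list(neighbors)
            continue
        virtuals = []
        for start in range(0, len(neighbors), group_size):
            v = next_virtual
            next_virtual += 1
            new_adj[v] = neighbors[start:start + group_size]   # virtual vertex owns a slice
            virtuals.append(v)
        new_adj[node] = virtuals   # original node now points only at its virtual children
    return new_adj

if __name__ == "__main__":
    hub = {0: list(range(1, 101)), 1: [0], 2: [0]}             # node 0 has 100 neighbors
    print(len(add_virtual_vertices(hub, group_size=32)[0]))    # node 0 now has 4 children
```

A BFS over the transformed graph would have to treat virtual vertices as zero-distance relays of their parent node so that the computed levels remain those of the original graph.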
|
36 |
Dynamics of Radicalization: The Rise of Radical Activism against Climate Change Gibson, Shannon M. 26 July 2011 (links)
Recognizing that over the past decade transnational environmental activism focusing on climate change has radicalized in its public tactics and discourse, this project employs a mechanism-process approach to analyze and explain processes of tactical and discursive radicalization within the global climate justice movement(s) over time. As global activists within this movement construct and pursue public, as well as covert, campaigns directed at states, international institutions, corporations, the media and society at large, it asks why, how and to what effect specific sectors of the broader movement radicalized over the period 2006-2010. Utilizing longitudinal quantitative protest-event and political-claims analysis, ethnographic fieldwork and participant action research, it aims to provide a descriptive and comparative account of tactical and discursive variations at international climate change protests, situated within the context of a broader cycle of transnational global justice contention.
|
37 |
A Priority MAC Scheme in Ad-hoc Networks Hsu, Chih-chun 24 August 2005 (links)
The emerging widespread use of real-time multimedia applications over wireless networks makes the support of Quality of Service (QoS) a key problem. In this paper, we focus on QoS support mechanisms for IEEE 802.11 Wireless ad-hoc networks.
First, we review the IEEE 802.11 standard and other enhanced MAC schemes that have been proposed to support QoS in 802.11 ad hoc networks. Then we propose a new priority MAC scheme that assigns different initial contention windows per priority class, instead of the single CWmin of the IEEE 802.11 MAC, to reduce the collision rate, which in turn reduces the average delay and increases the throughput.
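A minimal sketch of the per-priority backoff idea, assuming four access categories with hypothetical window sizes (the paper does not list the actual values): each class draws its random backoff from its own initial contention window rather than a single CWmin, so higher-priority traffic statistically accesses the channel first.

```python
# Hypothetical sketch of per-priority backoff: a smaller initial contention window
# gives a shorter expected backoff and thus higher channel-access priority.
# Window values are illustrative assumptions, not the scheme's actual parameters.
import random

INITIAL_CW = {"voice": 7, "video": 15, "best_effort": 31, "background": 63}
CW_MAX = 1023

def draw_backoff(priority, retries=0):
    """Backoff slots for one attempt; the window doubles per retry (binary
    exponential backoff) but never exceeds CW_MAX."""
    cw = min((INITIAL_CW[priority] + 1) * (2 ** retries) - 1, CW_MAX)
    return random.randint(0, cw)

if __name__ == "__main__":
    samples = 10000
    for priority in INITIAL_CW:
        avg = sum(draw_backoff(priority) for _ in range(samples)) / samples
        print(f"{priority}: average backoff ~ {avg:.1f} slots")
```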
|
38 |
Slot Allocation Strategy for Clustered Ad Hoc Networks Yao, Chin-Yi 09 February 2006 (links)
This work studies the allocation of bandwidth resources in wireless ad hoc networks. A highest-density clustering algorithm is presented to promote spatial channel reuse, and a new slot allocation algorithm is proposed to achieve conflict-free scheduling of transmissions. Since location-dependent contention is an important characteristic of ad hoc networks, this paper takes this feature into account in a new cluster formation algorithm that increases the number of simultaneous links and thereby enhances spatial channel reuse. Furthermore, because each cluster has its own scheduler and the schedulers operate independently of each other, transmissions may conflict across clusters. To prevent this problem, we classify flows by the locations of their endpoints. Finally, the proposed mechanism is implemented in simulation, and the results reveal that conflicts can be efficiently avoided without global information and that network throughput is improved without violating fairness.
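One plausible way to realize the endpoint-based classification, sketched below under assumed data structures: flows whose two endpoints fall in the same cluster are handed to that cluster's local scheduler, while flows crossing cluster boundaries are set aside for coordinated slot assignment. The exact policy in the thesis may differ; this is only an illustration.

```python
# Hypothetical sketch of flow classification by endpoint location: intra-cluster flows
# can be scheduled by the cluster's own scheduler, while inter-cluster flows need
# coordination so independent schedulers do not hand out conflicting slots.

def classify_flows(flows, node_cluster):
    """flows: iterable of (src, dst); node_cluster: dict node -> cluster id."""
    intra, inter = {}, []
    for src, dst in flows:
        if node_cluster[src] == node_cluster[dst]:
            intra.setdefault(node_cluster[src], []).append((src, dst))  # schedule locally
        else:
            inter.append((src, dst))                                    # needs coordination
    return intra, inter

if __name__ == "__main__":
    clusters = {"a": 0, "b": 0, "c": 1, "d": 1}
    flows = [("a", "b"), ("b", "c"), ("c", "d")]
    print(classify_flows(flows, clusters))
```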
|
39 |
A New Feedback-based Contention Avoidance Algorithm For Optical Burst Switching Networks Toku, Hadi Alper 01 December 2008 (links) (PDF)
In this thesis, a feedback-based contention avoidance technique built on a weighted Dijkstra algorithm is proposed to address the contention avoidance problem in Optical Burst Switching networks.
Optical Burst Switching (OBS) has been proposed as a promising technique to support high-bandwidth, bursty data traffic in the next-generation optical Internet. Nevertheless, there are still some challenging issues that need to be solved to achieve an effective implementation of OBS. The contention problem occurs when two or more bursts are destined for the same wavelength. To solve this problem, various reactive contention resolution methods have been proposed in the literature. However, many of them are very vulnerable to network load and may suffer severe loss under heavy traffic. By proactively controlling the overall traffic, the network can adapt itself under high congestion, and by this means contention avoidance can be achieved efficiently.
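A minimal sketch of the feedback-driven idea, assuming that congestion reports inflate link weights and that bursts are then routed with Dijkstra over the updated weights; the weight-update rule and constants are illustrative assumptions, not the thesis's formula.

```python
# Hypothetical sketch of feedback-weighted shortest-path routing: link weights start at 1
# and are inflated by congestion feedback, so Dijkstra steers new bursts away from
# congested links. The update rule and alpha are illustrative assumptions.
import heapq

def update_weights(weights, feedback, alpha=5.0):
    """feedback: dict (u, v) -> congestion level in [0, 1]."""
    for link, congestion in feedback.items():
        weights[link] = 1.0 + alpha * congestion
    return weights

def dijkstra(graph, weights, src, dst):
    """graph: dict node -> list of neighbors; returns (cost, path)."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + weights.get((node, nxt), 1.0), nxt, path + [nxt]))
    return float("inf"), []

if __name__ == "__main__":
    g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    w = update_weights({}, {("A", "B"): 0.8})   # link A-B reported as congested
    print(dijkstra(g, w, "A", "D"))             # route shifts to A-C-D
```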
The performance analysis of the proposed algorithm is presented through network simulation results obtained in the OMNET++ simulation environment. The simulation results show that the proposed contention avoidance technique significantly reduces the burst loss probability compared to networks without any contention avoidance technique.
|
40 |
Knowledge-Based Video Compression for Robots and Sensor Networks Williams, Chris Williams 11 July 2006 (links)
Robot and sensor networks are needed for safety, security, and rescue applications such as port security and reconnaissance during a disaster. These applications rely on real-time transmission of images, which generally saturate the available wireless network infrastructure. Knowledge-based Compression is a strategy for reducing the video frame transmission rate between robots or sensors and remote operators. Because images may need to be archived as evidence and/or distributed to multiple applications with different post-processing needs, lossy compression schemes, such as MPEG, H.26x, etc., are not acceptable. This work proposes a lossless video server system consisting of three classes of filters (redundancy, task, and priority) which use different levels of knowledge (the locally sensed environment, human factors associated with a local task, and the relative global priority of a task) at the application layer of the network. It demonstrates the redundancy and task filters for realistic robot search scenarios. The redundancy filter is shown to reduce the overall transmission bandwidth by 24.07% to 33.42%, and when combined with the task filter, reduces overall transmission bandwidth by 59.08% to 67.83%. By itself, the task filter has the capability to reduce transmission bandwidth by 32.95% to 33.78%. While Knowledge-based Compression generally does not reach the same levels of reduction as MPEG, there are instances where the system outperforms MPEG encoding.
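A minimal sketch of the redundancy-filter idea, assuming a simple mean-absolute-difference test against the last transmitted frame; the metric, threshold and class names are illustrative, not the system's actual filter.

```python
# Hypothetical sketch of a redundancy filter in the spirit of the three-filter video
# server described above: a frame is transmitted only if it differs sufficiently from
# the last frame actually sent; otherwise the (lossless) frame is withheld as redundant.

def mean_abs_diff(frame_a, frame_b):
    """Frames are equal-length sequences of pixel intensities (e.g. grayscale)."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

class RedundancyFilter:
    def __init__(self, threshold=2.0):
        self.threshold = threshold
        self.last_sent = None

    def should_transmit(self, frame):
        if self.last_sent is None or mean_abs_diff(frame, self.last_sent) > self.threshold:
            self.last_sent = list(frame)
            return True
        return False   # frame withheld: essentially unchanged since the last transmission

if __name__ == "__main__":
    filt = RedundancyFilter()
    static = [100] * 64                      # unchanged scene
    moved = [100] * 32 + [160] * 32          # scene with significant change
    print([filt.should_transmit(f) for f in (static, static, moved)])  # [True, False, True]
```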
|