151 |
Évaluation de la collaboration intersectorielle en contexte de santé sociale : cas de l'Alliance sherbrookoise pour les jeunes en santé
Ste-Marie, Kyanne, January 2016 (has links)
This thesis presents the results of a qualitative analysis of the challenges of inter-organizational collaboration, based on an empirical study (22 semi-structured interviews examined through the lens of the five dimensions of Thomson's (2009) conceptual model) of the collaboration mechanisms put in place by the Alliance sherbrookoise pour les jeunes en santé. The Alliance brings together major public organizations in Sherbrooke: the CIUSSS de l'Estrie-CHUS, the Commission scolaire de la région de Sherbrooke, the Ville de Sherbrooke, and the Réseau des centres de la petite enfance.
The objective of the research is to understand how collaborative processes within the Alliance affect its functioning. Studying how the Alliance operates makes it possible to grasp the complexity of the issues surrounding inter-organizational collaboration, an understanding that is essential for eventually equipping and supporting organizations engaged in this type of collaborative practice.
The results of this study provide a portrait of the processes surrounding inter-organizational collaboration within the Alliance, leading to a finer-grained understanding of its current functioning, its strengths, its limitations and, above all, the levers of effective change for the organization. More broadly, the research contributes to ongoing discussions on the development of more open and collaborative governance models.
|
152 |
On reducing the decoding complexity of shingled magnetic recording system
Awad, Nadia, January 2013 (has links)
Shingled Magnetic Recording (SMR) has been recognised as one of the alternative technologies for achieving an areal density beyond the limit of the perpendicular recording technique, 1 Tb/in2, with the advantage of retaining conventional media and read/write heads. This work presents an SMR system subject to both Inter Symbol Interference (ISI) and Inter Track Interference (ITI) and investigates different equalisation/detection techniques in order to reduce the complexity of this system. To investigate ITI in shingled systems, a one-track one-head system model was extended to a two-track one-head system model with two interfering tracks. Six novel decoding techniques were then applied to the new system in order to find the Maximum Likelihood (ML) sequence. The decoding complexity of the six techniques was investigated and measured; the results show that complexity is reduced by more than a factor of three at a cost of 0.5 dB in performance. To measure this complexity in practice, a perpendicular recording system was implemented in hardware. Hardware architectures that passed the Quartus II fitter were designed for the Perpendicular Magnetic Recording (PMR) channel, a digital filter equaliser with and without Additive White Gaussian Noise (AWGN), and an ideal channel. Two different hardware designs were implemented for the Viterbi Algorithm (VA), but the Quartus II fitter was unsuccessful for both. It was found that Simulink/Digital Signal Processing (DSP) Builder based designs are not efficient for complex algorithms, and that the practical solution for such designs is to write Hardware Description Language (HDL) code for those algorithms.
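The decoding techniques above revolve around Maximum Likelihood sequence detection over channels with intersymbol interference. As a loose illustration only (not the thesis's actual SMR read channel or any of its six reduced-complexity detectors), the following Python sketch runs a textbook Viterbi search over an assumed two-tap ISI channel with Gaussian noise:

```python
import itertools
import numpy as np

def viterbi_ml_sequence(received, taps, levels=(-1.0, 1.0)):
    """ML sequence detection over a simple ISI channel (squared-error branch metrics)."""
    memory = len(taps) - 1
    states = list(itertools.product(levels, repeat=memory))  # possible channel memory contents
    cost = {s: 0.0 for s in states}    # accumulated path metric per state
    path = {s: [] for s in states}     # surviving symbol sequence per state

    for r in received:
        new_cost, new_path = {}, {}
        for state in states:
            for sym in levels:
                # Noiseless channel output for this input symbol given the memory contents.
                expected = taps[0] * sym + sum(t * m for t, m in zip(taps[1:], state))
                metric = cost[state] + (r - expected) ** 2
                nxt = ((sym,) + state)[:memory]  # shift the new symbol into the channel memory
                if nxt not in new_cost or metric < new_cost[nxt]:
                    new_cost[nxt] = metric
                    new_path[nxt] = path[state] + [sym]
        cost, path = new_cost, new_path

    best = min(cost, key=cost.get)
    return path[best]

# Tiny demo: random bipolar data through a hypothetical two-tap ISI channel with noise.
rng = np.random.default_rng(0)
taps = [1.0, 0.5]
tx = rng.choice([-1.0, 1.0], size=20)
rx = np.convolve(tx, taps)[: len(tx)] + 0.2 * rng.normal(size=len(tx))
detected = viterbi_ml_sequence(rx, taps)
print("symbol errors:", sum(a != b for a, b in zip(detected, tx)))
```

The channel taps, noise level and bipolar alphabet are assumptions chosen for readability; reduced-complexity variants of this search are what the thesis compares against full ML detection.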
|
153 |
The Relationship of Elderly Health Issues and Intergenerational Financial Transactions
Green, Natalie, 01 January 2017 (has links)
Recent advancements in healthcare are extending the lives of older people. However, such advancements come at a cost: higher medical expenses alongside fewer financial resources and limited, if not truncated, monetary assistance. The dilemma is further compounded by the uncertain quality of life produced by extending the life of the chronically ill. Using the RAND data, I examine three financial transaction outcomes at different points in time around the onset of a health issue: one, the probability of a transaction occurring; two, how much is given; and three, the frequency of transactions. I also examine how a health issue affects financial transaction choices within a given year, a year after the health issue occurs, and over the longer term on subsequent intergenerational financial transactions. I find no change in the financial behavior of an adult child immediately after the health issue occurs, and minimal change over the longer period of time. However, this study does show a slight but statistically significant shift in financial transactions within the first year after a health issue occurs. Additionally, the results suggest that those who can live in assisted care and near respondent children have higher transactions between family members.
|
154 |
The inter-cloud meta-scheduling
Sotiriadis, Stelios, January 2013 (has links)
Inter-cloud is a recently emerging approach that expands cloud elasticity. By facilitating an adaptable setting, it aims at realising scalable resource provisioning that enables a diversity of cloud user requirements to be handled efficiently. This study's contribution is the optimization of inter-cloud job-execution performance using meta-scheduling concepts. This includes the development of the inter-cloud meta-scheduling (ICMS) framework, the ICMS optimal schemes and the SimIC toolkit. The ICMS model is an architectural strategy for managing and scheduling user services in virtualized, dynamically inter-linked clouds. It is realised through a model comprising a set of algorithms, namely the Service-Request, Service-Distribution, Service-Availability and Service-Allocation algorithms. Together with the resource management optimal schemes, these provide the novel functionality of the ICMS: message exchanging implements the job distribution method, VM deployment offers the VM management features, and the local resource management system details the management of the local cloud schedulers. The resulting system offers great flexibility by facilitating a lightweight resource management methodology while handling the heterogeneity of different clouds through advanced service level agreement coordination. Experimental results are encouraging, as the proposed ICMS model improves the performance of service distribution for a variety of criteria such as service execution times, makespan, turnaround times, utilization levels and energy consumption rates for various inter-cloud entities, e.g. users, hosts and VMs. For example, ICMS improves the performance of a non-meta-brokering inter-cloud by 3%, while ICMS with full optimal schemes achieves a 9% improvement for the same configurations. The whole experimental platform is implemented in the inter-cloud Simulation toolkit (SimIC) developed by the author, which is a discrete event simulation framework.
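As a rough sketch of the kind of decision an inter-cloud meta-scheduler makes when distributing a service request across clouds, the Python fragment below picks a target cloud by availability and load. The class names, the load heuristic and the single-criterion selection are illustrative assumptions; they do not reproduce the ICMS Service-Request/Distribution/Availability/Allocation algorithms or their SLA coordination.

```python
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    capacity: int      # VM slots offered by this cloud
    running: int = 0   # VM slots currently in use

@dataclass
class ServiceRequest:
    job_id: str
    vm_count: int

def meta_schedule(request, clouds):
    """Toy service-distribution step: pick the least-loaded cloud that can host the job."""
    candidates = [c for c in clouds if c.capacity - c.running >= request.vm_count]
    if not candidates:
        raise RuntimeError(f"no cloud can currently host {request.job_id}")
    target = min(candidates, key=lambda c: c.running / c.capacity)  # simple load-balancing heuristic
    target.running += request.vm_count                              # 'deploy' the VMs on that cloud
    return target.name

clouds = [Cloud("cloud-a", capacity=8), Cloud("cloud-b", capacity=4)]
for i in range(3):
    req = ServiceRequest(job_id=f"job-{i}", vm_count=2)
    print(req.job_id, "->", meta_schedule(req, clouds))
```

A real meta-broker would weigh several of the criteria listed above (makespan, utilization, energy) and negotiate SLAs rather than applying one load ratio.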
|
155 |
'Using graphic symbols' : an investigation into the experiences and attitudes of a range of practitioners using graphic symbols with children in the Foundation Stage (three to five year olds) school settings
Greenstock, Louise, January 2010 (has links)
There has been a recent increase in the use of graphic symbols in school settings (Abbott and Lucey, 2003). However, the use of graphic symbols in schools remains, to date, an under-researched area. In order to address this and develop understanding of practitioners' experiences of using graphic symbols in school settings, exploratory research was conducted investigating the experiences of a range of practitioners using symbols in Foundation Stage school settings. A qualitative research design was used, drawing upon an interpretive phenomenological philosophical framework. The research sample consisted of three groups of practitioners: teachers, early years practitioners (teaching assistants, learning support assistants and nursery nurses), and speech and language therapists. Data were collected through semi-structured interviews conducted face-to-face by the researcher. In the interviews, participants were encouraged to explore their experiences of using graphic symbols and their associated beliefs and attitudes about this topic. Interview data were analysed using thematic analysis, facilitated by the qualitative data management software QSR NVivo2. Prolonged engagement with the data led to the development of a theoretical framework based on a set of themes and subthemes. Four major themes were identified: practitioners' beliefs about which children to use symbols with; practitioners' thoughts about children's understanding of symbols; practitioners' accounts of the ways symbols are used; and practitioners' experiences of the implementation of symbols. Interpretations of the data were extended further to develop two original theoretical constructs: 'models of reasoning' and 'perceptions of professional roles'. These constructs were developed to provide an over-arching framework depicting the researcher's interpretations of the data set as a whole. The findings suggest that practitioners go through a process of reasoning and decision making surrounding the use of symbols. Practitioners in this study also appeared to be influenced by their perceptions of their own professional role, and those of others, in their decisions surrounding the implementation of symbols. The theoretical model may provide some explanation for the ways in which individual practitioners interact and work alongside practitioners from the same and different professional groups. The findings of the research were related to existing literature in the fields of symbolic development, symbols and literacy, and collaborative working. The findings led to the development of five suggestions for future research.
|
156 |
Comparaison des dimensions de l'arcade mandibulaire avant et après traitement orthodontique sans extraction
Cardona, Cédric, January 2009 (has links)
Master's thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
|
157 |
Analyse des disparités provinciales dans l'application des lois sur les drogues au Canada de 1977 à 2000
Dion, Guy Ati, January 2003 (has links)
Doctoral thesis digitized by the Direction des bibliothèques of the Université de Montréal.
|
158 |
Participation de l'endocarde dans les malformations cardiaques du syndrome Holt-Oram
Nadeau, Mathieu, January 2007 (has links)
Master's thesis digitized by the Direction des bibliothèques of the Université de Montréal.
|
159 |
Réorganisation cérébrale en réponse à une privation visuelle prolongée : analyse des potentiels évoqués auditifs chez des sujets non-voyants
Leclerc, Charles, January 2004 (has links)
Doctoral thesis digitized by the Direction des bibliothèques of the Université de Montréal.
|
160 |
On the simulation and design of manycore CMPs
Thompson, Christopher Callum, January 2015 (has links)
The progression of Moore’s Law has resulted in both embedded and performance computing systems which use an ever increasing number of processing cores integrated in a single chip. Commercial systems are now available which provide hundreds of cores, and academics have proposed architectures for up to 1024 cores. Embedded multicores are increasingly popular as it is easier to guarantee hard real-time constraints using individual cores dedicated to tasks than with traditional time-multiplexed processing. However, finding the optimal hardware configuration to meet these requirements at minimum cost requires extensive trial-and-error exploration of the design space. This thesis tackles the problems encountered in the design of these large-scale multicore systems by first addressing the problem of fast, detailed micro-architectural simulation. Initially addressing embedded systems, this work exploits the lack of hardware cache-coherence support in many deeply embedded systems to increase the available parallelism in the simulation. Then, partitioning the NoC and using packet counting and cycle skipping reduces the amount of computation required to accurately model the NoC interconnect. In combination, this enables simulation speeds significantly higher than the state of the art, while maintaining less error, when compared to real hardware, than any similar simulator. Simulation speeds reach up to 370 MIPS (Million (target) Instructions Per Second), or 110 MHz, which is better than typical FPGA prototypes and approaches final ASIC production speeds. This is achieved while maintaining an error of only 2.1%, significantly lower than other similar simulators. The thesis continues by scaling the simulator past large embedded systems up to 64-1024 core processors, adding support for coherent architectures using the same packet counting techniques along with low-overhead context switching to enable the simulation of such large systems with stricter synchronisation requirements. The new interconnect model was partitioned to enable parallel simulation and further improve simulation speeds without sacrificing any accuracy. These innovations were leveraged to investigate significant novel energy-saving optimisations to the coherency protocol, processor ISA, and processor micro-architecture. By introducing a new instruction, named wait-on-address, the energy spent during spin-wait style synchronisation events can be significantly reduced. It functions by putting the core into a low-power idle state while the cache line of the indicated address is monitored for coherency actions. Upon an update or invalidation (or a traditional timer or external interrupt) the core resumes execution, but the active energy of running the core pipeline and repeatedly accessing the data and instruction caches is effectively reduced to static idle power. The thesis also shows that existing combined software-hardware schemes to track data regions which do not require coherency can adequately address the directory-associativity problem, and introduces a new coherency sharer encoding which reduces the energy consumed by sharer invalidations when sharers are grouped closely together, such as would be the case with a system running many tasks with a small degree of parallelism in each.
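The spin-wait versus wait-on-address contrast described above can be sketched in software terms. The Python analogy below uses threads and a condition variable as stand-ins for cores and a monitored cache line; it is an illustrative assumption, not the proposed ISA extension itself.

```python
import threading
import time

flag = {"value": 0}
flag_updated = threading.Condition()   # stands in for monitoring the flag's cache line

def spin_wait():
    """Busy polling: the waiting 'core' keeps executing loads, burning active energy.
    Shown only for contrast; not started in this demo."""
    while flag["value"] == 0:
        pass

def wait_on_address_style():
    """Go idle until the monitored location is written, analogous to waking
    on a coherency update/invalidation of the watched cache line."""
    with flag_updated:
        while flag["value"] == 0:
            flag_updated.wait()

def producer():
    time.sleep(0.01)
    with flag_updated:
        flag["value"] = 1          # the write that would trigger the coherency action
        flag_updated.notify_all()

waiter = threading.Thread(target=wait_on_address_style)
writer = threading.Thread(target=producer)
waiter.start(); writer.start()
waiter.join(); writer.join()
print("waiter woken after the update")
```

The energy argument is that the sleeping path spends the wait in an idle state, whereas the spinning path keeps the pipeline and caches active for the same interval.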
The research concludes by using the extremely fast simulation speeds developed to produce a large set of training data, collecting various runtime and energy statistics for a wide range of embedded applications on a large and diverse range of potential MPSoC designs. This data was used to train a series of machine learning based models which were then evaluated on their capacity to predict performance characteristics of unseen workload combinations across the explored MPSoC design space, using only two sample simulations, with promising results from some of the machine learning techniques. The models were then used to produce a ranking of predicted performance across the design space; on average, Random Forest was able to predict a best design within 89% of the runtime performance of the actual best tested design, and better than 93% of the alternative design space. When predicting for a weighted metric of energy, delay and area, Random Forest on average produced results within 93% of the optimum. In summary, this thesis improves upon the state of the art for cycle-accurate multicore simulation, introduces novel energy-saving changes to the ISA and microarchitecture of future multicore processors, and demonstrates the viability of machine learning techniques for significantly accelerating the design space exploration required to bring a new manycore design to market.
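The Random Forest ranking step mentioned above could look roughly like the following scikit-learn sketch. The feature set, the synthetic data and the single-metric target are assumptions for illustration, not the thesis's actual training pipeline or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Placeholder training set: each row describes one (MPSoC design, workload) pair with
# numeric features such as core count, cache size, NoC width and statistics gathered
# from two cheap sample simulations; the target is a runtime (or energy) metric.
# All values here are synthetic stand-ins, not data from the thesis.
X_train = rng.uniform(size=(500, 6))
y_train = X_train @ np.array([2.0, -1.0, 0.5, 0.3, 1.5, -0.2]) + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predict the metric for unseen design points and rank the candidate design space.
X_candidates = rng.uniform(size=(50, 6))
ranking = np.argsort(model.predict(X_candidates))   # ascending: lower predicted runtime first
print("predicted best design index:", ranking[0])
```

In practice the ranking would be validated against a handful of full simulations of the top-ranked designs before committing to one.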
|