21 |
Quality-of-Service Aware Design and Management of Embedded Mixed-Criticality Systems / Ranjbar, Behnaz, 12 April 2024
Implementing a complex system that executes various applications with different levels of assurance is a growing trend in modern embedded real-time systems, driven by cost, timing, and power-consumption requirements. The medical-device, automotive, and avionics industries are the most common safety-critical domains that exploit such systems, known as Mixed-Criticality (MC) systems. MC applications are real-time, and ensuring their correctness requires meeting strict timing requirements as well as functional specifications. The correct design of such MC systems requires a thorough understanding of the system's functions and of their importance to the system. A failure or deadline miss has a different impact depending on the criticality level of the affected function, ranging from no effect to catastrophic consequences. A failure in the execution of tasks with higher criticality levels (HC tasks) may lead to system failure and cause irreparable damage, whereas Low-Criticality (LC) tasks assist the system in carrying out its mission successfully, but their failure has less impact on the system's functionality and does not cause the system itself to fail.
To guarantee MC system safety, tasks are analyzed under different assumptions to obtain multiple Worst-Case Execution Times (WCETs), one per criticality level and operating mode of the system. If the execution time of at least one HC task exceeds its low WCET, the system switches from the low-criticality mode (LO mode) to the high-criticality mode (HI mode). All HC tasks then continue executing under their high WCETs to guarantee the system's safety. In the HI mode, all or some LC tasks are dropped or degraded in favor of the HC tasks to ensure their correct execution.
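The following minimal sketch illustrates this dual-WCET mode-switching rule. The task fields and the drop-all-LC policy in HI mode are simplifying assumptions for illustration, not the exact protocol analyzed in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    criticality: str   # "HC" or "LC"
    wcet_lo: float     # optimistic (low) WCET used in LO mode
    wcet_hi: float     # pessimistic (high) WCET used in HI mode

class MCModeMonitor:
    """Tracks the operating mode of a dual-criticality task set."""
    def __init__(self, tasks):
        self.tasks = tasks
        self.mode = "LO"

    def report_execution(self, task: Task, observed_time: float) -> None:
        # An HC task overrunning its low WCET triggers the LO -> HI switch.
        if task.criticality == "HC" and observed_time > task.wcet_lo:
            self.mode = "HI"

    def budget(self, task: Task) -> float:
        # Execution budget guaranteed to the task in the current mode.
        if self.mode == "LO":
            return task.wcet_lo
        # In HI mode, HC tasks receive their high WCET; LC tasks are
        # dropped (budget 0) under this simplified policy.
        return task.wcet_hi if task.criticality == "HC" else 0.0
```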
Determining an appropriate low WCET for each HC task is crucial for designing efficient MC systems and maximizing the Quality of Service (QoS). However, even when the low WCETs are set properly, dropping or degrading LC tasks in the HI mode is undesirable because of its negative impact on other functions or on the system's ability to accomplish its mission correctly. How to analyze LC-task dropping in the HI mode is therefore a significant challenge in designing efficient MC systems: the successful execution of all HC tasks must be guaranteed to prevent catastrophic damage while the QoS is improved.
Due to the continuous rise in the computational demand of MC tasks in safety-critical applications, such as autonomous-driving control, designers are motivated to deploy MC applications on multi-core platforms. Although the parallel execution offered by multi-core platforms helps to improve QoS and meet real-time requirements, the high power consumption and temperature of the cores may make the system more susceptible to failures and instability, which is not acceptable in MC applications. Improving QoS while managing power consumption and guaranteeing real-time constraints is therefore the central issue in designing such MC systems on multi-core platforms.
This thesis addresses the challenges associated with efficient MC system design. We first focus on application analysis and propose a novel approach to determining appropriate low WCETs that provides a reasonable trade-off between the number of LC tasks scheduled at design-time and the probability of mode switching at run-time, thereby improving system utilization and QoS. The approach uses an analytic scheme based on the Chebyshev theorem to obtain the low WCETs at design-time. We also show the relationship between the low WCETs and the mode-switching probability, and formulate and solve the problem of improving resource utilization while reducing the mode-switching probability. Further, we analyze LC-task dropping in the HI mode to improve QoS. We first propose a heuristic that defines a new metric determining the number of allowable drops in the HI mode, and then develop the task-schedulability analysis based on this metric. Since the worst-case scenario rarely occurs at run-time, we then propose a learning-based drop-aware task-scheduling mechanism that carefully monitors changes in the behavior of the MC system at run-time and exploits dynamic slack to improve the QoS.
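As a rough illustration of how such an analytic bound can be derived, the sketch below uses the one-sided Chebyshev (Cantelli) inequality to pick a low WCET that caps the per-job overrun probability. The concrete inequality variant and the execution-time statistics are assumptions for illustration, not necessarily the exact formulation used in the thesis.

```python
import math

def low_wcet_from_chebyshev(mean: float, std: float, p_switch: float) -> float:
    """Return a candidate low WCET for an HC task.

    One-sided Chebyshev (Cantelli) inequality:
        P(X >= mean + k*std) <= 1 / (1 + k^2)
    Choosing k so that 1 / (1 + k^2) = p_switch bounds the probability that a
    single job overruns the low WCET (and hence triggers a mode switch).
    """
    k = math.sqrt((1.0 - p_switch) / p_switch)
    return mean + k * std

# Example with made-up execution-time statistics for one HC task (milliseconds).
mean_et, std_et = 4.0, 0.5
for p in (0.10, 0.01, 0.001):
    print(p, round(low_wcet_from_chebyshev(mean_et, std_et, p), 2))
# Lower target switching probabilities push the low WCET toward the high WCET,
# shrinking the LO-mode slack available for scheduling LC tasks: exactly the
# trade-off the design-time analysis navigates.
```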
Another critical design challenge is how to improve QoS by exploiting the parallelism of multi-core platforms while managing their power consumption and temperature. We develop a tree of possible task mappings and schedules at design-time to cover all possible task-overrun scenarios and to reduce the LC-task drop rate in the HI mode while managing power and temperature in each scheduling scenario. Since dynamic slack arises when tasks finish early at run-time, we also propose an online approach that reduces power consumption and peak temperature through low-power techniques such as DVFS and task re-mapping while preserving the QoS. Specifically, our approach looks several tasks ahead to determine the most appropriate task for slack assignment, i.e., the one whose scaling has the most significant effect on power consumption and temperature (a sketch of such a look-ahead heuristic follows the chapter outline below). However, changing the frequency, selecting a proper task for slack assignment, and choosing a suitable core for task re-mapping at run-time can be time-consuming and may cause deadline violations. Therefore, we analyze and optimize the run-time scheduler.
1. Introduction
1.1. Mixed-Criticality Application Design
1.2. Mixed-Criticality Hardware Design
1.3. Certain Challenges and Questions
1.4. Thesis Key Contributions
1.4.1. Application Analysis and Modeling
1.4.2. Multi-Core Mixed-Criticality System Design
1.5. Thesis Overview
2. Preliminaries and Literature Reviews
2.1. Preliminaries
2.1.1. Mixed-Criticality Systems
2.1.2. Fault-Tolerance, Fault Model and Safety Requirements
2.1.3. Hardware Architectural Modeling
2.1.4. Low-Power Techniques and Power Consumption Model
2.2. Related Works
2.2.1. Mixed-Criticality Task Scheduling Mechanisms
2.2.2. QoS Improvement Methods in Mixed-Criticality Systems
2.2.3. QoS-Aware Power and Thermal Management in Multi-Core Mixed-Criticality Systems
2.3. Conclusion
3. Bounding Time in Mixed-Criticality Systems
3.1. BOT-MICS: A Design-Time WCET Adjustment Approach
3.1.1. Motivational Example
3.1.2. BOT-MICS in Detail
3.1.3. Evaluation
3.2. A Run-Time WCET Adjustment Approach
3.2.1. Motivational Example
3.2.2. ADAPTIVE in Detail
3.2.3. Evaluation
3.3. Conclusion
4. Safety- and Task-Drop-Aware Mixed-Criticality Task Scheduling
4.1. Problem Objectives and Motivational Example
4.2. FANTOM in Detail
4.2.1. Safety Quantification
4.2.2. MC Tasks Utilization Bounds Definition
4.2.3. Scheduling Analysis
4.2.4. System Upper Bound Utilization
4.2.5. A General Design Time Scheduling Algorithm
4.3. Evaluation
4.3.1. Evaluation with Real-Life Benchmarks
4.3.2. Evaluation with Synthetic Task Sets
4.4. Conclusion
5. Learning-Based Drop-Aware Mixed-Criticality Task Scheduling
5.1. Motivational Example and Problem Statement
5.2. Proposed Method in Detail
5.2.1. An Overview of the Design-Time Approach
5.2.2. Run-Time Approach: Employment of SOLID
5.2.3. LIQUID Approach
5.3. Evaluation
5.3.1. Evaluation with Real-Life Benchmarks
5.3.2. Evaluation with Synthetic Task Sets
5.3.3. Investigating the Timing and Memory Overheads of ML Technique
5.4. Conclusion
6. Fault-Tolerance and Power-Aware Multi-Core Mixed-Criticality System Design
6.1. Problem Objectives and Motivational Example
6.2. Design Methodology
6.3. Tree Generation and Fault-Tolerant Scheduling and Mapping
6.3.1. Making Scheduling Tree
6.3.2. Mapping and Scheduling
6.3.3. Time Complexity Analysis
6.3.4. Memory Space Analysis
6.4. Evaluation
6.4.1. Experimental Setup
6.4.2. Analyzing the Tree Construction Time
6.4.3. Analyzing the Run-Time Timing Overhead
6.4.4. Peak Power Management and Thermal Distribution for Real-Life and Synthetic Applications
6.4.5. Analyzing the QoS of LC Tasks
6.4.6. Analyzing the Peak Power Consumption and Maximum Temperature
6.4.7. Effect of Varying Different Parameters on Acceptance Ratio
6.4.8. Investigating Different Approaches at Run-Time
6.5. Conclusion
7. QoS- and Power-Aware Run-Time Scheduler for Multi-Core Mixed-Criticality Systems
7.1. Research Questions, Objectives and Motivational Example
7.2. Design-Time Approach
7.3. Run-Time Mixed-Criticality Scheduler
7.3.1. Selecting the Appropriate Task to Assign Slack
7.3.2. Re-Mapping Technique
7.3.3. Run-Time Management Algorithm
7.3.4. DVFS Governor in Clustered Multi-Core Platforms
7.4. Run-Time Scheduler Algorithm Optimization
7.5. Evaluation
7.5.1. Experimental Setup
7.5.2. Analyzing the Relevance Between a Core Temperature and Energy Consumption
7.5.3. The Effect of Varying Parameters of Cost Functions
7.5.4. The Optimum Number of Tasks to Look-Ahead and the Effect of Task Re-mapping
7.5.5. The Analysis of Scheduler Timings Overhead on Different Real Platforms
7.5.6. The Latency of Changing Frequency in Real Platform
7.5.7. The Effect of Latency on System Schedulability
7.5.8. The Analysis of the Proposed Method on Peak Power, Energy and Maximum Temperature Improvement
7.5.9. The Analysis of the Proposed Method on Peak Power, Energy and Maximum Temperature Improvement in a Multi-Core Platform Based on the ODROID-XU3 Architecture
7.5.10. Evaluation of Running Real MC Task Graph Model (Unmanned Air Vehicle) on Real Platform
7.6. Conclusion
8. Conclusion and Future Work
8.1. Conclusions
8.2. Future Work
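A minimal sketch of the look-ahead slack-assignment idea referenced in the abstract above: among the next few ready tasks, assign the available slack to the one whose DVFS slow-down saves the most dynamic energy while still meeting its deadline. The cubic power model, the task fields, and the selection rule are illustrative assumptions, not the thesis's exact algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReadyTask:
    name: str
    wcet_at_fmax: float   # remaining WCET at the maximum frequency (ms)
    deadline: float       # deadline relative to the current instant (ms)

def dynamic_power(freq_ratio: float) -> float:
    # Simplified CMOS model: P_dyn ~ C * V^2 * f with V roughly proportional
    # to f, giving P ~ f^3 (normalized to 1.0 at f_max).
    return freq_ratio ** 3

def pick_task_for_slack(ready: List[ReadyTask], slack: float,
                        lookahead: int = 3) -> Optional[ReadyTask]:
    """Among the next `lookahead` ready tasks, choose the one whose
    slow-down by `slack` yields the largest energy saving without
    violating its deadline."""
    best, best_saving = None, 0.0
    for task in ready[:lookahead]:
        stretched = task.wcet_at_fmax + slack
        if stretched > task.deadline:            # slow-down would miss the deadline
            continue
        freq_ratio = task.wcet_at_fmax / stretched   # relative frequency needed
        # Energy = power * time; compare against running the task at f_max.
        saving = (dynamic_power(1.0) * task.wcet_at_fmax
                  - dynamic_power(freq_ratio) * stretched)
        if saving > best_saving:
            best, best_saving = task, saving
    return best
```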
|
23 |
Comprometimento, valores e crenças em escolas na Bahia: um estudo de caso da cultura da organização escolar / Nery, Márcia Oliveira, January 2005
This research set out to analyze the culture of the school organization through a comparative case study carried out in two schools, one public and one private. Using organizational-climate indicators, the study sought to identify teachers' perceptions of the factors related to school culture, grouped into two categories: a zone of visibility and a zone of invisibility. The first category comprises the factors labeled commitment to teaching work, commitment to student learning, and commitment to one's own professional training and qualification as a teacher, conceptual elements that have to be written down, since they must express the representations and the language used in school documents. The second category, the zone of invisibility, comprises the invisible elements (non-verbal language, ceremonies, rites, fads, manifest social behaviors), that is, the values and beliefs that represent the school's everyday practices. The study thus investigated how the school establishes its social configuration and consolidates itself as a living, dynamic organization whose internal arrangement and functioning result from the interplay between external influences and the interrelations of its different actors, even while subject to the norms and external control of the school systems. The search for an understanding of school culture led to the analysis of its symbols, artifacts, beliefs, and values. The results showed that teachers at the public and private schools share the same beliefs but differ in their values and in the factors related to commitment, especially on questions concerning the learning of poorer students. Another relevant finding concerns the importance teachers place on investing in their own continuing education: even when the school itself promotes courses and pedagogical events, the public-school teachers showed lower participation rates than the private-school teachers. Studies on culture, job satisfaction, and other dimensions of organizational behavior in Brazilian schools are scarce. This study aims to contribute to expanding knowledge in the field of educational administration and to understanding how the cultural characteristics of pedagogical practices, at times conservative and at times innovative, are equally present in schools that serve distinct realities and are organized and managed in different ways; interposing and superimposing themselves on various cultural elements that run counter to them, these practices nevertheless gain their own strength and identity, shaping in a singular way each school's everyday pedagogical practices and its school culture.
|
24 |
Análise de ações gestoras de uma escola estadual no município de Carauari - AM com bom desempenho nas avaliações externas / Amorim, Juarez Damasceno de, 21 February 2017
This dissertation was developed within the Professional Master's in Management and Evaluation of Education (PPGP) of the Center for Public Policies and Education Evaluation of the Federal University of Juiz de Fora (CAEd/UFJF). The management case studied analyzed the management practices (administrative and pedagogical) of the Escola Estadual Professora Nazaré Varela state school, which belongs to the Regional Education Coordination of Carauari (CREC) in the municipality of Carauari, Amazonas. This school has stood out in its region for the evolution of its performance in state and federal external evaluations, namely the Amazonas System for the Assessment of Educational Performance (SADEAM) from 2008 to 2014 and the Basic Education Development Index (IDEB) from 2009 to 2013. To understand the reasons for this difference, the research question was: which management actions, focusing on pedagogical actions, may have contributed to the improvement of the school's external evaluation results? Based on this question, the study analyzed which factors may have contributed to this evolution in school performance, focusing on the projects and practices of the management team that may have contributed positively to the improvement. The methodology comprised documentary analysis of class diaries, course plans, pedagogical intervention plans, and minutes of pedagogical and planning meetings; semi-structured interviews with the principal, the support teacher, and the 5th-grade teachers; and questionnaires applied to 5th-grade students. The literature review was based on the works of Sousa and Oliveira (2010), Franco, Brooke and Alves (2008), Cardelli and Elliot (2012), Lück (2008; 2009) and Silva (2014), as well as on theorists who study the relationship between performance and management factors, such as Chiavenato (2005) and Herzberg (1973).
|
25 |
Aplicação da ferramenta de gerenciamento de risco HFMEA no setor de expurgo do centro de material e esterilização / Application of the HFMEA risk management tool in the cleaning sector of the central supply and sterilization center / Sousa, Michele Cristina Almeida (1986-), 06 April 2014
Advisor: Sérgio Santos Mühlen / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Introduction: The Material and Sterilization Center (CME) in health care facilities must ensure the quality of medical-hospital instruments for safe patient care. For the sterilization process to be carried out properly, it is essential that the article to be sterilized is free of organic matter and of certain inorganic substances. If the activities of receiving, cleaning, rinsing and drying surgical instruments, carried out in the decontamination (cleaning) area, are performed inadequately, they can compromise the complete cleaning of the instruments. These failures can have several origins and point to important factors that must be identified in order to qualify the work process. Objective: To assess critical points in the processes and identify opportunities for improvement in the activities of the decontamination area, the Healthcare Failure Mode and Effect Analysis (HFMEA) technique was applied to the procedures performed in that sector. Methods: HFMEA is a tool for systematically evaluating the critical points of a process, classifying failure modes according to the severity of their potential effects and their probability of occurrence, which allows the risks to be prioritized for control. For its application, a multidisciplinary team was formed, the process was mapped, a risk analysis was carried out, and the related failure modes were evaluated. Results: 89 failure modes involving the cleaning and drying of the instruments were found, and 262 potential causes associated with these failure modes were identified. Of this total, 131 potential causes (50%) were analyzed and selected for proposing improvement measures and actions. Finally, a proposal of risk-related actions and measures covering the decontamination procedures, management, work environment, equipment, supplies and operational staff was prepared; if adopted, it will help the management team with risk management. Conclusion: The application of the HFMEA tool made it possible to diagnose the critical points of the process and to propose improvement solutions, condensed into a set of actions and measures that can help the CME management team incorporate safer routines into its activities.
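As an illustration of the prioritization step described above, the sketch below computes an HFMEA-style hazard score as the product of severity and probability ratings and sorts failure modes by it. The 1-4 rating scales and the example failure modes are assumptions for illustration, not data from the dissertation.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int      # 1 (minor) .. 4 (catastrophic), HFMEA-style rating
    probability: int   # 1 (remote) .. 4 (frequent)

    @property
    def hazard_score(self) -> int:
        # HFMEA prioritizes failure modes by severity times probability.
        return self.severity * self.probability

# Hypothetical failure modes for the cleaning (decontamination) area.
modes = [
    FailureMode("Residual organic matter after manual cleaning", 4, 2),
    FailureMode("Instrument not fully dried before packaging", 3, 3),
    FailureMode("Lumened instruments not rinsed internally", 4, 3),
]

for fm in sorted(modes, key=lambda m: m.hazard_score, reverse=True):
    print(f"{fm.hazard_score:>2}  {fm.description}")
# The highest-scoring failure modes are addressed first, typically followed by
# a decision-tree check on criticality, detectability, and existing controls.
```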
|
26 |
A Questão dos Conselhos Escolares da Escola Pública Brasileira / Silva, Evaldo Eliezer da, January 2020
Advisor: Hilda Maria Gonçaves da Silva / Educational policy is governed by a series of laws that sanction and regulate it in order to make it enforceable. It falls within the context of a universal, free, non-contributory and compulsory policy for public basic education, which is structured in three levels: early childhood education; elementary education in its initial and final years; and high school. Within this scope, the School Councils are outlined: collegiate bodies of fundamental importance for representing the constituent elements of the school and local community, so that they can share decision-making power and co-responsibility for the school through democratic management. The perspective brought by this management model centers on the possibility of bringing together the main elements of the educational-policy framework, making their rights explicit and giving them a turn, a voice and a vote, so that they can participate politically and socially. Political participation is characterized by the ethical duty and the primordial need of human nature to live together in a given society. The habit of intense and constant participation prevents unjust action by some to the detriment of all. However, it is believed that collegiate members, in general, do not know their own duties and do not participate effectively in the School Councils, which aim to guarantee a process of improving education, proposed for the formation of citizens through the participation of the entire... (complete abstract available through the electronic access below)
|
27 |
Varför misslyckas IT-projekt? : En sammanställning av 30 års forskning om risker, orsaker och möjligheter - kan DevOps vara lösningen? / Why do IT-projects fail? : a compilation of 30 years of research on risks, causes and challenges - can DevOps be the solution? / Allgulin, Jonathan; Hansen, Daniel, January 2020
The literature shows that IT projects have failed to a large extent for a long time; studies indicate that up to 80% of all IT projects are considered failures. Some studies suggest that agile methods, such as DevOps, can be the solution to achieving more successful IT projects. The purpose of this study is to contribute to an understanding of the risks related to IT projects identified in the literature between 1990 and 2020, and to investigate whether methods such as DevOps are the right way to reduce IT-project failure. The study maps the most common causes of failed IT projects by categorizing the most frequent risks identified in research from 1990 to 2020, carried out and presented as a literature study. The literature study results in an overview of how the literature on the most frequent risks has evolved over 30 years. The mapping shows that studies between 1990 and 2010 exhibited a broad spread of risks related to IT projects across all categories. In more recent years, 2010 to 2020, the literature has focused on management-, process- and personnel-related risks, which is also supported by the respondents. We also studied DevOps and conducted two semi-structured interviews with respondents who have experience of DevOps, agile methods and managing IT projects. The result of the study is clear: theory and empirical data agree that agile methods are the right way to go. The respondents consider DevOps an effective method for achieving more successful IT projects. The two respondents also verify the risk categories derived from the literature study and confirm that these are the risks present in their own IT projects.
|
28 |
To trender møtes – ISO og miljøstandardene : The International Organization for Standardization (ISO) og deres miljøstandarder (14000 familien) / The International Organization for Standardization (ISO) and their environmental standards (the 14000 family) / Bårnås, Kristin Stanwick, January 2013
ISO's environmental standards were first published in 1996. The work, however, had already started in 1991, because ISO had by then become a recognized organization for non-technical standards as well, and because of the increased international focus on the environment. The main participants in the work were managers and key people from medium-sized and large companies, which contributed to the focus being placed on environmental management systems rather than on concrete environmental requirements. During the first two years the work was carried out through the Strategic Advisory Group on the Environment (SAGE), a collaboration between ISO and IEC. In 1993 a technical committee was established; it was named TC 207: environmental management, and has its secretariat in Canada.
|