51 |
Die Bedeutung des Thalamus für das menschliche Handlungsüberwachungssystem im fronto-striato-thalamo-corticalen Netzwerk [The role of the thalamus in the human performance-monitoring system within the fronto-striato-thalamo-cortical network]
Seifert, Sebastian 20 September 2012 (has links)
A functioning performance-monitoring system is an essential prerequisite for goal-directed human behavior. It allows errors to be registered and processed so that behavior can subsequently be better adapted to the situation at hand. An important neuroanatomical correlate of this performance-monitoring system is the anterior portion of the midcingulate cortex (anterior midcingulate cortex, aMCC), which is closely linked with the basal ganglia and the lateral prefrontal cortex in error processing. The present work examined the role of the thalamus in the error-processing network in more detail. Using diffusion-weighted tractography in 16 healthy subjects, it was shown that the ventral anterior nucleus (VA) and the ventral lateral anterior nucleus (VLa) in particular have quantitatively stronger fiber connections with the aMCC than the remaining thalamic nuclei. Furthermore, 15 patients with thalamic lesions showed error-specific behavioral differences in the Eriksen flanker task compared with a healthy control group. Although the error rates of the patients and the controls were nearly identical, the patients were significantly worse at detecting their errors as such and consequently also worse at adjusting their behavior after an error. The EEG data showed a significantly reduced amplitude of the error-related negativity (ERN, an event-related brain potential elicited by errors, e.g., in flanker tasks) in the patient group compared with the control group. In the 6 patients with lesions of the VA and VLa nuclei the ERN was almost completely abolished, whereas in the 9 patients whose lesions spared VA and VLa the ERN was merely reduced. / Performance monitoring is an essential prerequisite of successful goal-directed behavior. 
Research of the last two decades implicates the anterior midcingulate cortex (aMCC) in the human medial frontal cortex and frontostriatal basal ganglia circuits in this function. Here, we addressed the function of the thalamus in detecting errors and adjusting behavior accordingly. Using diffusion-based tractography we found that, among the thalamic nuclei, the ventral anterior and ventral lateral anterior nuclei (VA, VLa) have the relatively strongest connectivity with the aMCC. Patients with focal thalamic lesions showed diminished error-related negativity, behavioral error detection, and post-error adjustments. When the lesions specifically affected the thalamic VA/VLa nuclei these effects were significantly more pronounced, reflected in the complete absence of the error-related negativity. These results reveal that the thalamus, particularly its VA/VLa region, is a necessary constituent of the performance-monitoring network, anatomically well connected and functionally closely interacting with the aMCC.
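By way of illustration (a hypothetical sketch with synthetic data, not the analysis pipeline used in the thesis), the ERN is typically quantified by averaging response-locked EEG epochs separately for error and correct trials and taking the peak negativity of the error-minus-correct difference wave in a short post-response window:

```python
import numpy as np

def response_locked_average(epochs):
    """Average response-locked epochs (trials x samples) into an ERP."""
    return epochs.mean(axis=0)

def ern_amplitude(error_epochs, correct_epochs, window):
    """Peak negativity of the error-minus-correct difference wave
    within a post-response window given as (start, end) sample indices."""
    diff = response_locked_average(error_epochs) - response_locked_average(correct_epochs)
    lo, hi = window
    return diff[lo:hi].min()

# Synthetic example: error trials carry a negative deflection about
# 50 samples after the response; correct trials contain only noise.
rng = np.random.default_rng(0)
t = np.arange(200)
deflection = -5.0 * np.exp(-((t - 50) ** 2) / (2 * 10 ** 2))
error_epochs = deflection + 0.1 * rng.standard_normal((30, 200))
correct_epochs = 0.1 * rng.standard_normal((30, 200))

amp = ern_amplitude(error_epochs, correct_epochs, window=(25, 100))
print(round(amp, 1))  # close to -5.0 for this synthetic signal
```

A reduced-amplitude ERN, as reported for the thalamic-lesion patients, would correspond to a shallower minimum of this difference wave.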
|
52 |
Concentrated network tomography and bound-based network tomography
Feng, Cuiying 17 September 2020 (has links)
Modern computer networks pose a great challenge for monitoring the network performance due
to their large scale and high complexity. Directly measuring the performance of internal network
elements is prohibitive due to the tremendous overhead. Alternatively, network tomography, a
technique that infers the unobserved network characteristics (e.g., link delays) from a small number
of measurements (e.g., end-to-end path delays), is a promising solution for monitoring the internal
network state in an efficient and effective manner. This thesis introduces two variants of network
tomography: concentrated network tomography and bound-based network tomography. The former
is motivated by the practical need of network operators to concentrate on the performance of
critical paths; the latter by the need to estimate performance bounds whenever exact
performance values cannot be determined.
This thesis tackles core technical difficulties in concentrated network tomography and bound-based
network tomography, including (1) the path identifiability problem and the monitor deployment
strategy for identifying a set of target paths, (2) strategies for controlling the total error bound
as well as the maximum error bound over all network links, and (3) methods of constructing
measurement paths to obtain the tightest total error bound. We evaluate all the solutions with real-world
Internet service provider (ISP) networks. The theoretical results and the algorithms developed in
this thesis are directly applicable to network performance management in various types of networks,
where directly measuring all links is practically impossible. / Graduate
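To make the inference problem concrete (a toy sketch with an assumed three-link topology and made-up delays, not the algorithms developed in the thesis): classical network tomography treats each measured path as a linear equation over the unknown link delays. When the path-link matrix has full column rank, the link delays are identifiable; when it does not, only bounds can be derived, which is the setting bound-based tomography addresses:

```python
import numpy as np

# Toy topology: 3 links, 3 measurement paths given as link-incidence rows.
# Row i has a 1 for every link that path i traverses.
A = np.array([
    [1, 1, 0],   # path 1 traverses links 1 and 2
    [0, 1, 1],   # path 2 traverses links 2 and 3
    [1, 0, 1],   # path 3 traverses links 1 and 3
], dtype=float)

true_link_delays = np.array([2.0, 3.0, 5.0])
path_delays = A @ true_link_delays  # what end-to-end probes would observe

# With full column rank, the link delays are uniquely identifiable:
est, *_ = np.linalg.lstsq(A, path_delays, rcond=None)
print(est)  # recovers [2. 3. 5.]

# Dropping a path makes the system underdetermined; then only bounds
# remain, e.g. each nonnegative link delay is at most the smallest
# delay of any measured path that traverses it.
A2, d2 = A[:2], path_delays[:2]
upper = np.array([d2[A2[:, j] == 1].min() for j in range(3)])
print(upper)  # [5. 5. 8.]
```

Constructing measurement paths that shrink such bounds as much as possible is, loosely, the optimization problem behind the "tightest total error bound" objective mentioned above.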
|
53 |
An assessment of the performance management system for senior managers at Chris Hani district municipality
Sotenjwa, Fundiswa Patience January 2021 (has links)
Masters in Public Administration - MPA / This study examines the implementation of the Performance Management System
(PMS) in local government, with specific reference to municipalities in the Chris Hani
District in the Eastern Cape. The research is premised on the assumption that even
though a PMS has been adopted in municipalities with the aim of assisting them to
function effectively, municipalities in the Eastern Cape, particularly in the Chris Hani
District, continue to experience performance challenges.
The study presupposes that the implementation of the performance management
system at the municipality, whether effective or ineffective, has a direct relationship
with the performance of the municipality. The study includes a historical overview of
local government with the aim of understanding government reforms introduced to
assist municipalities to build their capacity to enable them to perform well.
It utilises purposive sampling to identify the most appropriate participants based on
the research objectives. The data was collected through semi-structured interviews
and a review of relevant documents. As part of the analysis, summaries of the
responses of interviewees were written in a meaningful way in line with the thematic
areas determined in accordance with the research objectives. The municipality uses
the Balanced Scorecard as a performance management tool to determine the
performance level of individuals and to detect areas that need corrective measures
across the local municipalities. There are inconsistencies in the implementation,
depending on how well the particular local municipality is resourced. In any
municipality, the effective implementation of the PMS requires the municipality to
reward excellent performers, which requires increases in the personnel budget to
cater for monetary rewards.
|
54 |
Meaningful Metrics in Software Engineering : The Value and Risks of Using Repository Metrics in a Company
Jacobsson, Frida January 2023 (has links)
Many large companies use various business intelligence solutions to filter, process, and visualize their software source code repository data. These tools focus on improving continuous integration and are used to gain insights about people, products, and projects in the organization. However, research has shown that the quality of measurement programs in software engineering is often low, since the science behind them is unexplored. In addition, code repositories contain a considerable amount of information about the developers, and several ethical and legal aspects need to be considered before using these tools, such as compliance with the GDPR. This thesis aims to investigate how companies can use repository metrics and these business intelligence tools in a safe and valuable way. In order to answer the research questions, a case study was conducted in a Swedish company, and repository metrics from a real business intelligence tool were analyzed based on several questions related to software measurement theory, ethical and legal aspects of software engineering and metrics, and institutional theory. The results show how these metrics can be of value to a company in different ways, for instance by visualizing collaboration in a project or by differentiating between read and active repositories. These metrics could also be valuable when linked to other data in the company, such as bug reports and repository downloads. The findings show that the visualizations could be perceived as a form of performance monitoring by developers, causing stress and unhealthy incentives in the organization. In addition, repository metrics are based on identifiable data from Git, which under the GDPR is classified as personal data. Further, there is a risk that these tools are used simply because they are available, as a way to legitimize the company. 
In order to mitigate these risks, the thesis states that the metrics should be anonymized, and that the focus of the metrics should be on teams and processes rather than individual developers. The teams themselves should take part in creating the Goal-Question-Metric definitions that link the metrics to what the teams wish to achieve.
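The two mitigations recommended above can be sketched in a few lines (the emails, team names, and salt below are invented for illustration; this is not the company's actual tool): identifiable Git author data is replaced with salted hashes, and activity is aggregated per team rather than per individual.

```python
import hashlib
from collections import Counter

def pseudonymize(email, salt="team-metrics"):
    """Replace an identifiable Git author email with a salted hash so
    per-person activity cannot be read directly from the metrics."""
    return hashlib.sha256((salt + email).encode()).hexdigest()[:12]

def team_commit_counts(commits, team_of):
    """Aggregate commit counts per team instead of per individual.
    `commits` is a list of author emails; `team_of` maps email -> team."""
    return Counter(team_of[email] for email in commits)

# Hypothetical commit log (all identifiers are made up).
commits = ["a@x.se", "b@x.se", "a@x.se", "c@x.se"]
team_of = {"a@x.se": "platform", "b@x.se": "platform", "c@x.se": "apps"}

counts = team_commit_counts(commits, team_of)
print(counts)                              # Counter({'platform': 3, 'apps': 1})
print(pseudonymize("a@x.se") != "a@x.se")  # True: no raw identity in the output
```

Note that a salted hash is pseudonymization, not full anonymization in the GDPR sense; whether it suffices depends on who holds the salt and the mapping.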
|
55 |
PathCase SB: Automating Performance Monitoring And Bugs Detection
Azzam, Yves Said 24 August 2012 (has links)
No description available.
|
56 |
True-time all optical performance monitoring by means of optical correlation
Abou-Galala, Feras Moustafa 06 June 2007 (has links)
No description available.
|
57 |
Analytics adoption in manufacturing – benefits, challenges and enablers
Cupertino Ribeiro, Junia January 2022 (has links)
Digitalisation is changing the manufacturing landscape with promises to enhance industrial competitiveness through new technologies and business approaches. Various data-driven applications, enabled by digital technologies, can support process monitoring, production quality control, smart planning, and optimisation by making relevant data available and accessible to different roles in production. In this context, analytics is a relevant tool for improved decision-making for production activities since it entails extracting insights from data to create value for decision-makers. However, previous research has identified a lack of guidelines to manage the technological implementation needed for analytics. Furthermore, there are few studies in a real manufacturing setting that describe how companies are exploiting analytics. To address this gap, the purpose of this study is to investigate the implementation and use of analytics for production activities in the manufacturing industry. To fulfil the purpose of the study, the following research questions were formulated: RQ1: What does the adoption of analytics look like and what results can it bring to production activities of a manufacturing company? RQ2: What are the challenges and enablers for analytics adoption in production activities of a manufacturing company? This study was based on a literature review in addition to a single case study in a large multinational machinery manufacturing company. Data collection included observations and semi-structured interviews about three analytics use cases: production performance follow-up, production disturbance tracking, and production planning and scheduling. The first use case was based on the Design Thinking process and tools, while the other two cases were narrower in scope and did not cover the development process in detail. Qualitative data analysis was the method used to examine the empirical and theoretical data. 
The empirical findings indicate that analytics solutions for production activities do not need to be sophisticated and characterised by high automation and complexity to bring meaningful value to manufacturing companies. The three analytics use cases investigated improved the effectiveness and efficiency of production performance follow-up, production disturbance tracking, and production planning and scheduling activities. The main contributor to these benefits was a higher level of transparency of the factory manufacturing operations, which in turn aids collaboration, preventive decision-making, prioritization and better resource allocation. The identified challenges for analytics adoption were related to information systems and to people and organization. In order to address these challenges, this study suggests that manufacturing companies should focus on securing sponsorship from senior management and leadership, implementing cultural change to embrace fact-based decisions, training the existing workforce in analytics skills, and empowering and recruiting people with digital skills. Moreover, it is recommended that manufacturing companies integrate information systems vertically and horizontally, link and aggregate data to deliver contextualised information to different roles, and finally, invest in data-related Industry 4.0 technologies to capture, transfer, store, and process manufacturing data efficiently.
|
58 |
EMPIRICALLY-BASED INTERVENTIONS FOR ERROR MONITORING DEFICITS IN DEMENTIA
Bettcher, Brianne Magouirk January 2010 (has links)
The diminished ability to perform everyday tasks is a salient problem for individuals diagnosed with dementia. Recent research suggests that dementia patients detect significantly fewer action errors than age-matched controls; however, very little is known about the derivation of their error monitoring difficulties. The primary aims of my dissertation were to evaluate a novel, task training action intervention (TT-NAT) designed to increase error monitoring in dementia patients, and to pinpoint the relation between error monitoring and neuropsychological processes in participants who receive the task training intervention. Results indicated that dementia participants in the TT-NAT condition produced fewer total errors and detected significantly more of their errors than individuals in the Standard condition (z = 3.0 and t = 3.36, respectively; p < .05). Error detection in the TT-NAT condition was strongly related to the language/semantic knowledge composite index only (r = .57, p = .00), whereas it was moderately related to both the language and executive composite indices in the Standard condition. No differences in error correction rates were noted, although patients in all groups corrected the majority of errors detected. The findings suggest that the TT-NAT may be a promising intervention for error monitoring deficits in dementia patients, and have considerable implications for neuropsychological rehabilitation. / Psychology
|
59 |
Automated Vision-Based Tracking and Action Recognition of Earthmoving Construction Operations
Heydarian, Arsalan 06 June 2012 (has links)
The current practice of construction productivity and emission monitoring is performed by either manual stopwatch studies which are significantly labor intensive and subject to human errors, or by the use of RFID and GPS tracking devices which may be costly and impractical. To address these limitations, a novel computer vision based method for automated 2D tracking, 3D localization, and action recognition of construction equipment from different camera viewpoints is presented. In the proposed method, a new algorithm based on Histograms of Oriented Gradients and hue-saturation Colors (HOG+C) is used for 2D tracking of the earthmoving equipment. Once the equipment is detected, using a Direct Linear Transformation followed by a non-linear optimization, their positions are localized in 3D. In order to automatically analyze the performance of these operations, a new algorithm to recognize actions of the equipment is developed. First, a video is represented as a collection of spatio-temporal features by extracting space-time interest points and describing each with a Histogram of Oriented Gradients (HOG). The algorithm automatically learns the distributions of these features by clustering their HOG descriptors. Equipment action categories are then learned using a multi-class binary Support Vector Machine (SVM) classifier. Given a novel video sequence, the proposed method recognizes and localizes equipment actions. The proposed method has been exhaustively tested on 859 videos from earthmoving operations. Experimental results with an average accuracy of 86.33% and 98.33% for excavator and truck action recognition respectively, reflect the promise of the proposed method for automated performance monitoring. / Master of Science
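As a highly simplified illustration of the idea behind HOG-style recognition (a toy stand-in with synthetic frames, not the thesis's HOG+C tracker or its SVM classifier): histograms of gradient orientations separate images dominated by differently oriented structure, and a classifier can then be trained on those descriptors. A nearest-centroid rule stands in here for the multi-class SVM.

```python
import numpy as np

def orientation_histogram(img, bins=9):
    """A miniature HOG-style descriptor: a histogram of gradient
    orientations over the whole image, weighted by gradient magnitude
    (real HOG additionally uses cells, blocks, and normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientations
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

# Two toy "action" classes: frames dominated by vertical vs horizontal edges.
def frame(vertical):
    img = np.zeros((32, 32))
    if vertical:
        img[:, 16:] = 1.0   # vertical edge -> horizontal gradients
    else:
        img[16:, :] = 1.0   # horizontal edge -> vertical gradients
    return img

train = [(orientation_histogram(frame(v)), v) for v in (True, False)]

def classify(img):
    """Nearest-centroid stand-in for the SVM in the described method."""
    h = orientation_histogram(img)
    return min(train, key=lambda tv: np.linalg.norm(h - tv[0]))[1]

print(classify(frame(True)), classify(frame(False)))  # True False
```

The thesis's spatio-temporal extension clusters such descriptors at space-time interest points before SVM training; the descriptor-then-classifier pipeline is the same shape.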
|
60 |
Experiments with the Pentium Performance Monitoring Counters
Agarwal, Gunjan 06 1900 (has links)
Performance monitoring counters are implemented in most recent microprocessors. In this thesis, we describe various performance measurement experiments, for both individual programs and the system as a whole, conducted on the Linux operating system using the Pentium performance counters. We carried out our measurements on a Pentium II microprocessor, whose performance counters can be configured to count events such as cache misses, TLB misses, and instructions executed. We used a low-overhead, minimally intrusive technique to access these counters.
We used these performance counters to measure the cache miss overheads due to context switches in a Linux system. Our methodology involves sampling the hardware counters every 50ps, with the sampling set up using interval-timer signals. We describe an analytical cache performance model for multiprogrammed conditions from the literature and validate it using the performance monitoring counters.
The thesis next explores the long-term performance of a system under different workload conditions. Various performance monitoring events - data cache misses, data TLB misses, data cache reads or writes, branches, etc. - are monitored over a 24-hour period. This is useful in identifying activities which cause loss of system performance. We used timer interrupts for sampling the performance counters.
We develop a profiling methodology that gives a perspective on the performance of the different functions of a program, based not only on execution time but also on data cache misses. Available tools like prof on Unix can pinpoint the regions of performance loss in a program, but they rely mainly on execution-time profiles, which give no insight into the program's cache behavior. Our methodology therefore characterizes each function of the program both by its execution time and by its cache behavior.
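The timer-driven sampling idea can be sketched in a few lines (a software analogue for a POSIX system; the thesis reads actual Pentium II event counters from its handler, which this sketch replaces with simple sample attribution to the interrupted function):

```python
import signal
import time
from collections import defaultdict

# Samples attributed to the function running when the timer fired --
# a software stand-in for reading hardware event counters (cache
# misses, TLB misses, ...) from a periodic timer interrupt.
samples = defaultdict(int)

def on_timer(signum, frame):
    # A real profiler would also read the event counter here and
    # charge the delta to the interrupted function.
    samples[frame.f_code.co_name] += 1

signal.signal(signal.SIGALRM, on_timer)
signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)  # fire every 10 ms

def busy():
    deadline = time.monotonic() + 0.1
    while time.monotonic() < deadline:
        pass  # CPU-bound work to be profiled

busy()
signal.setitimer(signal.ITIMER_REAL, 0, 0)  # stop the timer
print(samples["busy"] > 0)  # True: samples land in the busy loop
```

On modern Linux the same per-function attribution of hardware events is done with `perf record`, which samples the counters via the kernel's perf subsystem rather than a user-space timer.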
|