71 |
Type of automation failure: the effects on trust and reliance in automation. Johnson, Jason D. 01 December 2004
Past automation research has focused primarily on machine-related factors (e.g., automation reliability) and human-related factors (e.g., accountability). Other machine-related factors, such as the type of automation error (misses versus false alarms), have been noticeably overlooked. These two automation errors correspond to potential operator errors of omission (misses) and commission (false alarms), which have been shown to directly affect operators' trust in automation. This research examined how the type of automation error affects operator trust in, reliance on, and perceived reliability of automated decision aids. The present research confirmed that perceived reliability is often lower than actual system reliability and that false alarms reduce operator trust in the automation significantly more than misses do. In addition, this study found no apparent effect of age on the level of subjective trust within each experimental condition (i.e., type of automation error). There does, however, appear to be a significant difference in reliance on automation between older and younger adult participants, attributed to differences in perceived workload.
|
72 |
Generating and Analyzing Synthetic Workloads using Iterative Distillation. Kurmas, Zachary Alan 14 May 2004
The exponential growth in computing capability and use has produced a
high demand for large, high-performance storage systems.
Unfortunately, advances in storage system research have been limited
by (1) a lack of evaluation workloads, and (2) a limited understanding
of the interactions between workloads and storage systems. We have
developed a tool, the Distiller, which helps address both
limitations.
Our thesis is as follows: Given a storage system and a workload for
that system, one can automatically identify a set of workload
characteristics that describes a set of synthetic workloads with the
same performance as the workload they model. These representative
synthetic workloads increase the number of available workloads with
which storage systems can be evaluated. More importantly, the
characteristics also identify those workload properties that affect
disk array performance, thereby highlighting the interactions between
workloads and storage systems.
This dissertation presents the design and evaluation of the Distiller.
Specifically, our contributions are as follows. (1) We demonstrate
that the Distiller finds synthetic workloads with at most 10% error
for six out of the eight workloads we tested. (2) We also find that
all of the potential error metrics we use to compare workload
performance have limitations. Additionally, although the internal
threshold that determines which attributes the Distiller chooses has a
small effect on the accuracy of the final synthetic workloads, it has
a large effect on the Distiller's running time. Similarly, (3) we find
that we can reduce the precision with which we measure attributes and
only moderately reduce the resulting synthetic workload's
accuracy. Finally, (4) we show how to use the information contained in
the chosen attributes to predict the performance effects of modifying
the storage system's prefetch length and stripe unit size.
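
As a rough illustration of the iterative-distillation idea described above (add workload attributes until a synthetic workload reproduces the original workload's performance within a tolerance), the following Python sketch uses placeholder names (evaluate, synthesize, the candidate attribute list) that are assumptions for illustration, not the Distiller's actual interface; only the 10% error bound is taken from the abstract.

```python
import random

# Hypothetical sketch of iterative distillation: keep adding candidate workload
# attributes until the synthetic workload's performance metric is within a
# target error bound of the reference workload's.

TARGET_ERROR = 0.10  # "at most 10% error" from the abstract

def evaluate(trace):
    """Placeholder: replay a trace against the storage system (or a model of it)
    and return a performance metric; here, a trivial size-based proxy."""
    return sum(req["size"] for req in trace) / max(len(trace), 1)

def synthesize(reference, attributes):
    """Placeholder: generate a synthetic trace that preserves only the chosen
    attributes (e.g., request-size distribution) of the reference workload."""
    synthetic = []
    for req in reference:
        size = req["size"] if "size_distribution" in attributes else 4096
        synthetic.append({"size": size})
    return synthetic

def distill(reference, candidate_attributes):
    """Add attributes one at a time until the synthetic workload's error
    drops below the threshold."""
    chosen = set()
    target = evaluate(reference)
    for attr in candidate_attributes:
        synthetic = synthesize(reference, chosen)
        error = abs(evaluate(synthetic) - target) / target
        if error <= TARGET_ERROR:
            break                 # current attribute set is already good enough
        chosen.add(attr)          # attribute deemed performance-relevant
    return chosen

reference_trace = [{"size": random.choice([4096, 65536])} for _ in range(1000)]
print(distill(reference_trace, ["size_distribution", "arrival_pattern"]))
```

In practice the evaluation step would replay traces against a real disk array or simulator and compare full response-time distributions rather than a single scalar, but the control flow (evaluate, compare, refine the attribute set) is the same.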
|
73 |
The effect of workload and age on compliance with and reliance on an automated system. McBride, Sara E. 08 April 2010
Automation provides the opportunity for many tasks to be done more effectively and with greater safety. However, these benefits are unlikely to be attained if an automated system is designed without the human user in mind. Many characteristics of the human and the automation, such as trust and reliability, have been rigorously examined in the literature in an attempt to move towards a comprehensive understanding of the interaction between human and machine. However, workload has primarily been examined solely as an outcome variable, rather than as a predictor of compliance, reliance, and performance. This study was designed to gain a deeper understanding of whether the workload experienced by human operators influences compliance with and reliance on an automated warehouse management system, as well as to assess whether age-related differences exist in this interaction.
As workload increased, performance on the Receiving Packages task decreased among both younger and older adults. Although younger adults also experienced a negative effect of workload on Dispatching Trucks performance, older adults did not demonstrate a significant effect. The compliance data showed that as workload increased, younger adults complied with the automation to a greater degree, and this was true regardless of whether the automation was correct or incorrect. Older adults did not demonstrate a reliable effect of workload on compliance behavior. Regarding reliance behavior, as workload increased, reliance on the automation increased, but this effect was only observed among older adults. Again, this was true regardless of whether the automation was correct or incorrect. The finding that individuals may be more likely to comply with or rely on faulty automation in a high workload state than in a low workload state suggests that an operator's ability to detect automation errors may be compromised in high workload situations.
Overall, younger adults outperformed older adults on the task. Additionally, older adults complied with the system more than younger adults when the system erred, which may have contributed to their poorer performance. When older adults verified the instructions given by the automation, they spent longer doing so than younger adults, suggesting that older adults may experience a greater cost of verification. Further, older adults reported higher workload and greater trust in the system than younger adults, but both age groups perceived the reliability of the system quite accurately.
Understanding how workload and age influence automation use has implications for the way in which individuals are trained to interact with complex systems, as well as the situations in which automation implementation is determined to be appropriate.
|
74 |
Examination of the Effect of Child Abuse Case Characteristics on the Time a Caseworker Devotes to a Case. Card, Christopher J. 27 October 2010
This study used an explanatory research model to determine the effect of specific case characteristics on the time, and therefore workload, of a caseworker assigned to a case after the child abuse investigation is complete. The purpose of this study was to explain the relationship between child protection case characteristics and the time an assigned caseworker devotes to a case. With this knowledge, an informed methodology to assess the current workload of a caseworker could be used to assure that the caseworker is able to successfully complete the tasks required for each child assigned. Further, knowledge of the amount of time spent on a case with specific characteristics allows supervisors to assess and properly assign cases. Utilizing focus groups and a secondary data analysis of the Florida State Automated Child Welfare Service Information System (SACWSIS), the case characteristics of race/ethnicity, living arrangement, placement, removal, and prior removal were found to significantly affect caseworker time spent on a case. The case characteristics of gender, age, type of maltreatment, and disability were not found to affect caseworker time spent on a case.
|
75 |
Workload modeling and prediction for resources provisioning in cloud. Magalhães, Deborah Maria Vieira 23 February 2017
MAGALHÃES, Deborah Maria Vieira. Workload modeling and prediction for resources provisioning in cloud. 2017. 100 f. Tese (Doutorado em Engenharia de Teleinformática)–Centro de Tecnologia, Universidade Federal do Ceará, Fortaleza, 2017.
The evaluation of resource management policies in cloud environments is challenging since clouds are subject to varying demand from users with different profiles and Quality of Service (QoS) requirements. Factors such as the virtualization layer overhead, insufficient trace logs available for analysis, and mixed workloads composed of a wide variety of applications in a heterogeneous environment frustrate the modeling and characterization of applications hosted in the cloud. In this context, workload modeling and characterization is a fundamental step in systematizing the analysis and simulation of the performance of computational resource management policies, and a particularly useful strategy prior to the physical deployment of clouds. In this doctoral thesis, we propose a methodology for workload modeling and characterization to create resource utilization profiles in the cloud. Workload behavior patterns are identified and modeled in the form of statistical distributions, which are used by a predictive controller to establish the complex relationship between resource utilization and the response time metric. To this end, the controller adjusts resource utilization to keep the response time experienced by the user within an acceptable threshold. Hence, our proposal directly supports QoS-aware resource provisioning policies. The proposed methodology was validated with two applications with distinct characteristics: a scientific application for pulmonary disease diagnosis, and a web application that emulates an auction site. The performance models were compared with monitoring data through graphical and analytical methods to evaluate their accuracy, and all models presented a percentage error of less than 10%. The predictive controller was able to dynamically maintain the response time close to the expected trajectory, without Service Level Agreement (SLA) violations, with a Mean Absolute Percentage Error (MAPE) of 4.36%.
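
A minimal Python sketch of the two steps described in the abstract follows: fitting a statistical distribution to observed utilization samples, and a simple controller that nudges the utilization target so that a predicted response time stays near a setpoint. The queue-like response-time model, the controller gain, and all numeric values are illustrative assumptions, not the predictive controller designed in the thesis.

```python
import numpy as np
from scipy import stats

# (1) characterize the workload: fit a candidate distribution to CPU-utilization samples
cpu_samples = np.clip(np.random.normal(0.55, 0.12, 1000), 0, 1)
fitted_params = stats.norm.fit(cpu_samples)          # e.g., mean and standard deviation
print("fitted distribution parameters:", fitted_params)

# (2) a toy proportional controller around an assumed utilization -> response-time model
def predicted_response_time(utilization):
    """Assumed monotone model: response time grows sharply near saturation."""
    return 50.0 / max(1e-3, 1.0 - utilization)        # milliseconds

setpoint_ms = 120.0       # acceptable response-time threshold (assumption)
utilization = 0.80        # current utilization target
gain = 0.0005             # controller gain (assumption)

for step in range(20):
    error_ms = predicted_response_time(utilization) - setpoint_ms
    utilization -= gain * error_ms                    # lower the target if predicted too slow
    utilization = min(max(utilization, 0.05), 0.95)

print(f"settled utilization target: {utilization:.2f}, "
      f"predicted response time: {predicted_response_time(utilization):.1f} ms")
```

The fitted distributions stand in for the workload profiles mentioned in the abstract; a real deployment would feed measured, not simulated, utilization into the controller.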
|
76 |
Performance management and academic workload in higher education. Parsons, Philip Graham January 2000
Thesis (MTech(Human Resource Management))--Cape Technikon, Cape Town, 2000 / This research project investigated the need for a method of determining an equitable
workload for academic staff in higher education.
With the possibility of the introduction of a performance management system at the Cape
Technikon it became imperative that an agreed, objective and user-friendly method of
determining the workload of each academic member of staff be established.
The research project established the main parameters of the job of an academic staff member
and their dimensions that would influence both the quantity and quality of work produced.
They were established based on the views of a panel of educators drawn from a diverse range
of disciplines.
Using the identified dimensions an algorithm was developed and refined to reflect the
consensus views regarding the contributory weightings of each of the parameters'
dimensions. This algorithm was tested and refined using a base group of academic staff who
were identified by their colleagues as those whose workload could be considered a
benchmark for their discipline.
The most significant result of the research programme is the agreed algorithm that can form
the basis for a performance management system in higher education. The user interface that
was developed at the same time reflects the transparency of the system and allows for it to be
adapted to the needs of various groups of users or individuals within an organisation.
On the basis of this research it has been established that a system for determining an
equitable workload which encompasses an extensive range of parameters can be developed
using a participatory approach. Using a significant sample of academic staff as a basis, it
would appear that the system is valid, reliable, useful and acceptable to academic staff in the
context of a performance management system.
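
To make the idea of such a weighted workload algorithm concrete, the Python sketch below computes a workload score as a weighted sum over each parameter's dimensions. The parameters, dimensions, and weights are invented for illustration; they are not the consensus weightings agreed by the panel in this research.

```python
# Hypothetical weights: each job parameter has weighted dimensions, and a staff
# member's workload score is the weighted sum of their inputs.
WEIGHTS = {
    "teaching":       {"contact_hours": 1.0, "class_size_factor": 0.3, "new_course_prep": 2.0},
    "research":       {"active_projects": 1.5, "postgrad_students": 1.0},
    "administration": {"committee_roles": 0.8, "coordination_duties": 1.2},
}

def workload_score(inputs):
    """Weighted sum over all parameters' dimensions; inputs mirrors WEIGHTS."""
    total = 0.0
    for parameter, dimensions in WEIGHTS.items():
        for dimension, weight in dimensions.items():
            total += weight * inputs.get(parameter, {}).get(dimension, 0)
    return total

lecturer = {
    "teaching": {"contact_hours": 14, "class_size_factor": 6, "new_course_prep": 1},
    "research": {"active_projects": 2, "postgrad_students": 3},
    "administration": {"committee_roles": 2, "coordination_duties": 1},
}
print(f"workload score: {workload_score(lecturer):.1f}")
# Scores could then be compared against benchmark staff identified by colleagues.
```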
|
77 |
Individual Workload's Relation to Team Workload: An investigation. Weilandt, Jacob January 2017
There is an ongoing debate regarding the construct of team workload, and a central point in that debate is team workload's relation to individual workload. This study set out to investigate this relationship. To assess the participants' workload, a microworld called C3Fire was used to simulate a complex control situation in which teams had to cooperate to complete the task of fighting a forest fire. Twelve teams of four members each were recruited. In the microworld, each member of the team took on one of four separate roles and completed three scenarios of varying difficulty in C3Fire. After each scenario, a number of questionnaires aimed at gauging different aspects of the teams' experience in the microworld were administered. The questionnaire in focus in the current study was the DATMA questionnaire, which was used to measure individual workload and team workload. To assess the relationship between the two constructs, a multiple linear regression was conducted. The results showed that individual workload could be used as a significant predictor for modeling team workload. The study therefore concludes that there is evidence for a relationship in which each team member's individual workload contributes to the total sum of team workload.
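
As a rough illustration of the analysis reported above, the Python sketch below fits a multiple linear regression predicting team workload from the four members' individual workload ratings. The data are synthetic and the variable names are assumptions; the DATMA questionnaire itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_observations = 36                        # e.g., 12 teams x 3 scenarios
individual = rng.uniform(1, 7, size=(n_observations, 4))              # ratings per member
team = individual.mean(axis=1) + rng.normal(0, 0.3, n_observations)   # synthetic team workload

X = np.column_stack([np.ones(n_observations), individual])  # add intercept column
coef, residuals, rank, _ = np.linalg.lstsq(X, team, rcond=None)

print("intercept:", round(coef[0], 3))
print("member coefficients:", np.round(coef[1:], 3))
# Non-zero member coefficients would support individual workload as a predictor
# of team workload, as the study reports.
```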
|
78 |
Workload-based optimization of integration processes. Böhm, Matthias; Wloka, Uwe; Habich, Dirk; Lehner, Wolfgang 03 July 2023
The efficient execution of integration processes between distributed, heterogeneous data sources and applications is a challenging research area of data management. These integration processes are an abstraction for workflow-based integration tasks, used in EAI servers and WfMS. A major problem is significant workload change during runtime. The performance of integration processes strongly depends on these dynamic workload characteristics, and hence workload-based optimization is important. However, existing approaches to workflow optimization address only rule-based optimization and disregard changing workload characteristics. To overcome the problem of inefficient process execution in the presence of workload shifts, we present an approach for the workload-based optimization of instance-based integration processes and show that significant execution time reductions are possible.
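
A minimal sketch of the workload-based idea, under the assumption that re-optimization is triggered when runtime workload statistics drift past a threshold, is shown below in Python. The statistics collected, the drift threshold, and the plan variants are placeholders, not the optimizer presented in the paper.

```python
import statistics

DRIFT_THRESHOLD = 0.5   # relative change in mean message size that triggers re-optimization

class IntegrationProcess:
    def __init__(self):
        self.plan = "small-message plan"       # currently selected execution plan
        self.baseline_mean_size = None

    def reoptimize(self, mean_size):
        # Placeholder for cost-based plan selection: pick the plan variant
        # assumed cheaper for the observed message-size profile.
        self.plan = "bulk plan" if mean_size > 1000 else "small-message plan"
        self.baseline_mean_size = mean_size

    def observe(self, message_sizes):
        mean_size = statistics.mean(message_sizes)
        if self.baseline_mean_size is None:
            self.reoptimize(mean_size)
        elif abs(mean_size - self.baseline_mean_size) / self.baseline_mean_size > DRIFT_THRESHOLD:
            self.reoptimize(mean_size)          # workload shift detected
        return self.plan

process = IntegrationProcess()
print(process.observe([120, 90, 200]))          # initial plan chosen from first window
print(process.observe([4000, 5200, 3800]))      # shift to large messages -> bulk plan
```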
|
79 |
Assessing the Effectiveness of Workload Measures in the Nuclear Domain. Mercado, Joseph 01 January 2014
An operator's performance and mental workload when interacting with a complex system, such as the main control room (MCR) of a nuclear power plant (NPP), are major concerns when seeking to accomplish safe and successful operations. The relationship between workload and operator performance is one of the most widely researched areas in human factors science, with over five hundred workload articles published since the 1960s (Brannick, Salas, & Prince, 1997; Meshkati & Hancock, 2011). Researchers have used specific workload measures across domains to assess the effects of taskload. However, research has not sufficiently assessed the psychometric properties, such as reliability, validity, and sensitivity, that delineate and limit the roles of these measures in workload assessment (Nygren, 1991). As a result, there is no sufficiently effective measure for indicating changes in workload for distinct tasks across multiple domains (Abich, 2013). Abich (2013) was the most recent to systematically test subjective and objective workload measures to determine the universality and sensitivity of each alone or in combination. This systematic approach assessed taskload changes within three tasks in the context of military intelligence, surveillance, and reconnaissance (ISR) missions. The purpose of the present experiment was to determine whether certain workload measures are sufficiently effective across domains by taking the findings from one domain (military) and testing whether those results hold true in a different domain (nuclear). Results showed that only two measures (NASA-TLX frustration and fNIR) were sufficiently effective at indicating workload changes between the three task types in the nuclear domain, although many measures were statistically significant. The results of this research effort, combined with the results from Abich (2013), highlight an alarming problem: the ability of subjective and physiological measures to indicate changes in workload varies across tasks (Abich, 2013) and across domains. A single measure is not able to capture the complex construct of workload across different tasks within the same domain or across domains. This research effort highlights the importance of proper methodology. As researchers, we have to identify the appropriate workload measure for all tasks regardless of domain by investigating the effectiveness of each measure. The findings of the present study suggest that responsible science includes evaluating workload measures before use, rather than relying on prior research or theory. In other words, results indicate that it is only acceptable to use a measure based on prior findings if research has tested that measure on the exact task and manipulations within that specific domain.
|
80 |
Quantifying cognitive workload and defining training time requirements using thermography. Kang, Jihun 13 December 2008
Effective mental workload measurement is critical because mental workload significantly affects human performance. A non-invasive and objective workload measurement tool is needed to overcome the limitations of current mental workload measures. Further, training/learning increases mental workload during skill or knowledge acquisition, followed by a decrease in mental workload, though sufficient training times are unknown. The objectives of this study were to: (1) investigate the efficacy of using thermography as a non-contact physiological measure to quantify mental workload, (2) quantify and describe the relationship between mental workload and learning/training, and (3) introduce a method to determine a sufficient training time and an optimal human performance level for a novel task by using thermography. Three studies were conducted to address these objectives. The first study investigated the efficacy of using thermography to quantify the relationship between mental workload and facial temperature changes while learning an alpha-numeric task. Thermography measured and quantified the mental workload level successfully. Strong and significant correlations were found among thermography, performance, and subjective workload measures (MCH and SWAT ratings). The second study investigated the utility of using a psychophysical approach to determine workload levels that maximize performance on a cognitive task. The second study consisted of an adjustment session (participants adjusted their own workload levels) and a work session (participants worked at the chosen workload level). Participants were found to fall into two performance groups (low and high performers by accuracy rate), and the groups' results were significantly different. Thermography demonstrated whether each group found its optimal workload level. The last study investigated the efficacy of using thermography to quantify mental workload level in a complex training/learning environment. Experienced drivers' performance data were used as criteria to indicate whether novice drivers had mastered the driving skills. Strong and significant correlations were found among thermography, subjective workload measures, and performance measures in novice drivers. This study verified that thermography is a reliable and valid way to measure workload non-invasively and objectively. Also, thermography provided more practical results than subjective workload measures for simple and complex cognitive tasks. Thermography showed the capability to identify a sufficient training time for simple or complex cognitive tasks.
|