71

Type of automation failure: the effects on trust and reliance in automation

Johnson, Jason D. 01 December 2004 (has links)
Past automation research has focused primarily on machine-related factors (e.g., automation reliability) and human-related factors (e.g., accountability). Other machine-related factors, such as the type of automation error (misses versus false alarms), have been noticeably overlooked. These two automation errors correspond to the potential operator errors of omission (misses) and commission (false alarms), which have been shown to directly affect operators' trust in automation. This research examined how automation error type affects operator trust in, reliance on, and perceived reliability of automated decision aids. The present research confirmed that perceived reliability is often lower than actual system reliability and that false alarms reduced operator trust in the automation significantly more than misses did. In addition, this study found no apparent effect of age on the level of subjective trust within each experimental condition (i.e., type of automation error). There does, however, appear to be a significant difference in reliance on automation between older and younger adult participants, attributed to differences in perceived workload.
72

Generating and Analyzing Synthetic Workloads using Iterative Distillation

Kurmas, Zachary Alan 14 May 2004 (has links)
The exponential growth in computing capability and use has produced a high demand for large, high-performance storage systems. Unfortunately, advances in storage system research have been limited by (1) a lack of evaluation workloads, and (2) a limited understanding of the interactions between workloads and storage systems. We have developed a tool, the Distiller, that helps address both limitations. Our thesis is as follows: Given a storage system and a workload for that system, one can automatically identify a set of workload characteristics that describes a set of synthetic workloads with the same performance as the workload they model. These representative synthetic workloads increase the number of available workloads with which storage systems can be evaluated. More importantly, the characteristics also identify those workload properties that affect disk array performance, thereby highlighting the interactions between workloads and storage systems. This dissertation presents the design and evaluation of the Distiller. Specifically, our contributions are as follows. (1) We demonstrate that the Distiller finds synthetic workloads with at most 10% error for six of the eight workloads we tested. (2) We also find that all of the potential error metrics we use to compare workload performance have limitations. Additionally, although the internal threshold that determines which attributes the Distiller chooses has a small effect on the accuracy of the final synthetic workloads, it has a large effect on the Distiller's running time. Similarly, (3) we find that we can reduce the precision with which we measure attributes and only moderately reduce the resulting synthetic workload's accuracy. Finally, (4) we show how to use the information contained in the chosen attributes to predict the performance effects of modifying the storage system's prefetch length and stripe unit size.
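The abstract above describes an iterative loop that adds workload characteristics until a synthetic workload's performance matches the original. The Python sketch below illustrates that idea only in outline; the function names, the greedy attribute order, and the 10% threshold are illustrative assumptions, not the Distiller's actual interface.

```python
# Hypothetical sketch of an iterative-distillation loop; the real Distiller's
# attribute library, synthesis engine, and error metrics are not reproduced here.
from typing import Callable, Dict, List

def distill(target_workload,
            candidate_attributes: List[str],
            characterize: Callable,   # measures one attribute of the target workload
            synthesize: Callable,     # builds a synthetic workload from chosen attributes
            replay_error: Callable,   # performance difference between two workloads (0..1)
            threshold: float = 0.10) -> Dict[str, object]:
    """Greedily add workload attributes until the synthetic workload's
    performance is within `threshold` of the target workload's."""
    chosen: Dict[str, object] = {}
    for attr in candidate_attributes:
        synthetic = synthesize(chosen)
        if replay_error(target_workload, synthetic) <= threshold:
            break                                    # close enough: stop refining
        chosen[attr] = characterize(target_workload, attr)
    return chosen
```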
73

The effect of workload and age on compliance with and reliance on an automated system

McBride, Sara E. 08 April 2010 (has links)
Automation provides the opportunity for many tasks to be done more effectively and with greater safety. However, these benefits are unlikely to be attained if an automated system is designed without the human user in mind. Many characteristics of the human and the automation, such as trust and reliability, have been rigorously examined in the literature in an attempt to move towards a comprehensive understanding of the interaction between human and machine. However, workload has primarily been examined solely as an outcome variable, rather than as a predictor of compliance, reliance, and performance. This study was designed to gain a deeper understanding of whether the workload experienced by human operators influences compliance with and reliance on an automated warehouse management system, as well as to assess whether age-related differences exist in this interaction. As workload increased, performance on the Receiving Packages task decreased among younger and older adults. Although younger adults also experienced a negative effect of workload on Dispatching Trucks performance, older adults did not demonstrate a significant effect. The compliance data showed that as workload increased, younger adults complied with the automation to a greater degree, and this was true regardless of whether the automation was correct or incorrect. Older adults did not demonstrate a reliable effect of workload on compliance behavior. Regarding reliance behavior, as workload increased, reliance on the automation increased, but this effect was only observed among older adults. Again, this was true regardless of whether the automation was correct or incorrect. The finding that individuals may be more likely to comply with or rely on faulty automation in a high-workload state compared to a low-workload state suggests that an operator's ability to detect automation errors may be compromised in high-workload situations. Overall, younger adults outperformed older adults on the task. Additionally, older adults complied with the system more than younger adults when the system erred, which may have contributed to their poorer performance. When older adults verified the instructions given by the automation, they spent longer doing so than younger adults, suggesting that older adults may experience a greater cost of verification. Further, older adults reported higher workload and greater trust in the system than younger adults, but both age groups perceived the reliability of the system quite accurately. Understanding how workload and age influence automation use has implications for the way in which individuals are trained to interact with complex systems, as well as the situations in which automation implementation is determined to be appropriate.
74

Examination of the Effect of Child Abuse Case Characteristics on the Time a Caseworker Devotes to a Case

Card, Christopher J. 27 October 2010 (has links)
This study used an explanatory research model to determine the effect of specific case characteristics on the time a caseworker devotes to a case, and therefore on workload, for cases assigned after the child abuse investigation is complete. The purpose of this study was to explain the relationship between child protection case characteristics and the time an assigned caseworker devotes to a case. With this knowledge, an informed methodology to assess the current workload of a caseworker could be used to ensure that the caseworker is able to successfully complete the tasks required for each child assigned. Further, knowledge of the amount of time spent on a case with specific characteristics allows supervisors to assess and properly assign cases. Utilizing focus groups and a secondary data analysis of the Florida State Automated Child Welfare Service Information System (SACWSIS), the case characteristics of race/ethnicity, living arrangement, placement, removal, and prior removal were found to significantly affect caseworker time spent on a case. Additionally, the case characteristics of gender, age, type of maltreatment, and disability were not found to affect caseworker time spent on a case.
75

Workload modeling and prediction for resources provisioning in cloud

Magalhães, Deborah Maria Vieira 23 February 2017 (has links)
MAGALHÃES, Deborah Maria Vieira. Workload modeling and prediction for resources provisioning in cloud. 2017. 100 f. Tese (Doutorado em Engenharia de Teleinformática)–Centro de Tecnologia, Universidade Federal do Ceará, Fortaleza, 2017. / The evaluation of resource management policies in cloud environments is challenging, since clouds are subject to varying demand from users with different profiles and Quality of Service (QoS) requirements. Factors such as virtualization-layer overhead, insufficient trace logs available for analysis, and mixed workloads composed of a wide variety of applications in a heterogeneous environment complicate the modeling and characterization of applications hosted in the cloud. In this context, workload modeling and characterization is a fundamental step in systematizing the analysis and simulation of the performance of computational resource management policies, and a particularly useful strategy before the physical deployment of clouds. In this doctoral thesis, we propose a methodology for workload modeling and characterization to create resource utilization profiles in the cloud. Workload behavior patterns are identified and modeled in the form of statistical distributions, which are used by a predictive controller to establish the complex relationship between resource utilization and the response-time metric. To this end, the controller adjusts resource utilization to keep the response time experienced by the user within an acceptable threshold. Hence, our proposal directly supports QoS-aware resource provisioning policies. The proposed methodology was validated with two applications with distinct characteristics: a scientific application for pulmonary disease diagnosis, and a web application that emulates an auction site. The performance models were compared with monitoring data through graphical and analytical methods to evaluate their accuracy, and all models presented a percentage error of less than 10%. The predictive controller was able to dynamically maintain the response time close to the expected trajectory, without Service Level Agreement (SLA) violations, with a Mean Absolute Percentage Error (MAPE) of 4.36%.
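As a rough illustration of the two ideas in this abstract, modeling resource utilization as a statistical distribution and using feedback to keep response time near a target, the Python sketch below fits a gamma distribution to synthetic utilization samples and nudges a utilization set-point. The distribution choice, the toy performance model, and the simple proportional adjustment are assumptions for illustration; the thesis uses a predictive controller over measured traces.

```python
# Sketch only: fit a distribution to utilization samples and adjust a set-point
# toward a response-time target. Samples and the performance model are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
samples = rng.gamma(shape=4.0, scale=0.12, size=1000)   # stand-in for a CPU-utilization trace
shape, loc, scale = stats.gamma.fit(samples)            # model the workload as a distribution
print("fitted gamma parameters:", round(shape, 2), round(loc, 3), round(scale, 3))

target_rt, setpoint, gain = 0.5, 0.7, 0.05              # target response time (s), utilization, step

def response_time(util: float) -> float:                # placeholder performance model
    return 0.2 / max(1e-3, 1.0 - util)

for _ in range(50):                                     # simple feedback loop
    error = response_time(setpoint) - target_rt
    setpoint = float(np.clip(setpoint - gain * error, 0.05, 0.95))
print("adjusted utilization set-point:", round(setpoint, 3))
```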
76

Performance management and academic workload in higher education

Parsons, Philip Graham January 2000 (has links)
Thesis (MTech(Human Resource Management))--Cape Technikon, Cape Town, 2000 / This research project investigated the need for a method of determining an equitable workload for academic staff in higher education. With the possibility of the introduction of a performance management system at the Cape Technikon, it became imperative that an agreed, objective and user-friendly method of determining the workload of each academic member of staff be established. The research project established the main parameters of an academic staff member's job, and the dimensions of those parameters that influence both the quantity and quality of work produced, based on the views of a panel of educators drawn from a diverse range of disciplines. Using the identified dimensions, an algorithm was developed and refined to reflect the consensus views regarding the contributory weightings of each of the parameters' dimensions. This algorithm was tested and refined using a base group of academic staff who were identified by their colleagues as those whose workload could be considered a benchmark for their discipline. The most significant result of the research programme is the agreed algorithm that can form the basis for a performance management system in higher education. The user interface that was developed at the same time reflects the transparency of the system and allows it to be adapted to the needs of various groups of users or individuals within an organisation. On the basis of this research it has been established that a system for determining an equitable workload, encompassing an extensive range of parameters, can be developed using a participatory approach. Using a significant sample of academic staff as a basis, it would appear that the system is valid, reliable, useful and acceptable to academic staff in the context of a performance management system.
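A minimal sketch of how an agreed weighting algorithm of this kind could be expressed is shown below; the parameter names and weights are invented for illustration and are not those derived by the study's panel.

```python
# Illustrative only: an equitable-workload score as a weighted sum of parameter
# dimensions. The parameters and contributory weightings below are made up.
WEIGHTS = {
    "contact_hours": 1.0,
    "new_course_preparation": 1.5,
    "postgraduate_supervision": 2.0,
    "research_output": 1.8,
    "administration": 0.8,
}

def workload_score(dimensions: dict) -> float:
    """Sum each reported dimension scaled by its agreed weighting."""
    return sum(WEIGHTS[k] * float(v) for k, v in dimensions.items() if k in WEIGHTS)

# Compare a staff member's score against a benchmark colleague's.
staff = {"contact_hours": 12, "research_output": 2, "postgraduate_supervision": 3}
benchmark = {"contact_hours": 14, "research_output": 1, "administration": 5}
print(workload_score(staff), workload_score(benchmark))
```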
77

  • Individual Workload's Relation to Team Workload: An investigation

Weilandt, Jacob January 2017 (has links)
There is an ongoing debate regarding the construct of team workload, and a central point in that debate is team workload's relation to individual workload. This study set out to investigate this relationship. To assess the participants' workload, a microworld called C3Fire was used to simulate a complex control situation in which teams had to cooperate to complete the task of fighting a forest fire. Twelve teams of four members each were recruited. In the microworld, each member of the team took on one of four separate roles and completed three different scenarios of varying difficulty in C3Fire. After each scenario, a number of questionnaires aimed at gauging different aspects of the teams' experience in the microworld were administered. The questionnaire of focus in the current study was the DATMA questionnaire, which was used to measure individual workload and team workload. To assess the relationship between the two constructs, a multiple linear regression was conducted. The results showed that individual workload could be used as a significant predictor for modeling team workload. The study therefore concludes that there is evidence for a relationship in which each team member's individual workload could form part of the total sum of team workload.
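For readers unfamiliar with the method, the sketch below shows a multiple linear regression of a team-level score on four individual scores using fabricated numbers; it is not the DATMA scoring or the study's data.

```python
# Sketch with fabricated numbers only: regress a team workload score on the four
# members' individual workload ratings via ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
individual = rng.uniform(1, 7, size=(36, 4))             # 12 teams x 3 scenarios, 4 members each
true_weights = np.array([0.3, 0.25, 0.25, 0.2])          # assumed contribution of each member
team = individual @ true_weights + rng.normal(0, 0.3, 36)

X = np.column_stack([np.ones(36), individual])           # add an intercept column
coef, *_ = np.linalg.lstsq(X, team, rcond=None)          # multiple linear regression
print("intercept and member coefficients:", np.round(coef, 2))
```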
78

Workload-based optimization of integration processes

Böhm, Matthias, Wloka, Uwe, Habich, Dirk, Lehner, Wolfgang 03 July 2023 (has links)
The efficient execution of integration processes between distributed, heterogeneous data sources and applications is a challenging research area of data management. These integration processes are an abstraction for workflow-based integration tasks, used in EAI servers and WfMS. A major problem is significant workload changes during runtime. The performance of integration processes strongly depends on those dynamic workload characteristics, and hence workload-based optimization is important. However, existing approaches to workflow optimization address only rule-based optimization and disregard changing workload characteristics. To overcome the problem of inefficient process execution in the presence of workload shifts, we present here an approach for the workload-based optimization of instance-based integration processes and show that significant execution-time reductions are possible.
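A rough sketch of the general idea, monitoring a runtime workload statistic and re-optimizing the process plan when it drifts, is given below; the message-rate statistic and drift rule are assumptions, and the paper's cost model and plan rewrites are not reproduced.

```python
# Rough sketch only: track a runtime workload statistic and re-optimize the
# integration process plan when it drifts past a tolerance.
class WorkloadMonitor:
    def __init__(self, tolerance: float = 0.25):
        self.tolerance = tolerance
        self.baseline_rate = None          # messages/sec assumed by the current plan

    def observe(self, msg_rate: float) -> bool:
        """Return True when the workload has shifted enough to warrant re-optimization."""
        if self.baseline_rate is None:
            self.baseline_rate = msg_rate
            return True                    # no plan has been optimized yet
        drift = abs(msg_rate - self.baseline_rate) / self.baseline_rate
        if drift > self.tolerance:
            self.baseline_rate = msg_rate
            return True
        return False

monitor = WorkloadMonitor()
for rate in (10.0, 11.0, 9.0, 40.0):       # hypothetical message rates over time
    if monitor.observe(rate):
        print(f"re-optimize plan for ~{rate:.0f} msg/s")
```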
79

Assessing the Effectiveness of Workload Measures in the Nuclear Domain

Mercado, Joseph 01 January 2014 (has links)
An operator's performance and mental workload when interacting with a complex system, such as the main control room (MCR) of a nuclear power plant (NPP), are major concerns when seeking to accomplish safe and successful operations. The impact of performance on operator workload is one of the most widely researched areas in human factors science, with over five hundred workload articles published since the 1960s (Brannick, Salas, & Prince, 1997; Meshkati & Hancock, 2011). Researchers have used specific workload measures across domains to assess the effects of taskload. However, research has not sufficiently assessed the psychometric properties, such as reliability, validity, and sensitivity, that delineate and limit the roles of these measures in workload assessment (Nygren, 1991). As a result, there is no sufficiently effective measure for indicating changes in workload for distinct tasks across multiple domains (Abich, 2013). Abich (2013) was the most recent to systematically test subjective and objective workload measures to determine the universality and sensitivity of each alone or in combination. This systematic approach assessed taskload changes within three tasks in the context of military intelligence, surveillance, and reconnaissance (ISR) missions. The purpose of the present experiment was to determine whether certain workload measures are sufficiently effective across domains by taking the findings from one domain (military) and testing whether those results hold true in a different domain, that of nuclear. Results showed that only two measures (NASA-TLX frustration and fNIR) were sufficiently effective at indicating workload changes between the three task types in the nuclear domain, although many measures were statistically significant. The results of this research effort, combined with the results from Abich (2013), highlight an alarming problem: the ability of subjective and physiological measures to indicate changes in workload varies across tasks (Abich, 2013) and across domains. A single measure is not able to capture the complex construct of workload across different tasks within the same domain or across domains. This research effort highlights the importance of proper methodology. As researchers, we have to identify the appropriate workload measure for all tasks regardless of the domain by investigating the effectiveness of each measure. The findings of the present study suggest that responsible science includes evaluating workload measures before use, rather than relying on prior research or theory. In other words, the results indicate that it is only acceptable to use a measure based on prior findings if research has tested that measure on the exact task and manipulations within that specific domain.
80

On-demand re-optimization of integration flows

Böhm, Matthias, Habich, Dirk, Lehner, Wolfgang 04 July 2023 (has links)
Integration flows are used to propagate data between heterogeneous operational systems or to consolidate data into data warehouse infrastructures. In order to meet the increasing need for up-to-date information, many messages are exchanged over time. The efficiency of these integration flows is therefore crucial to handle the high load of messages and to reduce message latency. State-of-the-art strategies to address this performance bottleneck are based on incremental statistics maintenance and periodic cost-based re-optimization. This also achieves adaptation to unknown statistics and changing workload characteristics, which is important since integration flows are deployed for long time horizons. However, the major drawbacks of periodic re-optimization are many unnecessary re-optimization steps and missed optimization opportunities due to adaptation delays. In this paper, we therefore propose the novel concept of on-demand re-optimization. We exploit optimality conditions from the optimizer in order to (1) monitor the optimality of the current plan, and (2) trigger directed re-optimization only if necessary. Furthermore, we introduce the PlanOptimalityTree as a compact representation of optimality conditions that enables efficient monitoring and exploitation of these conditions. As a result, and in contrast to existing work, re-optimization is triggered immediately, but only if a new plan is certain to be found. Our experiments show that we achieve near-optimal re-optimization overhead and fast workload adaptation.
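The sketch below illustrates the on-demand principle in miniature: the current plan carries the conditions under which it stays optimal, and an updated statistic triggers re-optimization only when a condition is violated. The flat list of predicates is an assumption; the paper's PlanOptimalityTree is a richer structure than this.

```python
# Minimal sketch of on-demand re-optimization: re-optimize only when an
# optimality condition of the current plan is violated by new statistics.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class OptimalityCondition:
    statistic: str                                 # e.g. an operator's selectivity
    holds: Callable[[float], bool]                 # predicate the statistic must satisfy

@dataclass
class Plan:
    name: str
    conditions: List[OptimalityCondition] = field(default_factory=list)

def on_statistics_update(plan: Plan, stats: Dict[str, float],
                         reoptimize: Callable[[Dict[str, float]], Plan]) -> Plan:
    """Trigger directed re-optimization only if some optimality condition fails."""
    for cond in plan.conditions:
        if cond.statistic in stats and not cond.holds(stats[cond.statistic]):
            return reoptimize(stats)               # a better plan is certain to exist
    return plan                                    # current plan remains optimal

# Hypothetical usage: the plan stays optimal while the join selectivity is below 0.4.
current = Plan("join-then-filter", [OptimalityCondition("join_selectivity", lambda s: s < 0.4)])
current = on_statistics_update(current, {"join_selectivity": 0.55},
                               lambda st: Plan("filter-then-join"))
print(current.name)
```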
