1 |
Risk Tolerance, and the Impact of Central Executive Abilities on Dual-task Performance
Canella, David, 21 November 2013
Multiple Resource Theory (Wickens, 1980) has evolved over the past three decades into a four-dimensional multiple resource model. Separately, central executive functioning has been investigated, and other research has examined the relationship between risk perception and risk-taking behaviour. The research in this thesis aimed to address questions arising from these theoretical approaches. An experiment was carried out to explore the impact of executive abilities, risk perception, and risk-taking behaviour on multitasking performance. Using a novel methodology, it was found that executive functioning and the way information is presented were each significantly related to task performance and eye gaze in a dual-task setting. Statistically significant relationships were also found between independently developed instruments of risk perception and of risky driving behaviour. The implications of these findings for theories of attentional resources, executive functions, and mental workload are discussed.
|
3 |
Modeling Uncertainty with Evolutionary Improved Fuzzy Functions
Celikyilmaz, Fethiye Asli, 30 July 2008
Fuzzy system modeling (FSM), the construction of a representation of a fuzzy system as a model, is a difficult task: it demands the identification of many parameters. This thesis analyses fuzzy-modeling problems and the different approaches proposed to cope with them. It focuses on a novel evolutionary FSM approach: the design of "Improved Fuzzy Functions" system models with the use of evolutionary algorithms. To support this analysis, local structures are identified with a new improved fuzzy clustering method and represented with novel "fuzzy functions".
The central contribution of this work is the use of evolutionary algorithms, in particular genetic algorithms, to find the uncertainty intervals of parameters and thereby improve "Fuzzy Function" models. Replacing standard fuzzy rule bases (FRBs) with the new "Improved Fuzzy Functions" captures the essential relationships in the structure identification process and overcomes a limitation exhibited by earlier FRB methods: the abundance of fuzzy operations, and hence the difficulty of choosing among the t-norms and co-norms.
Designing an autonomous and robust FSM, and reasoning with it, is the prime goal of this approach. The new FSM approach implements higher-level fuzzy sets to identify the uncertainties in (1) the system parameters and (2) the structure of the "Fuzzy Functions". With these parameters identified, interval-valued fuzzy sets and "Fuzzy Functions" are constructed. Finally, an evolutionary computing approach is combined with the proposed uncertainty identification strategy to build FSMs that can automatically identify these uncertainty intervals.
After testing the proposed FSM tool on various benchmark problems, the algorithms are successfully applied to model decision processes in two real problem domains: the desulphurization process in steel making and stock price prediction. For both problems, the proposed methods produce robust, high-performance models that are comparable to (if not better than) the best system modeling approaches in the current literature. Several aspects of the proposed methodologies are analyzed thoroughly to provide a deeper understanding; these analyses show the consistency of the results. / Full thesis submitted in paper.
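The choice problem among fuzzy operations that this abstract mentions can be seen in a minimal sketch (not from the thesis): the same pair of membership degrees combined with three common t-norms yields three different conjunction values, which is why selecting among t-norms is itself a modeling decision.

```python
# Minimal illustration (not from the thesis): three common t-norms give
# three different results for the same pair of membership degrees.

def t_min(a, b):
    return min(a, b)                   # Goedel (minimum) t-norm

def t_prod(a, b):
    return a * b                       # product t-norm

def t_luk(a, b):
    return max(0.0, a + b - 1.0)       # Lukasiewicz t-norm

a, b = 0.7, 0.6
print(round(t_min(a, b), 6))   # 0.6
print(round(t_prod(a, b), 6))  # 0.42
print(round(t_luk(a, b), 6))   # 0.3
```

Rule-based systems must fix one such choice in advance; the "Fuzzy Functions" approach sidesteps it by not relying on a rule base at all.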
|
5 |
Acceleration of Iterative Methods for Markov Decision Processes
Shlakhter, Oleksandr, 21 April 2010
This research focuses on Markov Decision Processes (MDPs), one of the most important and challenging areas of Operations Research. Every day people make many decisions: today's decisions impact tomorrow's, and tomorrow's will impact the ones made the day after. Problems in Engineering, Science, and Business often pose similar challenges: a large number of options and uncertainty about the future. MDPs are among the most powerful tools for solving such problems.
There are several standard methods for finding optimal or approximately optimal policies for an MDP. Approaches widely employed to solve MDP problems include value iteration and policy iteration. Although simple to implement, these approaches are nevertheless limited in the size of problems they can solve, due to the excessive computation required to find close-to-optimal solutions.
This thesis proposes new value iteration and modified policy iteration methods for classes of expected discounted MDPs and average-cost MDPs.
We establish a class of operators that can be integrated into value iteration and modified policy iteration algorithms for Markov Decision Processes so as to speed up the convergence of the iterative search. Applying these operators requires little additional computation per iteration but reduces the number of iterations significantly. The development of the acceleration operators relies on two key properties of the Markov operator, namely contraction mapping and monotonicity in a restricted region. Since the Markov operators of the classical value iteration and modified policy iteration methods for average-cost MDPs do not possess the contraction mapping property, for these models we restrict our study to average-cost problems that can be formulated as stochastic shortest path problems.
The performance improvement is significant, while implementing the operators in value iteration is trivial. Numerical studies show that the accelerated methods can be hundreds of times more efficient for solving MDP problems than the other known approaches. The computational savings are especially significant when the discount factor approaches 1 and the transition probability matrix becomes dense, in which case the standard iterative algorithms suffer from slow convergence.
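For context, the baseline method this thesis accelerates is standard value iteration: repeatedly apply the Bellman optimality operator until the value function stops changing. A minimal sketch on a toy two-state, two-action discounted MDP follows; all numbers are hypothetical, and the thesis's acceleration operators are not reproduced here.

```python
# Standard value iteration (baseline only; the thesis's acceleration
# operators are not reproduced). Toy MDP with hypothetical numbers.
# P[a][s][s2]: transition probability; R[a][s]: expected reward.
P = [[[0.9, 0.1], [0.2, 0.8]],   # action 0
     [[0.5, 0.5], [0.0, 1.0]]]   # action 1
R = [[1.0, 0.0],                 # action 0
     [2.0, -1.0]]                # action 1
gamma = 0.9                      # discount factor

def value_iteration(P, R, gamma, tol=1e-8, max_iter=10000):
    n = len(R[0])
    V = [0.0] * n
    for _ in range(max_iter):
        # Bellman optimality update: V(s) = max_a [ R(a,s) + gamma * E[V(s')] ]
        V_new = [max(R[a][s] + gamma * sum(P[a][s][s2] * V[s2] for s2 in range(n))
                     for a in range(len(R)))
                 for s in range(n)]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new
    return V

V = value_iteration(P, R, gamma)
# Greedy policy with respect to the converged value function
policy = [max(range(len(R)),
              key=lambda a: R[a][s] + gamma * sum(P[a][s][s2] * V[s2]
                                                  for s2 in range(len(V))))
          for s in range(len(V))]
print(V, policy)
```

The contraction property the abstract relies on is visible here: each sweep shrinks the error by a factor of gamma, so convergence slows sharply as gamma approaches 1, which is exactly the regime where the thesis's acceleration operators pay off.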
|
6 |
Incorporating Ratios in DEA—Applications to Real Data
Sigaroudi, Sanaz, 15 February 2010
In standard Data Envelopment Analysis (DEA), the strong disposability and convexity axioms, together with the variable or constant returns-to-scale assumption, provide a good estimation of the production possibility set and the efficient frontier. However, when some or all measures in the data are represented by ratios, standard DEA fails to generate an accurate efficient frontier. This problem has been addressed by a number of researchers, and models have been proposed to solve it. This thesis proposes a "Maximized Slack Model" as a second stage to an existing model. The work implements a two-phase modified model in MATLAB (since no existing DEA software can handle ratios) and, with this new tool, compares the results of the proposed model against the results of two other standard DEA models on a real example with ratio and non-ratio measures.
We then propose different approaches to obtain a close approximation of the convex hull of the production possibility set, as well as of the frontier, when ratio variables are present on the side of the desired orientation.
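Why ratio measures break DEA's convexity axiom can be seen in a small sketch (hypothetical numbers, not from the thesis): a convex combination of two units' ratio measures is not the ratio of the combined numerators and denominators, so the "virtual unit" DEA constructs is not a unit that could actually exist.

```python
# Hypothetical illustration of why ratio data breaks DEA's convexity
# axiom: averaging two units' ratio measures differs from the ratio of
# the averaged numerator and denominator.

# Two decision-making units (DMUs), each with a "success rate" ratio
successes = [10.0, 90.0]
attempts = [20.0, 100.0]
rates = [s / a for s, a in zip(successes, attempts)]  # [0.5, 0.9]

# A 50/50 convex combination, computed two ways:
avg_of_ratios = 0.5 * rates[0] + 0.5 * rates[1]       # ~0.7
ratio_of_avgs = ((0.5 * successes[0] + 0.5 * successes[1])
                 / (0.5 * attempts[0] + 0.5 * attempts[1]))  # 50/60 ~ 0.83

print(avg_of_ratios, ratio_of_avgs)
assert avg_of_ratios != ratio_of_avgs  # the two disagree
```

This is the gap the modified ratio-aware models in the thesis are designed to close; standard DEA implicitly assumes the two computations agree.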
|
9 |
Local to Mobile Devices for Nuclear Operations
Lee, Gladys, 26 June 2014
The AECL National Research Universal (NRU) reactor has implemented a new mobile device to help Operators collect instrument data. Instruments display readings through various analog and digital devices. Before the change, Operators collected readings with paper and pen; with the mobile device, they scan a barcode and input the instrument's reading on the device. To understand the factors that influence Operator acceptance of the new technology, the Technology Acceptance Model (TAM) was used to identify the determinants that have the biggest impact on Operators' intention to use the mobile device. Based on the most influential determinants identified by the TAM results, the mobile interface was redesigned. The redesigned interfaces were then evaluated in a usability study comparing the original interface with the newly designed screens. The results did not show statistically significant differences between the screens.
|