  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

The Effects of Mindfulness Meditation on Executive Functions, Moderated by Trait Anxiety

Baranski, Michael Francis Stephen 13 April 2020
No description available.
152

Finite Element Model Correlation with Experiment Focusing on Descriptions of Clamped Boundary Condition and Damping

Jayakumar, Vignesh 16 June 2020
No description available.
153

How do ice hockey players in different divisions compare on executive functions?: A comparative study of athletes

James, Calum January 2023
Previous investigations into executive functions among athletes, particularly in team sports, have reported greater functioning for those at the highest level of competition. One sport that has received less attention in this research is ice hockey. The present study compared the performance of ice hockey players at different levels of competition on measures of inhibition, shifting, and updating, hypothesising that athletes in higher competitive divisions would outperform those in lower divisions. To this end, 17 ice hockey players from different divisions of the Swedish national hockey league were recruited and completed a flanker task, a local-global task, and a 3-back task, as well as two alternate tasks featuring hockey-specific stimuli. Comparisons between high-skill and low-skill athletes were non-significant for the inhibition and updating measures, while the comparison for shifting favoured low-skill athletes; this finding did not extend to a comparison between elite and non-elite players. The results do not support the hypothesis, and future investigations should recruit a larger sample to clarify the finding of low-skill athletes outperforming high-skill athletes.
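The inhibition measure above (a flanker task) is conventionally scored as a difference measure. As a minimal illustrative sketch (not the study's analysis code; the trial records and field names are hypothetical), the flanker interference effect can be computed as the mean reaction-time difference between correct incongruent and correct congruent trials:

```python
# Illustrative only: a generic flanker interference score on hypothetical data.
from statistics import mean

def flanker_interference(trials):
    """Mean RT(incongruent) - mean RT(congruent) over correct trials.

    Each trial is a dict with 'condition' ('congruent'/'incongruent'),
    'rt' in seconds, and 'correct' (bool). A larger score indicates
    weaker inhibition of the distracting flanker stimuli.
    """
    congruent = [t["rt"] for t in trials
                 if t["correct"] and t["condition"] == "congruent"]
    incongruent = [t["rt"] for t in trials
                   if t["correct"] and t["condition"] == "incongruent"]
    return mean(incongruent) - mean(congruent)

trials = [
    {"condition": "congruent", "rt": 0.42, "correct": True},
    {"condition": "congruent", "rt": 0.44, "correct": True},
    {"condition": "incongruent", "rt": 0.51, "correct": True},
    {"condition": "incongruent", "rt": 0.55, "correct": True},
    {"condition": "incongruent", "rt": 0.90, "correct": False},  # error trial, excluded
]
print(round(flanker_interference(trials), 3))  # ≈ 0.10 s
```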
154

Essays in Decision Theory

Gu, Yuan 12 June 2023
This dissertation studies decision theory for both individual and interactive choice problems, proposing three non-standard models that modify the assumptions and settings of standard models. Chapter 1 provides an overview of the dissertation. In the second chapter I present a model of decision-making under uncertainty in which an agent is constrained in her cognitive ability to consider complex acts. The complexity of an act is identified with the corresponding partition of the state space, and the agent ranks acts according to their expected utility net of complexity cost. I introduce a new axiom, Aversion to Complexity, which depicts an agent's aversion to complex acts. This axiom, together with other modified classical expected utility axioms, characterizes a Complexity Aversion Representation. In addition, I present applications to competitive markets with uncertainty and to optimal contract design. The third chapter discusses how a complexity-averse agent measures the complexity cost of an act after she receives new information. I propose an updating rule for the complexity cost function called Minimal Complexity Updating: if the agent is told that the true state must belong to a particular event, she need not consider the complexity of an act outside of this event. The main result axiomatically characterizes the Minimal Complexity Aversion Representation. Lastly, I apply the idea of Minimal Complexity Updating to the theory of rational inattention. The last chapter deals with a variant model of fictitious play in which each player has a perturbation term measuring the extent to which his rival will stick to the rules of traditional fictitious play. I find that the empirical distribution can converge to a pure Nash equilibrium if the perturbation term is bounded. Furthermore, I introduce an updating rule for the perturbation term and prove that if the perturbation term is updated in accordance with this rule, then play can converge to a pure Nash equilibrium. / Doctor of Philosophy
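The "expected utility net of complexity cost" criterion can be illustrated with a small numerical sketch. Here, following the abstract, the complexity of an act is measured by the size of the partition it induces on the state space; the linear cost k * (cells - 1) is a hypothetical cost function chosen for illustration, not the dissertation's axiomatized representation:

```python
# Illustrative sketch of expected utility net of a partition-based
# complexity cost. The cost function k * (cells - 1) is hypothetical.
def complexity_adjusted_value(act, prior, utility, k=0.1):
    """Expected utility of an act minus a cost increasing in its complexity.

    act: dict state -> outcome; prior: dict state -> probability;
    utility: dict outcome -> utility.
    """
    eu = sum(prior[s] * utility[act[s]] for s in prior)
    cells = len(set(act.values()))     # size of the induced partition
    return eu - k * (cells - 1)        # constant acts incur no cost

prior = {"s1": 0.5, "s2": 0.25, "s3": 0.25}
utility = {"low": 0.0, "mid": 0.5, "high": 1.0}

simple_act = {"s1": "mid", "s2": "mid", "s3": "mid"}      # 1 cell
complex_act = {"s1": "high", "s2": "low", "s3": "high"}   # 2 cells

print(complexity_adjusted_value(simple_act, prior, utility))   # 0.5 (no penalty)
print(complexity_adjusted_value(complex_act, prior, utility))  # ≈ 0.75 - 0.1 = 0.65
```

A complexity-averse agent may thus prefer the coarser act even when the finer act has a higher raw expected utility.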
155

Integrated Multiaxial Experimentation and Constitutive Modeling

Phillips, Peter Louis 24 May 2017
No description available.
156

Working Memory Updating using Meaningful Trigraphs

Gamsby, Christopher William 03 May 2016
No description available.
157

CHANGE DETECTION OF A SCENE FOLLOWING A VIEWPOINT CHANGE: MECHANISMS FOR THE REDUCED PERFORMANCE COST WHEN THE VIEWPOINT CHANGE IS CAUSED BY VIEWER LOCOMOTION

Comishen, Michael A. 10 1900
When an observer detects changes in a scene from a viewpoint that differs from the learned viewpoint, a viewpoint change caused by the observer's own locomotion leads to better recognition performance than an equivalent viewpoint change caused by movement of the scene. This benefit of observer locomotion could be caused by spatial updating through body-based information (Simons and Wang, 1998), or by knowledge of the change of reference direction gained through locomotion (Mou et al., 2009). The effect of such reference direction information has been demonstrated through a visual cue (e.g., a chopstick) presented during the testing phase indicating the original learning viewpoint (Mou et al., 2009).

In the current study, we re-examined the mechanisms of this benefit of observer locomotion. Six experiments were performed using a similar change detection paradigm. Experiments 1 and 2 adopted the same design as Mou et al. (2009). The results were inconsistent with those of Mou et al. (2009): even with the visual indicator, performance (accuracy and response time) in the table rotation condition was still significantly worse than in the observer locomotion condition. In Experiments 3-5, we compared performance in the normal walking condition with conditions where the body-based information may not be reliable (disorientation or walking over a long path). The results again showed a lack of benefit from the visual indicator. Experiment 6 introduced a more salient and intrinsic reference direction: coherent object orientations. Unlike the previous experiments, performance in the scene rotation condition was similar to that in the observer locomotion condition.

Overall, we showed that the body-based information in observer locomotion may be the most prominent information. The knowledge of the reference direction could be useful but might only be effective in limited scenarios, such as a scene with a dominant orientation. / Master of Science (MSc)
158

Inverse Problems in Structural Mechanics

Li, Jing 29 December 2005
This dissertation deals with the solution of three inverse problems in structural mechanics. The first is load updating for finite element models (FEMs). A least squares fit is used to identify the load parameters. The basic studies are made for geometrically linear and nonlinear FEMs of beams or frames using a four-noded curved beam element, which, for a given precision, significantly mitigates the ill-posedness of the problem by reducing the overall number of degrees of freedom (DOF) of the system, and especially the number of unknown variables, so as to obtain an overdetermined system. For the basic studies, the unknown applied load within an element is represented by a linear combination of integrated Legendre polynomials, the coefficients of which are the parameters to be extracted from measured displacements or strains. The L-BFGS-B optimizer is used to solve the least squares problem. The second problem is the placement optimization of a distributed-sensing fiber optic sensor for a smart bed using Genetic Algorithms (GA), where the sensor performance is maximized. The sensing fiber optic cable is represented by a Non-Uniform Rational B-Splines (NURBS) curve, which reduces the placement of an infinite number of infinitesimal sensors to the placement of a finite number of control points. The sensor performance is simplified as the integral of the absolute curvature change of the fiber optic cable with respect to a perturbation due to the body movement of a patient. The smart bed is modeled as an elastic mattress core that supports a fiber optic sensor cable. The initial and deformed geometries of the bed due to the body weight of the patient are calculated using MSC/NASTRAN for a given body pressure. The deformation of the fiber optic cable can then be extracted from the deformation of the mattress, and the performance of the fiber optic sensor for any given placement is calculated for any given perturbation.
The third application is stiffened panel optimization, including size and placement optimization for the blade stiffeners, subject to buckling and stress constraints. The present work uses NURBS for the panel and stiffener representation. The mesh for the panel is generated using DistMesh, a triangulation algorithm in MATLAB, and a NASTRAN/MATLAB interface is developed to automatically transfer data between the analysis and optimization processes. The optimization minimizes the weight of the stiffened panel, with the design variables being the thickness of the plate, the height and width of the stiffeners, and the placement of the stiffeners, subject to buckling and stress constraints under in-plane normal/shear and out-of-plane pressure loading conditions. / Ph. D.
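The load-updating scheme in the first problem — a Legendre expansion of the unknown load fitted by least squares with L-BFGS-B — can be sketched on a toy linear forward model. The influence matrix below is a hypothetical stand-in for the finite element model, and the sensor data are synthetic; only the overall structure (Legendre basis, least squares misfit, L-BFGS-B) follows the abstract:

```python
# Toy sketch: recover Legendre load coefficients from noisy "measurements"
# by least squares, using the L-BFGS-B optimizer named in the abstract.
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_sensors, n_params = 20, 4

# Stand-in influence matrix: column j holds the sensor responses to a unit
# load shaped like the j-th Legendre polynomial (a real FEM would supply this).
x = np.linspace(-1.0, 1.0, n_sensors)
M = np.stack([legendre.legval(x, np.eye(n_params)[j]) for j in range(n_params)],
             axis=1)

true_coeffs = np.array([1.0, -0.5, 0.25, 0.1])
measured = M @ true_coeffs + 1e-3 * rng.standard_normal(n_sensors)

def misfit(c):
    r = M @ c - measured
    return 0.5 * r @ r

res = minimize(misfit, x0=np.zeros(n_params), method="L-BFGS-B",
               jac=lambda c: M.T @ (M @ c - measured))
print(np.round(res.x, 2))  # close to [1.0, -0.5, 0.25, 0.1]
```

With more basis functions than the data can constrain, the same fit becomes ill-posed, which is why the abstract stresses keeping the unknowns few enough to leave the system overdetermined.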
159

Recycling Krylov Subspaces and Preconditioners

Ahuja, Kapil 15 November 2011
Science and engineering problems frequently require solving a sequence of single linear systems or a sequence of dual linear systems. We develop algorithms that recycle Krylov subspaces and preconditioners from one system (or pair of systems) in the sequence to the next, leading to efficient solutions. Besides the benefit of only having to store a few Lanczos vectors, using BiConjugate Gradients (BiCG) to solve dual linear systems may have application-specific advantages. For example, using BiCG to solve the dual linear systems arising in interpolatory model reduction provides a backward error formulation in the model reduction framework. Using BiCG to evaluate bilinear forms -- for example, in the variational Monte Carlo (VMC) algorithm for electronic structure calculations -- leads to a quadratic error bound. Since one of our focus areas is sequences of dual linear systems, we introduce recycling BiCG, a BiCG method that recycles two Krylov subspaces from one pair of dual linear systems to the next pair. The derivation of recycling BiCG also builds the foundation for developing recycling variants of other bi-Lanczos based methods like CGS, BiCGSTAB, BiCGSTAB2, BiCGSTAB(l), QMR, and TFQMR. We develop a generalized bi-Lanczos algorithm, where the two matrices of the bi-Lanczos procedure are not each other's conjugate transpose but satisfy this relation over the generated Krylov subspaces. This is sufficient for a short-term recurrence. Next, we derive an augmented bi-Lanczos algorithm with recycling and show that this algorithm is a special case of generalized bi-Lanczos. The Petrov-Galerkin approximation that includes recycling in the iteration leads to modified two-term recurrences for the solution and residual updates. We generalize and extend the framework of our recycling BiCG to CGS, BiCGSTAB and BiCGSTAB2. We perform extensive numerical experiments and analyze the generated recycle space.
We test all of our recycling algorithms on a discretized partial differential equation (PDE) of convection-diffusion type. This PDE problem provides well-known test cases that are easy to analyze further. We use recycling BiCG in the Iterative Rational Krylov Algorithm (IRKA) for interpolatory model reduction and in the VMC algorithm. For a model reduction problem, we show up to 70% savings in iterations, and we also demonstrate that solving the problem without recycling leads to (about) a 50% increase in runtime. Experiments with recycling BiCG for VMC give promising results. We also present an algorithm that recycles preconditioners, leading to a dramatic reduction in the cost of VMC for large(r) systems. The main cost of the VMC method is in constructing a sequence of Slater matrices and computing the ratios of determinants for successive Slater matrices. Recent work has improved the scaling of constructing Slater matrices for insulators, so that the cost of constructing Slater matrices in these systems is now linear in the number of particles. However, the cost of computing determinant ratios remains cubic in the number of particles. With the long-term aim of simulating much larger systems, we improve the scaling of computing determinant ratios in the VMC method for simulating insulators by using preconditioned iterative solvers. The main contribution here is the development of a method to efficiently compute for the Slater matrices a sequence of preconditioners that make the iterative solver converge rapidly. This involves cheap preconditioner updates, an effective reordering strategy, and a cheap method to monitor instability of ILUTP preconditioners.
Using the resulting preconditioned iterative solvers to compute determinant ratios of consecutive Slater matrices reduces the scaling of the VMC algorithm from O(n^3) per sweep to roughly O(n^2), where n is the number of particles, and a sweep is a sequence of n steps, each attempting to move a distinct particle. We demonstrate experimentally that we can achieve the improved scaling without increasing statistical errors. / Ph. D.
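The determinant ratios for successive Slater matrices that dominate the VMC cost can be illustrated with the matrix determinant lemma: when one row of the matrix changes (one particle moves), the ratio follows from a single linear solve rather than a fresh O(n^3) determinant. The dense `np.linalg.solve` below is a toy stand-in for the preconditioned iterative solver developed in the dissertation:

```python
# Sketch: determinant ratio under a single-row update via the matrix
# determinant lemma, verified against direct determinant computation.
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.standard_normal((n, n)) + 3.0 * np.eye(n)   # well-conditioned toy "Slater matrix"
v = rng.standard_normal(n)                          # proposed new k-th row

def det_ratio_row_update(A, k, v):
    """det(A')/det(A), where A' is A with row k replaced by v.

    A' = A + e_k (v - A[k])^T, so by the matrix determinant lemma
    det(A')/det(A) = 1 + (v - A[k]) . (A^{-1} e_k).
    """
    e_k = np.zeros(len(A))
    e_k[k] = 1.0
    y = np.linalg.solve(A, e_k)        # k-th column of A^{-1}: one linear solve
    return 1.0 + (v - A[k]) @ y

A_new = A.copy()
A_new[k] = v
ratio_direct = np.linalg.det(A_new) / np.linalg.det(A)
print(np.isclose(det_ratio_row_update(A, k, v), ratio_direct))  # True
```

Replacing the direct solve with a preconditioned iterative solver, as the abstract describes, is what brings the per-step cost down when n is large.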
160

Konsistente und konsequente dynamische Risikomaße und das Problem der Aktualisierung

Tutsch, Sina 16 February 2007
This thesis is a contribution to the theory of convex risk measures and their dynamics. In Chapter 1 we consider unconditional convex risk measures. We first explain the properties of these functionals and present different possibilities for their representation, and then discuss the extension problem for convex risk measures. In Chapter 2 we study conditional convex risk measures and their representations, and analyze under which conditions these functionals admit a regular conditional representation. On Polish spaces we prove existence on the classes of semicontinuous functions. For the conditional Average Value at Risk, we show that a regular conditional representation is given by a corresponding family of unconditional AVaR risk measures on the class of all bounded payoff functions. In Chapter 3 we investigate the intertemporal structure of dynamic convex risk measures. We begin by considering different conditions of acceptance and rejection consistency, which correspond to a backward approach to dynamic risk evaluation and which are used for the construction of dynamic convex risk measures with respect to a given filtration. We also introduce an alternative forward approach in which each conditional convex risk measure is constructed as a consequence of the previous risk evaluation and the incoming information. We then discuss update rules for convex risk measures and analyze whether the conditions of strong and weak consistency and the condition of consecutivity are appropriate update criteria. In this context, we finally discuss how uncertainty may be reduced after receiving additional information.
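The Average Value at Risk appearing above can be illustrated for an empirical distribution: stated in terms of losses, AVaR at level lambda is the average of the worst lambda-fraction of outcomes (equivalently, (1/lambda) times the integral of the Value at Risk over levels up to lambda). The sample losses below are hypothetical:

```python
# Illustrative sketch: AVaR (expected shortfall) of an empirical loss
# distribution with equal scenario weights. Sample figures are hypothetical.
def average_value_at_risk(losses, lam):
    """AVaR at level lam in (0, 1]: mean of the worst lam-tail of losses.

    The tail mass lam * n may split a scenario, in which case that
    scenario enters with fractional weight.
    """
    n = len(losses)
    ordered = sorted(losses, reverse=True)   # worst outcomes first
    tail = lam * n                           # tail mass in "number of scenarios"
    full = int(tail)                         # scenarios with full weight
    acc = sum(ordered[:full])
    if full < n:
        acc += (tail - full) * ordered[full]  # fractional scenario, if any
    return acc / tail

losses = [10.0, 2.0, 4.0, 8.0, 6.0, 0.0, 1.0, 3.0, 5.0, 7.0]
print(average_value_at_risk(losses, 0.2))   # mean of {10, 8} = 9.0
print(average_value_at_risk(losses, 1.0))   # plain expected loss = 4.6
```

AVaR is a standard example of a convex (indeed coherent) risk measure, which is why families of conditional AVaR functionals serve as the running example in the representation results above.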
