701 |
Student Satisfaction Surveys and Nonresponse: Ignorable Survey, Ignorable Nonresponse
Boyer, Luc January 2009 (has links)
With an increasing reliance on exit satisfaction surveys to measure how university alumni appraise their degree-program experiences, it is uncertain whether satisfaction is sufficiently salient for some alumni to generate distinguishable satisfaction scores between respondents and nonrespondents.
This thesis explores whether, to what extent, and why nonresponse to student satisfaction surveys makes any difference to our understanding of student university experiences. A modified version of Michalos’ multiple discrepancies theory was utilized as the conceptual framework to ascertain which aspects of the student experience are likely to be nonignorable, and which are likely to be ignorable. In recognition of the hierarchical structure of educational organizations, the thesis explores the impact of alumnus and departmental characteristics on nonresponse error. The impact of survey protocols on nonresponse error is also explored.
Nonignorable nonresponse was investigated using a multi-method approach. Quantitative analyses were based on a combined dataset gathered by the Graduate Student Exit Survey, conducted at each convocation over a period of three years. These data were compared against basic enrolment variables, departmental characteristics, and the public version of Statistics Canada's National Graduate Survey. Analyses were conducted to ascertain whether nonresponse is nonignorable at the descriptive and analytical levels (form-resistant hypothesis). Qualitative analyses were based on nine cognitive interviews with recent and soon-to-be alumni.
Results were severely weakened by external and internal validity issues, and are therefore indicative rather than conclusive. The findings suggest that nonrespondents differ from respondents, that satisfaction intensity is weakly related to response rate, and that the ensuing nonresponse error in the marginals can be classified, albeit not fully, as missing at random. The form-resistant hypothesis is unaffected by variations in response rates. Cognitive interviews confirmed the presence of measurement errors, which further weakens the case for nonignorability. An inadvertent methodological alignment of response-pool homogeneity, a misspecified conceptual model, measurement error (dilution), and a non-salient, bureaucratically inspired survey topic are proposed as the likely reasons for the findings of ignorability. Methodological and organizational implications of the results are also discussed.
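As a hedged illustration of how nonignorability can be probed when a frame variable is available for respondents and nonrespondents alike (all variable names and numbers below are invented, not from the Graduate Student Exit Survey):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Hypothetical frame variable (say, a final average) known for every graduate
# on the sampling frame, whether or not they answered the exit survey.
grade = rng.normal(3.2, 0.4, size=1000)
# Simulate nonignorable nonresponse: response propensity rises with the variable.
p_respond = 1.0 / (1.0 + np.exp(-2.0 * (grade - 3.2)))
responded = rng.random(1000) < p_respond

# A significant respondent/nonrespondent gap on a frame variable is evidence
# against treating the nonresponse as ignorable for related survey outcomes.
t_stat, p_val = ttest_ind(grade[responded], grade[~responded])
print(f"respondents {grade[responded].mean():.2f} vs "
      f"nonrespondents {grade[~responded].mean():.2f} (p = {p_val:.2g})")
```

This only tests variables observed for everyone; it cannot by itself rule out nonignorability in the satisfaction scores, which is why the thesis also compares against external benchmarks.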
|
702 |
On single-crystal solid-state NMR based quantum information processing
Moussa, Osama January 2010 (has links)
Quantum information processing devices promise to solve some problems more efficiently than their classical counterparts. The source of the speedup is the structure of quantum theory itself; in that sense, the power of such devices resides in the physical units that are their building blocks. The quest, then, is to find or manufacture a system that behaves according to quantum theory, yet is controllable in such a way that the desired algorithms can be implemented. Candidate systems are benchmarked against general criteria to evaluate their success. In this thesis, I advance a particular system and present the progress made towards each of these criteria. The system is a three-qubit 13C solid-state nuclear magnetic resonance (NMR) based quantum processor. I report results concerning system characterization and control, pseudopure state preparation, and quantum error correction. I also report on using the system to test a central question in the foundations of quantum mechanics.
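As a toy illustration of the kind of quantum error correction reported here, the standard textbook three-qubit bit-flip code can be simulated with state vectors (this is generic material, not the thesis's pulse-level NMR implementation):

```python
import numpy as np

def x_on(state, q):
    """Apply Pauli X to qubit q (0 = leftmost) of a 3-qubit state vector."""
    out = np.empty_like(state)
    for i in range(8):
        out[i ^ (1 << (2 - q))] = state[i]
    return out

# Encode a logical qubit a|0> + b|1> into the bit-flip code a|000> + b|111>
a, b = 0.6, 0.8
logical = np.zeros(8)
logical[0b000], logical[0b111] = a, b

corrupted = x_on(logical, 1)            # bit-flip error on the middle qubit

# Syndrome = parities (Z0Z1, Z1Z2); every nonzero amplitude of a code state
# hit by a single X error yields the same syndrome, so one index suffices.
i = int(np.flatnonzero(corrupted)[0])
s01 = ((i >> 2) ^ (i >> 1)) & 1
s12 = ((i >> 1) ^ i) & 1
flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]
recovered = x_on(corrupted, flip) if flip is not None else corrupted
assert np.allclose(recovered, logical)   # error located and undone
```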
|
703 |
'Oops! I can't believe I did that!' Inducing Errors in a Routine Action Sequence
Clark, Amanda January 2010 (has links)
‘What was I thinking?!?’ No matter our age, intelligence, or social status, we all experience moments like these. Perhaps it is walking into a room and forgetting what you went there to do, or failing to add sugar to your coffee because of an interruption. Even though many of our daily activities are accomplished through routines that require very little conscious effort, errors of attention, or slips of action, still occur. This collection of studies was designed with three main questions in mind: 1) can action slips be induced in a laboratory-based task (Slip Induction Task; SIT), 2) how well do currently established theories of action slips explain the errors that are induced within the SIT, and 3) what insight can be gained about preventing such errors?
The first experiment was developed to replicate previous findings regarding the effectiveness of the SIT, as well as to determine the extent to which SIT performance correlates with other measures of attention failure. The study discussed in Chapter 3 expands on those results by investigating the effects of healthy aging on slip induction and finds that while older adults were better able to avoid action slips, they appear to sacrifice speed for accurate performance. The goal of the subsequent study was to determine whether young adult participants would also enjoy increased accuracy if they completed the task at a slower pace. Finally, the study discussed in Chapter 5 looks at whether changing the goal of the SIT would alter participants’ ability to inhibit unexpected cue information.
|
704 |
Error correction model estimation of the Canada-US real exchange rate
Ye, Dongmei 18 January 2008 (has links)
Using the error correction model, we link the long-run behavior of the Canada-US real exchange rate to its short-run dynamics. The equilibrium real exchange rate is determined by energy and non-energy commodity prices over the period 1973Q1-1992Q1. However, this single long-run relationship does not hold when the sample period is extended to 2004Q4. The breakdown can be explained by a structural break, which we locate at 1993Q3. At the break point, the effect of energy price shocks on Canada's real exchange rate turns from negative to positive, while the effect of non-energy commodity price shocks is consistently positive. We find that after one year 40.03% of the gap between the actual and equilibrium real exchange rate is closed. The Canada-US interest rate differential affects the real exchange rate temporarily: Canada's real exchange rate depreciates immediately after a decrease in Canada's interest rate and appreciates the next quarter, though not by as much as it depreciated.
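A minimal sketch of two-step (Engle-Granger style) error-correction estimation on synthetic data may clarify the mechanics; the series, parameters, and adjustment speed below are invented, not the thesis's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
T, alpha, beta = 4000, -0.20, 1.5   # true adjustment speed and long-run slope

# Synthetic cointegrated pair standing in for the real exchange rate (y)
# and a commodity-price index (x).
x = np.cumsum(rng.normal(0.0, 1.0, T))
y = np.empty(T)
y[0] = beta * x[0]
for t in range(1, T):
    y[t] = y[t-1] + alpha * (y[t-1] - beta * x[t-1]) + rng.normal(0.0, 0.05)

# Step 1: static cointegrating regression y_t = c + beta * x_t
A = np.column_stack([np.ones(T), x])
c_hat, beta_hat = np.linalg.lstsq(A, y, rcond=None)[0]
ect = y - c_hat - beta_hat * x          # error-correction term (the "gap")

# Step 2: regress dy_t on ect_{t-1}; the slope estimates how fast the gap closes
dy = np.diff(y)
B = np.column_stack([np.ones(T - 1), ect[:-1]])
_, alpha_hat = np.linalg.lstsq(B, dy, rcond=None)[0]
print(f"long-run slope {beta_hat:.3f}, adjustment per period {alpha_hat:.3f}")
```

A negative, significant adjustment coefficient is what licenses statements like "40.03% of the gap is closed after one year" in the abstract above.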
|
705 |
Hardware implementation of Daubechies wavelet transforms using folded AIQ mapping
Islam, Md Ashraful 22 September 2010 (has links)
The Discrete Wavelet Transform (DWT) is a popular tool in image and video compression applications. Because of its multi-resolution representation capability, the DWT has been used effectively in applications such as transient signal analysis, computer vision, texture analysis, cell detection, and image compression. Daubechies wavelets are among the most popular transforms in the wavelet family. Daubechies filters provide excellent spatial and spectral locality properties, which make them useful in image compression.
In this thesis, we present an efficient implementation of a shared hardware core to compute two 8-point Daubechies wavelet transforms. The architecture is based on a new two-level folded mapping technique, an improved version of Algebraic Integer Quantization (AIQ). The scheme builds on a factorization and decomposition of the transform coefficients that exploits the symmetric and wrapping structure of the coefficient matrices. The proposed architecture is parallel, pipelined, and multiplexed. Compared to existing designs, it significantly reduces hardware cost, critical-path delay, and power consumption while achieving a higher throughput rate.
We then briefly present a new mapping scheme for error-free computation of the 8-tap Daubechies wavelet transform, the successor of the 6-tap transform in the Daubechies wavelet family. The multidimensional technique maps the irrational basis coefficients to integers, resulting in a considerable reduction in hardware and power consumption and a significant improvement in image reconstruction quality.
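As an aside not drawn from the thesis itself, the filter-bank computation that such hardware accelerates can be sketched in floating point with NumPy. For brevity this uses the 4-tap Daubechies filter (the thesis targets the 6- and 8-tap members, which share the same structure); the circular wrap-around mirrors the "wrapping structure" of the coefficient matrices mentioned above:

```python
import numpy as np

s3, norm = np.sqrt(3.0), 4 * np.sqrt(2.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / norm   # Daubechies-4 lowpass
g = np.array([h[3], -h[2], h[1], -h[0]])                # matching highpass

def dwt_matrix(n):
    """Orthogonal one-level Daubechies-4 transform on a length-n signal
    (n even), with circular wrap-around of the 4-tap filters."""
    W = np.zeros((n, n))
    for k in range(n // 2):
        for j in range(4):
            W[k, (2 * k + j) % n] += h[j]            # approximation rows
            W[n // 2 + k, (2 * k + j) % n] += g[j]   # detail rows
    return W

x = np.sin(np.linspace(0, 4 * np.pi, 32)) \
    + 0.1 * np.random.default_rng(2).normal(size=32)
W = dwt_matrix(32)
coeffs = W @ x
assert np.allclose(W @ W.T, np.eye(32))   # orthogonal: exact inverse is W.T
assert np.allclose(W.T @ coeffs, x)       # perfect reconstruction
```

In floating point the irrational coefficients are rounded; the AIQ approach above instead represents them exactly over an algebraic-integer basis, deferring any rounding to a final reconstruction step.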
|
706 |
Die Bedeutung des Thalamus für das menschliche Handlungsüberwachungssystem im fronto-striato-thalamo-corticalen Netzwerk
Seifert, Sebastian 26 November 2012 (has links) (PDF)
A functioning performance-monitoring system is an essential prerequisite for goal-directed human behavior: it allows errors to be registered and processed so that behavior can subsequently be better adapted to the situation at hand. An important neuroanatomical correlate of this performance-monitoring system is the anterior part of the midcingulate cortex (anterior midcingulate cortex, aMCC), which in its error-processing function is closely linked to the basal ganglia and the lateral prefrontal cortex. The present work examined the role of the thalamus in the error-processing network in more detail. Using diffusion-weighted tractography in 16 healthy participants, it was shown that the ventral anterior nucleus (VA) and the ventral lateral anterior nucleus (VLa) in particular have quantitatively stronger fiber connections to the aMCC than the remaining thalamic nuclei. Furthermore, 15 patients with thalamic lesions showed error-specific behavioral differences in the Eriksen flanker task relative to a healthy control group. Although the error rates of patients and controls were nearly identical, the patients detected their errors significantly less reliably and were consequently worse at adjusting their behavior after an error. The EEG data showed a significantly reduced amplitude of the error-related negativity (ERN, an event-related brain potential elicited by erroneous actions, e.g., in flanker tasks) in the patient group compared with the control group. In the 6 patients with lesions of the VA/VLa nuclear group the ERN was almost completely abolished, whereas in the 9 patients whose lesions spared VA and VLa the ERN was merely reduced. / Performance monitoring is an essential prerequisite of successful goal-directed behavior.
Research of the last two decades implicates the anterior midcingulate cortex (aMCC) in the human medial frontal cortex, together with frontostriatal basal ganglia circuits, in this function. Here, we addressed the role of the thalamus in detecting errors and adjusting behavior accordingly. Using diffusion-based tractography we found that, among the thalamic nuclei, the ventral anterior and ventral lateral anterior nuclei (VA, VLa) have the strongest relative connectivity with the aMCC. Patients with focal thalamic lesions showed diminished error-related negativity, behavioral error detection, and post-error adjustments. When the lesions specifically affected the thalamic VA/VLa nuclei, these effects were significantly more pronounced, reflected in a complete absence of the error-related negativity. These results reveal that the thalamus, particularly its VA/VLa region, is a necessary constituent of the performance-monitoring network, anatomically well connected to and functionally closely interacting with the aMCC.
|
707 |
Spatially Adaptive Augmented Reality
Coelho, Enylton Machado 28 November 2005 (has links)
One of the most important problems in real-time, mobile augmented reality is *registration error* -- the misalignment between the computer generated graphics and the physical world the application is trying to augment. Such misalignment may either cause the information presented by the application to be misleading to the user or make the augmentation meaningless.
In this work, we question the implied assumption that registration error must be eliminated for AR to be useful. Instead, we take the position that registration error will never be eliminated and that application developers can build useful AR applications if they have an estimate of registration error. We present a novel approach to AR application design: *Spatially Adaptive Augmented Reality* (i.e., applications that change their displays based on the quality of the alignment between the physical and virtual world). The computations used to change the display are based on real-time estimates of the registration error. The application developer uses these estimates to build applications that function under a variety of conditions independent of specific tracking technologies.
In addition to introducing Spatially Adaptive AR, this research establishes a theoretical model for AR. These theoretical contributions are manifested in a toolkit that supports the design of Spatially Adaptive AR applications: OSGAR.
This work describes OSGAR in detail and presents examples that demonstrate how to use this novel approach to create adaptable augmentations as well as how to support user interaction in the presence of uncertainty.
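Purely as an illustrative sketch (the function name and pixel thresholds below are invented, not taken from OSGAR), the core idea of a spatially adaptive display reduces to choosing an augmentation style from a real-time registration-error estimate:

```python
# Hypothetical level-of-detail policy: degrade the augmentation gracefully
# as the estimated on-screen registration error grows, instead of assuming
# the error is zero.

def choose_augmentation(error_px: float) -> str:
    """Map an estimated registration error (in pixels) to a display style."""
    if error_px < 5:
        return "wireframe-overlay"     # tight alignment: draw the exact outline
    if error_px < 20:
        return "expanded-highlight"    # inflate the highlight by the error bound
    if error_px < 60:
        return "leader-line-label"     # point at the object instead of outlining it
    return "screen-space-text"         # too poor for any spatial augmentation

print(choose_augmentation(3.2))   # -> wireframe-overlay
```

The point of the dissertation's error model is to supply `error_px` continuously and per object, so a policy like this works regardless of which tracker is attached.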
|
708 |
Management of Reference Frames in Simulation and its Applications
Kalaver, Satchidanand Anil 04 April 2006 (links)
The choice of reference frames used in simulations is typically fixed in dynamic models based on modeling decisions made early during their development, restricting model fidelity, numerical accuracy and integration into large-scale simulations. Individual simulation components typically need to model the transformations between multiple reference frames in order to interact with other components, resulting in additional development effort, time and cost.
This dissertation describes methods for defining and managing different reference frames in a simulation, creating a shared simulation environment that provides reference frame transformations, comprising kinematics and rotations, to all simulation components through a Reference Frame Manager. Simulation components can use this Reference Frame Manager to handle all kinematics and rotations when interacting with components that use different reference frames, improving the interoperability of simulation components, especially in parallel and distributed simulation, while reducing their development time, effort, and cost. The Reference Frame Manager also facilitates the development of Generic Dynamic Models that encapsulate the core services of a dynamic model, enabling the rapid development of dynamic models that can be reused and reconfigured for different simulation scenarios and requirements. Finally, the Reference Frame Manager can be used to introduce Intermediate Frames that bound the magnitudes of vehicle states, reducing roundoff error and improving numerical accuracy.
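A minimal sketch of what such a manager might look like (the API below is invented for illustration; the dissertation's design is far richer): frames form a tree, each storing the rigid transform into its parent, and transformations between any two frames are composed through the common root.

```python
import numpy as np

class ReferenceFrameManager:
    """Toy frame tree: each frame stores (parent, R, t) mapping its
    coordinates into the parent's; transforms compose via the root."""

    def __init__(self):
        self.frames = {"root": (None, np.eye(3), np.zeros(3))}

    def add_frame(self, name, parent, R, t):
        self.frames[name] = (parent, np.asarray(R, float), np.asarray(t, float))

    def _to_root(self, name):
        R, t = np.eye(3), np.zeros(3)   # accumulated frame -> root transform
        while name is not None:
            parent, Rf, tf = self.frames[name]
            R, t = Rf @ R, Rf @ t + tf
            name = parent
        return R, t

    def transform_point(self, p, src, dst):
        Rs, ts = self._to_root(src)
        Rd, td = self._to_root(dst)
        return Rd.T @ (Rs @ np.asarray(p, float) + ts - td)

mgr = ReferenceFrameManager()
Rz90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
mgr.add_frame("vehicle", "root", Rz90, [10., 0., 0.])
mgr.add_frame("sensor", "vehicle", np.eye(3), [1., 0., 0.])
# The sensor origin, seen from the root frame:
assert np.allclose(mgr.transform_point([0, 0, 0], "sensor", "root"), [10., 1., 0.])
```

Components then ask the manager for transforms instead of hard-coding frame conversions, which is the interoperability point made above.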
|
709 |
Characterization of a vertical two axis lathe
Leclerc, Michael Edward 14 April 2005 (has links)
The primary barrier to the production of better machined parts is machine tool error. Present-day applications require ever closer part tolerances. The errors in dimensional part accuracy derive from the machine, in this case a vertical two axis CNC lathe. A two axis vertical lathe can be used to produce a variety of parts ranging from cylindrical to spherical features. A vertical lathe requires a spindle to rotate the work at speeds reaching 3000 rpm, while simultaneously requiring the machine tool to be positioned so as to remove material and produce an accurate part. For this to be possible, the machine tool must be precisely controlled in order to produce the correct contours on the part. There are many sources of error to be considered in the two axis vertical lathe. Each axis of importance contains six degrees of freedom. The machine has linear displacement, angular, spindle thermal drift, straightness, parallelism, orthogonality, machine tool offset, and roundness errors. These error components must be measured in order to determine the resultant error.
The characterization of the machine addresses thermal behavior and geometric errors. This thesis presents an approach for determining the machine tool errors and using them to bring the actual tool path closer to the nominal tool path via compensation schemes. One scheme uses a laser interferometer in conjunction with a homogeneous transformation matrix to construct the compensated path for a circular arc, facing, and turning. The other uses a ball bar system to directly construct the compensated tool path for a circular arc. Test parts were created to verify the improvement in part accuracy using the compensated tool paths.
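The homogeneous-transformation-matrix compensation idea can be sketched as follows, assuming small-angle errors; the error values and frame conventions are illustrative only, not the thesis's measurements:

```python
import numpy as np

def htm(dx=0., dy=0., dz=0., eps_x=0., eps_y=0., eps_z=0.):
    """Small-angle homogeneous transform for measured geometric errors:
    linear offsets (dx, dy, dz) in m and angular errors in rad."""
    T = np.eye(4)
    T[:3, :3] = [[1.0, -eps_z, eps_y],
                 [eps_z, 1.0, -eps_x],
                 [-eps_y, eps_x, 1.0]]   # first-order rotation matrix
    T[:3, 3] = [dx, dy, dz]
    return T

# Hypothetical interferometer results: 12 um linear offset, 20 urad squareness
E = htm(dx=12e-6, eps_z=20e-6)
nominal = np.array([0.050, 0.0, 0.120, 1.0])   # nominal tool point [m], homogeneous

# Command the *compensated* point so that the erroneous machine lands on nominal:
compensated = np.linalg.solve(E, nominal)
actual = E @ compensated
assert np.allclose(actual, nominal)   # compensation cancels the modeled error
```

Chaining one such matrix per axis and error source gives the full compensated tool path used for the circular-arc, facing, and turning tests.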
|
710 |
A study of the different understanding of the equal sign and error types of quadratic equation of one variable
Liu, Pei-chi 23 June 2010 (has links)
The main purpose of this study is to investigate eighth-grade students' understanding of the equal sign and to analyze error types in quadratic equations in one variable. To achieve this purpose, the investigator conducted a survey and developed instruments. Participants were 215 eighth-grade students who formed a convenience sample. There are three main results. First, participants with a relational definition of the equal sign accounted for about 80% of the sample. Second, the performance of students with relational definitions was higher than that of students with operational definitions. Third, students' understanding of the equal sign was related to their problem-solving performance on quadratic equations in one variable. In this study, participants with an operational definition of the equal sign tended to guess randomly or leave a blank, while the problem-solving of participants with a relational definition involved multiple strategies. The researcher suggests that teachers should strengthen students' understanding of the equal sign and connect it to students' prior algebraic knowledge.
|