31 |
Nurses' Experiences with the Disclosure of Errors to Patients. Greene, Debbie, 28 September 2009.
The 1999 Institute of Medicine report, To Err is Human, raised awareness about the multitude of errors that occur in healthcare. Frequently, errors are not disclosed to patients or their families. While several studies have examined patient and physician perspectives on disclosure, limited research on nurse perspectives exists. In hospitals, nurses are often the last line of defense before errors reach the patient. Because nurses are often present when errors occur, their experiences with disclosure are integral to understanding the issues that surround the disclosure of errors. The purpose of this study was to gain an understanding of nurses' experiences with both disclosure and non-disclosure of errors to patients. An interpretive approach guided the study, combined with a feminist perspective to illuminate issues of power and gender. Registered nurses (n=17) employed in hospitals and caring for adult medical/surgical patients participated in semi-structured interviews. After the audio-recorded interviews were transcribed, participants reviewed the transcripts for accuracy. Analysis consisted of an eight-step process that included a research team and peer debriefing. Three major themes and six sub-themes were identified. The major themes were: (a) disclosing errors, (b) perceiving expectations for disclosure, and (c) not disclosing errors. Some nurses kept the patient constantly informed, so no disclosure decision was necessary when errors occurred; many of these nurses felt that full disclosure was the right thing to do. Other nurses based disclosure decisions on their perceptions of the culture or policies of the work environment. Disclosing events, but not errors, was a method some nurses used to disclose vaguely, while others overtly concealed errors. Some nurses felt that disclosure was a professional responsibility, while others felt that nurses should align themselves with institutional expectations. Still others indicated that disclosure should be determined case by case, depending on the context. This study contributes to nursing science by illuminating nurses' experiences with disclosure, describing nurses' ways of being truthful when errors occur, and examining the contextual factors that surround nurses' practices of disclosure. Recommendations for nursing practice, education, and research were identified.
|
32 |
Cognitive analysis of students' errors and misconceptions in variables, equations, and functions. Li, Xiaobao, 15 May 2009.
The fundamental goal of this study is to explore why so many students have difficulty learning mathematics. To that end, the study focuses on why so many students keep making the same errors over long periods of time. Three basic algebra concepts (variable, equation, and function) are used to analyze students' errors, the possible buggy algorithms behind them, and the conceptual basis of these errors: misconceptions. Through research on these three concepts, the case studies are expected to illustrate more general principles of understanding and the corresponding learning difficulties.
Although students' errors varied widely, certain types of errors related to students' misconceptions occurred frequently and repeatedly even after a year of additional instruction. It is therefore possible to identify students' misconceptions by working from their systematic errors. The causes of students' robust misconceptions were explored by comparing high-achieving and low-achieving students' understanding of the three concepts at the object (structural) and process (operational) levels. High-achieving students were found to prefer object (structural) thinking when solving problems, even when the problems could be solved through both algebraic and arithmetic approaches. The relationship between students' misconceptions and object-process thinking was also found to explain why some misconceptions were particularly difficult to change: students' understanding of a concept at either of two stages (process and object) interacted with either of two aspects (correct conception and misconception), and when students understood a concept as a process with a misconception, that misconception was particularly hard to change.
Other concerns were also discussed, such as rethinking the "equal sign" misconception, the influence of prior experience on students' learning, misconceptions and the recycling curriculum, and developing teachers' initial subject knowledge. The findings demonstrated that the fundamental reason for the "equal sign" misconception was understanding either side of an equation as a process rather than as an object. Given the robustness of the misconceptions described in this study, the use of a recycling curriculum may have a negative effect on students' understanding of mathematics.
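A worked instance of the equal-sign misconception described above (a standard illustration from the mathematics education literature, not an item taken from this study):

```latex
% Task: fill in the blank so that the equation holds.
3 + 4 = \underline{\;\;\;} + 2
% Object (structural) reading: both sides denote the same number,
% so the blank must be 7 - 2 = 5.
% Process (operational) misreading: "=" signals "compute the left side",
% so students write 7, or keep adding and write 9 (from 3 + 4 + 2).
```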
|
33 |
A Study of Problem-Solving Strategies and Errors in Inequalities for Junior High School Students. Chen, Ying-kuei, 09 June 2007.
The aim of this study is to investigate students' learning of inequalities with one unknown, and to collect the corresponding strategies and errors in problem solving. The subjects were ninth-grade students from junior high schools: six classes were selected from three schools, for a total of 204 students.
The investigator used a paper-and-pencil test in the first round of data collection. In the second round, some students were interviewed to further understand their ways of thinking and the reasons for the errors produced in their problem-solving procedures. The results are intended to serve as a reference for junior high school mathematics teachers in planning future teaching and preparing teaching materials.
The study produced three main results: students solved linear inequalities using 12 different strategies; students' errors can be divided into 11 types; and the errors stem mainly from understanding and transforming the information given in problems and from determining solutions. Students also found it difficult to understand relationships involving negative fractions and negative decimals, in word problems and calculation problems alike.
In this study, those who failed to solve problems involving inequalities with one unknown were those who could not translate algebraic expressions or keywords. Their errors fell into five typical cases: determining objectives, integrating mathematical knowledge, choosing a problem-solving method, carrying out calculations, and determining solutions.
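As a concrete illustration of a calculation-stage error involving negative numbers (my example for exposition; the abstract does not enumerate the 11 error types):

```latex
% Correct: dividing both sides of an inequality by a negative number
% reverses its direction.
-2x < 6 \;\Longrightarrow\; x > -3
% Typical error: keeping the direction and concluding x < -3.
% The conclusion is wrong: x = -4 satisfies x < -3, yet -2(-4) = 8 is not < 6.
```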
|
34 |
Why do we mispell the middle of words? Exploring the role of orthographic texture in the serial position effect. Jones, Angela C., January 2009.
Thesis (Ph.D.), Kent State University, 2009. Title from PDF title page (viewed Jan. 26, 2010). Advisor: Jocelyn R. Folk. Keywords: spelling; orthography; serial position. Includes bibliographical references (p. 53-60).
|
35 |
Data mining medication administration incident data to identify opportunities for improving patient safety. Gray, Michael David; Thomas, Robert Evans, January 2009.
Dissertation (Ph.D.), Auburn University, 2009. Abstract. Vita. Includes bibliographic references.
|
36 |
Communication over channels with symbol synchronization errors. Mercier, Hugues, 05 1900.
Synchronization is a problem of fundamental importance for a wide range of practical communication systems, including reading media, multi-user optical channels, synchronous digital communication systems, packet-switched communication networks, and distributed computing systems. In this thesis I study various aspects of communication over channels with symbol synchronization errors.
Symbol synchronization errors are harder to model than erasures or substitution errors caused by additive noise because they introduce uncertainties in timing. Consequently, determining the capacity of channels subject to synchronization errors is a very challenging problem, even for the simplest channels, in which only deletion errors occur. I improve on the best existing lower and upper bounds for the capacity of the deletion channel using convex and stochastic optimization techniques. I also show that even finding closed-form expressions for the number of subsequences obtained by deleting symbols from a string is computationally prohibitive.
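To make the combinatorial difficulty concrete, the sketch below (my illustration, not code from the thesis) counts the distinct strings a deletion channel can output; the count depends on the run structure of the input, not just its length, which is why simple closed-form expressions are elusive:

```python
def count_distinct_after_deletions(s: str, k: int) -> int:
    """Count the distinct subsequences of s with length len(s) - k,
    i.e. the distinct outputs of a channel deleting exactly k symbols."""
    n, m = len(s), len(s) - k
    # dp[i][j] = number of distinct length-j subsequences of the prefix s[:i]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = 1                      # the empty subsequence
    last = {}                             # previous position of each symbol
    for i in range(1, n + 1):
        c = s[i - 1]
        for j in range(1, m + 1):
            # extend every subsequence of s[:i-1], with or without c ...
            dp[i][j] = dp[i - 1][j] + dp[i - 1][j - 1]
            # ... minus those already counted at c's previous occurrence
            if c in last:
                dp[i][j] -= dp[last[c] - 1][j - 1]
        last[c] = i
    return dp[n][m]

# Same length, same number of deletions, very different output counts:
print(count_distinct_after_deletions("0000000000", 3))  # 1 (a single run)
print(count_distinct_after_deletions("0100110101", 3))  # many more
```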
Constructing efficient synchronization error-correcting codes is also a challenging task. The main result of the thesis is the design of a new family of codes able to correct several types of synchronization errors. The codes use trellises and modified versions of the Viterbi decoding algorithm, and therefore have very low encoding and decoding complexities. They also achieve high data rates and work on reasonably noisy channels, which makes them among the first synchronization error-correcting codes with any chance of being used in practical systems.
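The trellis codes themselves are not reproduced here, but a minimal sketch of the flavor of synchronization error correction is the classic Varshamov-Tenengolts (VT) construction, which corrects a single deletion in linear time (my illustration under the standard definitions, not the thesis's construction):

```python
def vt_syndrome(bits, modulus):
    """VT checksum: sum of i * b_i over 1-indexed positions, mod modulus."""
    return sum(i * b for i, b in enumerate(bits, start=1)) % modulus

def vt_decode(received, n, a=0):
    """Recover the length-n codeword of VT_a(n) from a word with exactly
    one bit deleted (Levenshtein's decoding algorithm)."""
    assert len(received) == n - 1
    mod = n + 1
    w = sum(received)                           # weight of the received word
    d = (a - vt_syndrome(received, mod)) % mod  # checksum deficiency
    if d <= w:
        # A 0 was deleted: reinsert it with exactly d ones to its right.
        pos, ones = len(received), 0
        while ones < d:
            pos -= 1
            ones += received[pos]
        return received[:pos] + [0] + received[pos:]
    # A 1 was deleted: reinsert it with exactly d - w - 1 zeros to its left.
    pos, zeros = 0, 0
    while zeros < d - w - 1:
        zeros += 1 - received[pos]
        pos += 1
    return received[:pos] + [1] + received[pos:]

# [1, 0, 0, 1] is a VT_0(4) codeword since 1*1 + 4*1 = 5 = 0 (mod 5);
# deleting any one bit still decodes back to it.
assert vt_decode([0, 0, 1], n=4) == [1, 0, 0, 1]
assert vt_decode([1, 0, 1], n=4) == [1, 0, 0, 1]
```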
In the last part of the thesis, I show that a synchronization approach can solve the opportunistic spectrum access problem in cognitive radio, where cognitive users want to communicate in the presence of legacy users to whom the bandwidth has been licensed. I also consider the amount of communication required to solve a large class of distributed problems in which synchronization errors can occur. More precisely, I study how allowing the parties to solve the problems incorrectly with small probability can reduce the total amount of communication or the number of messages that need to be exchanged.
|
38 |
The structure of the background errors in a global wave model. Greenslade, Diana J. M., January 2004.
Title page, table of contents and abstract only. The complete thesis in print form is available from the University of Adelaide Library.

One of the main limitations to current wave data assimilation systems is the lack of an accurate representation of the structure of the background errors. For example, the current operational wave data assimilation system at the Australian Bureau of Meteorology (BoM) prescribes globally uniform background error correlations of Gaussian shape with a length scale of 300 km, and the error variance of both the background and observation errors is defined to be 0.25 m². This thesis describes an investigation into the determination of the background errors in a global wave model. Two methods are commonly used to determine background errors: the observational method and the 'NMC method'. The observational method is the main tool used in this thesis, although the 'NMC method' is considered as well. The observational method examines correlations of the differences between observations and the background, in this case the modelled Significant Wave Height (SWH) field. The observations used are satellite altimeter estimates of SWH.

Before applying the method, the effect of the irregular satellite sampling pattern is examined. This is achieved by constructing a set of anomaly correlations from modelled wave fields. The modelled wave fields are then sampled at the locations of the altimeter observations, and the anomaly correlations are recalculated from the simulated altimeter data. The results are compared to the original anomaly correlations. It is found that, in general, the altimeter sampling pattern underpredicts the spatial scale of the anomaly correlation.

Observations of SWH from the European Remote Sensing Satellite 2 (ERS-2) altimeter are used in this thesis. To ensure that the observations are of the highest possible quality, a validation of the ERS-2 SWH observations is performed. The altimeter data are compared to waverider buoy observations over a period of approximately 4.5 years. With a set of 2823 co-located SWH estimates, it is found that, in general, the altimeter overestimates low SWH and underestimates high SWH. A two-branched linear correction to the altimeter data is found, which reduces the overall rms error in SWH to approximately 0.2 m.

Results from the previous sections are then used to calculate the background error correlations. Specifically, correlations of the differences between modelled SWH and the bias-corrected ERS-2 data are calculated. The irregular sampling pattern of the altimeter is accounted for by adjusting the correlation length scales according to latitude and the calculated length scale. The results show that the length scale of the background errors varies significantly over the globe, with the largest scales at low latitudes and the shortest scales at high latitudes. Very little seasonal or year-to-year variability is detected. Conversely, the magnitude of the background error variance is found to have considerable seasonal and year-to-year variability. By separating the altimeter ground tracks into ascending and descending tracks, it is possible to examine, to a limited extent, whether any anisotropy exists in the background errors. Among the areas that exhibit the most anisotropy are the Great Australian Bight and the North Atlantic Ocean.

The background error correlations are also briefly examined via the 'NMC method', i.e., by considering differences between SWH forecasts of different ranges valid at the same time. It is found that the global distribution of the length scale of the error correlation is similar to that found using the observational method. It is also shown that the correlation length scale increases as the forecast period increases. The new background error structure is incorporated into a data assimilation system and evaluated over two month-long periods. Compared to the current operational system at the BoM, the new structure improves the skill of the wave model by approximately 10%, with considerable geographical variability in the amount of improvement.

Thesis (Ph.D.), University of Adelaide, School of Mathematical Sciences, 2004.
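For reference, a minimal sketch of the globally uniform baseline described above: the BoM system's Gaussian correlation with a 300 km length scale and 0.25 m² error variance. The haversine distance and the convention corr(r) = exp(-0.5 (r/L)^2) are my assumptions, since the abstract does not fix them:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in km."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi, dlam = p2 - p1, np.radians(lon2 - lon1)
    h = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(h))

def background_error_cov(lat1, lon1, lat2, lon2,
                         length_scale_km=300.0, variance=0.25):
    """Background-error covariance (m^2) between two grid points under a
    globally uniform Gaussian correlation model (assumed convention)."""
    r = great_circle_km(lat1, lon1, lat2, lon2)
    return variance * np.exp(-0.5 * (r / length_scale_km) ** 2)

# Two points roughly 300 km apart: covariance falls to about exp(-0.5)
# of the 0.25 m^2 variance.
print(background_error_cov(-35.0, 135.0, -35.0, 138.3))
```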
|
40 |
Acoustic signals as visual biofeedback in the speech training of hearing impaired children. Crawford, Elizabeth, January 2007.
This study investigated the effectiveness of using acoustic measures as an objective tool for monitoring speech errors and providing visual feedback to enhance the speech training and aural rehabilitation of children with hearing impairment. The first part of the study comprised a comprehensive description of the acoustic characteristics related to the speech deficits of a hearing-impaired child. Results of a series of t-tests performed on the experimental measures showed that vowel length and the loci of formant frequencies were most relevant in differentiating between correctly and incorrectly produced vowels, while voice onset time, along with measures of Moment 1 (mean) and Moment 3 (skewness) obtained from speech moment analysis, was related to consonant accuracy. These findings, especially the abnormal sound frequency distribution shown in the child's consonant production, suggest a link between perceptual deficits and speech production errors and provide clues to the type of compensatory feedback needed for aural rehabilitation. The second part of the study used a multiple-baseline design across behaviours, replicated across three hearing-impaired children, to assess the efficacy of treatment with acoustic signals as visual feedback. Participants' speech articulations following traditional speech training were compared with those following training that used spectrographic and RMS displays as visual feedback ('visual treatment'); traditional non-visual treatment was followed by visual treatment on one or two targets in a time-staggered fashion. Although perceptual assessment found no statistically significant difference between the two training approaches on the experimental measures, some objective acoustic measures revealed subtle changes toward normal speech patterns with visual treatment compared to the traditional approach. Further acoustic-perceptual studies with a larger sample size and a longer experimental period are needed to better understand the general and long-term effectiveness of visual treatment.
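For context on the measures above, speech moment analysis treats a frame's power spectrum as a probability distribution and summarizes it by its statistical moments. The sketch below shows the standard computation of Moment 1 (spectral mean) and Moment 3 (skewness); it is my illustration of the general technique, not the study's exact procedure:

```python
import numpy as np

def spectral_moments(frame, sample_rate):
    """Moment 1 (spectral mean, Hz) and Moment 3 (skewness, unitless)
    of a single speech frame, following standard spectral moment analysis."""
    windowed = frame * np.hanning(len(frame))
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    p = power / power.sum()                   # spectrum as a distribution
    m1 = np.sum(freqs * p)                    # Moment 1: spectral mean
    var = np.sum((freqs - m1) ** 2 * p)       # Moment 2: spectral variance
    m3 = np.sum((freqs - m1) ** 3 * p) / var ** 1.5  # Moment 3: skewness
    return m1, m3

# Example: a 40 ms frame (640 samples) of noise at a 16 kHz sampling rate.
rng = np.random.default_rng(0)
mean_hz, skewness = spectral_moments(rng.standard_normal(640), 16000)
print(mean_hz, skewness)
```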
|