401 |
Investigating the EEG Error-Related Negativity in College Students with ADHD, Anxiety, and Depression. Canini, Mariacristina; Jones, Marissa R; Sawyer, Ben; Ashworth, Ethan; Sellers, Eric W. 12 April 2019 (has links)
Error-related negativity (ERN) is an event-related potential elicited by the commission of errors. It appears as a negative deflection peaking between 50 ms and 100 ms after an erroneous response. Previous literature has demonstrated that individuals who suffer from either anxiety or depression display a higher ERN amplitude than a control group. It has also been shown that people with ADHD display a lower ERN amplitude, suggesting that traits of this disorder, such as impulsivity, impair sensitivity to errors. Based on these findings, we investigated which disorder accounts for more of the variance in ERN amplitude. We recruited thirty-eight students at East Tennessee State University and gathered data on their levels of anxiety, depression, and ADHD through three surveys: the Beck Anxiety Inventory, the Beck Depression Inventory, and the ADHD Self-Report Scale. Participants then performed a modified flanker task while their neural activity was recorded with a 32-channel EEG cap. ERN amplitude for error responses was significantly higher than ERN amplitude for correct responses. In addition, error responses produced a large P300 component of the event-related potential.
|
402 |
User accessibility to refractive error correction services in selected Zambian hospitals. Kapatamoyo, Esnart. 10 June 2022 (has links)
Background: Uncorrected Refractive Errors (UREs) are the most common cause of vision loss globally. The burden is particularly severe in low- and middle-income countries like Zambia, where access to Refractive Error Correction Services (RECS) is limited. This study aimed to assess user accessibility to RECS in selected Zambian hospitals. Methods: Twenty (20) public health facilities offering RECS were conveniently selected using a cross-sectional design. These represented 20 districts in eight provinces of Zambia. A questionnaire based on an access to health care services framework was administered. The framework assessed service accessibility in terms of availability, geographical accessibility, and affordability. Facility managers completed and submitted the questionnaire via email. Results: Completed questionnaires were received from 20 facilities. Nineteen facilities were located in rural areas whilst one facility was located in an urban area. Most facilities (84%) had the Ministry of Health recommended equipment, though essential equipment such as tonometers was lacking in most facilities (70%). Fifteen facilities (75%) reported having Optometry Technologists as the main staff offering services. Only two facilities (10%) had an Ophthalmologist each and no facility had an Optometrist. School-based programmes were not carried out in all facilities. Only one facility (5%) was able to dispense spectacles soon after refraction, as it had a spectacle manufacturing workshop. For some facilities (60%), a poor road network posed a challenge to geographical accessibility. Insufficient funding limited access to RECS. Facility representatives stated that not all patients could meet the cost of services in all the facilities. Conclusion: Access to refractive error correction services in the 20 facilities was limited due to a combination of eye health programme deficiencies and general challenges typical of low- and middle-income countries. Funding, human resources, and equipment were insufficient. An inadequate road network and infrastructure undermined service delivery. The accessibility shortcomings identified should be addressed to improve user accessibility of refractive services.
|
403 |
Techniques to improve iterative decoding of linear block codes. Genga, Yuval Odhiambo. 10 1900 (has links)
A Thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy in the Centre for Telecommunications Access and Services, School of Electrical and Information Engineering, October 2019 / In the field of forward error correction, the development of decoding algorithms with high error correction performance and tolerable complexity has been of great interest for the reliable transmission of data through a noisy channel. The focus of the work done in this thesis is to exploit techniques used in forward error correction to develop an iterative soft-decision decoding approach that yields high error correction performance at a tolerable computational complexity cost compared to existing decoding algorithms. The decoding technique developed in this research takes advantage of the systematic structure exhibited by linear block codes to implement an information set decoding approach that corrects errors in the received vector output by the channel. The proposed approach improves the iterative performance of the algorithm because the decoder is only required to detect and correct a subset of the symbols from the received vector, referred to as the information set. The information set, which matches the length of the message, is then used to decode the entire codeword.
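As a hedged illustration of the general idea described above (a minimal sketch, not the algorithm developed in the thesis): for a binary systematic code, the k most reliable received positions whose generator-matrix columns are invertible over GF(2) can serve as the information set, and re-encoding from them recovers the full codeword. The (7,4) Hamming code, the LLR values, and all function names below are assumptions made for the example.

```python
import numpy as np

# Systematic generator matrix G = [I_4 | P] of the (7,4) Hamming code
# (an illustrative assumption; the thesis targets Reed-Solomon and LDPC codes).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
k, n = G.shape

def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination (A assumed invertible)."""
    M = np.concatenate([A % 2, (b % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    for col in range(A.shape[0]):
        pivot = col + int(np.argmax(M[col:, col]))
        if M[pivot, col] == 0:
            raise ValueError("selected positions do not form an information set")
        M[[col, pivot]] = M[[pivot, col]]
        for row in range(A.shape[0]):
            if row != col and M[row, col]:
                M[row] ^= M[col]
    return M[:, -1]

def isd_decode(llr):
    """Information-set decoding sketch: llr[i] < 0 means a hard decision of 1,
    |llr[i]| is the reliability of position i. A full decoder would swap in the
    next most reliable positions if the chosen columns were linearly dependent."""
    hard = (llr < 0).astype(np.uint8)
    info_set = np.argsort(-np.abs(llr))[:k]            # k most reliable positions
    msg = gf2_solve(G[:, info_set].T, hard[info_set])  # recover the message
    return (msg @ G) % 2                               # re-encode the full codeword

# All-zero codeword sent; position 2 is hit by noise but has low reliability,
# so it is excluded from the information set and the error is corrected.
llr = np.array([2.1, 1.8, -0.3, 2.5, 1.2, 0.9, 1.7])
print(isd_decode(llr))   # -> [0 0 0 0 0 0 0]
```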
The decoding approach presented in the thesis is tested on both Reed-Solomon and Low-Density Parity-Check (LDPC) codes. The implementation of the decoder differs for the two linear block codes because of their different structural properties.
Reed-Solomon codes have the advantage of a row rank inverse property, which enables the construction of a partial systematic structure using any set of columns in the parity check matrix. This property provides a more direct way of finding the information set required by the decoder based on the soft reliability information. However, the dense structure of the Reed-Solomon parity check matrix presents challenges in terms of error detection and correction for the proposed decoding approach. To counter this problem, a bit-level implementation of the decoding technique for Reed-Solomon codes is presented in the thesis.
A parity check matrix extension technique is also proposed in the thesis. This technique adds low-weight codewords from the dual code, whose weight matches the minimum distance of the code, to the parity check matrix during the decoding process. This adds sparsity to the symbol-level implementation of the proposed decoder, which helps with the efficient exchange of soft information during the message-passing stage.
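A hedged toy illustration of the extension idea (the code below is an assumption for exposition, not the thesis implementation): because the rows of a parity check matrix H span the dual code, any additional low-weight dual codeword can be appended to H as a redundant check, giving the message-passing stage more constraints per symbol.

```python
import itertools
import numpy as np

def low_weight_dual_rows(H, max_weight):
    """Enumerate GF(2) combinations of two or more rows of H whose weight is at
    most max_weight; each such vector is a dual codeword usable as an extra check."""
    m, _ = H.shape
    extra = []
    for r in range(2, m + 1):                 # r = 1 would just repeat rows of H
        for rows in itertools.combinations(range(m), r):
            c = np.bitwise_xor.reduce(H[list(rows)], axis=0)
            if 0 < int(c.sum()) <= max_weight:
                extra.append(c)
    return extra

# (7,4) Hamming parity-check matrix, used purely as a toy example; every nonzero
# dual codeword of this code happens to have weight 4, so max_weight=4 is used here.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)

H_ext = np.vstack([H] + low_weight_dual_rows(H, max_weight=4))
print(H_ext.shape)    # (7, 7): four redundant low-weight checks appended
```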
Most high-performance Low-Density Parity-Check codes proposed in the literature lack a systematic structure, which presents a challenge for the proposed decoding approach in obtaining the information set. A systematic construction for a Quasi-Cyclic Low-Density Parity-Check code is therefore also presented in this thesis so as to allow for information set decoding. The proposed construction matches the error correction performance of a high-performance Quasi-Cyclic Low-Density Parity-Check matrix design, while having the benefit of a low-complexity construction for the encoder.
In addition, the thesis proposes a stopping condition for iterative decoding algorithms based on the information set decoding technique. This stopping condition is applied to other high-performance iterative decoding algorithms for both Reed-Solomon and Low-Density Parity-Check codes to improve their iterative performance, which improves the overall efficiency of the decoding algorithms. / PH2020
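The exact stopping criterion developed in the thesis is not reproduced here; as a hedged sketch of the general pattern such criteria follow, an iterative decoder can halt early once its current hard-decision estimate already satisfies every parity check (zero syndrome).

```python
import numpy as np

def should_stop(H, hard_decisions):
    """Generic early-stopping check: True when the syndrome is zero over GF(2),
    i.e. the current hard-decision estimate is already a valid codeword."""
    return not ((H.astype(int) @ hard_decisions.astype(int)) % 2).any()
```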
|
404 |
Elever måste få göra fel för att göra rätt : Hur korrigerar lärare uttalsvariationer i engelsk språkinlärning i årskurs 4-6? / Pupils must be allowed to make errors in order to get it right : How do teachers correct pronunciation variations in English language learning in grades 4-6? PETTERSSON, JENNY. January 2021 (has links)
The purpose of this literature study is to find out what recent research says about teachers' correction of pupils' pronunciation in English teaching. This is important because pupils need functional speech to make themselves understood in the target language, and certain pronunciation errors can impede intelligibility. The study is based on seven articles selected through database searches. The articles were systematically analysed by creating a categorisation scheme with different concepts in order to identify patterns. In the past, all errors were always to be corrected, because the goal was to achieve native-like pronunciation. Because of all the different mother tongues and dialects this is largely impossible, and it has therefore been concluded that the most important thing is for pupils to acquire comprehensible, functional speech. High demands are placed on teachers, as they often face difficult dilemmas. Teachers must frequently make quick decisions about what type of feedback to give their pupils on pronunciation errors in English. It can be difficult to know how much and when teachers should correct pupils' errors, and what actually should be corrected. The results showed many strategies for correction, but not all of them are very effective. What is best for the pupils is a safe learning environment and trying to elicit self-correction, with the teacher asking questions and encouraging the pupils to think for themselves and arrive at the correct answer. This requires pupils to have good knowledge of language rules, which in turn requires teachers to introduce these at an early age. Teachers must also analyse which errors the pupils make in order to correct them as effectively as possible. Pupils can make errors but also mistakes. Errors should be addressed as soon as they occur, and an explanation is then also required so that pupils understand the language rules. Mistakes can be ignored, as they are usually slips of the tongue. The study also revealed that many teachers feel uncertain about intonation and stress, even though these are very important for words to carry the right meaning. Giving feedback and correcting pupils' pronunciation is not easy and can be very sensitive. It is therefore important that teachers are aware of how effective their correction techniques are and how they are received by the pupils.
|
405 |
QUANTUM ERROR CORRECTION FOR GENERAL NOISE. Gonzales, Alvin Rafer. 01 June 2021 (has links)
Large quantum computers have the potential to vastly outperform any classical computer. The biggest obstacle to building quantum computers of such size is noise. For example, state-of-the-art superconducting quantum computers have average decoherence (loss of information) times of just microseconds. Thus, the field of quantum error correction is especially crucial to progress in the development of quantum technologies. In this research, we study quantum error correction for general noise, which is given by a linear Hermitian map. In standard quantum error correction, the usual assumption is to constrain the errors to completely positive maps, which are a special case of linear Hermitian maps. We establish constraints and sufficient conditions for the possible error correcting codes that can be used for linear Hermitian maps. Afterwards, we expand these sufficient conditions to cover a large class of general errors. These conditions reduce to the currently known conditions in the limit that the error map becomes completely positive. The later chapters give general results for quantum evolution maps: a set of weak repeated projective measurements that never break entanglement, and the composition of the asymmetric depolarizing map with a not-completely-positive map that yields a completely positive composition. Finally, we give examples.
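For background, the "currently known conditions" recovered in the completely positive limit are usually stated as the Knill-Laflamme conditions; the formula below is standard textbook material, not a result quoted from this dissertation.

```latex
% Knill-Laflamme conditions: a code with projector P corrects a noise channel
% with Kraus operators {E_i} if and only if there is a Hermitian matrix (c_{ij}) with
P\, E_i^{\dagger} E_j\, P \;=\; c_{ij}\, P \qquad \text{for all } i, j .
```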
|
406 |
The Uniqueness of Software Errors and Their Impact on Global Policy. Gotterbarn, Donald. 01 January 1998 (has links)
The types of errors that emerge in the development and maintenance of software are essentially different from the types of errors that emerge in the development and maintenance of engineered hardware products. There is a set of standard responses to actual and potential hardware errors, including engineering ethics codes, engineering practices, corporate policies, and laws. The essential characteristics of software errors require new ethical, policy, and legal approaches to the development of software in the global arena.
|
407 |
Optimal portfolio performance constrained by tracking error. Gunning, Wade Michael. 20 October 2020 (has links)
Maximising investment returns is the primary goal of asset management, but managing and mitigating portfolio risk also plays a significant role. Successful active investing requires outperformance of a benchmark through skilful stock selection and market timing, but these bets necessarily foster risk. Active investment managers are constrained by investment mandates such as component asset weight restrictions, prohibited investments (e.g. no fixed income instruments below investment grade) and minimum weights in certain securities (e.g. at least x% in cash or foreign equities). Such strategies' portfolio risk is measured relative to a benchmark (termed the tracking error (TE)) – usually a market index or fixed weight mix of securities – and investment mandates usually confine TEs to be lower than prescribed values to limit excessive risk taking. The locus of possible portfolio risks and returns, constrained by a TE relative to a benchmark, is an ellipse in return/risk space, and the sign and magnitude of this ellipse's main axis slope vary under different market conditions. How these variations affect portfolio performance is explored for the first time. Changes in main axis slope (magnitude and sign) act as an early indicator of portfolio performance and could therefore be used as another risk management tool.
The mean-variance framework coupled with the Sharpe ratio identifies optimal portfolios under the passive investment style. Optimal portfolio identification under active investment approaches, where performance is measured relative to a benchmark, is less well known. Active portfolios subject to TE constraints lie on distorted elliptical frontiers in return/risk space. Identifying optimal active portfolios, however defined, has only recently begun to be explored. The Ω ratio considers both downside and upside portfolio potential. Recent work has established a technique to determine optimal Ω ratio portfolios under the passive investment approach. The identification of optimal Ω ratio portfolios is applied here to the active arena (i.e. to portfolios constrained by a TE), and it is found that while passive managers should always invest in maximum Ω ratio portfolios, active managers should first establish the market conditions (which determine the sign of the main axis slope of the constant TE frontier) and then invest in maximum Sharpe ratio portfolios when this slope is > 0 and maximum Ω ratio portfolios when the slope is < 0. / Dissertation (MSc (Financial Engineering))--University of Pretoria, 2020. / Mathematics and Applied Mathematics
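For context, standard textbook definitions of the three quantities named above (tracking error for portfolio weights w against benchmark weights w_b with covariance matrix Σ, the Sharpe ratio, and the Ω ratio at threshold θ for return R with distribution F); these are general definitions, not formulas quoted from the dissertation.

```latex
TE = \sqrt{(w - w_b)^{\top} \Sigma \,(w - w_b)}, \qquad
SR = \frac{\mu_p - r_f}{\sigma_p}, \qquad
\Omega(\theta) = \frac{\mathbb{E}\!\left[(R - \theta)^{+}\right]}{\mathbb{E}\!\left[(\theta - R)^{+}\right]}
  = \frac{\int_{\theta}^{\infty} \bigl(1 - F(r)\bigr)\,dr}{\int_{-\infty}^{\theta} F(r)\,dr}.
```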
|
408 |
Accuracy of Antiretroviral Prescribing in a Community Teaching Hospital: A Medication Use Evaluation. Lines, Jacob; Lewis, Paul. 01 February 2021 (has links)
Background: Medication errors account for nearly 250 000 deaths in the United States annually, with approximately 60% of errors occurring during transitions of care. Previous studies demonstrated that almost 80% of participants with human immunodeficiency virus (HIV) have experienced a medication error related to their antiretroviral therapy (ART). Objective: This retrospective chart review examines the propensity and types of ART-related errors and further seeks to identify risk factors associated with higher error rates. Methods: Participants were identified as hospitalized adults ≥18 years old with a preexisting HIV diagnosis receiving home ART from July 2015 to June 2017. Medication error categories included delays in therapy, dosing errors, scheduling conflicts, and miscellaneous errors. Logistic regression was used to examine risk factors for medication errors. Results: Mean age was 49 years, 76.5% were men, and 72.1% used hospital-supplied medication. For the primary outcome, 60.3% (41/68) of participants had at least 1 error, with 31.3% attributed to delays in therapy. Logistic regression demonstrated that multiple tablet regimens (odds ratio [OR]: 3.40, 95% confidence interval [CI]: 1.22-9.48, P = .019) and serum creatinine (SCr) ≥1.5 mg/dL (OR: 8.87, 95% CI: 1.07-73.45, P = .043) were predictive of medication errors. Regimens with significant drug–drug interactions (eg, cobicistat-containing regimens) were not significantly associated with increased risk of medication errors. Conclusions and Relevance: ART-related medication errors remain prevalent, with an error rate exceeding 60% in this cohort. Independent risk factors for medication errors include use of multiple tablet regimens and SCr ≥1.5 mg/dL.
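A hedged sketch (not the study's actual analysis code) of how odds ratios and 95% confidence intervals like those reported above are typically obtained from a logistic regression; the variable names and the toy data below are assumptions made for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical participant-level data: outcome = any ART-related error (0/1),
# predictors = multiple-tablet regimen (0/1) and serum creatinine >= 1.5 mg/dL (0/1).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "error": rng.integers(0, 2, size=68),
    "multi_tablet": rng.integers(0, 2, size=68),
    "scr_high": rng.integers(0, 2, size=68),
})

X = sm.add_constant(df[["multi_tablet", "scr_high"]])
model = sm.Logit(df["error"], X).fit(disp=0)

odds_ratios = np.exp(model.params)    # exponentiated coefficients = odds ratios
conf_int = np.exp(model.conf_int())   # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```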
|
409 |
Begging the question : permanent income and social mobility. Muller, Seán Mfundza. January 2007 (has links)
Includes abstract.
Includes bibliographical references (p. 35-37).
|
410 |
Error in persona vel objecto : En analys av relationen mellan uppsåtstäckning, uppsåtsbegrepp och skuld / Error in persona vel objecto : An analysis of the relationship between the principle of correspondence, the construction of intent, and theories of fault. Holmgren, Isak. January 2022 (has links)
No description available.
|