151 |
Dependent Berkson errors in linear and nonlinear models
Althubaiti, Alaa Mohammed A. January 2011 (has links)
Often predictor variables in regression models are measured with error. This is known as an errors-in-variables (EIV) problem. A statistical analysis that ignores the EIV is called a naive analysis. As a result, the variance of the errors is underestimated, which affects any statistical inference subsequently made about the model parameter estimates or the response prediction. In some cases (e.g. quadratic polynomial models) the parameter estimates and the model prediction are biased. The errors can occur in different ways and are mainly classified as classical (occurring in observational studies) or Berkson type (occurring in designed experiments). This thesis addresses the problem of Berkson EIV and their effect on the statistical analysis of data fitted using linear and nonlinear models. In particular, the case where the errors are dependent and have heterogeneous variance is studied. Both analytical and empirical tools are used to develop new approaches for dealing with this type of error. Two different scenarios are considered: mixture experiments, where the model to be estimated is linear in the parameters and the EIV are correlated; and bioassay dose-response studies, where the model to be estimated is nonlinear. EIV following a Gaussian distribution, as well as the much less investigated non-Gaussian case, are examined. When the errors occur in mixture experiments, both analytical and empirical results show that the naive analysis produces biased and inefficient estimators of the model parameters. The magnitude of the bias depends on the variances of the EIV for the mixture components, the model and its parameters. First- and second-degree Scheffé polynomials are used to fit the response. To adjust for the EIV, four different correction approaches are proposed. The statistical properties of the estimators are investigated and compared with those of the naive analysis estimators. Analytical and empirical weighted regression calibration methods are found to give the most accurate and efficient results. These approaches require the error variance to be known prior to the analysis, so the robustness of the adjusted approaches to misspecified variance is also examined. Different EIV scenarios for the concentrations in bioassay dose-response studies (i.e. dependent and independent errors) are then studied, motivated by real-life examples. The effects of the errors are compared and illustrated using the 4-parameter Hill model. The results show that when the errors are non-Gaussian, the nonlinear least squares approach produces biased and inefficient estimators. An extension of the well-known simulation-extrapolation (SIMEX) method, called Berkson simulation-extrapolation (BSIMEX), is developed for the case when the EIV lead to biased model parameter estimators. BSIMEX requires the error variance to be known, and the robustness of the adjusted approach to misspecified variance is examined. Moreover, it is shown that BSIMEX performs better than the regression calibration methods when the EIV are dependent, while the regression calibration methods are preferable when the EIV are independent.
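As a rough illustration of the simulation-extrapolation idea that BSIMEX extends (a sketch only, not the thesis's BSIMEX, with purely illustrative data and parameter values), standard SIMEX adds extra simulated noise to the error-prone predictor at several multiples of the known error variance, refits the naive estimator at each level, and extrapolates the trend back to the no-error case:

```python
# Sketch of classical SIMEX for a simple linear regression with a noisy predictor.
# All data, sample sizes and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 500, 0.25                               # sample size and known error variance
x_true = rng.normal(0.0, 1.0, n)
w = x_true + rng.normal(0.0, np.sqrt(sigma2), n)    # observed, error-prone predictor
y = 1.0 + 2.0 * x_true + rng.normal(0.0, 0.1, n)    # true slope = 2

def naive_slope(x, y):
    return np.polyfit(x, y, 1)[0]                   # slope of the naive least-squares fit

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    # average the naive estimate over several pseudo-data sets at each noise level
    reps = [naive_slope(w + rng.normal(0.0, np.sqrt(lam * sigma2), n), y)
            for _ in range(50)]
    slopes.append(np.mean(reps))

# quadratic extrapolation in lambda back to lambda = -1 (no measurement error)
coeffs = np.polyfit(lambdas, slopes, 2)
print("naive slope:", slopes[0])
print("SIMEX slope:", np.polyval(coeffs, -1.0))     # closer to the true value 2
```

The Berkson variant developed in the thesis changes how the pseudo-errors are generated for Berkson-type, possibly dependent errors; that construction is not reproduced here.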
|
152 |
Implementation of an Electronic Prescription System and its Effect on Perceived Error Rates, Efficiency, and Difficulty of Use
Morales, Armando, Nguyen, Lily, Ruddy, Tyler, Velasquez, Ronald January 2017 (has links)
Class of 2017 Abstract / Objectives: To evaluate the perceptions of the pharmacy staff on prescription errors, efficiency, and difficulty of use before and after implementation of a new pharmacy computer system.
Subjects: Employees of El Rio Community Health Center outpatient pharmacies located at the Congress, Northwest, and El Pueblo Clinics.
Methods: This study was of a retrospective pre-post design. A 5-question survey on error rates and workflow efficiency was distributed to pharmacists and technicians 6 months after a new computer system had been implemented. Participants of the study included employees of El Rio Community Health Center outpatient pharmacies who were employed with El Rio during the time of transition between the old and new computer systems.
Results: Questionnaire responses were completed by 10 (41.7%) technicians and 6 (66.7%) pharmacists at three El Rio Clinics. Perceived efficiency was higher for the new system (Liberty) (n=17, 94.4%) than for the old system (QS1) (n=11, 61.1%) (p<0.05). There were no significant differences in perceived difficulty of use, most common types of errors, error rates, and time to fix detected errors.
Conclusions: While there were no significant differences between Liberty and QS1 in perceived difficulty of use, most common types of errors, error rates, and time to correct detected errors, there was a significant difference in the perceived efficiency, which may have beneficial implications.
|
153 |
Error behaviour in optical networks
James, Laura Bryony January 2005 (has links)
Optical fibre communications are now widely used in many applications, including local area computer networks. I postulate that many future optical LANs will be required to operate with limited optical power budgets for a variety of reasons, including increased system complexity and link speed, low-cost components and minimal increases in transmit power. Some developers will wish to run links with reduced power budget margins, and the received data in these systems will be more susceptible to errors than has been the case previously. The errors observed in optical systems are investigated using the particular case of Gigabit Ethernet on fibre as an example. Gigabit Ethernet is one of three popular optical local area interconnects which use 8B/10B line coding, along with Fibre Channel and InfiniBand, and is widely deployed. This line encoding is also used by packet-switched optical LANs currently under development. A probabilistic analysis follows the effects of a single channel error in a frame, through the line coding scheme and the MAC layer frame error detection mechanisms. Empirical data is used to enhance this original analysis, making it directly relevant to deployed systems. Experiments using Gigabit Ethernet on fibre with reduced power levels at the receiver to simulate the effect of limited power margins are described. It is found that channel bit error rate and packet loss rate have only a weakly deterministic relationship, due to interactions between a number of non-uniform error characteristics at various network sub-layers. Some data payloads suffer from high bit error rates and low packet loss rates, compared to others with lower bit error rates and yet higher packet losses. Experiments using real Internet traffic contribute to the development of a novel model linking packet loss, the payload damage rate, and channel bit error rate. The observed error behaviours at various points in the physical and data link layers are detailed. These include data-dependent channel errors; this error hot-spotting is in contrast to the failure modes observed in a copper-based system. It is also found that both multiple channel errors within a single code-group, and multiple error instances within a frame, occur more frequently than might be expected. The overall effects of these error characteristics on the ability of cyclic redundancy checks (CRCs) to detect errors, and on the performance of higher layers in the network, are considered. This dissertation contributes to the discussion of layer interactions, which may lead to unforeseen performance issues at higher levels of the network stack, and extends it by considering the physical and data link layers for a common form of optical link. The increased risk of errors in future optical networks, and my findings for 8B/10B encoded optical links, demonstrate the need for a cross-layer understanding of error characteristics in such systems. The development of these new networks should take error performance into account in light of the particular requirements of the application in question.
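For orientation, the naive relationship that the dissertation shows real links deviate from can be sketched as a back-of-the-envelope calculation (an illustrative assumption, not taken from the dissertation): if channel bit errors were independent and every errored frame were detected and dropped, frame loss would follow directly from the bit error rate and the number of 8B/10B line bits per frame.

```python
# Naive frame-loss estimate under independent channel bit errors.
# Frame size and the neglect of preamble/delimiter bits are simplifying assumptions.
def naive_frame_loss(ber, payload_bytes=1500, line_bits_per_byte=10):
    # 8B/10B encodes each octet as a 10-bit code-group, so a frame of
    # payload_bytes octets occupies roughly payload_bytes * 10 line bits.
    line_bits = payload_bytes * line_bits_per_byte
    return 1.0 - (1.0 - ber) ** line_bits

for ber in (1e-12, 1e-9, 1e-6):
    print(f"BER={ber:.0e}  naive frame loss ~ {naive_frame_loss(ber):.3e}")
```

The measurements summarised above show that deployed 8B/10B links do not follow this simple curve, because errors are data-dependent and cluster within code-groups and frames.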
|
154 |
A study of errors made in paragraphs written by Grade 12 students on the June, 1953, English 40 (Language) University Entrance examination, British Columbia Department of Education.
Matheson, Hugh Naismith January 1960 (has links)
The purpose of this study was to determine the frequency of errors in English usage, punctuation, and spelling made by grade 12 students in the two paragraphs that each student wrote on the June, 1953, English 40 (Language) University Entrance examination in British Columbia. The errors were classified within each of fourteen major categories. These categories were further divided to give a total of seventy-four classes. In order to record specific errors, some of the seventy-four classes were further subdivided to increase the number of classes to 104, excluding spelling errors. Furthermore, an attempt was made to discover a relationship between the incidence of errors in English and certain factors that possibly may have been associated with such errors. These factors were: the student's (a) intelligence (scholastic ability); (b) sex; (c) socioeconomic status as determined by the father's occupation; (d) interest in English as determined by the student's choice of major subjects; (e) choice of topics on which the student wrote his paragraphs, and (f) choice of high school program: University or General. In addition, in order to determine the extent to which the number of words in the paragraphs might have influenced the number of errors, this writer examined the relationship between the number of errors students made and (a) the number of words written on the two paragraphs on the examination, and (b) the number of words written on (i) the expository paragraph and (ii) the descriptive or narrative paragraph. By discovering the extent of the relationship between errors made in the paragraphs and the marks that teachers gave to the paragraphs, this investigator attempted to find out the degree to which markers took into consideration mechanical errors in English.
On examining 599 paragraphs written by 300 grade 12 students, this writer found the number of words written and errors in usage, punctuation, and spelling as summarized in the table below.
(Tables omitted)
Students wrote the mean number of words and made the mean number of errors as shown in the following table.
(Tables omitted)
When one considers the fourteen main categories of errors, he finds that spelling and punctuation account for slightly more than two-thirds of the errors. If four other categories (capitalization, the apostrophe, omissions, misuse of quotation marks) are added to the punctuation and spelling, one finds that non-usage errors account for nearly 80 per cent of the total number of errors. Those errors ranking 1-7 account for nearly 93 per cent of all errors. Ten kinds of errors in punctuation accounted for 89.9 per cent of all such errors. By applying appropriate statistical analyses, this investigator attempted to determine the relationship between errors and the elements mentioned in the first paragraph. The writer found that the coefficient of correlation between errors and scholastic ability was .304. On both paragraphs boys made a mean of 16.73 errors and girls 13.41; t was found to be 3.12. For 293 degrees of freedom t is 2.59 at the 1 per cent level or less. Consequently, for t = 3.12, the hypothesis of no difference in the means can be rejected. The writer found that students whose fathers were in the professional, semi-professional, and managerial vocations made a mean of 11.17 errors, and students whose fathers were in the skilled, semi-skilled and unskilled vocations made 14.8. For 98 degrees of freedom t is 1.984 at the 5 per cent level. But for the means just given t is 2.062. Therefore the hypothesis of no difference can be rejected.
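As a quick check of the decision rule reported above (not part of the original study), the reported t = 3.12 on 293 degrees of freedom can be compared against the 1 per cent critical value using scipy:

```python
# Verify the two-sample t decision for the boys/girls comparison described above.
from scipy import stats

t_obs, df = 3.12, 293
p_two_sided = 2.0 * stats.t.sf(t_obs, df)      # two-sided p-value for the observed t
t_crit_1pct = stats.t.ppf(0.995, df)           # two-sided 1 per cent critical value (~2.59)
print(f"p = {p_two_sided:.4f}, critical t at 1 per cent = {t_crit_1pct:.3f}")
```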
If choice of majors is used as a criterion of interest, students who are primarily interested in English make fewer errors than those who are not. The former made a mean of 12.61 errors on both paragraphs; the latter, 14.00. For 267 degrees of freedom t is 1.969 at the 5 per cent level of significance; therefore the hypothesis of no difference in the means can be rejected. Turning to a consideration of errors made by University Programme students and those made by General Programme students, one finds that the former made a mean of 12.35 errors; the latter, 17.55. For 287 degrees of freedom t is 2.592 at the 1 per cent level of significance. One can therefore reject with considerable confidence the hypothesis of no difference in the means. That the number of errors on a paragraph does not increase directly as the number of words written is shown by the fact that the coefficient of correlation between the number of errors and the number of words written is .574. Consequently the use of the paragraph as a unit on which to base the numbers of errors need not invalidate the statistical analyses and inferences previously made. Finally, examiners probably took errors into consideration when they marked the paragraphs, as the coefficient of correlation between errors and the marks the examiners gave the paragraphs was -.202, which is significant at the 1 per cent level. / Education, Faculty of / Graduate
|
155 |
The effect of idioms on children's reading and understanding of prose
Edwards, Peter January 1972 (has links)
A survey of related literature showed that, although many educational researchers have stressed the importance of idioms in the English language, very few experimental studies have been carried out to ascertain the role played by idioms in the reading process. The author conducted a study to determine whether idioms cause difficulty for children in the reading and understanding of prose.
A pilot study was performed to facilitate the selection of test items and to establish testing procedures.
The experimental study consisted of four randomly chosen groups in each of two schools. Randomly assigned children in each group were given one of the four reading tests as follows: Non Literal 1 (N.L. 1), which contained idioms in all eighteen test items; Non Literal 2 (N.L. 2), which contained idioms in twelve of the eighteen test items; Non Literal 3 (N.L. 3), which contained idioms in six of the eighteen test items; Literal, which did not contain idioms in any of the eighteen test items. The children read their assigned test and answered comprehension questions by selecting one of the four multiple choice alternatives for each test item. The following statistical results were obtained: the treatment effect was highly significant; the means increased steadily, with the highest scores associated with the Literal test and the lowest scores associated with the Non Literal 1 test. There was no significant difference between the performance of girls and boys in the tests; there was no linear or curvilinear interaction with I.Q. and treatment, nor was there a sex by treatment interaction. An analysis of the four treatment groups showed that there were significant differences between the means of all groups except Non Literal 1 and Non Literal 2, the two groups containing the greatest number of idioms in the test items.
The results of the study raised several implications which necessitate further research. Several questions are concerned with the incidence and type of idiomatic language used in books and the best method of teaching idioms to school children. Another raises the possibility of having to allow for idioms when compiling readability formulae. A further implication is that there may be a need for strictly literal reading materials which would serve as a transitional link between the multiplicity of dialects existing in society today, and the need to read and understand written Standard English. / Education, Faculty of / Graduate
|
156 |
Analysis of radiation induced errors in transistors in memory elements
Masani, Deekshitha 01 December 2020
From the first integrated circuit, a 16-transistor chip built by Heiman and Steven Hofstein in 1962, to the 39.54 billion MOSFETs fabricated in 7nm FinFET technology as of 2019, the scaling of transistors remains challenging. Scaling must always satisfy minimal power and area constraints while achieving the highest possible speed. As of 2020, the world's smallest transistor is 1nm long, built by a team at Lawrence Berkeley National Laboratory. In the latest 14nm and 7nm technologies a single die already holds more than a billion transistors, and producing a die at 1nm technology is even more challenging. Scaling keeps going, and if silicon does not satisfy the requirements, designers switch to carbon nanotubes, molybdenum disulfide, or other newer materials. Transistor sizes keep shrinking, but radiation effects put growing pressure on designers to build ever more efficient circuits that tolerate errors. Radiation strikes that induce a large enough voltage transient at a node can flip its value. It is not possible to have a perfect material that guarantees an error-free circuit, but it is possible to preserve the stored value before the error and recover it after the error occurs. In advanced technologies, transistor scaling makes multiple simultaneous radiation-induced errors the main issue, and different latch designs have been proposed to address this problem. Using 90nm CMOS technology, latch designs are proposed here that recover the stored value even after an error strikes the latch. Originally, errors were generally single event upsets (SEUs), which occur when a high-energy particle strikes only one transistor. With continued scaling, multiple simultaneous radiation errors have become common; the typical case is the double node upset (DNU), which occurs when a high-energy particle strike affects two transistors, because a single transistor is replaced by more than one after scaling. Existing SEU- and DNU-tolerant designs accurately determine the error rates in a circuit. In particular, the HRDNUT latch proposed in the dissertation of Dr. Adam Watkins, "Analysis and mitigation of multiple radiation induced errors in modern circuits", can recover its value within 2.13ps of an error. Two circuits are introduced here to increase the speed of recovering the value after a high-energy particle strikes a node. In evaluating past designs, it is often unclear how the error is injected into the circuit; some designs used a pass gate to inject the error as a logic value rather than as a voltage. The current thesis introduces a method of injecting errors with reduced power and delay overhead compared to the previous circuits. Errors are injected into the circuits from the literature survey, and the delay and power with and without error injection are compared; the same comparison is made for the two new circuits.
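As a minimal illustration of why single and double node upsets differ in difficulty (a software sketch only, not the HRDNUT latch or the latch designs proposed in the thesis), a triply redundant storage node with majority voting masks an SEU but not a DNU, which is the motivation for dedicated DNU-tolerant latches:

```python
# Behavioural model of redundant storage with majority voting; illustrative only.
from collections import Counter

class RedundantCell:
    def __init__(self, value):
        self.nodes = [value, value, value]   # three redundant storage nodes

    def strike(self, *indices):
        """Simulate a radiation strike flipping the given node(s)."""
        for i in indices:
            self.nodes[i] ^= 1

    def read(self):
        """Majority vote over the three nodes."""
        return Counter(self.nodes).most_common(1)[0][0]

cell = RedundantCell(1)
cell.strike(0)            # single event upset: one node flips
print(cell.read())        # still 1 -- the SEU is masked

cell = RedundantCell(1)
cell.strike(0, 2)         # double node upset: two nodes flip
print(cell.read())        # 0 -- majority voting alone cannot recover a DNU
```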
|
157 |
The effects of test result and diagnosticity on physicians' revisions of probability of disease in medical diagnosis
Sinclair, Ann Elizabeth 01 January 1987
This study examined the effects of sensitivity, specificity and result of diagnostic tests on the uses which physicians make of those results. These were compared with the Bayesian model of probability adjustment, which is generally accepted for medical diagnosis. Ninety-six active members of the Oregon Academy of Family Physicians were interviewed by telephone, using a case scenario describing a patient with a newly discovered breast lump. Subjects estimated prior probability of malignancy, based on history and physical findings, and then estimated posterior probability following results of a mammogram. Mammograms varied by result (positive or negative) and by high and low values for sensitivity and specificity. Subjects were asked to indicate their confidence in each probability estimate. About one third of the subjects were also asked for their treatment threshold -- that point at which they would change from a policy of watchful waiting to one of taking some action, which was usually biopsy of the lesion.
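For context, the Bayesian revision against which the physicians' estimates were compared can be sketched as follows (a generic illustration with made-up numbers, not data from the study): the posterior probability follows from the prior odds multiplied by the likelihood ratio implied by the test's sensitivity and specificity.

```python
# Bayesian revision of disease probability from a test result; numbers are illustrative.
def bayes_posterior(prior, sensitivity, specificity, positive_result):
    prior_odds = prior / (1.0 - prior)
    lr = (sensitivity / (1.0 - specificity) if positive_result        # LR+ for a positive test
          else (1.0 - sensitivity) / specificity)                     # LR- for a negative test
    post_odds = prior_odds * lr
    return post_odds / (1.0 + post_odds)

# Hypothetical case: 10% prior probability of malignancy, mammogram with
# 85% sensitivity and 90% specificity.
print(bayes_posterior(0.10, 0.85, 0.90, positive_result=True))   # ~0.49 after a positive result
print(bayes_posterior(0.10, 0.85, 0.90, positive_result=False))  # ~0.02 after a negative result
```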
|
158 |
On the numerical solution of Fisher's and FitzHugh-Nagumo equations using some finite difference methods
Agbavon, Koffi Messan January 2020
In this thesis, we make use of numerical schemes in order to solve Fisher’s and FitzHugh-Nagumo equations with specified initial conditions. The thesis is made up of six chapters.
Chapter 1 reviews some literature on partial differential equations, and Chapter 2 provides some concepts on finite difference methods, nonstandard finite difference methods and their properties, reaction-diffusion equations and singularly perturbed equations.
In Chapter 3, we obtain the numerical solution of Fisher's equation when the coefficient of the diffusion term is much smaller than the coefficient of reaction (Li et al., 1998). Li et al. (1998) used the Moving Mesh Partial Differential Equation (MMPDE) method to solve a scaled Fisher's equation with coefficient of reaction equal to 10^4, coefficient of diffusion equal to one, and an initial condition consisting of an exponential function. The problem considered is quite challenging, and the results obtained by Li et al. (1998) are not accurate because MMPDE is based on a familiar arc-length or curvature monitor function. Qiu and Sloan (1998) constructed a suitable monitor function, called a modified monitor function, and used it with the Moving Mesh Differential Algebraic Equation (MMDAE) method to solve the same problem as Li et al. (1998), and better results were obtained. However, each problem has its own choice of monitor function, which makes the choice of monitor function an open question. We use the Forward in Time Central Space (FTCS) scheme and the Nonstandard Finite Difference (NSFD) scheme to solve the scaled Fisher's equation and find that the temporal step size must be very small in order to obtain accurate results comparable to those of Qiu and Sloan (1998). This causes the computational time to be long if the domain is large. We modify these two schemes either by introducing artificial viscosity or by using the approach of Ruxun et al. (1999). These techniques are efficient and give accurate results with a larger temporal step size. We prove that these four methods are consistent with the partial differential equation and we also obtain the region of stability.
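A minimal sketch of the FTCS scheme for the scaled Fisher's equation u_t = u_xx + rho*u*(1-u) is given below; it is not the code used in the thesis, and the initial profile, grid and step sizes are illustrative assumptions chosen only to respect the severe time-step restriction mentioned above.

```python
# Explicit FTCS scheme for the scaled Fisher's equation (illustrative sketch).
import numpy as np

rho = 1.0e4                      # large reaction coefficient, as in the test problem above
L, nx = 1.0, 401                 # assumed spatial domain [0, L] and number of grid points
dx = L / (nx - 1)
dt = 2.0e-7                      # very small time step; stability needs dt <= dx**2/2 and dt << 1/rho
x = np.linspace(0.0, L, nx)
u = np.exp(-50.0 * x)            # hypothetical exponential-type initial profile

def ftcs_step(u, dt, dx, rho):
    """One forward-in-time, central-in-space step; the two endpoints are held fixed (Dirichlet)."""
    unew = u.copy()
    unew[1:-1] = (u[1:-1]
                  + dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
                  + dt * rho * u[1:-1] * (1.0 - u[1:-1]))
    return unew

for _ in range(5000):            # march to t = 5000 * dt = 1e-3
    u = ftcs_step(u, dt, dx, rho)
```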
Chapter 4 is an improvement and extension of the work of Namjoo and Zibaei (2018), in which the standard FitzHugh-Nagumo equation with specified initial and boundary conditions is solved. Namjoo and Zibaei (2018) constructed two versions of nonstandard finite difference schemes (NSFD1, NSFD2) and also derived two schemes (one explicit and the other implicit) constructed from the exact solution. However, they presented results using the nonstandard finite difference schemes only. We show that one of the nonstandard finite difference schemes (NSFD1) has convergence issues and we obtain an improvement of NSFD1, which we call NSFD3. We perform a stability analysis of the schemes constructed from the exact solution and find that the explicit scheme is not stable for this problem. We study some properties of the five methods (NSFD1, NSFD2, NSFD3, and the two schemes obtained using the exact solution), namely stability, positivity and boundedness. The performance of the five methods is compared by computing the L1 and L∞ errors and the rate of convergence for two values of the threshold of the Allee effect, γ, namely 0.001 and 0.5, for small and large spatial domains at time T = 1.0. Tests of the rate of convergence are important here as we are dealing with nonlinear partial differential equations, and therefore the Lax equivalence theorem cannot be used.
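To illustrate the nonstandard finite difference idea behind the positivity and boundedness properties discussed above (a sketch on the logistic ODE du/dt = u(1-u), not one of the thesis's NSFD1-NSFD3 schemes for the partial differential equation), a Mickens-type denominator function together with a nonlocal treatment of the u^2 term gives an update that is positive, bounded and in fact exact for the logistic equation at any step size:

```python
# Mickens-type NSFD (exact) scheme for the logistic ODE; illustrative of the NSFD idea only.
import math

def nsfd_logistic_step(u, dt):
    phi = math.exp(dt) - 1.0               # denominator function replacing dt
    # (u_{n+1} - u_n)/phi = u_n * (1 - u_{n+1})  =>  solve for u_{n+1}
    return (1.0 + phi) * u / (1.0 + phi * u)

u, dt = 0.1, 0.5                           # large step, yet the iterates stay in (0, 1)
for n in range(10):
    u = nsfd_logistic_step(u, dt)
print(f"u after 10 steps of dt={dt}: {u:.6f}")   # approaches the equilibrium u = 1
```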
In Chapter 5, we consider the FitzHugh-Nagumo equation with the parameter β referred to as the intrinsic growth rate. We choose a numerical experiment which is quite challenging to simulate due to shock-like profiles. We construct four versions of nonstandard finite difference schemes and compare their performance by computing the L1 and L∞ errors, the rate of convergence with respect to time, and the CPU time at a given time T = 0.5, using three values of the intrinsic growth rate β, namely β = 0.5, 1.0, 2.0.
Chapter 6 highlights the salient features of this work. / Thesis (PhD)--University of Pretoria, 2020. / South African DST/NRF SARChI / Mathematics and Applied Mathematics / PhD / Unrestricted
|
159 |
Intragenic complementation in methylmalonyl CoA mutase
Farah, Rita S. January 1994 (has links)
No description available.
|
160 |
The molecular characterization of mutations at the methylmalonyl CoA mutase locus involved in interallelic complementation /
Qureshi, Amber A. (Amber Ateef) January 1993 (has links)
No description available.
|