  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Nonparametric Confidence Intervals for the Reliability of Real Systems Calculated from Component Data

Spooner, Jean 01 May 1987
A methodology which calculates a point estimate and confidence intervals for system reliability directly from component failure data is proposed and evaluated. This is a nonparametric approach which does not require the component times to failure to follow a known reliability distribution. The proposed methods have accuracy similar to the traditional parametric approaches, can be used when the distribution of component reliability is unknown or only a limited amount of sample component data is available, are simpler to compute, and use fewer computer resources. Depuy et al. (1982) studied several parametric approaches to calculating confidence intervals on system reliability. The test systems employed by them are utilized for comparison with published results. Four systems with sample sizes per component of 10, 50, and 100 were studied. The test systems were complex systems made up of I components, each component having n observed (or estimated) times to failure. An efficient method for calculating a point estimate of system reliability is developed based on counting minimum cut sets that cause system failures. Five nonparametric approaches to calculating confidence intervals on system reliability from one test sample of components were proposed and evaluated. Four of these were based on binomial theory and the Kolmogorov empirical cumulative distribution theory. 600 Monte Carlo simulations generated 600 new sets of component failure data from the population, with corresponding point estimates of system reliability and confidence intervals. The accuracy of these confidence intervals was determined by the fraction that included the true system reliability. The bootstrap method was also studied to calculate confidence intervals from one sample. The bootstrap method is computer intensive and involves generating many sets of component samples using only the failure data from the initial sample.
The empirical cumulative distribution function of 600 bootstrapped point estimates was examined to calculate confidence intervals at the 68, 80, 90, 95, and 99 percent confidence levels. The accuracy of the bootstrap confidence intervals was determined by comparison with the distribution of 600 point estimates of system reliability generated from the Monte Carlo simulations. The confidence intervals calculated from the Kolmogorov empirical distribution function and the bootstrap method were very accurate. Sample sizes of 10 were not always sufficient for systems with reliabilities close to one.
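The bootstrap procedure described above can be sketched as follows. This is a minimal illustration with synthetic data, not the thesis's actual computation: it uses a simple two-component series system in place of the minimum-cut-set counting method, and the component failure distributions, sample size, and mission time are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: a two-component series system, n = 50 observed
# times to failure per component (distributions chosen for illustration).
t_mission = 1.0
comp_data = [rng.weibull(1.5, 50) * 2.0, rng.exponential(2.5, 50)]

def system_reliability(samples, t):
    # Nonparametric component reliability: the fraction of observed
    # times to failure that exceed the mission time t.
    r = [np.mean(s > t) for s in samples]
    # Series system: every component must survive.
    return float(np.prod(r))

point = system_reliability(comp_data, t_mission)

# Bootstrap: resample each component's failure data with replacement
# and recompute the system reliability estimate; 600 replications
# matches the number used in the thesis.
B = 600
boot = np.array([
    system_reliability([rng.choice(s, size=s.size, replace=True)
                        for s in comp_data], t_mission)
    for _ in range(B)
])

# Percentile interval from the empirical CDF of the bootstrap estimates.
lo, hi = np.quantile(boot, [0.025, 0.975])  # 95% level
print(f"point estimate {point:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The interval comes straight from the empirical quantiles of the bootstrap replicates, which is why the method needs no assumption about the component failure distributions.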
252

TACtical Intelligence: Disrupting the Terrorist Attack Cycle by Analysing Terrorists' Intelligence Operations

Dorak, Olivia January 2021
TACtical Intelligence: Disrupting the Terrorist Attack Cycle by Analysing Terrorists' Intelligence Operations Keywords: terrorism, intelligence, confidence, intelligence competition, violent non-state actors Abstract: Commensurate with the prevailing Realist influence in military and security studies, the majority of academic literature on intelligence is written from state-centric perspectives, failing to sufficiently address other actors who are taking on greater and more salient roles on the international security stage. In particular, the use of intelligence by violent non-state actors is an underdeveloped subject in the academic discourse, as literature at the intersection of the two disciplines tends to evaluate the ways in which state intelligence succeeds or fails with regard to, or acts upon, violent non-state actors. Rarely are violent non-state actors perceived as intelligence actors in their own right. Nevertheless, an intelligence competition persists between the rivals. The intelligence competition between terrorist organisations, seeking to instigate attacks, and state agencies, seeking to thwart them, is underdeveloped in both terrorism and intelligence studies. This study finds that terrorist organisations engage in an intelligence competition with their state adversaries - a pursuit to...
253

The Role of Business Confidence in the Monetary Policy Transmission Mechanism: Evidence from the Euro Area

Liu, Zhaozhi January 2021
Traditional macroeconomics holds that confidence is not the main cause of economic fluctuations, yet when faced with financial crises, monetary authorities still emphasize the role of stabilizing confidence. Although it is generally agreed that confidence is an important part of the transmission of macro-policies to micro-individuals, there is neither supporting empirical evidence nor corresponding research on the mechanism. This thesis attempts to answer the following questions: Does business confidence affect the effectiveness of monetary policy? Does business confidence have the same impact on monetary policy in different economic periods? This thesis first constructs a structural vector auto-regression (SVAR) model to test the role of business confidence in the transmission of monetary policy in the euro area. The empirical results show that expansionary monetary policy can effectively boost business confidence while stimulating output growth. In addition, this thesis extends the model by introducing share prices and exchange rates to investigate the role of these two variables in the monetary transmission mechanism, concluding that business confidence plays a strong role in interest rate transmission and a weaker role in the transmission of asset prices and exchange rates. Subsequently, in order to...
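The SVAR machinery behind such an exercise can be sketched compactly. The following is a toy illustration, not the thesis's model: the three variables stand in for an interest rate, business confidence, and output, the coefficients are invented, and identification uses a simple recursive (Cholesky) ordering with the policy rate first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for (interest rate, business confidence, output);
# coefficients are illustrative, not estimates from euro-area data.
A_true = np.array([[ 0.5, 0.0, 0.1],
                   [-0.3, 0.6, 0.1],
                   [-0.2, 0.2, 0.7]])
T, k = 400, 3
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(0, 0.1, k)

# Estimate a VAR(1) by equation-by-equation OLS.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
resid = Y - X @ A_hat.T
Sigma = resid.T @ resid / (T - 1 - k)

# Recursive (Cholesky) identification: ordering the policy rate first
# treats the monetary shock as contemporaneously exogenous.
P = np.linalg.cholesky(Sigma)

# Impulse responses of all variables to a one-std-dev policy shock.
horizons = 12
irf = np.zeros((horizons, k))
Apow = np.eye(k)
for h in range(horizons):
    irf[h] = (Apow @ P)[:, 0]  # column 0 = shock to the first variable
    Apow = Apow @ A_hat
print(irf[:3].round(3))
```

The impulse-response rows trace how a policy shock propagates to the confidence and output proxies period by period, which is the object such studies inspect.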
254

The Effect of Doubt and Working Memory Load on Evidence Accumulation: A Neuropsychological Investigation

Turkelson, Lynley 19 November 2019
No description available.
255

Local Distance Correlation: An Extension of Local Gaussian Correlation

Hamdi, Walaa Ahmed 06 August 2020
No description available.
256

Peripheral IV Insertion Competence and Confidence in Medical/Surgical Nurses

Jacobs, Lisa 08 May 2020
No description available.
257

Improved confidence intervals for a small area mean under the Fay-Herriot model

Shiferaw, Yegnanew Alem January 2016
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the Degree of Doctor of Philosophy. Johannesburg, August 2016. / There is a growing demand for small area estimates for policy and decision making, local planning and fund distribution. Surveys are generally designed to give representative estimates at the national or regional level, but estimates of variables of interest are often also needed at small area levels. These cannot be reliably obtained from the survey data as the sample sizes at these levels are too small. This problem is addressed by using small area estimation techniques. The main aim of this thesis is to develop confidence intervals (CIs) which are accurate to terms of order O(m^(-3/2)) under the FH model using the Taylor series expansion. Rao (2003a), among others, notes that in mixed model estimation the estimate of the variance component of the random effect, A, can take negative values. In this case, Prasad and Rao (1990) set Â = 0, under which the contribution of the mean squared error (MSE) estimate, assuming all parameters are known, becomes zero. As a solution, Rao (2003a), among others, proposed a weighted estimator with fixed weights (i.e., wi = 1/2). In addition, if the MSE estimate is negative, we cannot construct CIs based on the empirical best linear unbiased predictor (EBLUP) estimates. Datta, Kubokawa, Molina and Rao (2011) derived the MSE estimator for the weighted estimator with fixed weights, which is always positive. We use their MSE estimator to derive CIs based on this estimator to overcome the above difficulties. Another criticism of the MSE estimator is that it is not area-specific, since it does not involve the direct estimator in its expression. Following Rao (2001), we propose area-specific MSE estimators and use them to construct CIs.
The performance of the proposed CIs is investigated via simulation studies and compared with the Cox (1975) and Prasad and Rao (1990) methods. Our simulation results show that the proposed CIs have higher coverage probabilities. These methods are applied to standard poverty and percentage of food expenditure measures estimated from the 2010/11 Household Consumption Expenditure survey and the 2007 census data sets. Keywords: Small area estimation, weighted estimator with fixed weights, EBLUP, FH model, MSE, CI, poverty, percentage of food expenditure
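The estimators contrasted in the abstract can be sketched on synthetic data. This is an illustration under invented parameter values (m, the covariate, A, and the sampling variances D are all made up); it shows the Prasad-Rao moment estimate of A truncated at zero, the EBLUP, and the fixed-weight estimator with wi = 1/2 that remains usable even when the estimate of A hits zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Fay-Herriot setup: m areas, one covariate plus intercept.
m = 30
x = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])
beta = np.array([1.0, 2.0])
A = 0.5                        # variance of the area random effect
D = rng.uniform(0.2, 1.0, m)   # known design-based sampling variances
theta = x @ beta + rng.normal(0, np.sqrt(A), m)   # true area means
y = theta + rng.normal(0, np.sqrt(D))             # direct estimates

# OLS fit, then the Prasad-Rao moment estimator of A; it can come out
# negative, in which case it is truncated at zero (the case the
# abstract discusses).
beta_ols, *_ = np.linalg.lstsq(x, y, rcond=None)
e = y - x @ beta_ols
h = np.diag(x @ np.linalg.inv(x.T @ x) @ x.T)
p = x.shape[1]
A_hat = max(0.0, (e @ e - np.sum(D * (1 - h))) / (m - p))

# EBLUP: shrink each direct estimate toward the synthetic estimate,
# with more shrinkage where the sampling variance D is large.
B = A_hat / (A_hat + D)
eblup = B * y + (1 - B) * (x @ beta_ols)

# Weighted estimator with fixed weights wi = 1/2: its MSE estimator
# (Datta et al. 2011) is always positive, so CIs can always be formed.
w = 0.5
weighted = w * y + (1 - w) * (x @ beta_ols)
print(eblup[:3].round(3), weighted[:3].round(3))
```

The fixed-weight estimator deliberately ignores the area-level variance estimate, which is exactly why its MSE estimator cannot collapse to zero when A_hat does.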
258

Lessons from a Pandemic: Comparing the Competence and Confidence of Pre-Service Teachers between Blended Learning and Blended Online Learning of an Educational Technology Course

Bruno, Wilber Alexander 01 December 2021
Blended Online Learning (BOL) combines synchronous and asynchronous online learning in ways that can potentially overcome the limitations of fully asynchronous online learning. Although BOL has been an emerging modality for decades, research on the experiences, benefits and challenges of its implementation has been limited. However, the Covid-19 pandemic forced many college courses to go fully online, including courses with hands-on learning components assumed to require face-to-face instruction to support learners. For this study, the pandemic disruption offered an authentic setting in which to investigate the learning and experiences of pre-service teachers in a technology course that was forced into a fully online BOL modality. Previously, the technology course was delivered in a Blended Learning (BL) modality that combined face-to-face computer lab meetings with asynchronous online materials and activities using a Learning Management System (LMS). BOL replaced the face-to-face meetings with synchronous online (e.g., Zoom) meetings. The purpose of this study was to explore whether the BL and BOL course modalities would generate different student outcomes in terms of rubric scores obtained on a final project (competence), along with student-written reflections on the final project (confidence/self-efficacy), covering topics and skills such as digital audio, digital video, and PowerPoint. The study showed that students enrolled in the BL modality obtained higher scores on the final project than students in the BOL modality. On the other hand, BOL students made a higher number of problem-solving statements in their written reflections about the final project, displaying an antifragile disposition. This study contributes to the existing body of research on online learning modalities by exploring the dimensions of competence and self-efficacy of students enrolled in blended and blended online versions of a course concentrating on learning technology.
The findings of this study can inform the decisions of teacher education administrators and faculty about how to integrate educational technology into Teacher Education Programs. Further, the study has implications for adopting the BOL modality in a range of higher education courses in which fully online delivery has been resisted because of students' assumed need for face-to-face support in skills learning.
259

Image Segmentation Evaluation Based on Fuzzy Connectedness

Ren, Qide 10 October 2013
No description available.
260

Performance of bootstrap confidence intervals for L-moments and ratios of L-moments.

Glass, Suzanne 06 May 2000
L-moments are defined as linear combinations of expected values of order statistics of a variable (Hosking 1990). L-moments are estimated from samples using functions of weighted means of order statistics. The advantages of L-moments over classical moments are that they can characterize a wider range of distributions, they are more robust to the presence of outliers in the data when estimated from a sample, and they are less subject to bias in estimation and approximate their asymptotic normal distribution more closely. Hosking (1990) obtained an asymptotic result specifying that the sample L-moments have a multivariate normal distribution as n approaches infinity. The standard deviations of the estimators, however, depend on the distribution of the variable, so in order to build confidence intervals we would need to know that distribution. Bootstrapping is a resampling method that takes samples of size n with replacement from a sample of size n. The idea is to use the empirical distribution obtained from the subsamples as a substitute for the true distribution of the statistic, which is unknown. The most common application of bootstrapping is building confidence intervals without knowing the distribution of the statistic. The research question dealt with in this work was: how well do bootstrap confidence intervals behave in terms of coverage and average width when estimating L-moments and ratios of L-moments? Since Hosking's results about the normality of the estimators of L-moments are asymptotic, we are particularly interested in knowing how well bootstrap confidence intervals behave for small samples. There are several ways of building confidence intervals using bootstrapping; the simplest are the standard and percentile confidence intervals. The standard confidence interval assumes normality for the statistic and uses bootstrapping only to estimate the standard error of the statistic.
The percentile methods work with the (α/2)th and (1-α/2)th percentiles of the empirical sampling distribution. Comparing the performance of the three methods was of interest in this work. The research question was answered by simulations in Gauss. The true coverage of the nominal 95% confidence intervals for the L-moments and ratios of L-moments was found by simulation.
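The percentile approach for L-moments can be sketched briefly. This is an illustrative example, not the thesis's Gauss code: the sample size, parent distribution, and number of bootstrap replications are invented, and only the first two L-moments are computed, using Hosking's (1990) unbiased order-statistic estimators.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_l_moments(x):
    """Unbiased sample estimates of the first two L-moments (Hosking 1990)."""
    x = np.sort(x)
    n = x.size
    b0 = x.mean()
    # b1 weights the j-th order statistic by (j-1)/(n-1), averaged over n.
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))
    return b0, 2.0 * b1 - b0   # l1 (location), l2 (scale)

# Hypothetical small sample (n = 20) from a skewed parent distribution.
x = rng.gamma(2.0, 1.5, 20)
l1, l2 = sample_l_moments(x)

# Percentile bootstrap for l2: resample with replacement, recompute the
# statistic, and read off the (alpha/2) and (1 - alpha/2) quantiles of
# its empirical sampling distribution.
B = 2000
boot_l2 = np.array([
    sample_l_moments(rng.choice(x, x.size, replace=True))[1]
    for _ in range(B)
])
ci_lo, ci_hi = np.quantile(boot_l2, [0.025, 0.975])
print(f"l1 = {l1:.3f}, l2 = {l2:.3f}, 95% percentile CI for l2: "
      f"({ci_lo:.3f}, {ci_hi:.3f})")
```

Nothing here assumes a distribution for the statistic; that distribution-free character is precisely what the simulations assess for small n.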
