31

Reliability Analysis of Oriented Strand Board’s Strength with a Simulation Study of the Median Censored Method for Estimating of Lower Percentile Strength

Wang, Yang 01 August 2007
Oriented Strand Board (OSB), an engineered wood product, has gained increased market acceptance as a construction material. Because of its growing market, the product’s manufacturing and performance have become the focus of much research. Internal Bond (IB) and Parallel and Perpendicular Elasticity Indices (EI) are important strength metrics of OSB and are analyzed in this thesis using statistical reliability methods.

The data for this thesis consist of 529 destructive tests of OSB panels, tested from July 2005 to January 2006. These OSB panels came from a modern OSB manufacturer in the Southeastern United States, with the wood furnish being primarily Southern Pine (Pinus spp.). The 529 records are for 7/16” thickness OSB, which is rated for roof sheathing (i.e., 7/16” RS).

Descriptive statistics of IB and EI are summarized, including mean, median, standard deviation, interquartile range, and skewness. Visual tools such as histograms and box plots are used to identify outliers and improve understanding of the data. Survival plots, or Kaplan-Meier curves, are important methods for conducting nonparametric analyses of life (or strength) reliability data and are used in this thesis to estimate the strength survival functions of the IB and EI of OSB. Probability plots and information criteria are used to determine the best underlying distribution or probability density function. The OSB data used in this thesis fit the lognormal distribution best for both IB and EI. One outlier is excluded from the IB data and six outliers are excluded from the EI data.

Estimation of lower percentiles is very important for quality assurance. In many reliability studies, there is great interest in estimating the lower percentiles of life or strength. In OSB, low strength in the lower percentiles may result in catastrophic failures during installation of OSB panels. Catastrophic failure of 7/16” RS OSB, which is used primarily in residential roof construction, may result in severe injury or death of construction workers. The resulting liability and risk to OSB manufacturers can lead to extreme loss of market share and significant financial losses.

In reliability data, multiple failure modes are common. Simulated data from a mixture of two two-parameter Weibull distributions are produced to mimic multiple failure modes in reliability data. A forced median censored method is adopted to estimate lower percentiles of the simulated data. Results of the simulation indicate that the lower percentiles estimated by the median censored method are relatively close to the true parametric percentiles, compared to estimates obtained without median censoring. I conclude that the median censoring method is a useful tool for improving estimation of the lower percentiles for OSB panel failure.
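To illustrate the simulation design described in this abstract, here is a minimal Python sketch of the median censored method: data are drawn from a mixture of two two-parameter Weibull distributions, observations above the sample median are treated as right-censored, and a single Weibull is fit by censored maximum likelihood to estimate a lower percentile. The mixture weight and the shape/scale values are illustrative assumptions, not the thesis’s settings.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Mixture of two two-parameter Weibulls to mimic two failure modes
# (weights, shapes, and scales are illustrative, not the thesis's values).
n = 529
dominant = rng.random(n) < 0.8
x = np.where(dominant,
             10.0 * rng.weibull(4.0, n),   # main strength population
             3.0 * rng.weibull(1.5, n))    # early-failure mode

# Forced median censoring: observations above the sample median are
# treated as right-censored at the median.
c = np.median(x)
obs = np.minimum(x, c)
exact = x <= c                    # True = exact failure, False = censored

def neg_loglik(params):
    """Censored Weibull negative log-likelihood (shape k, scale lam)."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    z = obs / lam
    ll_exact = np.log(k / lam) + (k - 1.0) * np.log(z) - z**k   # density term
    ll_cens = -z**k                                             # survival term
    return -(ll_exact[exact].sum() + ll_cens[~exact].sum())

fit = minimize(neg_loglik, x0=[1.0, np.median(obs)], method="Nelder-Mead")
k_hat, lam_hat = fit.x

# Lower percentile estimate by inverting the fitted Weibull CDF.
p = 0.05
print(f"fitted 5th percentile:    {lam_hat * (-np.log(1 - p)) ** (1 / k_hat):.3f}")
print(f"empirical 5th percentile: {np.quantile(x, p):.3f}")
```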
32

EFFECTS OF SEQUENTIAL PROPERTIES ON STRATEGY CHOICE IN PROBABILITY MATCHING

Rosenthal, Renate Hoosmann, 1946- January 1975
No description available.
33

Investigating students' understandings of probability : a study of a grade 7 classroom

Abu-Bakare, Veda 11 1900
This research study probes students’ understandings of various aspects of probability in a 3-week Probability unit in a Grade 7 classroom. Informing this study are the perspectives of constructivism and sociocultural theory, which underpin the contemporary reform in mathematics education as codified in the NCTM standards and orient much of the teaching and learning of mathematics in today’s classrooms. Elements of culturally responsive pedagogy were also adopted within the research design. The study was carried out in an urban school where I collaborated with the teacher and students as co-teacher and researcher. As the population of this school was predominantly Aboriginal, the lessons included discussion of the tradition and significance of Aboriginal games of chance and an activity based on one of these games. Data sources included responses to the pre- and post-tests, fieldnotes of the lessons, and audiotapes of student interviews. The key findings of the study are that the students had some understanding of formal probability theory alongside strongly-held, persistent alternative thinking, some of which did not fit the informal conceptions of probability noted in the literature, such as the outcome approach and the gambler’s fallacy. This has led to the proposal of a Personal Probability model in which the determination of a probability or a probability decision is a weighting of components such as experience, intuition and judgment, some normative thinking, and personal choice, beliefs and attitudes. Though the alternative understandings were explored in interviews and resolved to some degree, the study finds that the probability understandings of students in this study are still fragile and inconsistent. Students demonstrated marked interest in tasks that combined mathematics, culture and community. This study presents evidence that the current prescribed learning outcomes in the elementary grades are too ambitious and best left to the higher grades. The difficulties of teaching and learning the subject, given its nuances and challenges and the dearth of time needed for an adequate treatment, further suggest that instructional resources at this level be focused on deepening and strengthening the basic ideas.
34

Behavioral Modeling of Botnet Populations Viewed through Internet Protocol Address Space

Weaver, Rhiannon 01 May 2012
A botnet is a collection of computers infected by a shared set of malicious software that maintains communications with a single human administrator or small organized group. Botnets are indirectly observable populations; cyber-analysts often measure a botnet’s threat in terms of its size, but size is derived from a count of the observable network touchpoints through which infected machines communicate. Activity is often a count of packets or connection attempts, representing logins to command and control servers, spam messages sent, peer-to-peer communications, or other discrete network behavior. Front-line analysts use sandbox testing of a botnet’s malicious software to discover signatures for detecting an infected computer and shutting it down, but there is less focus on modeling the botnet population as a collection of machines obscured by the kaleidoscope view of Internet Protocol (IP) address space. This research presents a Bayesian model for generic modeling of a botnet based on its observable activity across a network. A generation-allocation model is proposed that separates observable network activity at time t into the counts y_t generated by the malicious software and the network’s allocation of these counts among available IP addresses. As a first step, the framework outlines how to develop a directly observable behavioral model informed by sandbox tests and day-to-day user activity, and then how to use this model as a basis for population estimation in settings using proxies or Network Address Translation (NAT), in which only the aggregate sum of all machine activity is observed. The model is explored via a case study using the Conficker-C botnet that emerged in March of 2009.
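A minimal sketch of the generation-allocation idea, under strong simplifying assumptions: Poisson per-machine activity with a known, sandbox-calibrated rate, a single NAT’d external IP, and a uniform prior on population size. None of these settings come from the thesis; they only show the two-stage structure.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)

# Generation step: N infected machines each emit activity counts
# (e.g. connection attempts) as Poisson draws per hour.
N_true, mu, T = 120, 3.0, 48          # illustrative values, not Conficker-C fits
per_machine = rng.poisson(mu, size=(T, N_true))

# Allocation step: a NAT gateway collapses all machines onto one
# external IP address, so only the hourly aggregate sum is observed.
y = per_machine.sum(axis=1)

# Population estimation: with the per-machine rate mu calibrated from
# sandbox tests, the aggregate is Poisson(N * mu); put a uniform prior
# on N and compute the posterior on a grid.
N_grid = np.arange(1, 501)
log_lik = poisson.logpmf(y[:, None], N_grid[None, :] * mu).sum(axis=0)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

print(f"true N = {N_true}, posterior mode = {N_grid[np.argmax(post)]}")
```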
35

High-Dimensional Adaptive Basis Density Estimation

Buchman, Susan 01 May 2011
In the realm of high-dimensional statistics, regression and classification have received much attention, while density estimation has lagged behind. Yet there are compelling scientific questions which can only be addressed via density estimation using high-dimensional data, such as the paths of North Atlantic tropical cyclones. If we cast each track as a single high-dimensional data point, density estimation allows us to answer such questions via integration or Monte Carlo methods. In this dissertation, I present three new methods for estimating densities and intensities for high-dimensional data, all of which rely on a technique called diffusion maps. This technique constructs a mapping for high-dimensional, complex data into a low-dimensional space, providing a new basis that can be used in conjunction with traditional density estimation methods. Furthermore, I propose a reordering of importance sampling in the high-dimensional setting. Traditional importance sampling estimates high-dimensional integrals with the aid of an instrumental distribution chosen specifically to minimize the variance of the estimator. In many applications, the integral of interest is with respect to an estimated density. I argue that in the high-dimensional realm, performance can be improved by reversing the procedure: instead of estimating a density and then selecting an appropriate instrumental distribution, begin with the instrumental distribution and estimate the density with respect to it directly. The variance reduction follows from the improved density estimate. Lastly, I present some initial results in using climatic predictors such as sea surface temperature as spatial covariates in point process estimation.
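As a rough illustration of the diffusion-map-plus-density-estimation pipeline described above, the following Python sketch embeds a toy data set (a noisy curve in 3-D standing in for high-dimensional cyclone tracks) with a basic diffusion map and runs an ordinary kernel density estimate in the embedded coordinates. The kernel scale, number of retained coordinates, and toy data are all illustrative choices, not the dissertation’s.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy data near a 1-D curve in 3-D, standing in for high-dimensional
# points such as whole cyclone tracks.
t = rng.uniform(0, 2 * np.pi, 300)
X = np.column_stack([np.cos(t), np.sin(t), 0.1 * rng.normal(size=300)])

# Diffusion map: Gaussian affinities, row-normalized Markov matrix P,
# eigendecomposed via the symmetric conjugate D^{-1/2} W D^{-1/2}.
eps = 0.5                                        # illustrative kernel scale
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / eps)
d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))                  # symmetric form of P
vals, vecs = eigh(S)                             # ascending eigenvalues
psi = vecs[:, ::-1] / np.sqrt(d)[:, None]        # right eigenvectors of P

# Keep the two leading non-trivial diffusion coordinates and run an
# ordinary kernel density estimate in that low-dimensional basis.
emb = psi[:, 1:3] * vals[::-1][1:3]
kde = gaussian_kde(emb.T)
print("density at the first embedded point:", kde(emb[:1].T))
```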
36

The Short Time Fourier Transform and Local Signals

Okamura, Shuhei 01 June 2011
In this thesis, I examine the theoretical properties of the short time discrete Fourier transform (STFT). The STFT is obtained by applying the Fourier transform over a fixed-size moving window of the input series. We move the window one time point at a time, so the windows overlap. I present several theoretical properties of the STFT, applied to various types of complex-valued, univariate time series inputs, and their outputs in closed form. In particular, just like the discrete Fourier transform, the STFT’s modulus time series takes large positive values when the input is a periodic signal. One main point is that a white noise input results in an STFT output that is a complex-valued stationary time series, for which we can derive the time and time-frequency dependency structure, such as the cross-covariance functions. Our primary focus is the detection of local periodic signals. I present a method to detect local signals by computing the probability that the squared-modulus STFT time series has a run of consecutive values exceeding some threshold, beginning with one exceedance that immediately follows an observation below the threshold. We discuss a method to reduce the computation of such probabilities using the Box-Cox transformation and the delta method, and show that it works well in comparison to the Monte Carlo simulation method.
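A small Python sketch of this detection scheme on simulated data. The window length, frequency bin, threshold quantile, and run length r are illustrative choices, not the thesis’s calibrated values.

```python
import numpy as np

rng = np.random.default_rng(1)

# White noise with a short periodic signal buried in the middle.
n, win = 2000, 64
x = rng.normal(size=n)
x[900:1100] += 2.0 * np.sin(2 * np.pi * 0.125 * np.arange(200))

# STFT with a fixed window moved one point at a time (hop = 1),
# so consecutive windows overlap in all but one point.
frames = np.lib.stride_tricks.sliding_window_view(x, win)
stft = np.fft.rfft(frames, axis=1)
k = round(0.125 * win)                   # frequency bin of the planted sine
power = np.abs(stft[:, k]) ** 2          # squared-modulus STFT series

# Flag a local signal: r consecutive exceedances of a threshold
# starting right after an observation below the threshold.
thresh = np.quantile(power, 0.95)        # illustrative threshold choice
r = 5                                    # illustrative run length
above = power > thresh
starts = [t for t in range(1, len(above) - r)
          if not above[t - 1] and above[t:t + r].all()]
print("detected run starts:", starts[:5])
```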
37

Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage

Galyardt, April 01 July 2012
This dissertation examines two related questions: how do mixed membership models work, and can mixed membership be used to model how students use multiple strategies to solve problems? Mixed membership models have been used in thousands of applications, from text and image processing to genetic microarray analysis. Yet these models are crafted on a case-by-case basis because we do not yet understand the larger class of mixed membership models. The work presented here addresses this gap and examines two different aspects of the general class of models. First, I establish that categorical data is a special case that allows for a different interpretation of mixed membership than in the general case. Second, I present a new identifiability result that characterizes equivalence classes of mixed membership models which produce the same distribution of data. These results provide a strong foundation for building a model that captures how students use multiple strategies.

How to assess which strategies students use is an open question. Most psychometric models either do not model strategies at all, or they assume that each student uses a single strategy on all problems, even if they allow different students to use different strategies. The problem is, that’s not what students do. Students switch strategies. Even on the very simplest of arithmetic problems, students use different strategies on different problems, and experts use a different mixture of strategies than novices do. Assessing which strategies students use is an important part of assessing student knowledge, yet the concept of ‘strategy’ can be ill-defined. I use the Knowledge-Learning-Instruction framework to define a strategy as a particular type of integrative knowledge component. I then look at two different ways to model how students use multiple strategies. I combine cognitive diagnosis models with mixed membership models to create a multiple strategies model. This new model allows students to switch strategies from problem to problem, and allows us to estimate both the strategies that students are using and how often each student uses each strategy. I demonstrate this model on a modestly sized assessment of least common multiples.

Lastly, I present an analysis of the different strategies that students use to estimate numerical magnitude. Three smaller results come out of this analysis. First, it illustrates the limits of the general mixed membership model: the properties of mixed membership models developed in this dissertation show that, without serious changes to the model, it cannot describe the variation between students present in this data set. Second, I develop an exploratory data analysis method for summarizing functional data. Finally, this analysis demonstrates that existing psychological theory for how children estimate numerical magnitude is incomplete; there is more variation between students than is captured by current theoretical models.
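A minimal generative sketch of a mixed membership model for categorical (here binary correct/incorrect) response data, in the spirit of the multiple strategies model described above. The numbers of students, problems, and strategies, the Dirichlet concentration, and the success probabilities are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# S students, J problems, K strategies; every quantity below is
# invented for illustration.
S, J, K = 200, 20, 2
alpha = np.array([0.7, 0.3])                   # Dirichlet concentration
success = rng.uniform(0.2, 0.9, size=(K, J))   # strategy-by-problem success rates

# Mixed membership: each student gets a mixing vector over strategies,
# and draws a (possibly different) strategy on every problem.
membership = rng.dirichlet(alpha, size=S)               # (S, K)
strategy = np.array([rng.choice(K, size=J, p=g) for g in membership])
correct = rng.random((S, J)) < success[strategy, np.arange(J)]

# Unlike a single-strategy-per-student model, students here switch
# strategies across problems.
print("mean number of strategies used per student:",
      np.mean([np.unique(s).size for s in strategy]))
print("overall proportion correct:", correct.mean())
```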
38

Learning Spatio-Temporal Dynamics: Nonparametric Methods for Optimal Forecasting and Automated Pattern Discovery

Goerg, Georg Matthias 01 December 2012
Many important scientific and data-driven problems involve quantities that vary over space and time. Examples include functional magnetic resonance imaging (fMRI), climate data, and experimental studies in physics, chemistry, and biology. Principal goals of many methods in statistics, machine learning, and signal processing are to use this data to i) extract informative structures and remove noisy, uninformative parts; ii) understand and reconstruct the underlying spatio-temporal dynamics that govern these systems; and iii) forecast the data, i.e., describe the system in the future. Because these are data-driven problems, it is important to have methods and algorithms that work well in practice for a wide range of spatio-temporal processes as well as various data types. In this thesis I present such generally applicable statistical methods that address all three problems in a unifying manner. I introduce two new techniques for optimal nonparametric forecasting of spatio-temporal data: hard and mixed LICORS (Light Cone Reconstruction of States). Hard LICORS is a consistent predictive state estimator and extends previous work from Shalizi (2003); Shalizi, Haslinger, Rouquier, Klinkner, and Moore (2006); Shalizi, Klinkner, and Haslinger (2004) to continuous-valued spatio-temporal fields. Mixed LICORS builds on a new, fully probabilistic model of light cones and predictive state mappings, and is an EM-like version of hard LICORS. Simulations show that it has much better finite-sample properties than hard LICORS. I also propose a sparse variant of mixed LICORS, which improves out-of-sample forecasts even further. Both methods can then be used to estimate local statistical complexity (LSC) (Shalizi, 2003), a fully automatic technique for pattern discovery in dynamical systems. Simulations and applications to fMRI data demonstrate that the proposed methods work well and give useful results in very general scientific settings. Lastly, I have made most methods publicly available as R (R Development Core Team, 2010) or Python (Van Rossum, 2003) packages, so researchers can use these methods to better understand, forecast, and discover patterns in the data they study.
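The following is a crude Python stand-in for the light-cone construction: it extracts past light cones from a toy 1-D field and clusters them with k-means as a rough proxy for predictive state estimation. LICORS proper clusters cones by their predictive distributions, not by geometric similarity, so this sketch is only meant to show the data structures involved; the field, horizon, speed, and cluster count are all invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Toy 1-D spatio-temporal field: a traveling wave plus noise.
T, X = 200, 100
t, x = np.meshgrid(np.arange(T), np.arange(X), indexing="ij")
field = np.sin(0.3 * (x - t)) + 0.1 * rng.normal(size=(T, X))

# Past light cone of (t, x): values at the h previous time steps
# within propagation speed c; the "future" here is just the next value.
h, c = 2, 1
pasts, futures = [], []
for ti in range(h, T - 1):
    for xi in range(h * c, X - h * c):
        cone = [field[ti - d, xi - d * c: xi + d * c + 1] for d in range(1, h + 1)]
        pasts.append(np.concatenate(cone))
        futures.append(field[ti + 1, xi])
pasts, futures = np.asarray(pasts), np.asarray(futures)

# Crude proxy for predictive-state estimation: cluster the past cones
# with k-means (LICORS clusters cones by predictive distribution, not
# geometry) and forecast with each cluster's mean future value.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pasts)
state_mean = np.array([futures[km.labels_ == s].mean() for s in range(8)])
pred = state_mean[km.labels_]
print("in-sample forecast RMSE:", np.sqrt(np.mean((pred - futures) ** 2)))
```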
39

Statistical Modeling and Analysis of Breast Cancer and Pancreatic Cancer

Kottabi, Zahra 01 January 2012
The object of the present study is to apply statistical modeling to estimate the mean optimism of breast cancer patients as a function of the attribute variables delay, education, and age, for each race of breast cancer patients, and to investigate the nonlinear associations between optimism and education, age, and delay with respect to each race and to both combined. The study further develops differential equations that characterize the behavior of pancreatic cancer tumor size as a function of time. The mean solution of these differential equations, once plotted, identifies the rate of change of tumor size as a function of age, and the structure of the differential equations characterizes the growth of the pancreatic cancer tumor. Once the differential equations and their solutions are developed, the study probabilistically evaluates commonly used methods for performing survival analysis of medical patients, in order to validate the quality of the differential system and discuss its usefulness. The last part of the study compares parametric, semi-parametric, and nonparametric survival time models. The first part of this evaluation applies statistical tests that guide how to proceed with the actual cancer data; the second part identifies the parametric survival time function for each race and for both combined. We also evaluate the kernel density estimator, the popular Kaplan-Meier (KM) estimator, and the Cox proportional hazards (Cox PH) model using actual pancreatic cancer data. As expected, parametric survival analysis, when applicable, gives the best results, followed by the less commonly used nonparametric kernel density approach for evaluating actual cancer data.
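A short sketch of the Kaplan-Meier and Cox PH comparison using the lifelines Python library on synthetic data. The data-generating model, the age covariate, and the censoring fraction are stand-ins invented for illustration; the actual study used pancreatic cancer records.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(11)

# Synthetic stand-in for the cancer records: survival time shortens
# with age, with roughly 20% right-censoring (all values illustrative).
n = 300
age = rng.uniform(40, 85, n)
time = rng.exponential(scale=60.0 - 0.4 * (age - 40.0))
event = (rng.random(n) < 0.8).astype(int)
df = pd.DataFrame({"time": time, "event": event, "age": age})

# Nonparametric: Kaplan-Meier estimate of the survival function.
km = KaplanMeierFitter().fit(df["time"], event_observed=df["event"])
print("KM median survival time:", km.median_survival_time_)

# Semi-parametric: Cox proportional hazards with age as a covariate.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
```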
