  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

A search for supersymmetry with jets and missing transverse energy at the Large Hadron Collider, and the performance of the ATLAS missing transverse energy trigger

Pinder, Alexander Vincent January 2012 (has links)
Attempting to find evidence for supersymmetry (SUSY) is one of the key aims of the ATLAS experiment at the Large Hadron Collider. This thesis is concerned with searching for supersymmetry in final states with 2-4 hadronic jets, missing transverse energy and no electrons or muons. In the first part, a search strategy is developed using 1.04 fb⁻¹ of data from the first half of 2011. No excess over the Standard Model expectation is observed, so the data are used to set limits on two SUSY simplified models, in which pair-produced squarks or gluinos decay directly to neutralinos and jets. Relative to an earlier ATLAS analysis of the same dataset, improved limits are achieved for scenarios where the neutralino is nearly as massive as the squark or gluino. For example, for pair-production of squarks decaying directly to neutralinos, all neutralino masses below 200 GeV are excluded at 95% confidence level when the squark mass is 300 GeV. Similarly, for pair-produced gluinos, neutralino masses below 300 GeV are excluded when the gluino mass is 400 GeV. The equivalent neutralino mass limits in the earlier analysis are 130 GeV and 240 GeV respectively. In the second part, the performance of the ATLAS missing transverse energy trigger is studied, and its suitability for use in the SUSY search is evaluated. The behaviour is found to be consistent with expectations, and the trigger strategy for 2010 data-taking is described.
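The 95% confidence-level exclusions quoted above come from a counting-style limit on signal events over an expected background. The actual ATLAS analysis uses full profile-likelihood/CLs machinery; purely as an illustration of the idea, a bare Poisson counting-experiment upper limit can be sketched as follows (the event counts here are invented, not taken from the thesis):

```python
from scipy.stats import poisson

def signal_upper_limit(n_obs, b, cl=0.95, step=0.01):
    """Scan signal strengths s; exclude s once the Poisson probability
    of observing <= n_obs events under mean s + b drops below 1 - cl."""
    s = 0.0
    while poisson.cdf(n_obs, s + b) > 1.0 - cl:
        s += step
    return s

# Hypothetical counting experiment: 4 events observed, 3.5 expected background.
limit = signal_upper_limit(4, 3.5)
```

Any signal model predicting more than `limit` events in the signal region would be excluded at the chosen confidence level; subtracting a larger expected background tightens the limit on the signal.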
82

"Lighting his way home" : pastoral conversations with a missing child's mother

Brink, Anna Margaretha 30 November 2003 (has links)
The disappearance of a child is one of the horrors confronting today's society. The case study method, a feminist co-search methodology, is used to give a missing child's mother the opportunity to tell and re-tell her painful story. During this co-search process, aspects of doing ethics and of pastoral care and counselling with the mother are constantly negotiated. The term "missing child" is defined, and the relevance of the distinction between "missing children" and "runaway children" is discussed. Furthermore, this study explores the many diverse practices of narrative pastoral care and counselling with parents of missing children within an economically disadvantaged community. The conceptualisations regarding loss, hope and meaning-making, and how these are utilised in the life of a missing child's mother, are discussed. / Practical Theology / M.Th.
83

Real-Time Estimation of Aerodynamic Parameters

Larsson Cahlin, Sofia January 2016 (has links)
Extensive testing is performed when a new aircraft is developed. Flight testing is costly and time consuming but there are aspects of the process that can be made more efficient. A program that estimates aerodynamic parameters during flight could be used as a tool when deciding to continue or abort a flight from a safety or data collecting perspective. The algorithm of such a program must function in real time, which for this application would mean a maximum delay of a couple of seconds, and it must handle telemetric data, which might have missing samples in the data stream. Here, a conceptual program for real-time estimation of aerodynamic parameters is developed. Two estimation methods and four methods for handling of missing data are compared. The comparisons are performed using both simulated data and real flight test data. The first estimation method uses the least squares algorithm in the frequency domain and is based on the chirp z-transform. The second estimation method is created by adding boundary terms in the frequency domain differentiation and instrumental variables to the first method. The added boundary terms result in better estimates at the beginning of the excitation and the instrumental variables result in a smaller bias when the noise levels are high. The second method is therefore chosen in the algorithm of the conceptual program as it is judged to have a better performance than the first. The sequential property of the transform ensures functionality in real-time and the program has a maximum delay of just above one second. The four compared methods for handling missing data are to discard the missing data, hold the previous value, use linear interpolation or regard the missing samples as variations in the sample time. The linear interpolation method performs best on analytical data and is compared to the variable sample time method using simulated data. 
The results of the comparison using simulated data vary depending on the other implementation choices, but neither method is found to give unbiased results. In the conceptual program, the variable sample time method is chosen, as it gives a lower variance and is preferable from an implementation point of view.
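Two of the four missing-sample strategies compared above, holding the previous value and linear interpolation, can be sketched on a synthetic signal (this is illustrative only, not the thesis's flight-test data or its exact implementation):

```python
import numpy as np

def hold_previous(y, missing):
    """Replace each missing sample with the last observed value."""
    y = y.copy()
    for i in np.where(missing)[0]:
        y[i] = y[i - 1]          # assumes the first sample is observed
    return y

def interpolate_linear(t, y, missing):
    """Linearly interpolate missing samples from the observed ones."""
    y = y.copy()
    y[missing] = np.interp(t[missing], t[~missing], y[~missing])
    return y

t = np.linspace(0.0, 1.0, 101)
y_true = np.sin(2 * np.pi * t)
missing = np.zeros_like(t, dtype=bool)
missing[30:35] = True            # a short dropout in the data stream
y_obs = y_true.copy()
y_obs[missing] = np.nan

err_hold = np.abs(hold_previous(y_obs, missing) - y_true).max()
err_lin = np.abs(interpolate_linear(t, y_obs, missing) - y_true).max()
```

On a smooth signal like this, linear interpolation tracks the truth much more closely than holding the previous value, consistent with the abstract's finding that interpolation performs best on analytical data.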
84

Quantifying Power and Bias in Cluster Randomized Trials Using Mixed Models vs. Cluster-Level Analysis in the Presence of Missing Data: A Simulation Study

Vincent, Brenda January 2016 (has links)
In cluster randomized trials (CRTs), groups rather than individuals are randomized to treatment arms, while the outcome is assessed on the individuals within each cluster. Individuals within a cluster tend to be more similar to each other than individuals in a randomly selected sample; if this dependence is ignored, standard errors may be underestimated. To adjust for the correlation between individuals within clusters, two main approaches are used to analyze CRTs: cluster-level and individual-level analysis. In a cluster-level analysis, summary measures are obtained for each cluster and the two sets of cluster-specific measures are compared, for example with a t-test on the cluster means. A mixed model that takes cluster membership into account is an example of an individual-level analysis. We used a simulation study to quantify and compare the power and bias of these two methods, further taking into account the effect of missing data. Complete datasets were generated and then data were deleted to simulate data missing completely at random (MCAR) and missing at random (MAR). A balanced design with two treatment groups and two time points was assumed. Cluster size, variance components (within-subject, within-cluster and between-cluster variance) and the proportion of missingness were varied to simulate common scenarios seen in practice. For each combination of parameters, 1,000 datasets were generated and analyzed. Results of our simulation study indicate that cluster-level analysis resulted in a substantial loss of power when data were MAR. Individual-level analysis had higher power and remained unbiased, even with a small number of clusters.
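One arm of such a simulation, generating a clustered dataset and applying the cluster-level t-test on cluster means, might look like the following sketch (the parameter values and number of replications are illustrative, not those of the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_crt(n_clusters=10, cluster_size=20, effect=0.5,
                 between_sd=0.5, within_sd=1.0):
    """One balanced CRT: half the clusters per arm, a random cluster
    effect plus individual noise, and a treatment effect in arm 1.
    Returns the p-value of a t-test on the cluster means."""
    arms = np.repeat([0, 1], n_clusters // 2)
    cluster_means = []
    for arm in arms:
        u = rng.normal(0.0, between_sd)                  # cluster effect
        y = effect * arm + u + rng.normal(0.0, within_sd, cluster_size)
        cluster_means.append(y.mean())
    cluster_means = np.array(cluster_means)
    t, p = stats.ttest_ind(cluster_means[arms == 1], cluster_means[arms == 0])
    return p

# Crude power estimate: fraction of simulated trials rejecting at 5%.
pvals = [simulate_crt() for _ in range(200)]
power = float(np.mean(np.array(pvals) < 0.05))
```

Repeating this over a grid of cluster sizes, variance components and missingness mechanisms, and adding a mixed-model analysis of the individual-level data, gives the kind of power/bias comparison the abstract describes.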
85

A study on some missing value estimation algorithms for DNA microarray data

Tai, Ching-wan., 戴青雲. January 2006 (has links)
published_or_final_version / abstract / Mathematics / Master / Master of Philosophy
86

What Spins Away

Irwin, Keith 05 1900 (has links)
What Spins Away is a novel about a man named Caleb who, in the process of searching for a brother who has been missing for ten years, discovers that his inability to commit to a job or to his primary relationships is the result both of his history with that older, missing brother and of his own misconceptions about the meaning of that history. On a formal level, the novel explores the ability of traditional narrative structures to carry postmodern themes. The theme, in this case, is the struggle for a stable identity when there is no stable community against which, or in relation to which, an identity might be defined.
87

The effectiveness of missing data techniques in principal component analysis

Maartens, Huibrecht Elizabeth January 2015 (has links)
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of requirements for the degree of Master of Science. Johannesburg, 2015. / Exploratory data analysis (EDA) methods such as Principal Component Analysis (PCA) play an important role in statistical analysis. The analysis assumes that a complete dataset is observed. If the underlying data contain missing observations, the analysis cannot proceed immediately: a method to handle these missing observations must first be implemented. Missing data are a problem in any area of research, but researchers tend to ignore the problem, even though the missing observations can lead to incorrect conclusions and results. Many methods exist in the statistical literature for handling missing data. There are many methods in the context of PCA with missing data, but few studies have compared these methods in order to determine the most effective one. In this study the effectiveness of the Expectation Maximisation (EM) algorithm and the iterative PCA (iPCA) algorithm are assessed and compared against the well-known yet flawed methods of case-wise deletion (CW) and mean imputation. Two techniques for the application of the multiple imputation (MI) method of Markov Chain Monte Carlo (MCMC) with the EM algorithm in a PCA context are suggested, and their effectiveness is evaluated against the other methods. The analysis is based on a simulated dataset, and the effectiveness of the methods is analysed using the sum of squared deviations (SSD) and the RV coefficient, a measure of similarity between two datasets. The results show that the MI technique applying PCA in the calculation of the final imputed values and the iPCA algorithm are the most effective techniques, compared to the other techniques in the analysis.
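The iPCA algorithm discussed above alternates between fitting a low-rank PCA model and re-imputing the missing cells from its reconstruction. A minimal numpy sketch, run on simulated low-rank data rather than the study's dataset:

```python
import numpy as np

def ipca_impute(X, n_components=2, n_iter=50):
    """Iterative PCA imputation: start from a column-mean fill, then
    alternate between fitting a rank-k PCA model and replacing the
    missing cells with the model's reconstruction."""
    miss = np.isnan(X)
    Xf = X.copy()
    col_means = np.nanmean(X, axis=0)
    Xf[miss] = np.take(col_means, np.where(miss)[1])   # mean-imputation start
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
        recon = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        Xf[miss] = recon[miss]                         # re-impute from the model
    return Xf

# Simulated rank-2 data with ~10% of cells deleted.
rng = np.random.default_rng(1)
X_true = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 6))
X = X_true.copy()
mask = rng.random(X.shape) < 0.1
X[mask] = np.nan

X_hat = ipca_impute(X, n_components=2)
err_ipca = np.abs(X_hat - X_true)[mask].mean()
```

Because the simulated data are genuinely low-rank, the iterated reconstruction recovers the deleted cells far more accurately than the initial mean fill, which is the pattern the study's comparison reflects.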
88

Predicting HIV Status Using Neural Networks and Demographic Factors

Tim, Taryn Nicole Ho 15 February 2007 (has links)
Student Number : 0006036T - MSc(Eng) project report - School of Electrical and Information Engineering - Faculty of Engineering and the Built Environment / Demographic and medical history information obtained from annual South African antenatal surveys is used to estimate the risk of acquiring HIV. The estimation system consists of a classifier: a neural network trained to perform binary classification, using supervised learning on the survey data. The survey information contains discrete variables such as age, gravidity and parity, as well as the qualitative variables race and location, which make up the input to the neural network. HIV status is the output. A multilayer perceptron with a logistic output function is trained with a cross-entropy error function, providing a probabilistic interpretation of the output. Predictive and classification performance is measured, and the sensitivity and specificity are illustrated on the receiver operating characteristic (ROC) curve. An auto-associative neural network is trained on complete datasets; when it is presented with partial data, global optimisation methods are used to approximate the missing entries. The effect of the imputed data on the network's predictions is investigated.
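As a stripped-down stand-in for the multilayer perceptron described above, a single logistic output unit trained with the cross-entropy gradient illustrates the training loop and the probabilistic output (the inputs here are synthetic, not the antenatal survey data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for encoded demographic inputs.
n, d = 500, 4
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5, 0.0])
p_true = 1.0 / (1.0 + np.exp(-(X @ w_true)))
y = (rng.random(n) < p_true).astype(float)      # binary labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic output trained by gradient descent on mean cross-entropy.
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w)
    w -= lr * X.T @ (p - y) / n                 # cross-entropy gradient

p_hat = sigmoid(X @ w)                          # probabilistic output in (0, 1)
accuracy = float(np.mean((p_hat > 0.5) == (y == 1)))
```

Sweeping the 0.5 decision threshold over `p_hat` and recording sensitivity against specificity at each threshold traces out the ROC curve the abstract refers to.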
89

Computational intelligence techniques for missing data imputation

Nelwamondo, Fulufhelo Vincent 14 August 2008 (has links)
Despite considerable advances in missing data imputation techniques over the last three decades, the problem of missing data remains largely unsolved. Many techniques have emerged in the literature as candidate solutions, including Expectation Maximisation (EM) and the combination of autoassociative neural networks and genetic algorithms (NN-GA). The merits of both techniques have been discussed at length in the literature, but they have never been compared to each other. This thesis contributes to knowledge by, firstly, conducting a comparative study of these two techniques. The significance of the difference in performance of the methods is presented. Secondly, predictive analysis methods suitable for the missing data problem are presented. The predictive analysis is aimed at determining whether the data in question are predictable and hence at helping to choose the estimation technique accordingly. Thirdly, a novel treatment of missing data for online condition monitoring problems is presented. An ensemble of three autoencoders together with hybrid genetic algorithms (GA) and fast simulated annealing (FSA) was used to approximate missing data. Several significant insights were deduced from the simulation results. It was deduced that, for the problem of missing data using computational intelligence approaches, the choice of optimisation method plays a significant role in prediction. Although hybrid GA and FSA can converge to the same search space and to almost the same values, they differ significantly in duration. This demonstrates that particular attention has to be paid to the choice of optimisation techniques and their decision boundaries. Another contribution of this work was not only to demonstrate that dynamic programming is applicable to the problem of missing data, but also to show that it addresses the problem efficiently.
An NN-GA model was built to impute missing data using the principle of dynamic programming. This approach makes it possible to modularise the problem of missing data for maximum efficiency. With the advances in parallel computing, the various modules of the problem could be solved by different processors working in parallel. Furthermore, a method is proposed for imputing missing data in non-stationary time series that learns incrementally even when there is concept drift. This method works by measuring the heteroskedasticity to detect concept drift, and it explores an online learning technique. The introduction of this method opens a new direction for research in which missing data can be estimated for non-stationary applications; many other methods still need to be developed so that they can be compared with the approach proposed here. Another novel technique for dealing with missing data in online condition monitoring problems was also presented and studied. The problem of classifying in the presence of missing data was addressed, where no attempt is made to recover the missing values; the problem domain was then extended to regression. The proposed technique performs better than the NN-GA approach, in both accuracy and time efficiency during testing. Its advantage is that it eliminates the need to find the best estimate of the missing data, and hence saves time. Lastly, instead of using complicated techniques to estimate missing values, an imputation approach based on rough sets is explored. Empirical results obtained using both real and synthetic data are given, and they provide a valuable and promising insight into the problem of missing data. The work confirms that rough sets can be reliable for missing data estimation in large, real databases.
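The core NN-GA idea above, searching for missing values that minimise the reconstruction error of an autoassociative model, can be illustrated with a PCA reconstruction standing in for the trained neural network and a simple grid search standing in for the GA/FSA optimiser (both substitutions are simplifications for the sketch):

```python
import numpy as np

rng = np.random.default_rng(3)

# Near-rank-2 data; fit a PCA "autoassociative" map on the full dataset.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))
X += 0.01 * rng.normal(size=X.shape)
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
V = Vt[:2].T                                   # rank-2 components

def reconstruct(x):
    """Project onto the components and back: the autoassociative map."""
    return mu + (x - mu) @ V @ V.T

def impute_by_search(x, j, grid):
    """Search candidate values for missing entry j (grid search here,
    GA or fast simulated annealing in the thesis), minimising the
    squared reconstruction error ||x - f(x)||^2."""
    best, best_err = None, np.inf
    for v in grid:
        cand = x.copy()
        cand[j] = v
        err = np.sum((cand - reconstruct(cand)) ** 2)
        if err < best_err:
            best, best_err = v, err
    return best

x = X[0].copy()
true_val = x[2]
x[2] = 0.0                                     # pretend entry 2 is missing
grid = np.linspace(X[:, 2].min(), X[:, 2].max(), 601)
estimate = impute_by_search(x, 2, grid)
```

A value consistent with the model's learned structure yields a near-zero reconstruction error, so the search recovers something close to the deleted entry; the thesis's observation is that the choice of optimiser changes how quickly, not whether, this minimum is found.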
90

A Monte-Carlo comparison of methods in analyzing structural equation models with incomplete data.

January 1991 (has links)
by Siu-fung Chan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. / Bibliography: leaves 38-41. / Chapter Chapter 1 --- Introduction --- p.1 / Chapter Chapter 2 --- Analysis of the Structural Equation Model with Continuous Data --- p.6 / Chapter §2.1 --- The Model --- p.6 / Chapter §2.2 --- Methods of Handling Incomplete Data --- p.8 / Chapter §2.3 --- Design of the Monte-Carlo Study --- p.12 / Chapter §2.4 --- Results of the Monte-Carlo Study --- p.15 / Chapter Chapter 3 --- Analysis of the Structural Equation Model with Polytomous Data --- p.24 / Chapter §3.1 --- The Model --- p.24 / Chapter §3.2 --- Methods of Handling Incomplete Data --- p.25 / Chapter §3.3 --- Design of the Monte-Carlo Study --- p.27 / Chapter §3.4 --- Results of the Monte-Carlo Study --- p.31 / Chapter Chapter 4 --- Summary and Discussion --- p.36 / References --- p.38 / Tables --- p.42 / Figures --- p.78
