  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Analysis of surface ships engineering readiness and training

Landreth, Brant T. 03 1900 (has links)
Approved for public release, distribution is unlimited / This thesis analyzes engineering readiness and training aboard United States Navy surface ships. On the west coast, the major contributor to training is the Afloat Training Group, Pacific (ATGPAC). The primary objective is to determine whether the readiness standards provide pertinent insight to the surface force Commander, and to generate alternatives that may better characterize force-wide engineering readiness. The Type Commander has many questions that should be answered; some are addressed here with Poisson and binomial models. The results include: first, the age of a ship has no association with drill performance, while the number of discrepancies does; second, drill performance decreased from the first initial assessment (IA) to the second IA; third, for ships observed over two cycles, the average number of material discrepancies decreases from the IA to the underway demonstration (UD); fourth, good ships do well on four programs; finally, training is effective. A table characterizing ships as above average, average, or below average in drill effectiveness at the IA and UD is supplied. / Lieutenant, United States Navy
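The kind of Poisson comparison this abstract describes (discrepancy counts at the IA versus the UD) can be sketched with a likelihood-ratio test. This is a generic illustration, not the thesis's actual model or data; the counts below are made up.

```python
import math

def poisson_rate_mle(counts):
    """The MLE of a Poisson rate is simply the sample mean of the counts."""
    return sum(counts) / len(counts)

def poisson_loglik(counts, lam):
    """Poisson log-likelihood of the counts at rate lam."""
    return sum(-lam + k * math.log(lam) - math.lgamma(k + 1) for k in counts)

def lr_statistic(counts_a, counts_b):
    """Likelihood-ratio statistic for H0: both groups share one Poisson rate.
    Under H0 it is approximately chi-square with 1 degree of freedom."""
    lam_pool = poisson_rate_mle(counts_a + counts_b)
    ll_null = poisson_loglik(counts_a + counts_b, lam_pool)
    ll_alt = (poisson_loglik(counts_a, poisson_rate_mle(counts_a))
              + poisson_loglik(counts_b, poisson_rate_mle(counts_b)))
    return 2 * (ll_alt - ll_null)

# Hypothetical material-discrepancy counts for five ships at the IA and the UD
ia_counts = [12, 9, 15, 11, 10]
ud_counts = [7, 5, 9, 6, 8]
stat = lr_statistic(ia_counts, ud_counts)
```

A statistic above the chi-square critical value (3.84 at the 5% level, 1 df) would indicate that the discrepancy rate genuinely dropped between assessments.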
102

Quantifying Environmental and Line-of-sight Effects in Models of Strong Gravitational Lens Systems

McCully, Curtis, Keeton, Charles R., Wong, Kenneth C., Zabludoff, Ann I. 14 February 2017 (has links)
Matter near a gravitational lens galaxy or projected along the line of sight (LOS) can affect strong lensing observables by more than contemporary measurement errors. We simulate lens fields with realistic three-dimensional mass configurations (self-consistently including voids), and then fit mock lensing observables with increasingly complex lens models to quantify biases and uncertainties associated with different ways of treating the lens environment (ENV) and LOS. We identify the combination of mass, projected offset, and redshift that determines the importance of a perturbing galaxy for lensing. Foreground structures have a stronger effect on the lens potential than background structures, due to nonlinear effects in the foreground and downweighting in the background. There is dramatic variation in the net strength of ENV/LOS effects across different lens fields; modeling fields individually yields stronger priors for H_0 than ray tracing through N-body simulations. Models that ignore mass outside the lens yield poor fits and biased results. Adding external shear can account for tidal stretching from galaxies at redshifts z ≥ z_lens, but it requires corrections for external convergence and cannot reproduce nonlinear effects from foreground galaxies. Using the tidal approximation is reasonable for most perturbers as long as nonlinear redshift effects are included. Even then, the scatter in H_0 is limited by the lens profile degeneracy. Asymmetric image configurations produced by highly elliptical lens galaxies are less sensitive to the lens profile degeneracy, so they offer appealing targets for precision lensing analyses in future surveys like LSST and Euclid.
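The external-convergence correction mentioned in the abstract has a simple first-order form often used in time-delay cosmography: convergence along the LOS that the lens model ignores rescales the inferred time-delay distance, so the corrected Hubble constant is H_0,true = H_0,model × (1 − κ_ext). A toy sketch (the numbers are invented, not from the paper):

```python
def correct_h0_for_kappa_ext(h0_model, kappa_ext):
    """External convergence kappa_ext rescales the modeled time-delay
    distance, so the corrected Hubble constant is H0_model * (1 - kappa_ext)."""
    return h0_model * (1 - kappa_ext)

# Hypothetical: a lens model infers H0 = 74 km/s/Mpc while the field carries
# 5% external convergence that the model ignored.
h0_corrected = correct_h0_for_kappa_ext(74.0, 0.05)
```

Positive κ_ext (an overdense LOS) pulls the inferred H_0 down; a void along the LOS (κ_ext < 0) pushes it up, which is why the paper's void-inclusive simulations matter.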
103

THE 3D-HST SURVEY: HUBBLE SPACE TELESCOPE WFC3/G141 GRISM SPECTRA, REDSHIFTS, AND EMISSION LINE MEASUREMENTS FOR ∼100,000 GALAXIES

Momcheva, Ivelina G., Brammer, Gabriel B., van Dokkum, Pieter G., Skelton, Rosalind E., Whitaker, Katherine E., Nelson, Erica J., Fumagalli, Mattia, Maseda, Michael V., Leja, Joel, Franx, Marijn, Rix, Hans-Walter, Bezanson, Rachel, Cunha, Elisabete Da, Dickey, Claire, Schreiber, Natascha M. Förster, Illingworth, Garth, Kriek, Mariska, Labbé, Ivo, Lange, Johannes Ulf, Lundgren, Britt F., Magee, Daniel, Marchesini, Danilo, Oesch, Pascal, Pacifici, Camilla, Patel, Shannon G., Price, Sedona, Tal, Tomer, Wake, David A., van der Wel, Arjen, Wuyts, Stijn 11 August 2016 (has links)
We present reduced data and data products from the 3D-HST survey, a 248-orbit HST Treasury program. The survey obtained WFC3 G141 grism spectroscopy in four of the five CANDELS fields: AEGIS, COSMOS, GOODS-S, and UDS, along with WFC3 H_140 imaging, parallel ACS G800L spectroscopy, and parallel I_814 imaging. In a previous paper, we presented photometric catalogs in these four fields and in GOODS-N, the fifth CANDELS field. Here we describe and present the WFC3 G141 spectroscopic data, again augmented with data from GO-1600 in GOODS-N (PI: B. Weiner). We developed software to automatically and optimally extract interlaced two-dimensional (2D) and one-dimensional (1D) spectra for all objects in the Skelton et al. (2014) photometric catalogs. The 2D spectra and the multi-band photometry were fit simultaneously to determine redshifts and emission line strengths, taking the morphology of the galaxies explicitly into account. The resulting catalog has redshifts and line strengths (where available) for 22,548 unique objects down to JH_IR ≤ 24 (79,609 unique objects down to JH_IR ≤ 26). Of these, 5459 galaxies are at z > 1.5 and 9621 are at 0.7 < z < 1.5, where Hα falls in the G141 wavelength coverage. The typical redshift error for JH_IR ≤ 24 galaxies is σ(z) ≈ 0.003 × (1 + z), i.e., one native WFC3 pixel. The 3σ limit for emission line fluxes of point sources is 2.1 × 10^-17 erg s^-1 cm^-2. All 2D and 1D spectra, as well as redshifts, line fluxes, and other derived parameters, are publicly available.
104

The World Heritage on Öland : An Investigation into the Motivations of Chinese Travelers to Travel Abroad

Zhou, Chuanhui, Yu, Anqi January 2016 (has links)
The aim of this research is to explore how Öland could attract Chinese tourists to sustain its tourism development. The study is based on group interviews with 20 Chinese tourists. It reveals that learning and experiencing, building social relationships, and enjoying natural landscapes are the major reasons Chinese tourists travel abroad, and that attractive tourist spots are the main motivation for visiting Öland specifically. The research finds that few Chinese tourists have been to Öland before; among those who have, Borgholm Castle was named the most attractive spot. The main channels through which Chinese tourists access information about Öland are travel agencies, travel apps, TV shows, movies, and the internet, whereas marketing measures such as advertising in Chinese and cooperation with local travel agencies have barely been utilized by the government. An analysis of the key motivations of Chinese tourists reveals one challenge facing the government in attracting them: compared with other popular destinations, Öland is not well recognized among Chinese tourists as a World Heritage site. The strengths and weaknesses of Öland's tourism indicate that Öland needs more active marketing strategies to brand its tourism for Chinese tourists.
105

TOWARD A NETWORK OF FAINT DA WHITE DWARFS AS HIGH-PRECISION SPECTROPHOTOMETRIC STANDARDS

Narayan, G., Axelrod, T., Holberg, J. B., Matheson, T., Saha, A., Olszewski, E., Claver, J., Stubbs, C. W., Bohlin, R. C., Deustua, S., Rest, A. 05 May 2016 (has links)
We present the initial results from a program aimed at establishing a network of hot DA white dwarfs to serve as spectrophotometric standards for present and future wide-field surveys. These stars span the equatorial zone and are faint enough to be conveniently observed throughout the year with large-aperture telescopes. The spectra of these white dwarfs are analyzed in order to generate a non-local-thermodynamic-equilibrium model atmosphere normalized to Hubble Space Telescope colors, including adjustments for wavelength-dependent interstellar extinction. Once established, this standard star network will serve ground-based observatories in both hemispheres as well as space-based instrumentation from the UV to the near IR. We demonstrate the effectiveness of this concept and show how two different approaches to the problem using somewhat different assumptions produce equivalent results. We discuss the lessons learned and the resulting corrective actions applied to our program.
106

A Comparison of Two Differential Item Functioning Detection Methods: Logistic Regression and an Analysis of Variance Approach Using Rasch Estimation

Whitmore, Marjorie Lee Threet 08 1900 (has links)
Differential item functioning (DIF) detection rates were examined for the logistic regression and analysis of variance (ANOVA) DIF detection methods. The methods were applied to simulated data sets of varying test length (20, 40, and 60 items) and sample size (200, 400, and 600 examinees) for both equal and unequal underlying ability between groups as well as for both fixed and varying item discrimination parameters. Each test contained 5% uniform DIF items, 5% non-uniform DIF items, and 5% combination DIF (simultaneous uniform and non-uniform DIF) items. The factors were completely crossed, and each experiment was replicated 100 times. For both methods and all DIF types, a test length of 20 was sufficient for satisfactory DIF detection. The detection rate increased significantly with sample size for each method. With the ANOVA DIF method and uniform DIF, there was a difference in detection rates between discrimination parameter types, which favored varying discrimination and decreased with increased sample size. The detection rate of non-uniform DIF using the ANOVA DIF method was higher with fixed discrimination parameters than with varying discrimination parameters when relative underlying ability was unequal. In the combination DIF case, there was a three-way interaction among the experimental factors (discrimination type, relative ability, and sample size) for both detection methods. The error rate for the ANOVA DIF detection method decreased as test length increased and increased as sample size increased. For both methods, the error rate was slightly higher with varying discrimination parameters than with fixed. For logistic regression, the error rate increased with sample size when relative underlying ability was unequal between groups. The logistic regression method detected uniform and non-uniform DIF at a higher rate than the ANOVA DIF method.
Because the type of DIF present in real data is rarely known, the logistic regression method is recommended for most cases.
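The logistic regression DIF procedure referenced above is conventionally a nested-model likelihood-ratio test: a base model predicts item correctness from ability alone, and the augmented model adds a group main effect (uniform DIF) and an ability-by-group interaction (non-uniform DIF). The sketch below is a minimal, generic version with a hand-rolled gradient-ascent fit and simulated data, not the dissertation's implementation; the effect sizes and seeds are arbitrary.

```python
import math
import random

def fit_logistic(X, y, iters=800, step=0.5):
    """Plain gradient-ascent logistic regression; returns the log-likelihood."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(d):
                grad[j] += (yi - p) * xi[j]
        w = [wj + step * gj / n for wj, gj in zip(w, grad)]
    ll = 0.0
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi))
        ll -= math.log(1.0 + math.exp(-z)) if yi else math.log(1.0 + math.exp(z))
    return ll

def dif_statistic(ability, group, response):
    """2*(ll_full - ll_base); the full model adds a group main effect
    (uniform DIF) and an ability-by-group interaction (non-uniform DIF).
    Approximately chi-square with 2 df when the item shows no DIF."""
    base = [[1.0, a] for a in ability]
    full = [[1.0, a, float(g), a * g] for a, g in zip(ability, group)]
    return 2 * (fit_logistic(full, response) - fit_logistic(base, response))

# Simulated responses to one item with uniform DIF against the focal group
random.seed(2)
ability = [random.gauss(0.0, 1.0) for _ in range(400)]
group = [i % 2 for i in range(400)]
response = [1 if random.random() < 1.0 / (1.0 + math.exp(-(0.5 + a - 1.2 * g)))
            else 0 for a, g in zip(ability, group)]
stat = dif_statistic(ability, group, response)
```

A statistic well above the 2-df chi-square critical value (5.99 at the 5% level) flags the item, without needing to know in advance whether the DIF is uniform or non-uniform, which is exactly why the dissertation recommends this method for real data.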
107

Reliability analysis of the 4.5 roller bearing

Muller, Cole 06 1900 (has links)
Approved for public release, distribution is unlimited / The J-52 engine used in the EA-6B Prowler has been found to have a faulty design that has led to in-flight engine failures due to degradation of the 4.5 roller bearing. Because of cost constraints, the Navy developed a policy of maintaining, rather than replacing, the faulty engine with a re-designed one. With an increase in Prowler crashes related to failure of this bearing, the Navy has begun to re-evaluate this policy. This thesis analyzed the problem using methods from reliability statistics to develop policy recommendations for the Navy. One approach analyzed the individual times to failure of the bearings and fit the data to a known distribution. Using this distribution, we estimated lower confidence bounds for the time by which 0.0001% of the bearings are expected to fail, and found it to be below fifty hours. Such calculations can be used to form maintenance and replacement policies. Another approach analyzed oil samples taken from the J-52 engine. The oil samples contain particles of the different metals that compose the 4.5 roller bearing. Linear regression, classification and regression trees, and discriminant analysis were used to determine that molybdenum and vanadium levels are good indicators of when a bearing is near failure. / http://hdl.handle.net/10945/945 / Ensign, United States Navy
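The lifetime analysis sketched in this abstract (fit a distribution, then bound the time by which a tiny fraction of bearings fails) can be illustrated with a Weibull fit and a bootstrap lower bound. This is a generic sketch under assumed parameters, not the thesis's method or data: the distribution choice, the Weibull(β = 2, η = 1000 h) simulation, and the bootstrap bound are all illustrative stand-ins.

```python
import math
import random

def weibull_mle(times):
    """Weibull(shape beta, scale eta) MLE; beta solves the profile-likelihood
    equation, found here by bisection (the left side increases with beta)."""
    logs = [math.log(t) for t in times]
    mean_log = sum(logs) / len(times)
    def g(beta):
        num = sum(t ** beta * math.log(t) for t in times)
        den = sum(t ** beta for t in times)
        return num / den - 1.0 / beta - mean_log
    lo, hi = 0.01, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = (sum(t ** beta for t in times) / len(times)) ** (1.0 / beta)
    return beta, eta

def weibull_quantile(beta, eta, p):
    """Time by which a fraction p of bearings is expected to have failed."""
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)

def bootstrap_lower_bound(times, p, level=0.95, B=200, seed=0):
    """Nonparametric-bootstrap lower confidence bound on the p-quantile."""
    rng = random.Random(seed)
    quantiles = []
    for _ in range(B):
        sample = [rng.choice(times) for _ in times]
        b, e = weibull_mle(sample)
        quantiles.append(weibull_quantile(b, e, p))
    quantiles.sort()
    return quantiles[int((1.0 - level) * B)]

# Hypothetical bearing lifetimes (hours), drawn from Weibull(beta=2, eta=1000)
rng = random.Random(7)
times = [1000.0 * (-math.log(1.0 - rng.random())) ** 0.5 for _ in range(30)]
beta_hat, eta_hat = weibull_mle(times)
lb = bootstrap_lower_bound(times, 1e-6)  # 0.0001% failure fraction
```

The lower bound, rather than the point estimate, is the conservative number a maintenance policy would be built on, since the extreme-tail quantile is very sensitive to the fitted shape parameter.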
108

DNA Microarray Data Analysis and Mining: Affymetrix Software Package and In-House Complementary Packages

Xu, Lizhe 19 December 2003 (has links)
Data management and analysis represent a major challenge for microarray studies. In this study, Affymetrix software was used to analyze an HIV-infection data set. The microarray analysis shows remarkably different results when different parameters provided by the software are used. This highlights the fact that a standardized analysis tool incorporating biological information about the genes is needed to better interpret microarray studies. To address the data management problem, in-house programs, including scripts and a database, were designed. The in-house programs were also used to overcome problems and inconveniences discovered during the data analysis, including management of the gene lists. The database provides rapid connection to many online public databases, as well as integration of the original microarray data, relevant publications, and other useful information. Together, the in-house programs allow investigators to process and analyze full Affymetrix microarray data quickly.
109

The effectiveness of missing data techniques in principal component analysis

Maartens, Huibrecht Elizabeth January 2015 (has links)
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2015. / Exploratory data analysis (EDA) methods such as Principal Component Analysis (PCA) play an important role in statistical analysis. The analysis assumes that a complete dataset is observed; if the underlying data contain missing observations, the analysis cannot proceed until a method for handling them is implemented. Missing data are a problem in any area of research, but researchers tend to ignore the problem, even though missing observations can lead to incorrect conclusions and results. Many methods exist in the statistical literature for handling missing data, including several in the context of PCA, but few studies have compared them to determine which is most effective. In this study the effectiveness of the Expectation Maximisation (EM) algorithm and the iterative PCA (iPCA) algorithm is assessed and compared against the well-known yet flawed methods of case-wise deletion (CW) and mean imputation. Two techniques for applying the multiple imputation (MI) method of Markov Chain Monte Carlo (MCMC) with the EM algorithm in a PCA context are suggested, and their effectiveness is evaluated against the other methods. The analysis is based on a simulated dataset, with the effectiveness of the methods assessed using the sum of squared deviations (SSD) and the RV coefficient, a measure of similarity between two datasets. The results show that the MI technique applying PCA in the calculation of the final imputed values and the iPCA algorithm are the most effective of the techniques compared in the analysis.
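The iPCA idea the abstract evaluates can be sketched in a few lines: initialise missing cells with column means, then alternate between a low-rank PCA reconstruction and refilling the missing cells from it. The rank-1, power-iteration version below is a simplified illustration (the dissertation's algorithm, rank choice, and convergence criteria may differ); the toy matrix and hidden cells are invented.

```python
import math

def rank1_approx(M, iters=100):
    """Best rank-1 approximation of M via power iteration on M^T M."""
    n, d = len(M), len(M[0])
    v = [1.0] * d
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(d)) for i in range(n)]
        v = [sum(M[i][j] * u[i] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
    u = [sum(M[i][j] * v[j] for j in range(d)) for i in range(n)]
    return [[u[i] * v[j] for j in range(d)] for i in range(n)]

def ipca_impute(X, observed, iters=50):
    """observed[i][j] is True where X[i][j] was measured. Missing cells start
    at their column means and are refilled from a centered rank-1
    reconstruction until the values stabilise."""
    n, d = len(X), len(X[0])
    Z = [row[:] for row in X]
    for j in range(d):
        seen = [X[i][j] for i in range(n) if observed[i][j]]
        m = sum(seen) / len(seen)
        for i in range(n):
            if not observed[i][j]:
                Z[i][j] = m
    for _ in range(iters):
        means = [sum(Z[i][j] for i in range(n)) / n for j in range(d)]
        C = [[Z[i][j] - means[j] for j in range(d)] for i in range(n)]
        R = rank1_approx(C)
        for i in range(n):
            for j in range(d):
                if not observed[i][j]:
                    Z[i][j] = R[i][j] + means[j]
    return Z

# Toy data with exact mean + rank-1 structure; two cells are hidden.
a = [-2.0, -1.0, 0.0, 1.0, 2.0]
b = [1.0, 2.0, 3.0]
m = [10.0, 20.0, 30.0]
X = [[m[j] + a[i] * b[j] for j in range(3)] for i in range(5)]
observed = [[True] * 3 for _ in range(5)]
observed[0][0] = False  # true value 8.0
observed[4][2] = False  # true value 36.0
Z = ipca_impute(X, observed)
```

Because the toy data are exactly mean-plus-rank-1, the iteration pulls both hidden cells from their mean-imputed starting values back toward the structurally consistent values, which is the advantage over plain mean imputation that the dissertation measures with the SSD and RV coefficient.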
110

Problemas inversos em física experimental: a secção de choque fotonuclear e radiação de aniquilação elétron-pósitron / Inverse problems in experimental physics: the photonuclear cross section and electron-positron annihilation radiation.

Takiya, Carlos 27 June 2003 (has links)
Os métodos para resolução de Problemas Inversos aplicados a dados experimentais (Regularização, Bayesianos e Máxima Entropia) foram revistos. O procedimento de Mínimos Quadrados com Regularização por Mínima Variância (MQ-RMV) foi desenvolvido para resolução de Problemas Inversos lineares, tendo sido aplicado em: a) espectros unidimensionais simulados; b) determinação da secção de choque ³⁴S(γ, xn) a partir de yield de bremsstrahlung; c) análise da radiação de aniquilação elétron-pósitron em alumínio de experimento de coincidência com dois detetores semicondutores. Os resultados são comparados aos obtidos por outros métodos. / The methods used to solve inverse problems applied to experimental data (Regularization, Bayesian, and Maximum Entropy) were reviewed. The Least-Squares procedure with Minimum-Variance Regularization (LS-MVR) was developed to solve linear inverse problems and was applied to: a) simulated one-dimensional spectra; b) determination of the ³⁴S(γ, xn) cross section from the bremsstrahlung yield; c) analysis of electron-positron annihilation radiation in aluminum from a coincidence experiment with two semiconductor detectors. The results were compared to those obtained by other methods.
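Regularized least squares is the common core of the inverse-problem methods this abstract surveys. The sketch below is plain Tikhonov regularization with a fixed weight λ, solved through the normal equations; it is a generic stand-in, not the thesis's minimum-variance rule for choosing the regularization.

```python
def solve_linear(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def tikhonov(A, b, lam):
    """argmin ||Ax - b||^2 + lam*||x||^2 via the normal equations
    (A^T A + lam*I) x = A^T b; lam = 0 recovers ordinary least squares."""
    n, d = len(A), len(A[0])
    AtA = [[sum(A[i][r] * A[i][c] for i in range(n)) + (lam if r == c else 0.0)
            for c in range(d)] for r in range(d)]
    Atb = [sum(A[i][r] * b[i] for i in range(n)) for r in range(d)]
    return solve_linear(AtA, Atb)

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x_ls = tikhonov(A, b, 0.0)   # ordinary least squares: [1.0, 2.0]
x_reg = tikhonov(A, b, 1.0)  # shrunk toward zero
```

Increasing λ trades fidelity to the noisy data for stability of the solution; schemes like the thesis's MQ-RMV differ mainly in how that trade-off is chosen.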
