61

Nonparametric and semiparametric methods for interval-censored failure time data

Zhu, Chao, January 2006 (has links)
Thesis (Ph.D.)--University of Missouri-Columbia, 2006. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on May 2, 2007. Vita. Includes bibliographical references.
62

Multi-dataset electron density analysis methods for X-ray crystallography

Pearce, Nicholas M. January 2016 (has links)
X-ray crystallography is extensively deployed to determine the structure of proteins, both unbound and bound to different molecules. Crystallography has the power to visually reveal the binding of small molecules, assisting in their development in structure-based lead design. Currently, however, the methods used to detect binding, and the subjectivity of inexperienced modellers, are a weak point in the field. Existing methods for ligand identification are fundamentally flawed when identifying partially-occupied states in crystallographic datasets; the ambiguity of conventional electron density maps, which present a superposition of multiple states, prevents robust ligand identification. In this thesis, I present novel methods to clearly identify bound ligands and other changed states in the case where multiple crystallographic datasets are available, such as in crystallographic fragment screening experiments. By applying statistical methods to signal identification, more crystallographic binders are detected than by state-of-the-art conventional approaches. Standard modelling practice is further challenged regarding the modelling of multiple chemical states in crystallography. The prevailing modelling approach is to model only the bound state of the protein; I show that modelling an ensemble of bound and unbound states leads to better models. I conclude with a discussion of possible future applications of multi-dataset methods in X-ray crystallography, including the robust identification of conformational heterogeneity in protein structures.
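The multi-dataset idea described above lends itself to a short illustration. The sketch below is hypothetical code, not the thesis software: it estimates a ground-state density and per-voxel spread from many aligned maps, then flags voxels in one dataset that deviate strongly from it, which is the kind of statistical signal identification the abstract refers to.

```python
# A minimal sketch of multi-dataset density analysis: characterise the
# "ground state" density at each grid point from many datasets, then flag
# voxels where one dataset deviates significantly (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for N aligned electron-density maps on a common grid.
n_datasets, grid = 40, (16, 16, 16)
maps = rng.normal(0.0, 0.1, size=(n_datasets,) + grid)

# Inject a weak, partially occupied "ligand" signal into one dataset.
maps[7, 4:7, 4:7, 4:7] += 0.35

mean_map = maps.mean(axis=0)          # ground-state estimate
sigma_map = maps.std(axis=0, ddof=1)  # per-voxel spread across datasets

# Z-map for dataset 7: deviations from the ground state in units of sigma.
z_map = (maps[7] - mean_map) / sigma_map

# Voxels exceeding a significance threshold indicate a changed (e.g. bound) state.
print("significant voxels:", np.argwhere(np.abs(z_map) > 3.0).shape[0])
```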
63

Dissimilarity functions analysis based on dynamic clustering for symbolic data

Cléa Gomes da Silva, Alzennyr January 2005 (has links)
Symbolic Data Analysis is a new domain in the field of automatic knowledge discovery that aims to develop methods for data described by variables whose values may be sets of categories, intervals, or probability distributions. These new variable types make it possible to take into account the variability and/or uncertainty present in the data. Handling symbolic data with statistical and machine learning techniques requires the introduction of distance measures capable of dealing with this kind of data. To this end, several dissimilarity functions have been proposed in the literature. However, no comparative study of the performance of these functions on problems involving both Boolean and modal symbolic data had been carried out. The main contribution of this dissertation is a comparative analysis and empirical evaluation of dissimilarity functions for symbolic data, since this type of study, although highly relevant, is almost nonexistent in the literature. In addition, this work introduces new dissimilarity functions that can be used in the dynamic clustering of symbolic data. Dynamic clustering algorithms simultaneously obtain a partition into a fixed number of classes and the identification of a representative for each class, by locally minimizing a criterion that measures the fit between the classes and their representatives. To validate the study, experiments were carried out with benchmark datasets from the literature and with two artificial interval-valued datasets of differing classification difficulty, in order to compare the evaluated functions. The accuracy of the results was measured by an external clustering index applied within an unsupervised cross-validation scheme for the real datasets, and within a Monte Carlo experiment for the artificial datasets. The results make it possible to assess how well the various dissimilarity functions suit the different types of symbolic data (multi-valued, ordinal multi-valued, interval-valued, and modal with the same or with different supports), as well as to identify the best-performing configurations of functions. Statistical tests validate the conclusions.
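As a concrete illustration of the kind of dissimilarity function and clustering step the dissertation studies, the sketch below computes a Hausdorff-type distance between interval-valued observations and performs one assignment step against fixed prototypes. It is a minimal, hypothetical example, not one of the dissertation's evaluated functions.

```python
# A Hausdorff-type dissimilarity between interval-valued observations and a
# single assignment step of a dynamic-clustering pass against fixed prototypes.
import numpy as np

def interval_hausdorff(a, b):
    """Hausdorff distance between intervals a = [a1, a2] and b = [b1, b2]."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def dissimilarity(x, y):
    """Sum the per-variable interval distances for two symbolic observations."""
    return sum(interval_hausdorff(xi, yi) for xi, yi in zip(x, y))

# Two hypothetical interval-valued observations (two variables each).
obs = [[(1.0, 2.0), (10.0, 12.0)],
       [(5.0, 7.0), (11.0, 15.0)]]

# Fixed class prototypes; dynamic clustering alternates this assignment step
# with a prototype-update step until the adequacy criterion stops improving.
prototypes = [[(1.5, 2.5), (10.0, 13.0)],
              [(6.0, 8.0), (12.0, 16.0)]]

for i, x in enumerate(obs):
    d = [dissimilarity(x, p) for p in prototypes]
    print(f"observation {i} -> class {int(np.argmin(d))} (distances {d})")
```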
64

A Comparison of Maps and Power Spectra Determined from South Pole Telescope and Planck Data

Hou, Z., Aylor, K., Benson, B. A., Bleem, L. E., Carlstrom, J. E., Chang, C. L., Cho, H-M., Chown, R., Crawford, T. M., Crites, A. T., de Haan, T., Dobbs, M. A., Everett, W. B., Follin, B., George, E. M., Halverson, N. W., Harrington, N. L., Holder, G. P., Holzapfel, W. L., Hrubes, J. D., Keisler, R., Knox, L., Lee, A. T., Leitch, E. M., Luong-Van, D., Marrone, D. P., McMahon, J. J., Meyer, S. S., Millea, M., Mocanu, L. M., Mohr, J. J., Natoli, T., Omori, Y., Padin, S., Pryke, C., Reichardt, C. L., Ruhl, J. E., Sayre, J. T., Schaffer, K. K., Shirokoff, E., Staniszewski, Z., Stark, A. A., Story, K. T., Vanderlinde, K., Vieira, J. D., Williamson, R. 17 January 2018 (has links)
We study the consistency of 150 GHz data from the South Pole Telescope (SPT) and 143 GHz data from the Planck satellite over the patch of sky covered by the SPT-SZ survey. We first visually compare the maps and find that the residuals appear consistent with noise after accounting for differences in angular resolution and filtering. We then calculate (1) the cross-spectrum between two independent halves of SPT data, (2) the cross-spectrum between two independent halves of Planck data, and (3) the cross-spectrum between SPT and Planck data. We find that the three cross-spectra are well fit (PTE = 0.30) by the null hypothesis in which both experiments have measured the same sky map up to a single free calibration parameter; i.e., we find no evidence for systematic errors in either data set. As a by-product, we improve the precision of the SPT calibration by nearly an order of magnitude, from 2.6% to 0.3% in power. Finally, we compare all three cross-spectra to the full-sky Planck power spectrum and find marginal evidence for differences between the power spectra from the SPT-SZ footprint and the full sky. We model these differences as a power law in spherical harmonic multipole number. The best-fit value of this tilt is consistent among the three cross-spectra in the SPT-SZ footprint, implying that the source of this tilt is a sample variance fluctuation in the SPT-SZ region relative to the full sky. The consistency of cosmological parameters derived from these data sets is discussed in a companion paper.
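The two quantities the comparison hinges on, a single multiplicative calibration factor and a power-law tilt in multipole, can be illustrated with a toy fit. The sketch below uses simulated bandpowers, not the actual SPT or Planck data or pipeline, and recovers both parameters with a linear fit in log space.

```python
# Toy recovery of a calibration factor and a power-law tilt between two
# binned power spectra (simulated bandpowers, not the SPT/Planck pipeline).
import numpy as np

rng = np.random.default_rng(1)
ell = np.arange(650, 3000, 50)                     # hypothetical band centres
cl_ref = 2000.0 * (ell / 1000.0) ** -2.0           # stand-in reference spectrum
tilt_true, cal_true = 0.01, 1.006
cl_obs = cal_true * cl_ref * (ell / 1500.0) ** tilt_true
cl_obs *= 1.0 + rng.normal(0.0, 0.005, size=ell.size)  # mock bandpower scatter

# Fit ln(ratio) = ln(cal) + tilt * ln(ell / 1500) by linear least squares.
x = np.log(ell / 1500.0)
y = np.log(cl_obs / cl_ref)
tilt_fit, ln_cal_fit = np.polyfit(x, y, 1)
print(f"calibration = {np.exp(ln_cal_fit):.4f}, tilt = {tilt_fit:.4f}")
```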
65

What can a CAQDAS analysis reveal about university textual identity?

Dickinson, Mary J. January 2002 (has links)
This thesis argues that changes in the 'idea' of the university can be identified through an analysis of the textual identities of institutions utilising Computer Assisted Qualitative Data Analysis Software (CAQDAS). The historical review at the beginning of the work identifies four key, perennial aspects of university identity and function: (i) transmitting knowledge and producing cultured students; (ii) research; (iii) training for employment; and (iv) a wider duty to society. The thesis rests upon the premise that the relative prominence of each of these four aspects in university publications gives a university a certain textual identity at a given time. The thesis further suggests that certain specific forces - State intervention, economic pressures, industry, and competition - affect the priority given to these aspects. The University of Surrey is examined as a case study, and changes in the relative prominence of these aspects are observed in the textual presentation of this institution over time. These findings, when compared with an analysis of the public documents of a cross-sector sample of other institutions, reveal different textual identities, which has implications for university mission and performance. The thesis shows that external factors do have an influence upon textual identity. CAQDAS was also able to reveal that university textual identity is not monolithic, varying over time and with the intended audience. The remit of the study extends to January 2002, and is therefore timely in light of the 2001 review of the structure and funding of higher education (Newby, 2001), particularly because a key aspect of the Newby review is the increasingly explicit linking of funding to mission. This analysis contributes to debates in higher education concerning institutional identity, the usefulness of existing institutional typologies, mission, and possible futures for the sector. The study also makes a methodological contribution to educational research in its innovative employment of the CAQDAS tool.
66

'Powellsnakes' : a fast Bayesian approach to discrete object detection in multi-frequency astronomical data sets

Carvalho, Fernando Pedro January 2014 (has links)
In this work we introduce a fast Bayesian algorithm designed for detecting compact objects immersed in a diffuse background. A general methodology is presented in terms of formal correctness and optimal use of all the available information in a consistent unified framework, where no distinction is made between point sources (unresolved objects) and SZ clusters, or between single- and multi-channel detection. An emphasis is placed on the necessity of a multi-frequency, multi-model detection algorithm in order to achieve optimality. We have chosen to use the Bayes/Laplace probability theory as it grants a fully consistent extension of formal deductive logic to a more general inferential system with optimal inclusion of all ancillary information [Jaynes, 2004]. Nonetheless, probability theory only informs us about the plausibility, a ‘degree-of-belief’, of a proposition given the data, the model that describes it and all ancillary (prior) information. However, detection or classification is mostly about making educated choices, and a wrong decision always carries a cost/loss. Only by resorting to ‘Decision Theory’, supported by probability theory, can one make the best decisions in terms of maximum yield at minimal cost. Despite the rigorous and formal approach employed, practical efficiency and applicability have always been kept as primary design goals. We have attempted to select and employ the relevant tools to exploit the form of the likelihood and its manifold symmetries, in order to achieve the very high computational performance required not only by our ‘decision machine’ but above all to tackle large, realistic, contemporary cosmological data sets. As an illustration, we successfully applied the methodology to ESA’s (European Space Agency) Planck satellite data [Planck Collaboration et al., 2011d]. This data set is large, complex, and typical of the state of the art in contemporary precision observational cosmology. Two catalogue products have already been released: (i) a point source catalogue [Planck Collaboration et al., 2011e], and (ii) a catalogue of galaxy clusters [Planck Collaboration et al., 2011f]. Many other contributions to science products, in which the method serves as an estimation device, have recently been issued [Planck et al., 2012; Planck Collaboration et al., 2011g,i, 2012a,b,c]. This new method is called ‘PowellSnakes’ (PwS).
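To illustrate the kind of likelihood quantity such a detection scheme is built on, the sketch below computes a matched-filter log-likelihood ratio for a single source template in Gaussian noise. It is a toy example under simplified assumptions (known beam, white noise, fixed position), not the PowellSnakes implementation.

```python
# Toy detection statistic: for a known source template in Gaussian noise, the
# likelihood ratio between "source present" and "background only" reduces to
# a matched filter evaluated at the template position.
import numpy as np

rng = np.random.default_rng(2)
size, sigma_n, fwhm = 32, 1.0, 4.0

# Unit-amplitude Gaussian beam template centred on the patch.
yy, xx = np.mgrid[0:size, 0:size]
s = fwhm / 2.3548
template = np.exp(-(((xx - size // 2) ** 2 + (yy - size // 2) ** 2) / (2 * s * s)))

data = 3.0 * template + rng.normal(0.0, sigma_n, (size, size))  # injected source

# Maximum-likelihood amplitude and the resulting log-likelihood ratio
# ln[ p(d | source) / p(d | background) ].
a_hat = np.sum(data * template) / np.sum(template ** 2)
ln_lr = 0.5 * a_hat ** 2 * np.sum(template ** 2) / sigma_n ** 2
print(f"amplitude = {a_hat:.2f}, ln likelihood ratio = {ln_lr:.1f}")
```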
67

AN IMPROVED METHOD FOR SYNCHRONIZING MULTIPLE TM FILES

Terrien, Ron, Endress, William 11 1900 (has links)
In a previous paper, “Merging Multiple Telemetry Files From Widely Separated Sources For Improved Data Integrity”, presented at the 2012 ITC/USA conference, a method for synchronizing TM files at the minor frame level was presented. This paper expands on that work by describing a method for synchronizing the files at the minor frame level that is faster and that starts at the earliest frame possible, using an internal counter. This method is also useful if the minor frames fall out of sync due to large dropouts.
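A minimal sketch of the counter-based idea follows; the frame representation and field names are hypothetical, since the abstract does not specify a frame format.

```python
# Align minor frames from two telemetry files by an embedded frame counter
# rather than by time correlation, so synchronization can begin at the
# earliest frame common to both files (hypothetical frame format).
def sync_by_counter(frames_a, frames_b, counter_field="count"):
    """Return pairs of frames whose embedded counters match."""
    index_b = {f[counter_field]: f for f in frames_b}
    return [(f, index_b[f[counter_field]])
            for f in frames_a if f[counter_field] in index_b]

# Toy frames: file B starts later and file A has a dropout at counter 103.
file_a = [{"count": c, "data": f"A{c}"} for c in (100, 101, 102, 104, 105)]
file_b = [{"count": c, "data": f"B{c}"} for c in (102, 103, 104, 105, 106)]

for a, b in sync_by_counter(file_a, file_b):
    print(a["count"], a["data"], b["data"])
```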
68

The Development of a Qualitative Extension of the Identity Dimensions of Emerging Adulthood (IDEA) Measure Using Relational Data Analysis (RDA)

Quintana, Shannon M 22 July 2011 (has links)
The current study was undertaken as a preliminary evaluation of a qualitative extension measure for use with emerging adults. A series of studies has previously been conducted to provide evidence for the reliability and validity of the RDA framework in evaluating youth development programs (Kurtines et al., 2008), and the current study extends that research by applying RDA to emerging adults. Building on previous RDA research, the current study analyzed psychometric properties of the Identity Dimensions of Emerging Adulthood-Qualitative Extension (IDEA-QE) using RDA. Inter-coder percent agreement among the Theoretical Open Coders (TOC) and Theoretical Content Coders (TCC) for each of the category levels was moderate to high, ranging from .67 to .87. Fleiss' kappa across all category levels ranged from moderate to almost perfect agreement, from .60 to .88. The correlation between the TOC and the TCC was medium to high, ranging from r(31) = .65, p < .001, to r(31) = .74, p < .001.
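For reference, the sketch below computes Fleiss' kappa, the agreement statistic reported above, on toy coder-by-category counts; the counts are illustrative only and are not the study's data.

```python
# Fleiss' kappa for multiple coders assigning items to categories (toy counts).
import numpy as np

# rows = items, columns = categories; each cell counts the coders choosing
# that category for that item (4 hypothetical coders per item).
counts = np.array([
    [4, 0, 0],
    [3, 1, 0],
    [0, 4, 0],
    [1, 3, 0],
    [0, 1, 3],
])
n_raters = counts.sum(axis=1)[0]

p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
p_bar = p_i.mean()                                        # observed agreement
p_e = np.sum((counts.sum(axis=0) / counts.sum()) ** 2)    # chance agreement
kappa = (p_bar - p_e) / (1.0 - p_e)
print(f"Fleiss' kappa = {kappa:.3f}")
```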
69

New method of all-sky searches for continuous gravitational waves / 連続重力波の新たな全天探索手法

Yamamoto, Takahiro S. 24 May 2021 (has links)
Kyoto University / New system, course-based doctorate / Doctor of Science / Degree No. Kō 23361 / Science Doctorate No. 4732 / 新制||理||1679 (Main Library) / Kyoto University, Graduate School of Science, Division of Physics and Astronomy / (Chief examiner) Professor Takahiro Tanaka, Associate Professor Koutarou Kyutoku, Professor Kouichi Hagino / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
70

A Geometric Analysis Approach to Distinguish Basal Serotonin Levels in Control and Depressed Mice

Marrero Garcia, Hilary January 2020 (has links)
No description available.
