  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
551

Statistical language models for Chinese recognition: speech and character

黃伯光, Wong, Pak-kwong. January 1998 (has links)
published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
552

Analysis of Spatial Data

Zhang, Xiang 01 January 2013 (has links)
In many areas of the agricultural, biological, physical, and social sciences, spatial lattice data are becoming increasingly common. In addition, a large amount of lattice data shows not only a visible spatial pattern but also a temporal pattern (see Zhu et al. 2005). An interesting problem is to develop a model that systematically relates the response variable to possible explanatory variables while accounting for space and time effects simultaneously. Spatial-temporal linear models and the corresponding likelihood-based statistical inference are important tools for the analysis of spatial-temporal lattice data. We propose a general asymptotic framework for spatial-temporal linear models and investigate the properties of maximum likelihood estimates under this framework. Mild regularity conditions are imposed on the spatial-temporal weight matrices in order to derive the asymptotic properties (consistency and asymptotic normality) of the maximum likelihood estimates. A simulation study is conducted to examine the finite-sample properties of the maximum likelihood estimates. For spatial data, aside from traditional likelihood-based methods, a variety of literature has discussed Bayesian approaches to estimating the correlation (auto-covariance function) among spatial data; in particular, Zheng et al. (2010) proposed a nonparametric Bayesian approach to estimating a spectral density. We also discuss a nonparametric Bayesian approach to analyzing spatial data. We propose a general procedure for constructing a multivariate Feller prior and establish its theoretical properties as a nonparametric prior. A blocked Gibbs sampling algorithm is also proposed for computation, since the posterior distribution is not analytically tractable.
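The abstract describes likelihood-based inference for spatial-temporal linear models only in general terms. As an illustration only, and not the author's framework, here is a minimal sketch of profile maximum likelihood for a purely spatial linear model with simultaneous autoregressive (SAR) errors on a square lattice; the rook-neighbour weight matrix, the simulated coefficients, and the specific model form are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def lattice_weights(m):
    """Row-standardised rook-neighbour weight matrix on an m x m lattice."""
    n = m * m
    W = np.zeros((n, n))
    for i in range(m):
        for j in range(m):
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < m and 0 <= jj < m:
                    W[i * m + j, ii * m + jj] = 1.0
    return W / W.sum(axis=1, keepdims=True)

m = 12
n = m * m
W = lattice_weights(m)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, rho_true = np.array([1.0, 2.0]), 0.4

# Simulate a SAR-error model: (I - rho W) eps = z, with z ~ N(0, I).
A_true = np.eye(n) - rho_true * W
y = X @ beta_true + np.linalg.solve(A_true, rng.normal(size=n))

def neg_profile_loglik(rho):
    """Negative log-likelihood profiled over beta and sigma^2 (constants dropped)."""
    A = np.eye(n) - rho * W
    Xs, ys = A @ X, A @ y                        # whiten with A = I - rho W
    beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    resid = ys - Xs @ beta
    sigma2 = resid @ resid / n
    _, logdet = np.linalg.slogdet(A)
    return 0.5 * n * np.log(sigma2) - logdet

# One-dimensional search over the spatial dependence parameter.
res = minimize_scalar(neg_profile_loglik, bounds=(-0.99, 0.99), method="bounded")
rho_hat = res.x
A_hat = np.eye(n) - rho_hat * W
beta_hat = np.linalg.lstsq(A_hat @ X, A_hat @ y, rcond=None)[0]
```

Profiling out the regression coefficients and error variance reduces the optimization to a one-dimensional search, which is the usual computational device for models of this kind.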
553

A study of improving the reliability of the Cochrane risk of bias tool for assessing validity of clinical trials: 一個用於提高考柯藍風險評價工具信度的評價臨床試驗偏倚風險的研究

January 2014 (has links)
Objective. The Cochrane risk of bias tool (CRoB) is one of the most widely used tools for assessing the risk of bias of clinical trials. However, it has been criticized for its poor inter-rater reliability, its lack of clear and detailed guidelines for application, and its failure to clearly distinguish reporting quality from methodological quality. This study aims to develop a framework (the improved CRoB, or iCRoB) to improve the inter-rater reliability of the CRoB in its first 4 domains: sequence generation, allocation concealment, blinding of participants and personnel, and blinding of outcome assessment, by providing: i) a structured pathway for risk of bias assessment; and ii) a comprehensive dictionary of scenarios for each domain. / Methods. The study consisted of 4 steps: / i) Step 1: Develop a step-by-step structured pathway for assessing the risk of bias. / ii) Step 2: Identify and summarize possible scenarios used in the literature to describe a domain in clinical trials, using a qualitative content analysis approach. A random sample of 100 Cochrane systematic reviews (SRs) was taken from the Cochrane Database of Systematic Reviews, and each review was carefully scrutinized for this purpose. / iii) Step 3: Merge the scenarios identified from the sample with those already provided in the CRoB. The combined list of scenarios extends the current coverage of the CRoB and forms a more comprehensive dictionary of scenarios for future use. The bias assessment pathway and the new dictionary of scenarios in combination are the new components added to the CRoB to form the iCRoB. / iv) Step 4: Conduct a randomized controlled study that allocated 8 raters at random, in equal numbers, to use either the CRoB or the new iCRoB. 150 clinical trials were randomly selected from the aforementioned 100 SRs for the inter-rater reliability comparison.
Both the inter-rater reliability among individual raters (measured with Fleiss' κ) and that across rater pairs (measured with weighted Cohen's κ) were computed. Data analyses were conducted using Stata version 13.0. / Results. A structured pathway for systematically assessing bias was designed, which helps classify a study into one of 5 categories for each risk of bias domain, based on the information provided in the report of a trial. Category A: the trial reports in detail how a bias reduction method was conducted, and the assessor deems it to have been conducted adequately; Category B: the trial reports in detail how a bias reduction method was conducted, but the assessor deems it to have been conducted inadequately; Category C: the trial reports that a bias reduction method was conducted but gives no detailed description from which to judge whether it was done adequately; Category D: the trial reports that a bias reduction method was not conducted; Category E: the trial does not mention whether or not a bias reduction method was conducted. / A total of 34, 36, 26 and 20 scenarios were generated for sequence generation, allocation concealment, blinding of participants and personnel, and blinding of outcome assessment, respectively. These extended the current CRoB list of scenarios by 20, 23, 26 and 20 scenarios, respectively, for the 4 bias reduction domains. / Our trial results showed that the iCRoB had higher inter-rater reliability across rater pairs than the original CRoB for every bias reduction domain. The weighted κ was 0.71 for CRoB and 0.81 for iCRoB for sequence generation; 0.53 and 0.61, respectively, for allocation concealment; 0.56 for blinding of participants and personnel in CRoB, versus 0.68 for blinding of participants and 0.70 for blinding of personnel in iCRoB; and 0.19 and 0.43, respectively, for blinding of outcome assessment. / Conclusion.
We developed the iCRoB, comprising a standard pathway and a substantially extended dictionary of scenarios for judging risk of bias from the reports of clinical trials. The iCRoB showed higher reliability than the current CRoB in all the domains examined and can be recommended for improving the assessment of bias in clinical trials. / Wu, Xinyin. / Thesis Ph.D. Chinese University of Hong Kong 2014.
/ Includes bibliographical references (leaves 93-105). / Abstracts also in Chinese. / Title from PDF title page (viewed on 9 September 2016).
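The two reliability measures used in the study above, Fleiss' κ among individual raters and weighted Cohen's κ across rater pairs, can be sketched directly. This is a minimal illustration, not the study's Stata analysis; the linear weighting scheme and the assumption of an equal number of raters per item are choices made for the example.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa. counts: (n_items, k_categories) array giving how many
    raters assigned each item to each category (constant raters per item)."""
    counts = np.asarray(counts, dtype=float)
    n, k = counts.shape
    r = counts.sum(axis=1)[0]                    # raters per item
    p_cat = counts.sum(axis=0) / (n * r)         # overall category proportions
    P_i = np.sum(counts * (counts - 1), axis=1) / (r * (r - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_cat ** 2)  # observed vs chance agreement
    return (P_bar - P_e) / (1 - P_e)

def weighted_cohen_kappa(a, b, k):
    """Linearly weighted Cohen's kappa for two raters over categories 0..k-1."""
    a, b = np.asarray(a), np.asarray(b)
    O = np.zeros((k, k))
    for x, y in zip(a, b):                       # observed joint distribution
        O[x, y] += 1
    O /= O.sum()
    pa, pb = O.sum(axis=1), O.sum(axis=0)
    E = np.outer(pa, pb)                         # chance-expected distribution
    idx = np.arange(k)
    w = 1 - np.abs(idx[:, None] - idx[None, :]) / (k - 1)   # linear weights
    return (np.sum(w * O) - np.sum(w * E)) / (1 - np.sum(w * E))
```

Perfect agreement yields κ = 1 under both measures, and complete disagreement across the extreme categories drives the weighted κ negative, which is the behaviour the study's comparisons rely on.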
554

The family of conditional penalized methods with their application in sufficient variable selection

Xie, Jin 01 January 2018 (has links)
When scientists know in advance that some features (variables) are important in modeling data, these important features should be kept in the model. How can we utilize this prior information to effectively find other important features? This dissertation provides a solution using such prior information. We propose the Conditional Adaptive Lasso (CAL) estimates to exploit this knowledge. By choosing a meaningful conditioning set, namely the prior information, CAL shows better performance in both variable selection and model estimation. We also propose the Sufficient Conditional Adaptive Lasso Variable Screening (SCAL-VS) and Conditioning Set Sufficient Conditional Adaptive Lasso Variable Screening (CS-SCAL-VS) algorithms based on CAL. The asymptotic and oracle properties are proved. Simulations, especially for large p, small n problems, are performed with comparisons to other existing methods. We further extend the linear model setup to generalized linear models (GLMs). Instead of least squares, we consider the likelihood function with an L1 penalty, that is, penalized likelihood methods. We propose the Generalized Conditional Adaptive Lasso (GCAL) for generalized linear models. We then further extend the method to any penalty terms that satisfy certain regularity conditions, namely the Conditionally Penalized Estimate (CPE). Asymptotic and oracle properties are shown. Four corresponding sufficient variable screening algorithms are proposed. Simulation examples are evaluated for our method with comparisons to existing methods. GCAL is also evaluated with a real data set on leukemia.
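As a rough illustration of the idea behind CAL, not the dissertation's actual estimator, the following sketch fits an adaptive lasso by proximal gradient descent (ISTA) with per-feature penalty factors, setting the factor to zero for features in the conditioning set so that features known a priori to matter are never shrunk. The weight construction, tuning constants, and simulated data are all assumptions for the example.

```python
import numpy as np

def conditional_adaptive_lasso(X, y, keep, lam=0.2, n_iter=500):
    """ISTA for 0.5/n * ||y - Xb||^2 + lam * sum_j w_j |b_j|, with adaptive
    weights w_j = 1/|b_ols_j| and w_j = 0 for j in `keep` (the conditioning
    set), so prior-known features receive no shrinkage."""
    n, p = X.shape
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    w = 1.0 / np.maximum(np.abs(b_ols), 1e-8)    # adaptive penalty factors
    w[list(keep)] = 0.0                           # prior information: unpenalised
    L = np.linalg.norm(X, 2) ** 2 / n             # Lipschitz constant of gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - grad / L                          # gradient step
        thresh = lam * w / L
        b = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # soft-threshold
    return b

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[0], beta[1], beta[4] = 2.0, -1.5, 1.0
y = X @ beta + 0.5 * rng.normal(size=n)

# Suppose feature 0 is known in advance to be important.
b_hat = conditional_adaptive_lasso(X, y, keep=[0])
```

The unpenalised coordinate is recovered essentially at its least-squares value, while the adaptive weights drive the coefficients of irrelevant features to exactly zero.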
555

Multivariate fault detection and visualization in the semiconductor industry

Chamness, Kevin Andrew 28 August 2008 (has links)
Not available / text
556

On the statistical modelling of stochastic volatility and its applications to financial markets

So, Ka-pui., 蘇家培. January 1996 (has links)
published_or_final_version / Statistics / Doctoral / Doctor of Philosophy
557

Minimalist theory for mesoscale reaction dynamics

Craven, Galen Thomas 07 January 2016 (has links)
The prediction of an atomistic system's macroscopic observables from microscopic physical characteristics is often intractable, either by theory or computation, due to the intrinsic complexity of the underlying dynamical rules. This complexity can be simplified by identifying key mechanisms that drive behavior and considering the system in a reduced representation that captures these mechanisms. Through theory, this thesis examines complex relationships in structured assembly and reaction mechanisms that occur when effective interactions are applied to mesoscale structures. In the first part of this thesis, the structure and assembly of soft matter systems are characterized while varying the interpenetrability of the constituent particles. The nature of the underlying softness allows these systems to be packed at ever higher density, albeit with an increasing penalty in energy. Stochastic equations of motion are developed in which mesoscopic structures are mapped to single degrees of freedom through a coarse-graining procedure. The effective interactions between these coarse-grained sites are modeled using stochastic potentials that capture the spatial behavior observed in systems governed by deterministic bounded potentials. The second part of this thesis presents advancements in time-dependent transition state theory, focusing on chemical reactions that are induced by oscillatory external forces. The optimal dividing surface for a model driven reaction is constructed over a transition state trajectory. The stability of the transition state trajectory is found to directly dictate the reaction rate, and it is thus the fundamental and singular object needed to predict barrier-crossing rates in periodically driven chemical reactions. This thesis demonstrates that using minimalist models to examine these complex systems can provide valuable insight into the dynamical mechanisms that drive behavior.
558

Evaluation of one classical and two Bayesian estimators of system availability using multiple attribute decision making techniques

McCahon, Cynthia S January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
559

Nonparametric statistical methods in financial market research.

Corrado, Charles J. January 1988 (has links)
This dissertation presents an exploration of nonparametric statistical methods based on ranks for use in financial market research. Applications to event study methodology and the estimation of security systematic risk are analyzed using a simulation methodology with actual daily security return data. The results indicate that procedures based on ranks are more efficient than the normal theory procedures currently in common use.
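A rank-based event-study statistic of the kind explored here can be sketched as follows. This is a generic illustration on simulated data, not the dissertation's simulation design with actual security returns, and the normalisation used is one common variant: within-security ranks are demeaned, and the cross-sectional mean rank on the event day is scaled by its time-series standard deviation.

```python
import numpy as np
from scipy.stats import rankdata

def rank_event_statistic(returns, event_col):
    """Rank statistic for a single event day.
    returns: (N securities, T days) abnormal returns; event day = event_col."""
    N, T = returns.shape
    K = np.apply_along_axis(rankdata, 1, returns)   # ranks within each security
    U = K - (T + 1) / 2.0                           # demeaned ranks
    # Time-series std of the cross-sectional mean demeaned rank.
    s = np.sqrt(np.mean(U.mean(axis=0) ** 2))
    return U[:, event_col].mean() / s

rng = np.random.default_rng(2)
N, T = 50, 120
ret = rng.normal(size=(N, T))
z_null = rank_event_statistic(ret, event_col=60)    # no event: near zero

ret[:, 60] += 2.0                                   # inject an event-day shock
z_event = rank_event_statistic(ret, event_col=60)   # strongly positive
```

Because the statistic depends only on ranks, it is insensitive to the heavy tails of daily returns, which is the source of the efficiency gain over normal theory procedures that the abstract reports.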
560

Unification-based constraints for statistical machine translation

Williams, Philip James January 2014 (has links)
Morphology and syntax have both received attention in statistical machine translation research, but they are usually treated independently and the historical emphasis on translation into English has meant that many morphosyntactic issues remain under-researched. Languages with richer morphologies pose additional problems and conventional approaches tend to perform poorly when either source or target language has rich morphology. In both computational and theoretical linguistics, feature structures together with the associated operation of unification have proven a powerful tool for modelling many morphosyntactic aspects of natural language. In this thesis, we propose a framework that extends a state-of-the-art syntax-based model with a feature structure lexicon and unification-based constraints on the target-side of the synchronous grammar. Whilst our framework is language-independent, we focus on problems in the translation of English to German, a language pair that has a high degree of syntactic reordering and rich target-side morphology. We first apply our approach to modelling agreement and case government phenomena. We use the lexicon to link surface form words with grammatical feature values, such as case, gender, and number, and we use constraints to enforce feature value identity for the words in agreement and government relations. We demonstrate improvements in translation quality of up to 0.5 BLEU over a strong baseline model. We then examine verbal complex production, another aspect of translation that requires the coordination of linguistic features over multiple words, often with long-range discontinuities. We develop a feature structure representation of verbal complex types, using constraint failure as an indicator of translation error and use this to automatically identify and quantify errors that occur in our baseline system. 
A manual analysis and classification of errors informs an extended version of the model that incorporates information derived from a parse of the source. We identify clause spans and use model features to encourage the generation of complete verbal complex types. We are able to improve accuracy as measured using precision and recall against values extracted from the reference test sets. Our framework allows for the incorporation of rich linguistic information and we present sketches of further applications that could be explored in future work.
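Feature-structure unification of the kind this thesis builds on can be illustrated with a minimal sketch over nested dictionaries. This toy version, with hypothetical feature names and no support for reentrancy or variables, is not the thesis's implementation; it only shows how unification merges compatible information and fails on conflicting atomic values, which is how agreement constraints are enforced.

```python
def unify(f, g):
    """Unify two feature structures (nested dicts; atomic values must match).
    Returns the merged structure, or None on unification failure."""
    if isinstance(f, dict) and isinstance(g, dict):
        out = dict(f)
        for key, val in g.items():
            if key in out:
                merged = unify(out[key], val)
                if merged is None:           # conflict somewhere below
                    return None
                out[key] = merged
            else:
                out[key] = val               # new feature: just add it
        return out
    return f if f == g else None             # atomic values must be identical

# Hypothetical agreement features for a German determiner and noun.
det_agr  = {"GEND": "fem", "NUM": "sg"}
noun_agr = {"CASE": "nom", "NUM": "sg"}

merged = unify(det_agr, noun_agr)            # compatible: NUM matches
clash = unify({"NUM": "sg"}, {"NUM": "pl"})  # incompatible: fails
```

In a constraint-based translation model, a failure like the second call would rule out a target-side derivation whose words disagree in number, which is the mechanism the thesis uses both to enforce agreement and to detect errors.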
