  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Processos pontuais no modelo de Guiol-Machado-Schinazi de sobrevivência de espécies / Point processes in the Guiol-Machado-Schinazi species survival model

Pinheiro, Maicon Aparecido 13 July 2015 (has links)
Recently, Guiol, Machado and Schinazi proposed a stochastic model for species evolution in which the rates of species births and extinctions are invariant over time. At the moment a new species is born, it is labeled with a random number drawn from an absolutely continuous distribution. Each time an extinction occurs, exactly one species dies: the one carrying the smallest number. When the birth rate exceeds the extinction rate, there is a critical value f_c such that every species labeled with a number below f_c dies almost surely after a finite random time, while species labeled with numbers above f_c survive forever with positive probability. Less fit species nevertheless keep appearing throughout the evolutionary process, and there is no guarantee that an immortal species ever emerges. We consider a particular case of the Guiol-Machado-Schinazi model and address these last two points: we characterize the limiting point process associated with the species in the subcritical phase of the model and discuss the existence of immortal species.
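The dynamics in this abstract are simple enough to simulate directly. The sketch below is not the author's code; it assumes a discrete-time variant of the Guiol-Machado-Schinazi model in which, at each step, a birth occurs with probability p (the new species receiving a Uniform(0,1) label) and otherwise the species with the smallest label goes extinct, and it uses f_c = (1-p)/p only as an assumed reference threshold for that variant.

```python
import heapq
import random

def simulate_gms(p=0.7, steps=200_000, seed=0):
    """Discrete-time sketch of the Guiol-Machado-Schinazi dynamics:
    with probability p a new species with a Uniform(0,1) label is born;
    otherwise the species with the smallest label (if any) goes extinct."""
    rng = random.Random(seed)
    alive = []  # min-heap of labels of living species
    for _ in range(steps):
        if rng.random() < p:
            heapq.heappush(alive, rng.random())
        elif alive:
            heapq.heappop(alive)  # least-fit species dies
    return alive

if __name__ == "__main__":
    p = 0.7
    alive = simulate_gms(p)
    f_c = (1 - p) / p  # assumed reference threshold for this sketch
    below = sum(1 for label in alive if label < f_c)
    print(f"{len(alive)} species alive; {below} with label below f_c = {f_c:.3f}")
```

For p well above 1/2, the surviving labels should pile up above the reference threshold, consistent with the subcritical/supercritical split the abstract describes.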
272

Baseline free approach for the semiparametric transformation models with missing covariates.

January 2003 (has links)
Leung Man-Kit. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 37-41). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Basic concepts of survival data --- p.3 / Chapter 1.2 --- Missing Completely at Random (MCAR) --- p.8 / Chapter 1.3 --- Missing at Random (MAR) --- p.9 / Chapter 2 --- The maximization of the marginal likelihood --- p.11 / Chapter 2.1 --- Survival function --- p.11 / Chapter 2.2 --- Missing covariate pattern --- p.13 / Chapter 2.3 --- Set of survival time with rank restrictions --- p.13 / Chapter 2.4 --- Marginal likelihood --- p.14 / Chapter 2.5 --- Score function --- p.15 / Chapter 3 --- The MCMC stochastic approximation approach --- p.17 / Chapter 4 --- Simulation Studies --- p.22 / Chapter 4.1 --- MCAR : Simulation 1 --- p.23 / Chapter 4.2 --- MCAR : Simulation 2 --- p.24 / Chapter 4.3 --- MAR : Simulation 3 --- p.26 / Chapter 4.4 --- MAR : Simulation 4 --- p.27 / Chapter 5 --- Example --- p.30 / Chapter 6 --- Discussion --- p.33 / Appendix --- p.35 / Bibliography --- p.37
273

Logic knowledge base refinement using unlabeled or limited labeled data. / CUHK electronic theses & dissertations collection

January 2010 (has links)
In many text mining applications, knowledge bases incorporating expert knowledge are beneficial for intelligent decision making. Refining an existing knowledge base from a source domain to a different target domain that solves the same task would greatly reduce the effort required to prepare labeled training data when constructing a new knowledge base. We investigate a new framework for refining a kind of logic knowledge base known as Markov Logic Networks (MLN). One characteristic of this adaptation problem is that, since the data distributions of the two domains differ, each domain should have its own tailor-made MLN. On the other hand, the two knowledge bases should share a certain amount of similarity because they serve the same goal. We investigate the refinement in two situations, namely, using unlabeled target domain data, and using a limited amount of labeled target domain data. / When manual annotation of a limited amount of target domain data is possible, we explore how to actively select the data for annotation and develop two active learning approaches. The first approach is a pool-based active learning approach that takes into account the differences between the source and the target domains. A theoretical analysis of the sampling bound of the approach is conducted to demonstrate that informative data can be actively selected. The second approach is an error-driven approach designed to provide estimated labels for the target domain, so that the quality of the logic formulae captured for the target domain can be improved. An error analysis of the cluster-based active learning approach is presented. We have conducted extensive experiments on two different text mining tasks, namely, pronoun resolution and segmentation of citation records, showing consistent improvements in both situations: using unlabeled target domain data, and using a limited amount of labeled target domain data. / When no manual labels are given for the target domain data, we refine an existing MLN via two components. The first component is logic formula weight adaptation, which jointly maximizes the likelihood of the observations of the target domain unlabeled data and considers the differences between the two domains. Two approaches are designed to capture the differences between the two domains: one analyzes the distribution divergence between the two domains, and the other incorporates a penalized degree of difference. The second component is logic formula refinement, where logic formulae specific to the target domain are discovered to further capture the characteristics of the target domain. / Chan, Ki Cecia. / Adviser: Wai Lam. / Source: Dissertation Abstracts International, Volume: 73-02, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 120-128). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
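As a rough illustration of the first component described above (weight adaptation that fits target-domain observations while penalizing departure from the source-domain weights), here is a minimal sketch. It is not the thesis's MLN machinery: the ground MLN is replaced by a tiny two-atom Markov network with three hand-picked features, and the penalty strength lam, the learning rate, and the data are invented for the example.

```python
import math

# Toy ground Markov network over two binary atoms -- a stand-in for a ground
# MLN. Three "formulae" with features f(x) = [x1, x2, x1 AND x2] and weights w.
FEATS = [lambda x: x[0], lambda x: x[1], lambda x: x[0] * x[1]]
CONFIGS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def score(w, x):
    return sum(wi * f(x) for wi, f in zip(w, FEATS))

def log_partition(w):
    return math.log(sum(math.exp(score(w, x)) for x in CONFIGS))

def log_likelihood(w, data):
    logZ = log_partition(w)
    return sum(score(w, x) - logZ for x in data)

def expected_features(w):
    logZ = log_partition(w)
    probs = [math.exp(score(w, x) - logZ) for x in CONFIGS]
    return [sum(p * f(x) for p, x in zip(probs, CONFIGS)) for f in FEATS]

def adapt_weights(w_src, target_data, lam=1.0, lr=0.05, iters=500):
    """Maximize  log P(target_data | w) - lam * ||w - w_src||^2 :
    fit the target-domain observations while penalizing departure
    from the source-domain weights."""
    w = list(w_src)
    n = len(target_data)
    emp = [sum(f(x) for x in target_data) / n for f in FEATS]  # empirical feature means
    for _ in range(iters):
        exp_f = expected_features(w)
        grads = [n * (emp[i] - exp_f[i]) - 2 * lam * (w[i] - w_src[i])
                 for i in range(len(w))]
        w = [wi + lr * g / n for wi, g in zip(w, grads)]  # 1/n step keeps the toy stable
    return w

if __name__ == "__main__":
    w_source = [0.5, 0.5, 1.0]  # weights assumed to come from the source domain
    target = [(1, 1)] * 30 + [(0, 1)] * 50 + [(0, 0)] * 20  # invented target observations
    w_adapted = adapt_weights(w_source, target, lam=5.0)
    print("adapted weights:", [round(wi, 3) for wi in w_adapted])
    print("target log-likelihood before/after:",
          round(log_likelihood(w_source, target), 2),
          round(log_likelihood(w_adapted, target), 2))
```

Larger lam keeps the adapted weights closer to the source-domain weights; smaller lam lets the target data dominate, which is the trade-off the penalized degree of difference is meant to control.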
274

Multi-transputer based isolated word speech recognition system.

January 1996 (has links)
by Francis Cho-yiu Chik. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 129-135). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Automatic speech recognition and its applications --- p.1 / Chapter 1.1.1 --- Artificial Neural Network (ANN) approach --- p.3 / Chapter 1.2 --- Motivation --- p.5 / Chapter 1.3 --- Background --- p.6 / Chapter 1.3.1 --- Speech recognition --- p.6 / Chapter 1.3.2 --- Parallel processing --- p.7 / Chapter 1.3.3 --- Parallel architectures --- p.10 / Chapter 1.3.4 --- Transputer --- p.12 / Chapter 1.4 --- Thesis outline --- p.13 / Chapter 2 --- Speech Signal Pre-processing --- p.14 / Chapter 2.1 --- Determine useful signal --- p.14 / Chapter 2.1.1 --- End point detection using energy --- p.15 / Chapter 2.1.2 --- End point detection enhancement using zero crossing rate --- p.18 / Chapter 2.2 --- Pre-emphasis filter --- p.19 / Chapter 2.3 --- Feature extraction --- p.20 / Chapter 2.3.1 --- Filter-bank spectrum analysis model --- p.22 / Chapter 2.3.2 --- Linear Predictive Coding (LPC) coefficients --- p.25 / Chapter 2.3.3 --- Cepstral coefficients --- p.27 / Chapter 2.3.4 --- Zero crossing rate and energy --- p.27 / Chapter 2.3.5 --- Pitch (fundamental frequency) detection --- p.28 / Chapter 2.4 --- Discussions --- p.30 / Chapter 3 --- Speech Recognition Methods --- p.32 / Chapter 3.1 --- Template matching using Dynamic Time Warping (DTW) --- p.32 / Chapter 3.2 --- Hidden Markov Model (HMM) --- p.37 / Chapter 3.2.1 --- Vector Quantization (VQ) --- p.38 / Chapter 3.2.2 --- Description of a discrete HMM --- p.41 / Chapter 3.2.3 --- Probability evaluation --- p.42 / Chapter 3.2.4 --- Estimation technique for model parameters --- p.46 / Chapter 3.2.5 --- State sequence for the observation sequence --- p.48 / Chapter 3.3 --- 2-dimensional Hidden Markov Model (2dHMM) --- p.49 / Chapter 3.3.1 --- Calculation for a 2dHMM --- p.50 / Chapter 3.4 --- Discussions --- p.56 / Chapter 4 --- Implementation --- p.59 / Chapter 4.1 --- Transputer based multiprocessor system --- p.59 / Chapter 4.1.1 --- Transputer Development System (TDS) --- p.60 / Chapter 4.1.2 --- System architecture --- p.61 / Chapter 4.1.3 --- Transtech TMB16 mother board --- p.62 / Chapter 4.1.4 --- Farming technique --- p.64 / Chapter 4.2 --- Farming technique on extracting spectral amplitude feature --- p.68 / Chapter 4.3 --- Feature extraction for LPC --- p.73 / Chapter 4.4 --- DTW based recognition --- p.77 / Chapter 4.4.1 --- Feature extraction --- p.77 / Chapter 4.4.2 --- Training and matching --- p.78 / Chapter 4.5 --- HMM based recognition --- p.80 / Chapter 4.5.1 --- Feature extraction --- p.80 / Chapter 4.5.2 --- Model training and matching --- p.81 / Chapter 4.6 --- 2dHMM based recognition --- p.83 / Chapter 4.6.1 --- Feature extraction --- p.83 / Chapter 4.6.2 --- Training --- p.83 / Chapter 4.6.3 --- Recognition --- p.87 / Chapter 4.7 --- Training convergence in HMM and 2dHMM --- p.88 / Chapter 4.8 --- Discussions --- p.91 / Chapter 5 --- Experimental Results --- p.92 / Chapter 5.1 --- "Comparison of DTW, HMM and 2dHMM" --- p.93 / Chapter 5.2 --- Comparison between HMM and 2dHMM --- p.98 / Chapter 5.2.1 --- Recognition test on 20 English words --- p.98 / Chapter 5.2.2 --- Recognition test on 10 Cantonese syllables --- p.102 / Chapter 5.3 --- Recognition test on 80 Cantonese syllables --- p.113 / Chapter 5.4 --- Speed matching --- p.118 / Chapter 5.5 --- Computational performance --- p.119 / Chapter 5.5.1 --- Training performance --- p.119 
/ Chapter 5.5.2 --- Recognition performance --- p.120 / Chapter 6 --- Discussions and Conclusions --- p.126 / Bibliography --- p.129 / Chapter A --- An ANN Model for Speech Recognition --- p.136 / Chapter B --- A Speech Signal Represented in Frequency Domain (Spectrogram) --- p.138 / Chapter C --- Dynamic Programming --- p.144 / Chapter D --- Markov Process --- p.145 / Chapter E --- Maximum Likelihood (ML) --- p.146 / Chapter F --- Multiple Training --- p.149 / Chapter F.1 --- HMM --- p.150 / Chapter F.2 --- 2dHMM --- p.150 / Chapter G --- IMS T800 Transputer --- p.152 / Chapter G.1 --- IMS T800 architecture --- p.152 / Chapter G.2 --- Instruction encoding --- p.153 / Chapter G.3 --- Floating point instructions --- p.155 / Chapter G.4 --- Optimizing use of the stack --- p.157 / Chapter G.5 --- Concurrent operation of FPU and CPU --- p.158
275

Text-independent bilingual speaker verification system.

January 2003 (has links)
Ma Bin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 96-102). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Biometrics --- p.2 / Chapter 1.2 --- Speaker Verification --- p.3 / Chapter 1.3 --- Overview of Speaker Verification Systems --- p.4 / Chapter 1.4 --- Text Dependency --- p.4 / Chapter 1.4.1 --- Text-Dependent Speaker Verification --- p.5 / Chapter 1.4.2 --- GMM-based Speaker Verification --- p.6 / Chapter 1.5 --- Language Dependency --- p.6 / Chapter 1.6 --- Normalization Techniques --- p.7 / Chapter 1.7 --- Objectives of the Thesis --- p.8 / Chapter 1.8 --- Thesis Organization --- p.8 / Chapter 2 --- Background --- p.10 / Chapter 2.1 --- Background Information --- p.11 / Chapter 2.1.1 --- Speech Signal Acquisition --- p.11 / Chapter 2.1.2 --- Speech Processing --- p.11 / Chapter 2.1.3 --- Engineering Model of Speech Signal --- p.13 / Chapter 2.1.4 --- Speaker Information in the Speech Signal --- p.14 / Chapter 2.1.5 --- Feature Parameters --- p.15 / Chapter 2.1.5.1 --- Mel-Frequency Cepstral Coefficients --- p.16 / Chapter 2.1.5.2 --- Linear Predictive Coding Derived Cepstral Coefficients --- p.18 / Chapter 2.1.5.3 --- Energy Measures --- p.20 / Chapter 2.1.5.4 --- Derivatives of Cepstral Coefficients --- p.21 / Chapter 2.1.6 --- Evaluating Speaker Verification Systems --- p.22 / Chapter 2.2 --- Common Techniques --- p.24 / Chapter 2.2.1 --- Template Model Matching Methods --- p.25 / Chapter 2.2.2 --- Statistical Model Methods --- p.26 / Chapter 2.2.2.1 --- HMM Modeling Technique --- p.27 / Chapter 2.2.2.2 --- GMM Modeling Techniques --- p.30 / Chapter 2.2.2.3 --- Gaussian Mixture Model --- p.31 / Chapter 2.2.2.4 --- The Advantages of GMM --- p.32 / Chapter 2.2.3 --- Likelihood Scoring --- p.32 / Chapter 2.2.4 --- General Approach to Decision Making --- p.35 / Chapter 2.2.5 --- Cohort Normalization --- p.35 / Chapter 2.2.5.1 --- Probability Score Normalization --- p.36 / Chapter 2.2.5.2 --- Cohort Selection --- p.37 / Chapter 2.3 --- Chapter Summary --- p.38 / Chapter 3 --- Experimental Corpora --- p.39 / Chapter 3.1 --- The YOHO Corpus --- p.39 / Chapter 3.1.1 --- Design of the YOHO Corpus --- p.39 / Chapter 3.1.2 --- Data Collection Process of the YOHO Corpus --- p.40 / Chapter 3.1.3 --- Experimentation with the YOHO Corpus --- p.41 / Chapter 3.2 --- CUHK Bilingual Speaker Verification Corpus --- p.42 / Chapter 3.2.1 --- Design of the CUBS Corpus --- p.42 / Chapter 3.2.2 --- Data Collection Process for the CUBS Corpus --- p.44 / Chapter 3.3 --- Chapter Summary --- p.46 / Chapter 4 --- Text-Dependent Speaker Verification --- p.47 / Chapter 4.1 --- Front-End Processing on the YOHO Corpus --- p.48 / Chapter 4.2 --- Cohort Normalization Setup --- p.50 / Chapter 4.3 --- HMM-based Speaker Verification Experiments --- p.53 / Chapter 4.3.1 --- Subword HMM Models --- p.53 / Chapter 4.3.2 --- Experimental Results --- p.55 / Chapter 4.3.2.1 --- Comparison of Feature Representations --- p.55 / Chapter 4.3.2.2 --- Effect of Cohort Normalization --- p.58 / Chapter 4.4 --- Experiments on GMM-based Speaker Verification --- p.61 / Chapter 4.4.1 --- Experimental Setup --- p.61 / Chapter 4.4.2 --- The number of Gaussian Mixture Components --- p.62 / Chapter 4.4.3 --- The Effect of Cohort Normalization --- p.64 / Chapter 4.4.4 --- Comparison of HMM and GMM --- p.65 / Chapter 4.5 --- Comparison with Previous Systems --- p.67 / Chapter 4.6 --- Chapter Summary --- p.70 / Chapter 5 --- Language- and Text-Independent Speaker Verification --- p.71 / Chapter 5.1 --- Front-End Processing of the CUBS --- p.72 / Chapter 5.2 --- Language- and Text-Independent Speaker Modeling --- p.73 / Chapter 5.3 --- Cohort Normalization --- p.74 / Chapter 5.4 --- Experimental Results and Analysis --- p.75 / Chapter 5.4.1 --- Number of Gaussian Mixture Components --- p.78 / Chapter 5.4.2 --- The Cohort Normalization Effect --- p.79 / Chapter 5.4.3 --- Language Dependency --- p.80 / Chapter 5.4.4 --- Language-Independency --- p.83 / Chapter 5.5 --- Chapter Summary --- p.88 / Chapter 6 --- Conclusions and Future Work --- p.90 / Chapter 6.1 --- Summary --- p.90 / Chapter 6.1.1 --- Feature Comparison --- p.91 / Chapter 6.1.2 --- HMM Modeling --- p.91 / Chapter 6.1.3 --- GMM Modeling --- p.91 / Chapter 6.1.4 --- Cohort Normalization --- p.92 / Chapter 6.1.5 --- Language Dependency --- p.92 / Chapter 6.2 --- Future Work --- p.93 / Chapter 6.2.1 --- Feature Parameters --- p.93 / Chapter 6.2.2 --- Model Quality --- p.93 / Chapter 6.2.2.1 --- Variance Flooring --- p.93 / Chapter 6.2.2.2 --- Silence Detection --- p.94 / Chapter 6.2.3 --- Conversational Speaker Verification --- p.95 / Bibliography --- p.102
276

Image denoising using wavelet domain hidden Markov models

Liao, Zhiwu 01 January 2005 (has links)
No description available.
277

Machine Learning for Inspired, Structured, Lyrical Music Composition

Bodily, Paul Mark 01 July 2018 (has links)
Computational creativity has been called the "final frontier" of artificial intelligence because of the difficulty inherent in defining and implementing creativity in computational systems. Despite this difficulty, computational creativity is becoming a more significant part of our everyday lives, particularly in music. This is observed in the prevalence of music recommendation systems, co-creational music software packages, smart playlists, and procedurally-generated video games. Significant progress can be seen in industrial applications such as Spotify, Pandora, and Apple Music, but several problems persist. Of more general interest, however, is the question of whether computers can exhibit autonomous creativity in music composition. One of the primary challenges in this endeavor is enabling computational systems to create music that exhibits global structure, to learn that structure from data, and to incorporate autonomy and intention effectively. We seek to address these challenges in the context of a modular machine learning framework called hierarchical Bayesian program learning (HBPL). Breaking the problem of music composition into smaller pieces, we focus primarily on developing machine learning models that solve the problems related to structure. In particular, we present an adaptation of non-homogeneous Markov models that supports binary constraints, and we present a structural learning model, the multiple Smith-Waterman (mSW) alignment method, which extends sequence alignment techniques from bioinformatics. To address the issue of intention, we incorporate our work on structured sequence generation into a full-fledged computationally creative system called Pop*, which we show, through various evaluative means, to possess to varying extents both the characteristics of creativity and creativity itself.
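The phrase "non-homogeneous Markov models that support binary constraints" suggests the standard trick of restricting a Markov chain with per-position allowed/forbidden states and renormalizing with a backward pass so that sampling never dead-ends. The sketch below shows only that general technique; it is not Bodily's implementation, and the chord alphabet, transition table, and constraints in the demo are made up.

```python
import random

def constrained_markov_sample(P, start, allowed, rng=None):
    """Sample a sequence from a Markov chain subject to per-position
    binary constraints (sets of allowed states), without dead ends."""
    rng = rng or random.Random(0)
    T = len(allowed)
    states = list(P)
    # Backward pass: beta[t][s] = probability mass of completing
    # positions t+1 .. T-1 from state s while satisfying the constraints.
    beta = [{s: 1.0 for s in states} for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for s in states:
            beta[t][s] = sum(P[s][s2] * beta[t + 1][s2] for s2 in allowed[t + 1])

    def draw(weights):
        total = sum(weights.values())
        if total <= 0:
            raise ValueError("constraints cannot be satisfied from here")
        r = rng.random() * total
        for s, w in weights.items():
            r -= w
            if r <= 0:
                return s
        return s  # numerical fallback

    # Forward sampling restricted to allowed states, reweighted by beta.
    seq = [draw({s: start.get(s, 0.0) * beta[0][s] for s in allowed[0]})]
    for t in range(1, T):
        prev = seq[-1]
        seq.append(draw({s: P[prev][s] * beta[t][s] for s in allowed[t]}))
    return seq

if __name__ == "__main__":
    # Hypothetical 3-chord toy chain; the constraints force the line to
    # start and end on "C" while leaving the middle positions unconstrained.
    P = {"C": {"C": 0.1, "F": 0.5, "G": 0.4},
         "F": {"C": 0.4, "F": 0.1, "G": 0.5},
         "G": {"C": 0.7, "F": 0.2, "G": 0.1}}
    start = {"C": 1.0, "F": 0.0, "G": 0.0}
    allowed = [{"C"}, {"C", "F", "G"}, {"C", "F", "G"}, {"C"}]
    print(constrained_markov_sample(P, start, allowed))
```

Because the backward pass folds the constraints into the sampling weights, the resulting transition probabilities vary by position, which is what makes the constrained chain non-homogeneous.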
278

Applications of age-period-cohort and state-transition Markov models in understanding cervical cancer incidence trends and evaluating the cost-effectiveness of cytologic screening

Woo, Pao-sun, Pauline. January 2006 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
279

Analytic methods for calculating performance measures of production lines with buffer storages

January 1978 (has links)
S.B. Gershwin, I.C. Schick. / "October, 1978." Caption title. / Bibliography: leaf 6. / Supported by the National Science Foundation under Grant no. NSF/RANN APR76-12036
280

Bayesian analysis of multivariate stochastic volatility and dynamic models

Loddo, Antonello, January 2006 (has links)
Thesis (Ph.D.)--University of Missouri-Columbia, 2006. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. / Title from title screen of research.pdf file, viewed on April 26, 2007. / Vita. / Includes bibliographical references.
