  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

A Heterogeneous Household Model Of Consumption Smoothing With Imperfect Capital Markets And Income Risk-Sharing

Svarch, Malena 20 October 2011 (has links)
No description available.
42

Optimal Monitoring Methods for Univariate and Multivariate EWMA Control Charts

Huh, Ick 11 1900 (has links)
Due to the rapid development of technology, quality control charts have attracted more attention from manufacturing industries in order to monitor quality characteristics of interest more effectively. Among the many control charts, my research work has focused on the multivariate exponentially weighted moving average (MEWMA) and the univariate exponentially weighted moving average (EWMA) control charts, studied using the Markov chain method. Chart performance is measured by the optimal average run length (ARL). My Ph.D. thesis is composed of the following three contributions. My first research work is about differential smoothing. The MEWMA control chart proposed by Lowry et al. (1992) has become one of the most widely used charts to monitor multivariate processes. Its simplicity, combined with its high sensitivity to small and moderate process mean jumps, is at the core of its appeal. Lowry et al. (1992) advocated equal smoothing of each quality variable unless there is an a priori reason to weight quality characteristics differently. However, there are situations where differential smoothing may be justified. For instance: (a) departures in process mean may differ across quality variables, (b) some variables may evolve over time at a much different pace than others, and (c) the level of correlation between variables could vary substantially. For these reasons, I focus on and assess the performance of the differentially smoothed MEWMA chart. The case of two quality variables (BEWMA) is discussed in detail. A bivariate Markov chain method that uses conditional distributions is developed for ARL calculations. The proposed chart is shown to perform at least as well as the chart of Lowry et al. (1992), and noticeably better in most other mean jump directions. Comparisons with the recently introduced double-smoothed BEWMA chart and with the univariate charts for the independent case show that the proposed differentially smoothed BEWMA chart has superior performance. My second research work is about monitoring skewed multivariate processes. Recently, Xie et al. (2011) studied monitoring bivariate exponential quality measurements using the standard MEWMA chart originally developed to monitor multivariate normal quality data. The focus of my work is on situations where, marginally, the quality measurements may follow not only exponential distributions but also other skewed distributions such as Gamma or Weibull, in any combination. The joint distribution is specified using the Gumbel copula function, thus allowing for varying degrees of correlation among the quality measurements. In addition to the standard MEWMA chart, charts based on the largest or smallest of the measurements, and on the joint cumulative distribution function or the joint survivor function, are studied in detail. The focus is on the case of two quality measurements, i.e., on skewed bivariate processes. The proposed charts avoid an undesirable feature encountered by Xie et al. (2011) for the standard MEWMA chart, where in some cases the off-target average run length turns out to be larger than the on-target one. Using the optimal average run length, our extensive numerical results show that the proposed methods provide overall good detection performance in most directions. Simulations were performed to obtain the optimal ARL results, and the Markov chain method, using the empirical CDF of the statistics involved, was used to verify their accuracy.
In addition, an examination of the effect of correlation on chart performance was undertaken numerically. The methods are easily extendable to more than two variables. My final study is about a new ARL criterion for univariate processes, studied in detail in this thesis. The traditional ARL is calculated assuming a given fixed process mean jump and a given time point at which the jump occurs, usually taken to be the very beginning in most chart performance studies. However, Ryu et al. (2010) demonstrated that the assumption of a fixed mean shift might lead to poor performance of control charts when the actual size of the mean shift is significantly different, and therefore suggested a new ARL-based performance measure, called the expected weighted run length (EWRL), in which the size of the mean shift is not specified but rather follows a probability distribution. The EWRL becomes the expected value of the weighted ordinary ARL with respect to this distribution. My methods generalize this criterion by allowing the time at which the mean shift occurs to also vary according to a probability distribution. This leads to a joint distribution for the size of the mean shift and the time the shift takes place; the EWRL is then calculated as the weighted expected value with respect to this joint distribution. The benefit of the generalized EWRL is that one can assess the performance of control charts more realistically when the process starts on-target and the mean shift occurs at some later random time. Moreover, I also propose the effective EWRL, which measures the number of additional process runs that are needed on average to detect a jump in the mean after it happens. I evaluate several well-known univariate control charts based on their EWRL and effective EWRL performance. The numerical results show that the choice of control chart depends on the additional information about the transition point of the mean shift. The methods can readily be extended to other control charts, including multivariate charts. / Thesis / Doctor of Philosophy (PhD) / Since the introduction of the standard multivariate exponentially weighted moving average (MEWMA) procedure (Lowry et al. 1992), equal smoothing on all quality variables has been conveniently adopted. In this thesis, a bivariate exponentially weighted moving average (BEWMA) control statistic with unequal smoothing parameters is introduced with the aim of improving performance over the standard BEWMA chart. Extensive numerical comparisons reveal that the proposed chart enhances the efficiency and flexibility of the control chart in many mean-shift directions. Recently, Xie et al. (2011) proposed a chart for bivariate Exponential data when the quality measures follow Gumbel's bivariate Exponential distribution (Gumbel 1960). However, when the process means experience a downward shift (D-D shift), the control charts are shown to break down. In other words, we encounter the strange situation where the out-of-control ARL becomes larger than the in-control ARL. To address this issue, we have proposed two methods, the MAX-MIN and CDF methods, and applied them to the univariate EWMA chart. Our numerical results show that not only do our proposed methods prevent the undesirable behaviour from happening, but they also offer substantial improvement in the ARL over the approach proposed by Xie et al. (2011) for many mean shifts. Finally, in general, when it comes to designing a control chart, it is assumed that the size of the mean shift is fixed and known.
However, Ryu et al. (2010) proposed a new general performance measure, EWRL, by modelling the size of the mean shift with a probability distribution function. We further generalize the measure by introducing a new random variable, T, which is the transition point of the mean shift. Based on that, we propose several ARL-based criteria to measure the chart performance and try them on several univariate control charts.
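The Markov chain method for computing the ARL of a univariate EWMA chart, on which this work builds, can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the smoothing constant, control limit, and number of discretization states are assumed values chosen only for the example.

```python
import numpy as np
from scipy.stats import norm

def ewma_arl(lam, h, mu=0.0, m=101):
    """Approximate ARL of a two-sided EWMA chart Z_t = (1 - lam) * Z_{t-1} + lam * X_t,
    X_t ~ N(mu, 1), with control limits +/- h, via the Markov chain method:
    discretize (-h, h) into m transient states and solve (I - R) arl = 1."""
    w = 2.0 * h / m                                    # width of each state
    centers = -h + (np.arange(m) + 0.5) * w            # state midpoints
    R = np.empty((m, m))
    for i, ci in enumerate(centers):
        upper = (centers + w / 2 - (1 - lam) * ci) / lam
        lower = (centers - w / 2 - (1 - lam) * ci) / lam
        R[i, :] = norm.cdf(upper - mu) - norm.cdf(lower - mu)
    arl = np.linalg.solve(np.eye(m) - R, np.ones(m))
    return arl[np.argmin(np.abs(centers))]             # chart started at Z_0 = 0

lam, L = 0.10, 2.7
h = L * np.sqrt(lam / (2 - lam))       # asymptotic control limit (illustrative choice)
print(ewma_arl(lam, h, mu=0.0))        # on-target ARL
print(ewma_arl(lam, h, mu=0.5))        # ARL after a 0.5-sigma mean shift
```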
43

Field Evaluation Methodology for Quantifying Network-wide Efficiency, Energy, Emission, and Safety Impacts of Operational-level Transportation Projects

Sin, Heung Gweon 28 September 2001 (has links)
This thesis presents a proposed methodology for the field evaluation of the efficiency, energy, environmental, and safety impacts of traffic-flow improvement projects. The methodology utilizes Global Positioning System (GPS) second-by-second speed measurements, collected with fairly inexpensive GPS units, to quantify the impacts of traffic-flow improvement projects on the efficiency, energy, and safety of a transportation network. It should be noted that the proposed methodology is incapable of isolating the effects of induced demand and is not suitable for estimating long-term impacts of such projects that involve changes in land use. Instead, the proposed methodology can quantify changes in traffic behavior and changes in travel demand. This thesis also investigates the ability of various data-smoothing techniques to remove erroneous GPS data without significantly altering the underlying vehicle speed profile. Several smoothing techniques are applied to the acceleration profile, including data trimming, simple exponential smoothing, double exponential smoothing, Epanechnikov kernel smoothing, robust kernel smoothing, and robust simple exponential smoothing. The results of the analysis indicate that the application of robust smoothing (kernel or exponential) to vehicle acceleration levels, combined with a technique that minimizes the difference between the integrals of the raw and smoothed acceleration profiles, removes invalid GPS data without significantly altering the underlying measured speed profile. The methodology was successfully applied to two case studies, providing insights into the potential benefits of coordinating traffic signals across jurisdictional boundaries. More importantly, the two case studies demonstrate the feasibility of using GPS second-by-second speed measurements for the evaluation of operational-level traffic-flow improvement projects. To identify any statistically significant differences in traffic demand along the two case-study corridors before and after traffic signal coordination, tube counts and turning counts were collected and analyzed using the ANOVA technique. The ANOVA results for the turning volume counts indicated no statistically significant difference in turning volumes between the before and after conditions. Furthermore, the ANOVA results for the tube counts confirmed that there was no statistically significant difference (at the 5 percent level of significance) in the tube counts between the before and after conditions. / Ph. D.
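One step of the procedure, smoothing the acceleration profile derived from second-by-second GPS speeds, can be illustrated with simple exponential smoothing, one of the techniques compared above. The speed values and smoothing constant below are hypothetical, and the integral-matching correction described in the abstract is only noted, not implemented.

```python
import numpy as np

def exp_smooth(x, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    s = np.empty(len(x))
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

# Hypothetical second-by-second GPS speeds (m/s) containing one spurious spike
speed = np.array([12.0, 12.4, 12.6, 35.0, 13.1, 13.0, 12.8])
accel = np.diff(speed)                     # second-by-second accelerations
accel_s = exp_smooth(accel, alpha=0.3)     # smoothed acceleration profile
speed_s = np.concatenate(([speed[0]], speed[0] + np.cumsum(accel_s)))
# (The thesis additionally adjusts the smoothed accelerations so that their integral
#  matches that of the raw profile; that correction step is omitted here.)
```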
44

O impacto da prática de income smoothing no custo de capital próprio em empresas brasileiras de capital aberto / The impact of income smoothing practice in the cost of equity in Brazilian public companies

Meli, Diego Bevilacqua 08 December 2015 (has links)
This study examines the effect of income smoothing practices on the cost of equity capital (Ke) of Brazilian public companies in two distinct periods: 2004 to 2007 (before IFRS adoption) and 2011 to 2014 (after IFRS adoption). Income smoothing is understood as the intentional dampening of a company's reported results, carried out by managers through their discretionary power, in order to reduce the variability of earnings and thus convey consistency of results to the market. The cost of equity capital, in turn, reflects the return required by investors, which makes it useful in decision making. Since investors expect a return on an asset above that of another asset of similar risk, their decisions are expected to change when firms engage in income smoothing. As proxies for identifying income smoothing, three metrics commonly used in the literature were selected. In addition to these metrics, as a methodological contribution, a factor based on the three measures was constructed by means of Factor Analysis; because the methods for identifying smoothing are disparate, the factor makes it possible to combine the information contained in all three. The selected sample includes 105 companies in the first period and 206 in the second. Ke was calculated using a benchmark-based methodology. To explain the effects of income smoothing on Ke, multiple linear regression by ordinary least squares (OLS) was applied for each period of analysis. Two other regressions were also applied: one using differences-in-differences and another using pooled OLS panel data (before and after IFRS adoption) to test for a structural break. The results show that in the 2004-2007 period only the EM2 metric was significant in explaining Ke. In the 2011-2014 period, both the EM1 metric and the factor were statistically significant. The results also indicate a significant change in Ke after IFRS adoption, which may have altered the way the metrics identify income smoothing. In general, investment in a smoothing company is inversely related to its Ke; that is, the market understands that investing in a company that adopts income smoothing practices implies a lower required return.
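The abstract does not spell out how the EM1 and EM2 smoothing metrics are defined, so as an illustration only, the sketch below computes Eckel's (1981) index, a proxy widely used in this literature: a ratio of coefficients of variation below one is read as evidence of smoothing. The firm data are hypothetical.

```python
import numpy as np

def cv(x):
    """Coefficient of variation of a series."""
    return np.std(x, ddof=1) / abs(np.mean(x))

def eckel_index(net_income, sales):
    """Eckel (1981) smoothing index: CV(change in income) / CV(change in sales).
    A value below 1 is commonly taken as evidence of income smoothing."""
    return cv(np.diff(net_income)) / cv(np.diff(sales))

# Hypothetical four-year series for one firm (millions of BRL)
income = np.array([100.0, 104.0, 107.0, 111.0])    # deliberately steady earnings
sales = np.array([900.0, 1030.0, 950.0, 1120.0])   # much more volatile sales
print(eckel_index(income, sales))                  # < 1 suggests a "smoother"
```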
46

Strategisk resultatutjämning : En studie av income smoothing i svenska börsnoterade företag / Income smoothing : Motive and structure of income smoothing in Swedish listed companies

Brusewitz, Katrin, Otteborn, Sofi January 2014 (has links)
Accounting has many different nuances, owing to all the possible choices an accountant faces. Copeland (1968) states that there are 30 million different ways in which a company's earnings can be calculated within the existing accounting standards. That said, manipulation of earnings can be coordinated with or without a standard. In the background of the study we present the concept of income smoothing, and in the problem discussion we take the concept one step further by developing a framework of three related concepts: motive-structure-result. We then approached it from a practical standpoint and developed three research questions: How widespread is income smoothing among large Swedish companies? What motives and what structure do Swedish listed companies have for smoothing their earnings? What effect does income smoothing have on a company's market value? Our method has largely been operational, in that we developed the motive-structure-result framework and used financial reports as the source of information. We also carried out a technical analysis, applying Kustono's (2011) model to classify companies as smoothers or non-smoothers. The theory presented in the theoretical frame of reference is drawn from the relevant literature and is then used to analyse our results. Our study makes three contributions. (1) It increases knowledge of income smoothing in Sweden. (2) It shows that income smoothing is not common among large Swedish companies. (3) It offers a theoretical development through the motive-structure-result framework, creating an observable criterion for understanding manipulation in Swedish accounting. / Program: Civilekonomprogrammet
47

Selection of smoothing parameters with application in causal inference

Häggström, Jenny January 2011 (has links)
This thesis is a contribution to the research area concerned with the selection of smoothing parameters in the framework of nonparametric and semiparametric regression. Selection of smoothing parameters is one of the most important issues in this framework, and the choice can heavily influence subsequent results. A nonparametric or semiparametric approach is often desirable when large datasets are available, since this allows us to make fewer and weaker assumptions than are needed in a parametric approach. In the first paper we consider smoothing parameter selection in nonparametric regression when the purpose is to accurately predict future or unobserved data. We study the use of accumulated prediction errors and make comparisons to leave-one-out cross-validation, which is widely used by practitioners. In the second paper a general semiparametric additive model is considered and the focus is on selection of smoothing parameters when optimal estimation of some specific parameter is of interest. We introduce a double smoothing estimator of a mean squared error and propose to select smoothing parameters by minimizing this estimator. Our approach is compared with existing methods. The third paper is concerned with the selection of smoothing parameters optimal for estimating average treatment effects defined within the potential outcome framework. For this estimation problem we propose double smoothing methods similar to the method proposed in the second paper. Theoretical properties of the proposed methods are derived and comparisons with existing methods are made by simulations. In the last paper we apply our results from the third paper by using a double smoothing method for selecting smoothing parameters when estimating average treatment effects on the treated. We estimate the effect of divorcing in middle age on BMI. Rich data on socioeconomic conditions, health and lifestyle from Swedish longitudinal registers are used.
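As a minimal sketch of the leave-one-out cross-validation baseline that the first paper compares against, the code below selects a bandwidth for Nadaraya-Watson kernel regression; the simulated data and bandwidth grid are assumptions made only for the example.

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def loocv_score(x, y, h):
    """Leave-one-out cross-validation score (mean squared prediction error) for h."""
    errs = [(y[i] - nw_estimate(x[i], np.delete(x, i), np.delete(y, i), h)) ** 2
            for i in range(len(x))]
    return np.mean(errs)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 100)
grid = np.linspace(0.01, 0.3, 30)
best_h = grid[np.argmin([loocv_score(x, y, h) for h in grid])]
print(best_h)
```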
48

Adaptive Bayesian P-splines models for fitting time-activity curves and estimating associated clinical parameters in Positron Emission Tomography and Pharmacokinetic study

Jullion, Astrid 01 July 2008 (has links)
In clinical experiments, the evolution of a product concentration in tissue over time is often under study. Different products and tissues may be considered. For instance, one could analyse the evolution of drug concentration in plasma over time by performing successive blood sampling from the subjects participating in the clinical study. One could also observe the evolution of radioactivity uptake in different regions of the brain during a PET scan (Positron Emission Tomography). The global objective of this thesis is the modelling of such evolutions, which will be called, generically, pharmacokinetic curves (PK curves). Some clinical measures of interest are derived from PK curves. For instance, when analysing the evolution of drug concentration in plasma, PK parameters such as the area under the curve (AUC), the maximal concentration (Cmax) and the time at which it occurs (tmax) are usually reported. In a PET study, one could measure Receptor Occupancy (RO) in some regions of the brain, i.e. the percentage of specific receptors to which the drug is bound. Such clinical measures may be badly estimated if the PK curves are noisy. Our objective is to provide statistical tools to get better estimations of the clinical measures of interest from appropriately smoothed PK curves. Plenty of literature addresses the problem of fitting PK curves using parametric models, usually relying on a compartmental approach to describe the kinetics of the product under study. The use of parametric models to fit PK curves can lead to problems in some specific cases. Firstly, the estimation procedures rely on algorithms whose convergence can be hard to attain with sparse and/or noisy data. Secondly, it may be difficult to choose the adequate underlying compartmental model, especially when a new drug is under study and its kinetics are not well known. The method that we advocate for fitting such PK curves is based on Bayesian penalized splines (P-splines): it provides good results both in terms of PK curve fitting and clinical measure estimation. It avoids the difficult choice of a compartmental model and is more robust than parametric models to a small sample size or a low signal-to-noise ratio. Working in a Bayesian context provides several advantages: prior information can be injected, models can easily be generalized and extended to hierarchical settings, and uncertainty for associated clinical parameters is straightforwardly derived from credible intervals obtained by MCMC methods. These are major advantages over traditional frequentist approaches.
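A non-Bayesian sketch of the underlying P-spline idea (penalized B-splines with a difference penalty on the coefficients, after Eilers and Marx, 1996) is shown below; the Bayesian version in the thesis replaces the penalty with a prior and uses MCMC. The simulated time-activity curve, knot count, and penalty weight are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
    """P-spline fit: minimize ||y - B a||^2 + lam * ||D2 a||^2, where B is a cubic
    B-spline basis and D2 takes second-order differences of the coefficients a."""
    t = np.r_[[x.min()] * degree,
              np.linspace(x.min(), x.max(), n_knots),
              [x.max()] * degree]                      # clamped knot vector
    n_coef = len(t) - degree - 1
    # basis matrix: column j is the j-th B-spline basis function evaluated at x
    B = np.column_stack([BSpline(t, np.eye(n_coef)[j], degree)(x)
                         for j in range(n_coef)])
    D2 = np.diff(np.eye(n_coef), n=2, axis=0)          # second-difference operator
    a = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)
    return BSpline(t, a, degree)                       # fitted smooth curve

# Hypothetical PET-like time-activity curve: fast uptake then washout, plus noise
rng = np.random.default_rng(1)
time = np.linspace(0.0, 60.0, 120)
activity = 5 * time * np.exp(-time / 10) + rng.normal(0, 0.5, time.size)
smoothed = pspline_fit(time, activity, lam=5.0)(time)
```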
49

Estudo do efeito de suavização da krigagem ordinária em diferentes distribuições estatísticas / A study of the ordinary kriging smoothing effect using different statistical distributions

Anelise de Lima Souza 12 July 2007 (has links)
Esta dissertação apresenta os resultados da investigação quanto à eficácia do algoritmo de pós-processamento para a correção do efeito de suavização nas estimativas da krigagem ordinária. Foram consideradas três distribuições estatísticas distintas: gaussiana, lognormal e lognormal invertida. Como se sabe, dentre estas distribuições, a distribuição lognormal é a mais difícil de trabalhar, já que neste tipo de distribuição apresenta um grande número de valores baixos e um pequeno número de valores altos, sendo estes responsáveis pela grande variabilidade do conjunto de dados. Além da distribuição estatística, outros parâmetros foram considerados: a influencia do tamanho da amostra e o numero de pontos da vizinhança. Para distribuições gaussianas e lognormais invertidas o algoritmo de pós-processamento funcionou bem em todas a situações. Porém, para a distribuição lognormal, foi observada a perda de precisão global. Desta forma, aplicou-se a krigagem ordinária lognormal para este tipo de distribuição, na realidade, também foi aplicado um método recém proposto de transformada reversa de estimativas por krigagem lognormal. Esta técnica é baseada na correção do histograma das estimativas da krigagem lognormal e, então, faz-se a transformada reversa dos dados. Os resultados desta transformada reversa sempre se mostraram melhores do que os resultados da técnica clássica. Além disto, a as estimativas de krigagem lognormal se provaram superiores às estimativas por krigagem ordinária. / This dissertation presents the results of an investigation into the effectiveness of the post-processing algorithm for correcting the smoothing effect of ordinary kriging estimates. Three different statistical distributions have been considered in this study: gaussian, lognormal and inverted lognormal. As we know among these distributions, the lognormal distribution is the most difficult one to handle, because this distribution presents a great number of low values and a few high values in which these high values are responsible for the great variability of the data set. Besides statistical distribution other parameters have been considered in this study: the influence of the sample size and the number of neighbor data points as well. For gaussian and inverted lognormal distributions the post-processing algorithm worked well in all situations. However, it was observed loss of local accuracy for lognormal data. Thus, for these data the technique of ordinary lognormal kriging was applied. Actually, a recently proposed approach for backtransforming lognormal kriging estimates was also applied. This approach is based on correcting the histogram of lognormal kriging estimates and then backtransforming it to the original scale of measurement. Results of back-transformed lognormal kriging estimates were always better than the traditional approach. Furthermore, lognormal kriging estimates have provided better results than the normal kriging ones.
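For context, a minimal sketch of the ordinary kriging estimate whose smoothing effect the dissertation studies is given below; the spherical covariance model, its parameters, and the sample data are assumptions made only for the example.

```python
import numpy as np

def ordinary_kriging(coords, values, target, sill=1.0, rang=10.0):
    """Ordinary kriging of one target location: solve [C 1; 1' 0][w; mu] = [c0; 1]
    for the weights w under an assumed spherical covariance model."""
    def cov(h):
        r = np.minimum(h / rang, 1.0)
        return sill * (1.0 - 1.5 * r + 0.5 * r ** 3)    # spherical covariance
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    c0 = cov(np.linalg.norm(coords - target, axis=1))
    n = len(values)
    A = np.block([[cov(d), np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    w = np.linalg.solve(A, np.r_[c0, 1.0])[:n]           # kriging weights
    return w @ values

coords = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [6.0, 6.0]])  # sample points
values = np.array([1.2, 0.8, 1.5, 2.1])                              # sample values
print(ordinary_kriging(coords, values, np.array([3.0, 3.0])))
```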
50

Linguistic constraints for large vocabulary speech recognition.

January 1999 (has links)
by Roger H.Y. Leung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 79-84). / Abstracts in English and Chinese. / ABSTRACT --- p.I / Keywords: --- p.I / ACKNOWLEDGEMENTS --- p.III / TABLE OF CONTENTS: --- p.IV / Table of Figures: --- p.VI / Table of Tables: --- p.VII / Chapter CHAPTER 1 --- INTRODUCTION --- p.1 / Chapter 1.1 --- Languages in the World --- p.2 / Chapter 1.2 --- Problems of Chinese Speech Recognition --- p.3 / Chapter 1.2.1 --- Unlimited word size: --- p.3 / Chapter 1.2.2 --- Too many Homophones: --- p.3 / Chapter 1.2.3 --- Difference between spoken and written Chinese: --- p.3 / Chapter 1.2.4 --- Word Segmentation Problem: --- p.4 / Chapter 1.3 --- Different types of knowledge --- p.5 / Chapter 1.4 --- Chapter Conclusion --- p.6 / Chapter CHAPTER 2 --- FOUNDATIONS --- p.7 / Chapter 2.1 --- Chinese Phonology and Language Properties --- p.7 / Chapter 2.1.1 --- Basic Syllable Structure --- p.7 / Chapter 2.2 --- Acoustic Models --- p.9 / Chapter 2.2.1 --- Acoustic Unit --- p.9 / Chapter 2.2.2 --- Hidden Markov Model (HMM) --- p.9 / Chapter 2.3 --- Search Algorithm --- p.11 / Chapter 2.4 --- Statistical Language Models --- p.12 / Chapter 2.4.1 --- Context-Independent Language Model --- p.12 / Chapter 2.4.2 --- Word-Pair Language Model --- p.13 / Chapter 2.4.3 --- N-gram Language Model --- p.13 / Chapter 2.4.4 --- Backoff n-gram --- p.14 / Chapter 2.5 --- Smoothing for Language Model --- p.16 / Chapter CHAPTER 3 --- LEXICAL ACCESS --- p.18 / Chapter 3.1 --- Introduction --- p.18 / Chapter 3.2 --- Motivation: Phonological and lexical constraints --- p.20 / Chapter 3.3 --- Broad Classes Representation --- p.22 / Chapter 3.4 --- Broad Classes Statistic Measures --- p.25 / Chapter 3.5 --- Broad Classes Frequency Normalization --- p.26 / Chapter 3.6 --- Broad Classes Analysis --- p.27 / Chapter 3.7 --- Isolated Word Speech Recognizer using Broad Classes --- p.33 / Chapter 3.8 --- Chapter Conclusion --- p.34 / Chapter CHAPTER 4 --- CHARACTER AND WORD LANGUAGE MODEL --- p.35 / Chapter 4.1 --- Introduction --- p.35 / Chapter 4.2 --- Motivation --- p.36 / Chapter 4.2.1 --- Perplexity --- p.36 / Chapter 4.3 --- Call Home Mandarin corpus --- p.38 / Chapter 4.3.1 --- Acoustic Data --- p.38 / Chapter 4.3.2 --- Transcription Texts --- p.39 / Chapter 4.4 --- Methodology: Building Language Model --- p.41 / Chapter 4.5 --- Character Level Language Model --- p.45 / Chapter 4.6 --- Word Level Language Model --- p.48 / Chapter 4.7 --- Comparison of Character level and Word level Language Model --- p.50 / Chapter 4.8 --- Interpolated Language Model --- p.54 / Chapter 4.8.1 --- Methodology --- p.54 / Chapter 4.8.2 --- Experiment Results --- p.55 / Chapter 4.9 --- Chapter Conclusion --- p.56 / Chapter CHAPTER 5 --- N-GRAM SMOOTHING --- p.57 / Chapter 5.1 --- Introduction --- p.57 / Chapter 5.2 --- Motivation --- p.58 / Chapter 5.3 --- Mathematical Representation --- p.59 / Chapter 5.4 --- Methodology: Smoothing techniques --- p.61 / Chapter 5.4.1 --- Add-one Smoothing --- p.62 / Chapter 5.4.2 --- Witten-Bell Discounting --- p.64 / Chapter 5.4.3 --- Good Turing Discounting --- p.66 / Chapter 5.4.4 --- Absolute and Linear Discounting --- p.68 / Chapter 5.5 --- Comparison of Different Discount Methods --- p.70 / Chapter 5.6 --- Continuous Word Speech Recognizer --- p.71 / Chapter 5.6.1 --- Experiment Setup --- p.71 / Chapter 5.6.2 --- Experiment Results: --- p.72 / Chapter 5.7 --- Chapter Conclusion --- p.74 / Chapter CHAPTER 6 --- SUMMARY AND 
CONCLUSIONS --- p.75 / Chapter 6.1 --- Summary --- p.75 / Chapter 6.2 --- Further Work --- p.77 / Chapter 6.3 --- Conclusion --- p.78 / REFERENCE --- p.79
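As an illustration of the simplest of the smoothing techniques surveyed in Chapter 5, the sketch below builds an add-one (Laplace) smoothed bigram model; the toy English corpus is an assumption of the example and merely stands in for the Chinese character and word data used in the thesis.

```python
from collections import Counter

def add_one_bigram_model(tokens, vocab):
    """Add-one smoothed bigram model: P(w2 | w1) = (c(w1, w2) + 1) / (c(w1) + |V|)."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    V = len(vocab)
    return lambda w1, w2: (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

tokens = "the cat sat on the mat the cat ate".split()   # toy corpus
p = add_one_bigram_model(tokens, set(tokens))
print(p("the", "cat"))   # seen bigram
print(p("cat", "mat"))   # unseen bigram still receives non-zero probability
```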
