371

On labour market discrimination against Roma in South East Europe

Milcher, Susanne, Fischer, Manfred M. 10 1900 (has links) (PDF)
This paper directs attention to the country-specific labour market discrimination Roma may suffer in South East Europe. The study lies in the tradition of statistical Blinder-Oaxaca decomposition analysis. We use microdata from UNDP's 2004 survey of Roma minorities and apply a Bayesian approach, proposed by Keith and LeSage (2004), for the decomposition analysis of wage differentials. This approach is based on a robust Bayesian heteroscedastic linear regression model in conjunction with Markov chain Monte Carlo (MCMC) estimation. The results indicate the presence of labour market discrimination in Albania and Kosovo, but point to its absence in Bulgaria, Croatia, and Serbia. (authors' abstract)
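A minimal sketch of the two-fold Blinder-Oaxaca decomposition that underlies this kind of analysis, using plain OLS for illustration — not the authors' Bayesian heteroscedastic MCMC estimator. The variable names and synthetic data are assumptions:

```python
import numpy as np

def oaxaca_decomposition(X_a, y_a, X_b, y_b):
    """Two-fold Blinder-Oaxaca decomposition of the mean log-wage gap.

    Splits mean(y_a) - mean(y_b) into an "explained" part (differences
    in average characteristics) and an "unexplained" part, often read
    as discrimination. OLS is used for illustration only.
    """
    beta_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_a      # endowment differences
    unexplained = xbar_b @ (beta_a - beta_b)    # coefficient differences
    return explained, unexplained

# Synthetic example: group A (majority) vs group B (minority);
# columns of X are a constant and years of schooling.
rng = np.random.default_rng(0)
n = 500
X_a = np.column_stack([np.ones(n), rng.normal(12, 2, n)])
X_b = np.column_stack([np.ones(n), rng.normal(9, 2, n)])
y_a = X_a @ np.array([0.5, 0.10]) + rng.normal(0, 0.3, n)  # log wages
y_b = X_b @ np.array([0.3, 0.08]) + rng.normal(0, 0.3, n)
print(oaxaca_decomposition(X_a, y_a, X_b, y_b))
```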
372

兩相依製程之調適性管制圖 / Adaptive Control Charts for Two Dependent Process Steps

蘇惠君 Unknown Date (has links)
In recent years, most research on adaptive control charts has considered only a single process step, yet industrial processes typically consist of multiple dependent steps. This thesis therefore proposes adaptive control charts for monitoring two dependent process steps; their average time to signal (ATS), computed with a Markov chain approach, is used to measure their performance. The adaptive sampling interval (ASI) charts are shown to outperform fixed sampling interval charts in detecting small and moderate shifts in the process means, especially small shifts, and the proposed adaptive sample size and sampling interval (ASSI) charts outperform both the fixed sample size and sampling interval charts and the adaptive sample size charts in detecting very small shifts.
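A minimal sketch of how an ATS can be computed from a Markov chain over chart states, as the abstract describes. The two-state VSI chain, its transition probabilities, and the interval lengths below are illustrative assumptions, not the thesis's actual design:

```python
import numpy as np

def average_time_to_signal(Q, intervals, start):
    """ATS for an absorbing Markov chain model of a control chart.

    Q[i, j]  : probability of moving between transient (no-signal)
               chart states i -> j at a sampling epoch
    intervals: time spent in each transient state before the next
               sample (the variable sampling intervals)
    start    : initial distribution over transient states
    """
    n = Q.shape[0]
    # Expected total time before absorption (a signal):
    # ATS = start' (I - Q)^{-1} intervals
    fundamental = np.linalg.inv(np.eye(n) - Q)
    return start @ fundamental @ intervals

# Illustrative two-state VSI chart: the "central" region samples every
# h1 = 2.0 time units, the "warning" region every h2 = 0.5 time units;
# the leftover row mass is the per-epoch signal probability.
Q = np.array([[0.90, 0.07],    # central -> {central, warning}
              [0.60, 0.30]])   # warning -> {central, warning}
intervals = np.array([2.0, 0.5])
start = np.array([1.0, 0.0])   # chain starts in the central region
print(average_time_to_signal(Q, intervals, start))
```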
373

全製程過度調整下之變動抽樣時間 / Economic Design of VSI Control Charts for Monitoring Over-adjusted Process

柯芝穎 Unknown Date (has links)
Over-adjustment means that a process is adjusted unnecessarily when a false alarm occurs. It may shift the process mean and variance, degrading product quality and increasing variability and cost. In this paper, economic variable sampling interval (VSI) control charts are proposed to effectively monitor the mean and variance of an over-adjusted process. We use a Markov chain approach to derive the design parameters of the charts by minimising a cost function. An example of a shampoo-making process is used to illustrate the application and performance of the proposed VSI charts in detecting shifts in the process mean and variance. Furthermore, we compare the cost and performance of the economic FSI (fixed sampling interval) and VSI control charts. Support for this research was provided in part by the National Science Council of the Republic of China, grant No. NSC 94-2118-M-004-003.
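A minimal sketch of an economic design search of the kind described: grid-searching the VSI design parameters to minimise an expected cost rate. The cost function below is a stand-in placeholder, not the paper's actual economic model:

```python
import itertools
import numpy as np

def expected_cost_rate(n, h1, h2, k):
    """Placeholder hourly cost: inspection cost + false-alarm cost
    + out-of-control delay penalty. A real economic design derives
    these terms from the chart's Markov chain (ATS, alarm rates, ...).
    """
    sampling = 5.0 * n / h1                      # cost of inspection
    false_alarm = 50.0 * np.exp(-k)              # wider limits -> fewer alarms
    delay = 20.0 * (h2 + k / np.sqrt(n))         # slow-detection penalty
    return sampling + false_alarm + delay

# Grid search over sample size n, long/short intervals h1 > h2,
# and control-limit width k.
grid = itertools.product(range(2, 11),              # n
                         np.arange(1.0, 4.1, 0.5),  # h1
                         np.arange(0.1, 1.0, 0.1),  # h2
                         np.arange(2.0, 3.6, 0.1))  # k
best = min(grid, key=lambda p: expected_cost_rate(*p))
print("optimal (n, h1, h2, k):", best)
```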
374

Data-Adaptive Multivariate Density Estimation Using Regular Pavings, With Applications to Simulation-Intensive Inference

Harlow, Jennifer January 2013 (has links)
A regular paving (RP) is a finite succession of bisections that partitions a multidimensional box into sub-boxes using a binary tree-based data structure, with the restriction that an existing sub-box in the partition may only be bisected on its first widest side. Mapping a real value to each element of the partition gives a real-mapped regular paving (RMRP) that can be used to represent a piecewise-constant function density estimate on a multidimensional domain. The RP structure allows real arithmetic to be extended to density estimates represented as RMRPs. Other operations such as computing marginal and conditional functions can also be carried out very efficiently by exploiting these arithmetical properties and the binary tree structure. The purpose of this thesis is to explore the potential for density estimation using RPs. The thesis is structured in three parts. The first part formalises the operational properties of RP-structured density estimates. The next part considers methods for creating a suitable RP partition for an RMRP-structured density estimate. The advantages and disadvantages of a Markov chain Monte Carlo algorithm, already developed, are investigated and this is extended to include a semi-automatic method for heuristic diagnosis of convergence of the chain. An alternative method is also proposed that uses an RMRP to approximate a kernel density estimate. RMRP density estimates are not differentiable and have slower convergence rates than good multivariate kernel density estimators. The advantages of an RMRP density estimate relate to its operational properties. The final part of this thesis describes a new approach to Bayesian inference for complex models with intractable likelihood functions that exploits these operational properties.
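A minimal sketch of the regular-paving idea: a binary tree whose leaves partition a box, where a split always bisects a leaf's first widest side, and a real mark on each leaf gives a piecewise-constant (RMRP-style) estimate. This is a toy illustration; the names and the histogram-style height rule are assumptions, not the thesis's implementation:

```python
class RPNode:
    """A regular-paving node: a box that splits on its first widest side."""
    def __init__(self, box):
        self.box = box          # list of (lo, hi) per dimension
        self.left = self.right = None
        self.value = 0.0        # real mark -> RMRP leaf height

    def bisect(self):
        widths = [hi - lo for lo, hi in self.box]
        d = widths.index(max(widths))        # FIRST widest side
        lo, hi = self.box[d]
        mid = (lo + hi) / 2.0
        lbox, rbox = list(self.box), list(self.box)
        lbox[d], rbox[d] = (lo, mid), (mid, hi)
        self.left, self.right = RPNode(lbox), RPNode(rbox)

    def leaf_for(self, point):
        if self.left is None:
            return self
        child = self.left if point_in(self.left.box, point) else self.right
        return child.leaf_for(point)

def point_in(box, point):
    return all(lo <= x < hi for (lo, hi), x in zip(box, point))

# Build a small RP over [0,1]^2 and mark leaves with sample counts,
# giving a histogram-like piecewise-constant estimate.
root = RPNode([(0.0, 1.0), (0.0, 1.0)])
root.bisect()
root.left.bisect()
for p in [(0.1, 0.2), (0.3, 0.9), (0.8, 0.5)]:
    root.leaf_for(p).value += 1.0
```

Because every estimate shares this tree structure, operations such as addition or marginalisation reduce to recursive walks over (unions of) pavings, which is the arithmetic the thesis exploits.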
375

教育、階級流通與社會福利 / Education, Mobility and Social Welfare

吳致謙, Wu, Jhih Chian Unknown Date (has links)
The main purpose of this article is to examine how government policies affect social mobility and social welfare. Two widely used policies are discussed: income transfers and compulsory education. I construct a model in which people take an "occupation test" that determines the job they obtain: those whose scores exceed a minimum passing score obtain a higher-paying job, while the rest obtain a lower-paying one. A person's score is affected by innate ability, family background, and level of education. Furthermore, children of talented parents are more likely to be talented than children of untalented parents; I call this the "advantage of ability inheritance". The level of education is positively related to parental income. I find that these policies raise mobility through two effects: a reduction of the income gap, and a net increase in children's education. When the government executes an income transfer, only the income-gap effect can operate; under compulsory education, both effects can operate. In addition, the keys for the income-gap effect to work are (1) the advantage of ability inheritance and (2) the number of untalented people who pass the test before the policy is executed, while the keys for compulsory education are (1) the income share spent on children's education and (2) the ability of talented people. Only when these conditions hold do the policies affect social mobility and social welfare; neither policy is automatically effective.
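A minimal simulation sketch of the occupation-test mechanism the abstract describes. The score weights, passing threshold, and inheritance probabilities are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_generation(parent_income, parent_talent, education_boost=0.0,
                        p_inherit=0.6, p_base=0.3, threshold=1.0):
    """One generation of the occupation-test model.

    Children's talent depends on parents' talent ("ability
    inheritance"); education rises with parental income plus any
    policy boost; passing the test yields the high-wage job.
    """
    n = parent_income.size
    talent = rng.random(n) < np.where(parent_talent, p_inherit, p_base)
    education = 0.1 * parent_income + education_boost
    score = 0.8 * talent + 0.5 * education + rng.normal(0, 0.1, n)
    passed = score > threshold
    income = np.where(passed, 2.0, 1.0)          # high vs low wage
    return income, talent, passed

income = rng.uniform(1.0, 2.0, 10_000)
talent = rng.random(10_000) < 0.5
# Compare no policy against a compulsory-education boost.
for boost in (0.0, 0.5):
    inc, tal, passed = simulate_generation(income, talent, boost)
    print(f"boost={boost}: pass rate={passed.mean():.2f}")
```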
376

Application of Bayesian Inference Techniques for Calibrating Eutrophication Models

Zhang, Weitao 26 February 2009 (has links)
This research aims to integrate mathematical water quality models with Bayesian inference techniques for obtaining effective model calibration and rigorous assessment of the uncertainty underlying model predictions. The first part of my work combines a Bayesian calibration framework with a complex biogeochemical model to reproduce oligo-, meso- and eutrophic lake conditions. The model accurately describes the observed patterns and also provides realistic estimates of predictive uncertainty for water quality variables. The Bayesian estimations are also used for appraising the exceedance frequency and confidence of compliance of different water quality criteria. The second part introduces a Bayesian hierarchical framework (BHF) for calibrating eutrophication models at multiple systems (or sites of the same system). The models calibrated under the BHF provided accurate system representations for all the scenarios examined. The BHF allows overcoming problems of insufficient local data by “borrowing strength” from well-studied sites. Both frameworks can facilitate environmental management decisions.
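A minimal random-walk Metropolis sketch of Bayesian model calibration in the spirit described. The one-parameter "water quality model", the prior, and the synthetic data are placeholder assumptions, not the thesis's biogeochemical model:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(theta, t):
    """Placeholder saturating response standing in for a eutrophication model."""
    return theta * t / (1.0 + t)

def log_posterior(theta, t, obs, sigma=0.2):
    if theta <= 0:
        return -np.inf
    log_prior = -0.5 * np.log(theta) ** 2             # weak prior on log(theta)
    resid = obs - model(theta, t)
    log_lik = -0.5 * np.sum((resid / sigma) ** 2)     # Gaussian errors
    return log_prior + log_lik

# Synthetic observations generated from theta = 1.5
t = np.linspace(0.1, 5.0, 30)
obs = model(1.5, t) + rng.normal(0, 0.2, t.size)

# Random-walk Metropolis over theta
theta, samples = 1.0, []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.1)
    if np.log(rng.random()) < log_posterior(prop, t, obs) - log_posterior(theta, t, obs):
        theta = prop
    samples.append(theta)
post = np.array(samples[5_000:])                      # drop burn-in
print(post.mean(), np.percentile(post, [2.5, 97.5]))
```

The posterior samples directly support the kind of compliance appraisal mentioned above, e.g. the fraction of samples for which a predicted water quality variable exceeds a criterion.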
377

Probabilistic inference for phrase-based machine translation : a sampling approach

Arun, Abhishek January 2011 (has links)
Recent advances in statistical machine translation (SMT) have used dynamic programming (DP) based beam search methods for approximate inference within probabilistic translation models. Despite their success, these methods compromise the probabilistic interpretation of the underlying model, thus limiting the application of probabilistically defined decision rules during training and decoding. As an alternative, in this thesis, we propose a novel Monte Carlo sampling approach for theoretically sound approximate probabilistic inference within these models. The distribution we are interested in is the conditional distribution of a log-linear translation model; however, there is often no tractable way of computing the model's normalisation term. Instead, a Gibbs sampling approach for phrase-based machine translation models is developed which obviates the need to compute this term yet produces samples from the required distribution. We establish that the sampler effectively explores the distribution defined by a phrase-based model by showing that it converges in a reasonable amount of time to the desired distribution, irrespective of initialisation. Empirical evidence is provided to confirm that the sampler can provide accurate estimates of expectations of functions of interest. The mix of high probability and low probability derivations obtained through sampling is shown to provide a more accurate estimate of expectations than merely using the n most highly probable derivations. Subsequently, we show that the sampler provides a tractable solution for finding the maximum probability translation in the model. We also present a unified approach to approximating two additional intractable problems: minimum risk training and minimum Bayes risk decoding. Key to our approach is the use of the sampler, which allows us to explore the entire probability distribution and maintain a strict probabilistic formulation through the translation pipeline. For these tasks, sampling combines the simplicity of n-best list approaches with the extended view of the distribution that lattice-based approaches benefit from, while avoiding the biases associated with beam search. Our approach is theoretically well-motivated and can give better and more stable results than current state-of-the-art methods.
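A minimal sketch of why Gibbs sampling sidesteps the normalisation term of a log-linear model: each conditional only compares the unnormalised scores of one variable's few alternatives, so the global partition function cancels. The toy binary-variable model below is an illustration, not the thesis's sampler over phrase-based derivations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Unnormalised log-linear model over x in {0,1}^4:
# score(x) = unary feature weights + reward for equal neighbours.
w_unary = np.array([0.5, -0.2, 0.8, 0.1])
w_pair = 0.7

def score(x):
    unary = w_unary @ x
    pair = w_pair * np.sum(x[:-1] == x[1:])
    return unary + pair           # log of the UNnormalised probability

def gibbs(n_iters=10_000):
    x = rng.integers(0, 2, size=4)
    samples = []
    for _ in range(n_iters):
        for i in range(x.size):
            # Conditional for x[i]: only two unnormalised scores are
            # needed, so the partition function never appears.
            x[i] = 0; s0 = score(x)
            x[i] = 1; s1 = score(x)
            p1 = 1.0 / (1.0 + np.exp(s0 - s1))
            x[i] = int(rng.random() < p1)
        samples.append(x.copy())
    return np.array(samples)

samples = gibbs()
# Monte Carlo estimate of expectations of functions of interest.
print("E[x] estimate:", samples[2_000:].mean(axis=0))
```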
378

Transiting exoplanets : characterisation in the presence of stellar activity

Alapini Odunlade, Aude Ekundayo Pauline January 2010 (has links)
The combined observations of a planet’s transits and the radial velocity variations of its host star allow the determination of the planet’s orbital parameters, and most interestingly of its radius and mass, and hence its mean density. Observed densities provide important constraints to planet structure and evolution models. The uncertainties on the parameters of large exoplanets mainly arise from those on stellar masses and radii. For small exoplanets, the treatment of stellar variability limits the accuracy of the derived parameters. The goal of this PhD thesis was to reduce these sources of uncertainty by developing new techniques for stellar variability filtering and for the determination of stellar temperatures, and by robustly fitting the transits taking into account external constraints on the planet’s host star. To this end, I developed the Iterative Reconstruction Filter (IRF), a new post-detection stellar variability filter. By exploiting the prior knowledge of the planet’s orbital period, it simultaneously estimates the transit signal and the stellar variability signal, using a combination of moving average and median filters. The IRF was tested on simulated CoRoT light curves, where it significantly improved the estimate of the transit signal, particularly in the case of light curves with strong stellar variability. It was then applied to the light curves of the first seven planets discovered by CoRoT, a space mission designed to search for planetary transits, to obtain refined estimates of their parameters. As the IRF preserves all signal at the planet’s orbital period, it can also be used to search for secondary eclipses and orbital phase variations for the most promising cases. This enabled the detection of the secondary eclipses of CoRoT-1b and CoRoT-2b in the white (300–1000 nm) CoRoT bandpass, as well as a marginal detection of CoRoT-1b’s orbital phase variations. The wide optical bandpass of CoRoT limits the distinction between thermal emission and reflected light contributions to the secondary eclipse. I developed a method to derive precise stellar relative temperatures using equivalent width ratios and applied it to the host stars of the first eight CoRoT planets. For stars with temperatures within the calibrated range, the derived temperatures are consistent with the literature, but have smaller formal uncertainties. I then used a Markov Chain Monte Carlo technique to explore the correlations between planet parameters derived from transits, and the impact of external constraints (e.g. the spectroscopically derived stellar temperature, which is linked to the stellar density). Globally, this PhD thesis highlights, and in part addresses, the complexity of performing detailed characterisation of transit light curves. Many low amplitude effects must be taken into account: residual stellar activity and systematics, stellar limb darkening, and the interplay of all available constraints on transit fitting. Several promising areas for further improvements and applications were identified. Current and future high precision photometry missions will discover increasing numbers of small planets around relatively active stars, and the IRF is expected to be useful in characterising them.
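A crude sketch of the phase-folding idea behind a post-detection filter like the IRF: fold the light curve on the known orbital period, estimate the transit signal with a phase-binned median, estimate stellar variability with a moving average of the residual, and iterate. The binning choices, window sizes, and synthetic light curve are assumptions, not the thesis's algorithm:

```python
import numpy as np

def iterative_filter(time, flux, period, n_bins=100, window=51, n_iter=5):
    """Crude IRF-like separation of transit signal and stellar variability."""
    phase = (time % period) / period
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    variability = np.zeros_like(flux)
    for _ in range(n_iter):
        # Transit signal: per-phase-bin median of variability-corrected flux
        resid = flux - variability
        profile = np.array([np.median(resid[bins == b]) for b in range(n_bins)])
        transit = profile[bins]
        # Stellar variability: moving average of transit-corrected flux
        kernel = np.ones(window) / window
        variability = np.convolve(flux - transit, kernel, mode="same")
    return transit, variability

# Synthetic light curve: sinusoidal spot modulation + 1%-deep box transit
t = np.arange(0.0, 30.0, 0.01)
period = 2.5
signal = np.where((t % period) < 0.1, -0.01, 0.0)
star = 0.005 * np.sin(2 * np.pi * t / 7.3)
flux = 1.0 + star + signal + np.random.default_rng(4).normal(0, 0.001, t.size)
transit, variability = iterative_filter(t, flux, period)
```

Because the phase-binned median is computed at the planet's period, anything periodic at that period — including secondary eclipses and phase variations — survives the filtering, which is the property exploited above.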
379

Probabilistic Modelling of Domain and Gene Evolution

Muhammad, Sayyed Auwn January 2016 (has links)
Phylogenetic inference relies heavily on statistical models that have been extended and refined over the years into complex hierarchical models capturing the intricacies of evolutionary processes. The wealth of information in the form of fully sequenced genomes has led to the development of methods that reconstruct gene and species evolutionary histories in greater and more accurate detail. However, genes are composed of evolutionarily conserved sequence segments called domains, and domains can also be affected by duplications, losses, and bifurcations implied by gene or species evolution. This thesis extends evolutionary models that have previously been used to model gene evolution, such as duplication-loss, rate, and substitution models, to the domain level. I propose DomainDLRS: a comprehensive, hierarchical Bayesian method, based on the DLRS model by Åkerborg et al. (2009), that models domain evolution as occurring inside the gene and species trees. The method incorporates a birth-death process to model domain duplications and losses, along with a domain sequence evolution model under a relaxed molecular clock assumption. It employs a variant of Markov chain Monte Carlo called Grouped Independence Metropolis-Hastings to estimate the posterior distribution over domain and gene trees. Using this method, we analysed the Zinc-Finger and PRDM9 gene families, which provides interesting insight into domain evolution. Finally, a synteny-aware approach for gene homology inference, called GenFamClust, is proposed that uses similarity and gene neighbourhood conservation to improve homology inference. We evaluated the accuracy of our method on synthetic data and on two biological datasets consisting of eukaryotic and fungal species. Our results show that combining synteny with similarity provides a significant improvement in homology inference.
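A minimal Gillespie-style sketch of a birth-death process of the kind used to model domain duplications and losses along a branch. The rates and branch length are illustrative assumptions, not DomainDLRS parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_birth_death(n0, branch_length, birth_rate, death_rate):
    """Simulate domain counts along one tree branch.

    Each of the n current domains independently duplicates at rate
    `birth_rate` and is lost at rate `death_rate`; events arrive as
    an exponential race with total rate n * (birth + death).
    """
    n, t = n0, 0.0
    while n > 0:
        total_rate = n * (birth_rate + death_rate)
        t += rng.exponential(1.0 / total_rate)
        if t >= branch_length:
            break
        # Duplication with prob birth/(birth+death), otherwise a loss
        if rng.random() < birth_rate / (birth_rate + death_rate):
            n += 1
        else:
            n -= 1
    return n

# Distribution of domain counts at the end of a branch of length 1.0
counts = [simulate_birth_death(n0=3, branch_length=1.0,
                               birth_rate=0.4, death_rate=0.3)
          for _ in range(5_000)]
print("mean domains at branch end:", np.mean(counts))
```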
380

Segmentation and analysis of vascular networks

Allen, K. E. January 2010 (has links)
From a clinical perspective, retinal vascular segmentation and analysis are important tasks in aiding quantification of vascular disease progression for such prevalent pathologies as diabetic retinopathy, arteriolosclerosis and hypertension. Combined with the emergence of inexpensive digital imaging, retinal fundus images are becoming increasingly available through public databases, fuelling interest in retinal vessel research. Vessel segmentation is a challenging task which needs to fulfil many requirements: the accurate segmentation of both normal and pathological vessels; the extraction of vessels of different sizes, from large high-contrast to small low-contrast; minimal user interaction; low computational requirements; and the potential for application among different imaging modalities. We demonstrate a novel and significant improvement on an emerging stochastic vessel segmentation technique, particle filtering, in terms of improved performance at vascular bifurcations and extensibility. An alternative deterministic approach is also presented in the form of a framework utilising morphological Tramline filtering and non-parametric windows pdf estimation. Results of the deterministic algorithm on retinal images match those of state-of-the-art unsupervised methods in terms of pixel accuracy. In analysing retinal vascular networks, an important initial step is to distinguish between arteries and veins in order to proceed with pathological metrics such as branching angle, diameter, length and arteriole-to-venule diameter ratio. Practical difficulties include the lack of intensity and textural differences between arteries and veins in all but the largest vessels, and the obstruction of vessels and connectivity by low contrast or other vessels. To this end, an innovative Markov chain Monte Carlo Metropolis-Hastings framework is formulated for the separation of vessel trees. It is subsequently applied to both synthetic and retinal image data with promising results.
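A minimal sketch of a Metropolis-Hastings labelling scheme in the spirit of separating vessel trees: propose flipping one segment's artery/vein label and accept by the usual MH ratio over an energy that rewards intensity cues and neighbour agreement. The energy terms and the toy vessel graph are illustrative assumptions, not the thesis's formulation:

```python
import numpy as np

rng = np.random.default_rng(6)

def energy(labels, intensity, edges):
    """Lower is better: labels should match per-segment intensity cues
    and agree with connected neighbours (vessels rarely change type)."""
    data_term = np.sum((labels - intensity) ** 2)
    smooth_term = sum(labels[i] != labels[j] for i, j in edges)
    return data_term + 2.0 * smooth_term

def metropolis_hastings(intensity, edges, n_iters=20_000, temp=1.0):
    labels = rng.integers(0, 2, size=intensity.size)   # 0 = vein, 1 = artery
    for _ in range(n_iters):
        i = rng.integers(labels.size)
        proposal = labels.copy()
        proposal[i] ^= 1                               # flip one label
        delta = (energy(proposal, intensity, edges)
                 - energy(labels, intensity, edges))
        if np.log(rng.random()) < -delta / temp:       # MH acceptance
            labels = proposal
    return labels

# Toy vessel graph: 6 segments, edges encode connectivity,
# intensity in [0,1] where brighter suggests an artery.
intensity = np.array([0.9, 0.8, 0.85, 0.1, 0.2, 0.15])
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (2, 3)]
print(metropolis_hastings(intensity, edges))
```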
