591 |
研究Ferguson-Dirichlet過程和條件分配族相容性之新工具 / New tools for studying the Ferguson-Dirichlet process and compatibility of a family of conditionals 郭錕霖, Kuo, Kun Lin Unknown Date (has links)
單變量c-特徵函數已被證明可處理一些難以使用傳統特徵函數解決的問題,
在本文中,我們首先提出其反演公式,透過此反演公式,我們獲得(1)Dirichlet隨機向量之線性組合的機率密度函數;(2)以一些有趣測度為參數之Ferguson-Dirichlet過程其隨機動差的機率密度函數;(3)Ferguson-Dirichlet過程之隨機泛函的Lebesgue積分表示式。
本文給予對稱分配之多變量c-特徵函數的新性質,透過這些性質,我們證明在任何$n$維球面上之Ferguson-Dirichlet過程其隨機均值是一對稱分配,並且我們亦獲得其確切的機率密度函數,此外,我們將這些結果推廣至任何n維橢球面上。
我們亦探討條件分配相容性的問題,這個問題在機率理論與貝式計算上有其重要性,我們提出其充要條件。當給定相容的條件分配時,我們不但解決相關聯合分配唯一性的問題,而且也提供方法去獲得所有可能的相關聯合分配,我們亦給予檢驗相容性、唯一性及建構機率密度函數的演算法。
透過相容性的相關理論，我們提出完整且清楚地統合性貝氏反演公式理論，並建構可應用於一般測度空間的廣義貝氏反演公式。此外，我們使用廣義貝氏反演公式提供一個配適機率密度函數的演算法，此演算法沒有疊代演算法(如Gibbs取樣法)的收斂問題。 / The univariate c-characteristic function has been shown to be important in cases that are hard to manage using the traditional characteristic function. In this thesis, we first give its inversion formulas. We then use them to obtain (1) the probability density functions (PDFs) of a linear combination of the components of a Dirichlet random vector; (2) the PDFs of random functionals of a Ferguson-Dirichlet process with some interesting parameter measures; (3) a Lebesgue integral expression of any random functional of the Ferguson-Dirichlet process.
New properties of the multivariate c-characteristic function with a spherical distribution are given in this thesis. With them, we show that the random mean of a Ferguson-Dirichlet process over a spherical surface in n dimensions has a spherical distribution on the n-dimensional ball. Moreover, we derive its exact PDF. Furthermore, we generalize this result to any ellipsoidal surface in n-space.
We also study the issue of compatibility for specified conditional distributions. This issue is important in probability theory and Bayesian computations. Several necessary and sufficient conditions for the compatibility are provided. We also address the problem of uniqueness of the associated joint distribution when the given conditionals are compatible. In addition, we provide a method to obtain all possible joint distributions that have the given compatible conditionals. Algorithms for checking the compatibility and the uniqueness, and for constructing all associated densities are also given.
Through the related compatibility theorems, we provide a complete and clean unified theory of the inverse Bayes formula (IBF) and construct a generalized IBF (GIBF) that is applicable in more general measurable spaces. In addition, using the GIBF, we provide a marginal density fitting algorithm that avoids the convergence problems of iterative algorithms such as the Gibbs sampler.
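Note: to make the compatibility question above concrete, the sketch below checks two fully supported finite conditional matrices with the rank-one ratio criterion and, when they are compatible, reconstructs the (here unique) joint distribution. It is an illustrative toy in Python, not the thesis's algorithm, and the example data are invented.

```python
import numpy as np

def joint_from_conditionals(A, B, tol=1e-9):
    """Compatibility check for fully supported finite conditionals.

    A[i, j] plays the role of P(X = i | Y = j)  (columns sum to 1) and
    B[i, j] plays the role of P(Y = j | X = i)  (rows sum to 1).
    Any joint p with these conditionals satisfies
        A[i, j] / B[i, j] = P(X = i) / P(Y = j),
    so the ratio matrix must have rank one.  If it does, a column of
    ratios renormalises into the X-marginal and the joint follows.
    """
    R = A / B                                   # ratio matrix r_i / c_j
    s = np.linalg.svd(R, compute_uv=False)
    if s[1] > tol * s[0]:                       # effective rank > 1
        return None                             # incompatible conditionals
    px = R[:, 0] / R[:, 0].sum()                # X-marginal, up to scale
    joint = B * px[:, None]                     # p(i, j) = P(Y=j | X=i) P(X=i)
    return joint / joint.sum()

# toy check: conditionals computed from a known joint must be compatible
rng = np.random.default_rng(0)
P = rng.random((3, 4)); P /= P.sum()
A = P / P.sum(axis=0, keepdims=True)            # P(X | Y)
B = P / P.sum(axis=1, keepdims=True)            # P(Y | X)
print(np.allclose(joint_from_conditionals(A, B), P))   # True
```

With incomplete or disconnected supports the joint need not be unique, which is exactly the uniqueness issue treated in the thesis.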
|
592 |
應用共變異矩陣描述子及半監督式學習於行人偵測 / Semi-supervised learning for pedestrian detection with covariance matrix feature 黃靈威, Huang, Ling Wei Unknown Date (has links)
行人偵測為物件偵測領域中一個極具挑戰性的議題。其主要問題在於人體姿勢以及衣著服飾的多變性,加之以光源照射狀況迥異,大幅增加了辨識的困難度。吾人在本論文中提出利用共變異矩陣描述子及結合單純貝氏分類器與級聯支持向量機的線上學習辨識器,以增進行人辨識之正確率與重現率。
實驗結果顯示，本論文所提出之線上學習策略在某些辨識狀況較差之資料集中能有效提升正確率與重現率達百分之十四。此外，即便於相同之初始訓練條件下，在USC Pedestrian Detection Test Set、 INRIA Person dataset 及 Penn-Fudan Database for Pedestrian Detection and Segmentation三個資料集中，本研究之正確率與重現率亦較HOG搭配AdaBoost之行人辨識方式為優。 / Pedestrian detection is an important yet challenging problem in object classification due to flexible body poses, loose clothing and ever-changing illumination. In this thesis, we employ the covariance matrix feature and propose an on-line learning classifier, combining a naïve Bayes classifier with a cascade support vector machine (SVM), to improve the precision and recall of pedestrian detection in still images.
Experimental results show that our on-line learning strategy improves precision and recall by about 14% on some of the more difficult datasets. Furthermore, even under the same initial training conditions, our method outperforms HOG + AdaBoost on the USC Pedestrian Detection Test Set, the INRIA Person dataset and the Penn-Fudan Database for Pedestrian Detection and Segmentation.
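Note: for readers unfamiliar with the covariance matrix descriptor, the sketch below computes a typical region covariance; the five feature channels (pixel coordinates, intensity, absolute gradients) are assumed for illustration, since the abstract does not list the channels actually used.

```python
import numpy as np

def region_covariance(gray, x0, y0, w, h):
    """Covariance descriptor of an image patch (illustrative sketch).

    Every pixel in the patch contributes a feature vector
        f = [x, y, I(x, y), |Ix|, |Iy|],
    and the patch is summarised by the 5 x 5 covariance of these vectors.
    `gray` is a 2-D float array holding a grayscale image.
    """
    patch = gray[y0:y0 + h, x0:x0 + w].astype(float)
    ys, xs = np.mgrid[0:h, 0:w]
    iy, ix = np.gradient(patch)                    # simple image gradients
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(ix).ravel(), np.abs(iy).ravel()])
    return np.cov(feats)                           # symmetric 5 x 5 descriptor

# a detector would compare such descriptors of candidate windows before
# passing them to the classifier stage
print(region_covariance(np.random.rand(128, 64), 0, 0, 64, 128).shape)  # (5, 5)
```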
|
593 |
Textual data mining applications for industrial knowledge management solutions Ur-Rahman, Nadeem January 2010 (has links)
In recent years knowledge has become an important resource for enhancing business, and many activities are required to manage knowledge resources well and help companies remain competitive within industrial environments. The data available in most industrial setups is complex in nature, and multiple data formats may be generated to track the progress of different projects, whether related to developing new products or to providing better services to customers. Knowledge discovery from such databases requires considerable effort; data mining techniques serve this purpose for structured data formats, but where the data is semi-structured or unstructured the combined efforts of data and text mining technologies may be needed to produce useful results. This thesis focuses on discovering knowledge from semi-structured or unstructured data through the application of textual data mining techniques that automate the classification of textual information into two categories or classes, which can then be used to help manage the knowledge available in multiple data formats. Applications of different data mining techniques for discovering valuable information and knowledge in the manufacturing and construction industries are explored as part of a literature review, and the application of text mining techniques to semi-structured and unstructured data is discussed in detail. A novel integration of data and text mining tools is proposed in the form of a framework in which knowledge discovery and its refinement are performed through clustering and Apriori association rule mining algorithms. Finally, the hypothesis of achieving better classification accuracy is examined by applying the methodology to case study data available in the form of Post Project Review (PPR) reports. The process of discovering useful knowledge, interpreting it and utilising it has been automated to classify the textual data into two classes.
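Note: as a concrete illustration of one building block named above, the snippet below runs a minimal Apriori-style frequent-itemset pass over keyword sets extracted from reports. The keywords and the support threshold are invented for this note; the thesis embeds association-rule mining in a larger framework together with clustering.

```python
def apriori(transactions, min_support):
    """Minimal Apriori sketch: frequent keyword sets in tokenised reports.

    `transactions` is a list of keyword sets (e.g. one per post-project
    review); returns a dict mapping each frequent itemset to its support.
    """
    n = len(transactions)
    level = {frozenset([item]) for t in transactions for item in t}
    frequent, k = {}, 1
    while level:
        counts = {c: sum(c <= t for t in transactions) for c in level}
        kept = {c: v / n for c, v in counts.items() if v / n >= min_support}
        frequent.update(kept)
        k += 1
        # join frequent (k-1)-itemsets that overlap in k-2 items
        level = {a | b for a in kept for b in kept if len(a | b) == k}
    return frequent

reviews = [{"delay", "supplier", "cost"}, {"delay", "cost"},
           {"design", "cost"}, {"delay", "supplier"}]
print(apriori(reviews, min_support=0.5))
```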
|
594 |
The memory-based paradigm for vision-based robot localization Jüngel, Matthias 04 October 2012 (has links)
Für mobile autonome Roboter ist ein solides Modell der Umwelt eine wichtige Voraussetzung um die richtigen Entscheidungen zu treffen. Die gängigen existierenden Verfahren zur Weltmodellierung basieren auf dem Bayes-Filter und verarbeiten Informationen mit Hidden Markov Modellen. Dabei wird der geschätzte Zustand der Welt (Belief) iterativ aktualisiert, indem abwechselnd Sensordaten und das Wissen über die ausgeführten Aktionen des Roboters integriert werden; alle Informationen aus der Vergangenheit sind im Belief integriert. Wenn Sensordaten nur einen geringen Informationsgehalt haben, wie zum Beispiel Peilungsmessungen, kommen sowohl parametrische Filter (z.B. Kalman-Filter) als auch nicht-parametrische Filter (z.B. Partikel-Filter) schnell an ihre Grenzen. Das Problem ist dabei die Repräsentation des Beliefs. Es kann zum Beispiel sein, dass die gaußschen Modelle beim Kalman-Filter nicht ausreichen oder Partikel-Filter so viele Partikel benötigen, dass die Rechendauer zu groß wird. In dieser Dissertation stelle ich ein neues Verfahren zur Weltmodellierung vor, das Informationen nicht sofort integriert, sondern erst bei Bedarf kombiniert. Das Verfahren wird exemplarisch auf verschiedene Anwendungsfälle aus dem RoboCup (autonome Roboter spielen Fußball) angewendet. Es wird gezeigt, wie vierbeinige und humanoide Roboter ihre Position und Ausrichtung auf einem Spielfeld sehr präzise bestimmen können. Grundlage für die Lokalisierung sind bildbasierte Peilungsmessungen zu Objekten. Für die Roboter-Ausrichtung sind dabei Feldlinien eine wichtige Informationsquelle. In dieser Dissertation wird ein Verfahren zur Erkennung von Feldlinien in Kamerabildern vorgestellt, das ohne Kalibrierung auskommt und sehr gute Resultate liefert, auch wenn es starke Schatten und Verdeckungen im Bild gibt. / For autonomous mobile robots, a solid world model is an important prerequisite for decision making. Current state estimation techniques are based on Hidden Markov Models and Bayesian filtering. These methods estimate the state of the world (belief) in an iterative manner. Data obtained from perceptions and actions is accumulated in the belief which can be represented parametrically (like in Kalman filters) or non-parametrically (like in particle filters). When the sensor's information gain is low, as in the case of bearing-only measurements, the representation of the belief can be challenging. For instance, a Kalman filter's Gaussian models might not be sufficient or a particle filter might need an unreasonable number of particles. In this thesis, I introduce a new state estimation method which doesn't accumulate information in a belief. Instead, perceptions and actions are stored in a memory. Based on this, the state is calculated when needed. The system has a particular advantage when processing sparse information. This thesis presents how the memory-based technique can be applied to examples from RoboCup (autonomous robots play soccer). In experiments, it is shown how four-legged and humanoid robots can localize themselves very precisely on a soccer field. The localization is based on bearings to objects obtained from digital images. This thesis presents a new technique to recognize field lines which doesn't need any pre-run calibration and also works when the field lines are partly concealed and affected by shadows.
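Note: the following sketch illustrates the core "combine stored observations on demand" idea in its simplest form: each remembered bearing to a known landmark constrains the robot to a line, and a position follows from a least-squares intersection. It is a static toy (known data association, no odometry), not the algorithm developed in the dissertation.

```python
import numpy as np

def locate_from_bearings(landmarks, bearings):
    """Least-squares position from remembered bearing-only measurements.

    `landmarks` are known 2-D points and `bearings` the absolute angles
    at which they were seen from the unknown robot position.  Each
    bearing constrains the robot to a line through the landmark; the
    returned position is the least-squares intersection of those lines.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    normals = np.stack([-np.sin(bearings), np.cos(bearings)], axis=1)
    rhs = np.einsum('ij,ij->i', normals, landmarks)   # normal . landmark
    pos, *_ = np.linalg.lstsq(normals, rhs, rcond=None)
    return pos

# robot at (1, 2); two landmarks observed at their true bearings
landmarks = [(4.0, 2.0), (1.0, 5.0)]
true_pose = np.array([1.0, 2.0])
bearings = [np.arctan2(l[1] - true_pose[1], l[0] - true_pose[0]) for l in landmarks]
print(locate_from_bearings(landmarks, bearings))      # ~ [1. 2.]
```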
|
595 |
Avaliação da distorção harmônica total de tensão no ponto de acoplamento comum industrial usando o processo KDD baseado em medição / Evaluation of total voltage harmonic distortion at the industrial point of common coupling using the measurement-based KDD process OLIVEIRA, Edson Farias de 27 March 2018 (has links)
In recent decades, the manufacturing industry has introduced increasingly fast and energy-efficient products for residential, commercial and industrial use. Because of their non-linearity, however, these loads have contributed significantly to rising voltage harmonic distortion levels, driven by the currents they draw, as reflected in the Power Quality indicators of the Brazilian electricity distribution system. The steady growth of distortion levels, especially at the point of common coupling (PCC), is a major concern for utilities and consumers alike, since it degrades power quality both in the supply and in consumer installations, and it has motivated numerous studies on the subject. To contribute to this topic, this thesis proposes a procedure based on the Knowledge Discovery in Databases (KDD) process to identify the loads that most affect voltage harmonic distortion at the PCC. The proposed methodology applies computational intelligence and data mining techniques to data collected by power quality meters installed at the main loads and at the consumer's PCC, and thereby establishes the correlation between the harmonic currents of the non-linear loads and the harmonic distortion at the PCC. The procedure comprises analysing the loads and the layout of the site where the methodology is applied, selecting and installing the power quality meters, and carrying out the complete KDD process, including data collection, selection, cleaning, integration, transformation and reduction, mining, interpretation and evaluation. Decision Tree and Naïve Bayes data mining techniques were applied, and several algorithms were tested to find the one with the most significant results for this type of analysis. The results show that the KDD process is applicable to the analysis of total voltage harmonic distortion at the PCC; the thesis contributes a complete description of each step of the process, evaluated with different data-balancing ratios, training/test splits and analysis scenarios across different shifts, with good performance that allows its application to other types of consumers and distribution utilities. For the collected data set, and across the scenarios considered, the most impactful load was the seventh current harmonic of the air-conditioning units. / Nas últimas décadas, a indústria de transformação, tem proporcionado a introdução de produtos cada vez mais rápidos e energeticamente mais eficientes para utilização residencial, comercial e industrial, no entanto essas cargas devido à sua não linearidade têm contribuído significativamente para o aumento dos níveis de distorção harmônica de tensão em decorrência da corrente conforme indicadores de Qualidade de Energia Elétrica do sistema brasileiro de distribuição de energia elétrico.
O constante aumento dos níveis das distorções, principalmente no ponto de acoplamento comum, tem gerado nos dias atuais muita preocupação nas concessionárias e nos consumidores de energia elétrica, devido aos problemas que causam como perdas da qualidade de energia elétrica no fornecimento e nas instalações dos consumidores e isso têm proporcionado diversos estudos sobre o assunto. Com o intuito de contribuir com o assunto, a presente tese propõe um procedimento com base no processo Knowledge Discovery in Database - KDD para identificação das cargas impactantes das distorções harmônicas de tensão no ponto de acoplamento comum. A metodologia proposta utiliza técnicas de Inteligência computacional e mineração de dados para análise dos dados coletados por medidores de qualidade de energia instalados nas cargas principais e no ponto de acoplamento comum do consumidor e consequentemente estabelecer a correlação entre as correntes harmônicas das cargas não lineares com a distorção harmônica no ponto de acoplamento comum. O processo proposto consiste na análise das cargas e do layout do local onde a metodologia será aplicada, na escolha e na instalação dos medidores de QEE e na aplicação do processo KDD completo, incluindo os procedimentos de coleta, seleção, limpeza, integração, transformação e redução, mineração, interpretação, e avaliação dos dados. Com o propósito de contribuição foram aplicadas as técnicas de mineração de dados Árvore de Decisão e Naïve Bayes e foram testados diversos algoritmos em busca do algoritmo com resultados mais significativos para esse tipo de análise conforme apresentado nos resultados. Os resultados obtidos evidenciaram que o processo KDD possui aplicabilidade na análise da Distorção Harmônica Total de Tensão no Ponto de Acoplamento Comum e deixa como contribuição a descrição completa de cada etapa desse processo, e para isso foram comparados com diferentes índices de balanceamento de dados, treinamento e teste e diferentes cenários em diferentes turnos de análise e apresentaram bom desempenho possibilitando sua aplicação em outros tipos de consumidores e empresas de distribuição de energia. Evidencia também, na aplicação escolhida e utilizando diferentes cenários, que a carga mais impactante foi a sétima harmônica de corrente das centrais de ar condicionado para o conjunto de dados coletados.
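Note (illustration only): the sketch below shows the flavour of the data-mining step described in the abstract, training a shallow decision tree on synthetic per-load harmonic-current features and reading off feature importances. The load names, the synthetic data (deliberately built so that the seventh-harmonic column dominates, echoing the abstract's finding) and the threshold are all invented; none of it comes from the thesis's meters or scenarios.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.random((500, 3))                        # columns: I5_motors, I7_hvac, I3_lighting
thd = 0.5 * X[:, 1] + 0.1 * X[:, 0] + 0.05 * rng.random(500)
y = (thd > np.median(thd)).astype(int)          # 1 = voltage THD above the limit

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
for name, importance in zip(["I5_motors", "I7_hvac", "I3_lighting"],
                            clf.feature_importances_):
    print(f"{name}: {importance:.2f}")          # ranks the most impactful load
```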
|
596 |
Identifying exoplanets and unmasking false positives with NGTS Günther, Maximilian Norbert January 2018 (has links)
In my PhD, I advanced the scientific exploration of the Next Generation Transit Survey (NGTS), a ground-based wide-field survey operating at ESO’s Paranal Observatory in Chile since 2016. My original contribution to knowledge is the development of novel methods to 1) estimate NGTS’ yield of planets and false positives; 2) disentangle planets from false positives; and 3) accurately characterise planets. If an exoplanet passes (transits) in front of its host star, we can measure a periodic decrease in brightness. The study of transiting exoplanets gives insight into their size, formation, bulk composition and atmospheric properties. Transit surveys are limited by their ability to identify false positives, which can mimic planets and out-number them by a hundredfold. First, I designed a novel yield simulator to optimise NGTS’ observing strategy and identification of false positives (published in Günther et al., 2017a). This showed that NGTS’ prime targets, Neptune- and Earth-sized signals, are frequently mimicked by blended eclipsing binaries, allowing me to quantify and prepare strategies for candidate vetting and follow-up. Second, I developed a centroiding algorithm for NGTS, achieving a precision of 0.25 milli-pixel in a CCD image (published in Günther et al., 2017b). With this, one can measure a shift of light during an eclipse, readily identifying unresolved blended objects. Third, I innovated a joint Bayesian fitting framework for photometry, centroids, and radial velocity cross-correlation function profiles. This allows to disentangle which object (target or blend) is causing the signal and to characterise the system. My method has already unmasked numerous false positives. Most importantly, I confirmed that a signal which was almost erroneously rejected, is in fact an exoplanet (published in Günther et al., 2018). The presented achievements minimise the contamination with blended false positives in NGTS candidates by 80%, and show a new approach for unmasking hidden exoplanets. This research enhanced the success of NGTS, and can provide guidance for future missions.
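Note: the centroid-vetting idea mentioned in the abstract can be illustrated with a flux-weighted centroid on a synthetic stamp: when an unresolved neighbour goes into eclipse, the centre of light shifts even though the target star is constant. The point-spread functions, fluxes and stamp size below are invented and say nothing about NGTS's actual pipeline or its quoted precision.

```python
import numpy as np

def flux_centroid(img):
    """Flux-weighted centroid (x, y) of a background-subtracted stamp."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def gauss(shape, x0, y0, peak, sigma=1.5):
    """Toy Gaussian point-spread function with peak amplitude `peak`."""
    ys, xs = np.indices(shape)
    return peak * np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))

# target star at x=7 blended with a fainter neighbour at x=10
stamp = lambda neighbour_peak: (gauss((15, 15), 7, 7, 1.0)
                                + gauss((15, 15), 10, 7, neighbour_peak))
out_of_eclipse = flux_centroid(stamp(0.10))
in_eclipse = flux_centroid(stamp(0.07))       # neighbour dims by 30%
print(out_of_eclipse[0] - in_eclipse[0])      # positive: centroid shifts toward the target
```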
|
597 |
Monte Carlo Simulation of Boundary Crossing Probabilities with Applications to Finance and Statistics Gür, Sercan 04 1900 (has links) (PDF)
This dissertation is cumulative and encompasses three self-contained research articles. These essays share one common theme: the probability that a given stochastic process crosses a certain boundary function, namely the boundary crossing probability, and the related financial and statistical applications.
In the first paper, we propose a new Monte Carlo method to price a type of barrier option called the Parisian option by simulating the first and last hitting time of the barrier. This research work aims at filling the gap in the literature on pricing of Parisian options with general curved boundaries while providing accurate results compared to the other Monte Carlo techniques available in the literature. Some numerical examples are presented for illustration.
The second paper proposes a Monte Carlo method for analyzing the sensitivity of boundary crossing probabilities of the Brownian motion to small changes of the boundary. Only for few boundaries the sensitivities can be computed in closed form. We propose an efficient Monte Carlo procedure for general boundaries and provide upper bounds for the bias and the simulation error.
The third paper focuses on the inverse first-passage-times. The inverse first-passage-time problem deals with finding the boundary given the distribution of hitting times. Instead of a known distribution, we are given a sample of first hitting times and we propose and analyze estimators of the boundary. Firstly, we consider the empirical estimator and prove that it is strongly consistent and derive (an upper bound of) its asymptotic convergence rate. Secondly, we provide a Bayes estimator based on an approximate likelihood function. Monte Carlo experiments suggest that the empirical estimator is simple, computationally manageable and outperforms the alternative procedure considered in this paper.
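Note: the sketch below shows the basic Monte Carlo ingredient behind such boundary-crossing computations: simulate the Brownian skeleton on a time grid and correct for crossings between grid points with the Brownian-bridge crossing probability (exact when the boundary is linear between grid points). The boundary, grid and sample sizes are illustrative; the code is not taken from the dissertation.

```python
import numpy as np

def bcp_mc(boundary, T=1.0, n_steps=200, n_paths=100_000, seed=0):
    """Estimate P( W_t >= boundary(t) for some t in [0, T] ) for standard
    Brownian motion W, with a Brownian-bridge correction for crossings
    that happen between the simulated grid points."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    b = boundary(np.linspace(0.0, T, n_steps + 1))
    w = np.zeros(n_paths)
    survive = np.ones(n_paths)                    # P(no crossing | skeleton)
    for k in range(n_steps):
        w_new = w + np.sqrt(dt) * rng.standard_normal(n_paths)
        hit = (w >= b[k]) | (w_new >= b[k + 1])
        d0 = np.maximum(b[k] - w, 0.0)            # distances to the boundary
        d1 = np.maximum(b[k + 1] - w_new, 0.0)
        p_bridge = np.exp(-2.0 * d0 * d1 / dt)    # bridge crossing probability
        survive *= np.where(hit, 0.0, 1.0 - p_bridge)
        w = w_new
    return 1.0 - survive.mean()

# flat boundary at 1: exact value is 2 * (1 - Phi(1)) ~ 0.3173
print(bcp_mc(lambda t: np.ones_like(t)))
```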
|
598 |
Apport des Systèmes Multi-Agent et de la logique floue pour l'assistance au tuteur dans une communauté d'apprentissage en ligne / Contribution of Multi-Agent Systems and Fuzzy logic to support tutors in Learning Communities Chaabi, Youness 11 July 2016 (has links)
La place importante du tutorat dans la réussite d'un dispositif de formation en ligne a ouvert un nouvel axe de recherche dans le domaine des EIAH (Environnements Informatiques pour l'Apprentissage Humain). Nos travaux se situent plus particulièrement dans le champ de recherches des ACAO. Dans un contexte collaboratif, le tutorat et les outils « d'awareness » constituent des solutions admises pour faire face à l'isolement qui très souvent, mène à l'abandon de l'apprenant. Ainsi, du fait des difficultés rencontrées par le tuteur pour assurer un encadrement et un suivi appropriés à partir des traces de communication (en quantités conséquentes) laissées par les apprenants, nous proposons une approche multi-agents pour analyser les conversations textuelles asynchrones entre apprenants. Ces interactions sont révélatrices de comportements sociaux-animateur, indépendant, etc... qu'il nous paraît important de pouvoir repérer lors d'une pédagogie de projet pour permettre aux apprenants de situer leurs travaux par rapport aux autres apprenants et situer leur groupe par rapport aux autres groupes d'une part, et d'autre part permettre au tuteur d'accompagner les apprenants dans leur processus d'apprentissage, repérer et soutenir les individus en difficulté pour leur éviter l'abandon. Ces indicateurs seront déduits à partir des grands volumes d'échanges textuels entre apprenants. L'approche a été ensuite testée sur une situation réelle, qui a montré une parfaite concordance entre les résultats observés par des tuteurs humains et ceux déterminés automatiquement par notre système. / The growing importance of online training has put emphasis on the role of remote tutoring. A whole new area of research, dedicated to environments for human learning (EHL), is emerging. We are concerned with this field; more specifically, we focus on the monitoring of learners. The instrumentation and observation of learners' activities, by exploiting interaction traces in the EHL and developing indicators, can help tutors monitor learners' activities and support them in their collaborative learning process. Indeed, in a learning situation, the teacher needs to observe the behavior of learners in order to build an idea of their involvement, preferences and learning styles, so that the proposed activities can be adapted. As part of the automatic analysis of collaborative learners' activities, we describe a multi-agent approach for supporting learning activities in a Virtual Learning Environment context. In order to assist teachers who monitor learning processes, viewed as a specific type of collaboration, the proposed system estimates a behavioral (sociological) profile for each student. This estimation is based on automatic analysis of the students' asynchronous textual conversations. The determined profiles are proposed to the teacher and may provide assistance during tutoring tasks. The system was experimented with students of the master "software quality" at Ibn Tofail University. The results obtained show that the proposed approach is effective and gives satisfactory results.
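Note: purely to illustrate how fuzzy membership functions can turn interaction indicators into soft behavioural labels, the toy below scores a learner on two invented indicators. The membership functions, thresholds and labels are assumptions made for this note, not the indicators or profiles derived in the thesis.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function over [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def profile(messages_per_week, replies_ratio):
    """Toy fuzzy profiling of a learner from two interaction indicators."""
    low_activity = tri(messages_per_week, -1, 0, 4)
    high_activity = tri(messages_per_week, 2, 8, 100)
    animator = min(high_activity, replies_ratio)          # active and responsive
    independent = max(low_activity, 1 - replies_ratio)    # quiet or self-contained
    return {"animator": round(animator, 2), "independent": round(independent, 2)}

print(profile(messages_per_week=6, replies_ratio=0.7))    # {'animator': 0.67, 'independent': 0.3}
```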
|
599 |
Verallgemeinerte Maximum-Likelihood-Methoden und der selbstinformative Grenzwert / Generalized maximum likelihood methods and the selfinformative limit Johannes, Jan 16 December 2002 (has links)
Es sei X eine Zufallsvariable mit unbekannter Verteilung P. Zu den Hauptaufgaben der Mathematischen Statistik zählt die Konstruktion von Schätzungen für einen abgeleiteten Parameter theta(P) mit Hilfe einer Beobachtung X=x. Im Fall einer dominierten Verteilungsfamilie ist es möglich, das Maximum-Likelihood-Prinzip (MLP) anzuwenden. Eine Alternative dazu liefert der Bayessche Zugang. Insbesondere erweist sich unter Regularitätsbedingungen, dass die Maximum-Likelihood-Schätzung (MLS) dem Grenzwert einer Folge von Bayesschen Schätzungen (BSen) entspricht. Eine BS kann aber auch im Fall einer nicht dominierten Verteilungsfamilie betrachtet werden, was als Ansatzpunkt zur Erweiterung des MLPs genutzt werden kann. Weiterhin werden zwei Ansätze einer verallgemeinerten MLS (vMLS) von Kiefer und Wolfowitz sowie von Gill vorgestellt. Basierend auf diesen bekannten Ergebnissen definieren wir einen selbstinformativen Grenzwert und einen selbstinformativen a posteriori Träger. Im Spezialfall einer dominierten Verteilungsfamilie geben wir hinreichende Bedingungen an, unter denen die Menge der MLSen einem selbstinformativen a posteriori Träger oder, falls die MLS eindeutig ist, einem selbstinformativen Grenzwert entspricht. Das Ergebnis für den selbstinformativen a posteriori Träger wird dann auf ein allgemeineres Modell ohne dominierte Verteilungsfamilie erweitert. Insbesondere wird gezeigt, dass die Menge der vMLSen nach Kiefer und Wolfowitz ein selbstinformativer a posteriori Träger ist. Weiterhin wird der selbstinformative Grenzwert bzw. a posteriori Träger in einem Modell mit nicht identifizierbarem Parameter bestimmt. Im Mittelpunkt dieser Arbeit steht ein multivariates semiparametrisches lineares Modell. Zunächst weisen wir jedoch nach, dass in einem rein nichtparametrischen Modell unter der a priori Annahme eines Dirichlet Prozesses der selbstinformative Grenzwert existiert und mit der vMLS nach Kiefer und Wolfowitz sowie der nach Gill übereinstimmt. Anschließend untersuchen wir das multivariate semiparametrische lineare Modell und bestimmen die vMLSen nach Kiefer und Wolfowitz bzw. nach Gill sowie den selbstinformativen Grenzwert unter der a priori Annahme eines Dirichlet Prozesses und einer Normal-Wishart-Verteilung. Im Allgemeinen sind die so erhaltenen Schätzungen verschieden. Abschließend gehen wir dann auf den Spezialfall eines semiparametrischen Lokationsmodells ein, in dem die vMLSen nach Kiefer und Wolfowitz bzw. nach Gill und der selbstinformative Grenzwert wieder identisch sind. / We assume to observe a random variable X with unknown probability distribution. One major goal of mathematical statistics is the estimation of a parameter theta(P) based on an observation X=x. Under the assumption that P belongs to a dominated family of probability distributions, we can apply the maximum likelihood principle (MLP). Alternatively, the Bayes approach can be used to estimate the parameter. Under some regularity conditions it turns out that the maximum likelihood estimate (MLE) is the limit of a sequence of Bayes estimates (BE's). Note that BE's can even be defined in situations where no dominating measure exists. This allows us to derive an extension of the MLP using the Bayes approach. Moreover, two versions of a generalised MLE (gMLE) are presented, which have been introduced by Kiefer and Wolfowitz and Gill, respectively. Based on the known results, we define a selfinformative limit and a posterior carrier. 
In the special case of a model with dominated distribution family, we state sufficient conditions under which the set of MLE's is a selfinformative posterior carrier or, in the case of a unique MLE, a selfinformative limit. The result for the posterior carrier is extended to a more general model without dominated distributions. In particular we show that the set of gMLE's of Kiefer and Wolfowitz is a posterior carrier. Furthermore we calculate the selfinformative limit and posterior carrier, respectively, in the case of a model with possibly nonidentifiable parameters. In this thesis we focus on a multivariate semiparametric linear model. At first we show that, in the case of a nonparametric model, the selfinformative limit coincides with the gMLE of Kiefer and Wolfowitz as well as that of Gill, if a Dirichlet process serves as prior. Then we investigate both versions of gMLE's and the selfinformative limit in the multivariate semiparametric linear model, where the prior for the latter estimator is given by a Dirichlet process and a normal-Wishart distribution. In general the estimators are not identical. However, in the special case of a location model we find again that the three considered estimates coincide.
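Note: for orientation, the standard Dirichlet-process conjugacy identity below (textbook material, not a derivation from the thesis) shows the kind of "Bayes estimates converging to a maximum-likelihood-type estimate" limit that the selfinformative limit formalises.

```latex
% With prior F ~ DP(\alpha F_0) and i.i.d. observations X_1,\dots,X_n,
\[
  \mathbb{E}\bigl[F \mid X_1,\dots,X_n\bigr]
  \;=\; \frac{\alpha}{\alpha+n}\,F_0 \;+\; \frac{n}{\alpha+n}\,\widehat{F}_n,
  \qquad
  \widehat{F}_n(t) \;=\; \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{X_i \le t\},
\]
% so letting the prior mass \alpha \to 0 drives the Bayes estimate to the
% empirical distribution \widehat{F}_n, the nonparametric (generalised)
% maximum likelihood estimate.
```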
|
600 |
當 k>v 之貝氏 A 式最適設計 / Bayes A-Optimal Designs for Comparing Test Treatments with a Control When k>v 楊玉韻, Yang, Yu Yun Unknown Date (has links)
在工業、農業、或醫藥界的實驗中，經常必須拿數個不同的試驗處理(test treatments)和一個已使用過的對照處理(control treatment)比較。所謂的試驗處理可能是數組新的儀器、不同配方的新藥、或不同成份的肥料等。以實驗新藥為例，研藥者想決定是否能以新藥取代原來所使用的藥，故對v種新藥與原藥做比較，評估其藥效之差異。為了降低實驗中不必要的誤差以增加其準確性，集區設計成為實驗者常用的設計方法之一；又因A式最適設計是我們欲估計的對照處理效果(effect)與試驗處理效果之差異之估計值最小的設計，基於此良好的統計特性，我們選擇A式最適性為評判根據。古典的A式最適性並未將對照處理與試驗處理所具備的先前資訊(prior information)加以考慮，以上例而言，我們不可能對原來使用的藥一無所知，經由過去的實驗或臨床的反應，研藥者必已對其藥性有某種程度的了解，直觀上，這種過去經驗的累積，影響到實驗配置上，可能使對照處理的實驗次數減少，相對地可對試驗處理多做實驗，設計遂更具意義。因而本文考慮在k>v的情形下之貝式最適集區設計，對先前分配施以某種限制，依據準確設計理論(exact design theory)，推導單項異種消除模型(one-way elimination of heterogeneity model)之下的貝氏A式最適設計與Γ-minimax最適設計，使Majumdar(1992)的結果能適用於完全集區設計。此種設計對先前分配具有強韌性，即當先前分配有所偏誤，且其誤差在某一範圍內時，此設計仍為最適設計或仍可維持所謂的高效度(high efficiency)。本文將列舉許多實例以說明此一特性。
We consider the problem of comparing a set of v test treatments simultaneously with a control treatment when k>v. Following the work of Majumdar (1992), we use exact design theory to derive Bayes A-optimal designs and optimal Γ-minimax designs for the one-way elimination of heterogeneity model. These designs have the same properties as Bayes A-optimal incomplete block designs. We also provide several examples of robust optimal designs and highly efficient designs.
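Note: for readers outside design theory, the classical (non-Bayesian) A-optimality criterion for test-versus-control comparisons is recalled below as background notation; the Bayes and Γ-minimax versions studied in the thesis replace these variances with the corresponding posterior or worst-case prior risks.

```latex
% A design d* in the class D of block designs with block size k > v is
% A-optimal for comparing v test treatments with a control if
\[
  d^{*} \;=\; \arg\min_{d \in \mathcal{D}} \;\sum_{i=1}^{v}
  \operatorname{Var}_{d}\!\bigl(\widehat{\tau}_{i} - \widehat{\tau}_{0}\bigr),
\]
% where \tau_0 is the control effect and \tau_1, \dots, \tau_v are the
% test-treatment effects.
```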
|