181

新聞記者的認知策略之研究 / The Cognitive Strategy of Journalists: Hypothesis-testing

章倩萍, Chang, Chien Ping Unknown Date (has links)
This study examines whether journalists, as "intuitive scientists," hold hypotheses when reporting the news; if so, what those hypotheses contain; and how the hypotheses shape journalists' interviewing and reporting. When journalists perceive the world and make sense of society, they are not blank slates that passively accept every incoming message or stimulus. According to cognitive psychology, journalists interpret incoming stimuli selectively and actively, building on their prior knowledge structures. These knowledge structures operate like theories: they can lead journalists to form directional expectations and hypotheses about incoming information, so that the whole process of information processing resembles the hypothesis-testing process of scientific research. The study conducted two case studies, on inter-party caucus negotiations in the Legislative Yuan and on the Taiwan-US intellectual property rights consultations, interviewing and observing eight reporters. The two case studies ran for three and four days respectively. On the first day, the reporters were interviewed to learn whether they held hypotheses about the event to be covered and how they planned to cover the story (planning); on the following day or two, the reporters' interviewing and writing were observed and recorded to see whether their hypotheses influenced that work; on the final day, follow-up interviews were conducted and compared with the earlier interviews and observation records to identify discrepancies and to understand how and why reporting plans changed, how hypotheses were tested, and whether hypotheses were replaced. The analysis found that before covering either event, the journalists held explicit category, theme, outcome, and impact hypotheses, and that these hypotheses influenced the direction and course of their interviewing and reporting, chiefly in the following ways: 1. The scope and direction of the journalists' reporting plans stayed within the bounds of their hypotheses. 2. Hypotheses influenced the interviewing work: (a) the sources the journalists chose were mostly related to their hypotheses; (b) the questions they asked were mostly related to their hypotheses, and both the manner and the content of the questioning tended to support the original hypotheses. 3. Hypotheses influenced the writing process: (a) the information the journalists chose to include in their stories was broadly consistent with their hypotheses; (b) the direction in which they interpreted that information also broadly matched their original hypotheses.
182

Application of random matrix theory to future wireless flexible networks.

Couillet, Romain 12 November 2010 (has links) (PDF)
Future cognitive radio networks are expected to be a disruptive technological advance in the currently saturated field of wireless communications. The idea behind cognitive radios is to think of the wireless channels as a pool of communication resources, which can be accessed on demand by a primary licensed network or opportunistically preempted (or overlaid) by a secondary network with lower access priority. From a physical-layer point of view, the primary network is ideally oblivious of the existence of co-localized secondary networks. The latter are therefore required to autonomously explore the air in search of resource left-overs, and then to optimally exploit the available resources. The exploration and exploitation procedures, which involve multiple interacting agents, must be highly reliable, fast, and efficient. The objective of the thesis is to model, analyse, and propose computationally efficient and close-to-optimal solutions for these operations. Regarding the exploration phase, we first resort to the maximum entropy principle to derive communication models with many unknowns, from which we derive the optimal multi-source multi-sensor Neyman-Pearson signal-sensing procedure. The latter allows a secondary network to detect the presence of spectral left-overs. The computational complexity of the optimal approach, however, calls for simpler techniques, which are recollected and discussed. We then extend the signal-sensing approach to the more advanced problem of blind user localization, which provides further valuable information for overlaying occupied spectral resources. The second part of the thesis is dedicated to the exploitation phase, that is, the optimal sharing of available resources. To this end, we derive an (asymptotically accurate) approximated expression for the uplink ergodic sum rate of a multi-antenna multiple-access channel and propose solutions for cognitive radios to adapt rapidly to the evolution of the primary network at a minimum feedback cost for the secondary networks.
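The signal sensing described above can be illustrated with a minimal numpy sketch of an energy detector, the simplest of the "simpler techniques" alternatives: a single-sensor, known-noise-variance case with the Neyman-Pearson threshold set by Monte Carlo from the noise-only distribution. The thesis's optimal multi-source multi-sensor test is considerably more involved; all names and parameters here are illustrative.

```python
import numpy as np

def np_energy_threshold(n, noise_var, p_fa, n_mc=20000, rng=None):
    """Neyman-Pearson threshold for an energy detector, computed by
    Monte Carlo: the (1 - p_fa) quantile of the noise-only energy
    statistic over n complex noise samples."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = (rng.standard_normal((n_mc, n)) +
             1j * rng.standard_normal((n_mc, n))) * np.sqrt(noise_var / 2)
    energies = np.sum(np.abs(noise) ** 2, axis=1)
    return float(np.quantile(energies, 1.0 - p_fa))

def band_occupied(samples, threshold):
    """Declare the band occupied (no spectral left-over) when the
    received energy exceeds the threshold."""
    return float(np.sum(np.abs(samples) ** 2)) > threshold
```

Raising `n` sharpens the detector; lowering `p_fa` raises the threshold and trades missed detections for fewer false alarms.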
183

Selection and ranking procedures based on likelihood ratios

Chotai, Jayanti January 1979 (has links)
This thesis deals with random-size subset selection and ranking procedures derived through likelihood ratios, mainly in terms of the P*-approach. Let π_1, ..., π_k be k (≥ 2) populations such that π_i (i = 1, ..., k) has the normal distribution with unknown mean θ_i and variance a_i σ², where a_i is known and σ² may be unknown, and suppose a random sample of size n_i is taken from π_i. To begin with, we give a procedure R_1 (with tables) which selects π_i if sup_{Ω_i} L(θ; x) ≥ c sup_Ω L(θ; x), where Ω is the parameter space for θ = (θ_1, ..., θ_k); where Ω_i (with Ω_i ⊆ Ω) is the set of all θ with θ_i = max θ_j; where L(·; x) is the likelihood function based on the total sample; and where c is the largest constant that makes the rule satisfy the P*-condition. Then, we consider other likelihood ratios, with intuitively reasonable subspaces of Ω, and derive several new rules. Comparisons among some of these rules and rule R of Gupta (1956, 1965) are made using different criteria: numerical for k = 3, and a Monte Carlo study for k = 10. For the case when the populations have the uniform (0, θ_i) distributions and the sample sizes are unequal, we consider selection for the population with min_{1≤j≤k} θ_j. Comparisons with Barr and Rizvi (1966) are made, and generalizations are given. Rule R_1 is generalized to densities satisfying some reasonable assumptions (mainly unimodality of the likelihood, and monotonicity of the likelihood ratio). An exponential class is considered, and the results are exemplified by the gamma density and the Laplace density. Extensions and generalizations to cover the selection of the t best populations (under various requirements) are given. Finally, a discussion of the complete ranking problem, and of the relation between subset selection based on likelihood ratios and statistical inference under order restrictions, is given. / digitalisering@umu
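As a concrete point of comparison, Gupta's rule R mentioned above can be sketched in a few lines for the equal-variance, equal-sample-size normal case: retain every population whose sample mean is close enough to the largest one, so the selected subset has random size. The constant d plays the role the thesis assigns to the likelihood-ratio constant c (chosen to meet the P*-condition); the function name and interface are illustrative.

```python
import numpy as np

def gupta_subset(sample_means, sigma, n, d):
    """Gupta's rule R for subset selection: retain population i whenever
    its sample mean lies within d * sigma / sqrt(n) of the largest
    sample mean. Larger d gives a larger (more cautious) subset."""
    means = np.asarray(sample_means, dtype=float)
    cutoff = means.max() - d * sigma / np.sqrt(n)
    return [i for i, m in enumerate(means) if m >= cutoff]
```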
184

Asymptotics for the maximum likelihood estimators of diffusion models

Jeong, Minsoo 15 May 2009 (has links)
In this paper I derive the asymptotics of the exact, Euler, and Milstein ML estimators for diffusion models, including general nonstationary diffusions. Although many estimators exist for diffusion models, their asymptotic properties were generally unknown, especially for nonstationary processes, which are usually far from the standard ones. Using a new asymptotics with respect to both the time span T and the sampling interval Δ, I find the asymptotics of the estimators and derive conditions for their consistency. With this new asymptotic result, I show that the properties of the estimators are explained more accurately than under the existing asymptotics with respect only to the sample size n. I also show, with a couple of examples, that this asymptotic result opens many possibilities for obtaining better estimators, and in the second part of the paper I derive the higher-order asymptotics that can be used in bootstrap analysis.
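For the simplest concrete case, the Euler ML estimator can be sketched for an Ornstein-Uhlenbeck (Vasicek) diffusion dX = κ(μ − X) dt + σ dW, where the Euler transition density is Gaussian and maximizing the approximate likelihood reduces to ordinary least squares on the one-step regression. This is a minimal sketch under that assumption, not the paper's general treatment of nonstationary diffusions and joint (T, Δ) asymptotics; names are illustrative.

```python
import numpy as np

def euler_mle_ou(x, dt):
    """Euler (discretized) ML estimates for an Ornstein-Uhlenbeck process
    dX = kappa*(mu - X) dt + sigma dW, via the Gaussian regression
    X_{t+dt} = a + b*X_t + e implied by the Euler scheme."""
    x = np.asarray(x, dtype=float)
    y, z = x[1:], x[:-1]
    b, a = np.polyfit(z, y, 1)          # OLS slope and intercept
    resid = y - (a + b * z)
    kappa = (1.0 - b) / dt              # mean-reversion speed
    mu = a / (1.0 - b)                  # long-run mean
    sigma = np.sqrt(resid.var() / dt)   # diffusion coefficient
    return kappa, mu, sigma
```

Consistent with the paper's theme, κ is estimated far less precisely than σ unless the time span T = n·dt is long, however small dt is.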
185

A generalized Neyman-Pearson lemma for hedge problems in incomplete markets

Rudloff, Birgit 07 October 2005 (has links) (PDF)
Some financial problems, such as minimizing the shortfall risk when hedging in incomplete markets, lead to problems belonging to test theory. This paper considers a generalization of the Neyman-Pearson lemma. Using methods of convex duality, we deduce the structure of an optimal randomized test when testing a compound hypothesis against a simple alternative, and we give necessary and sufficient optimality conditions for the problem.
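For intuition, the classical simple-vs-simple Neyman-Pearson lemma that this paper generalizes can be implemented directly on a finite sample space: reject where the likelihood ratio is large, and randomize on the boundary so the test has exact size α. This sketch covers only the textbook case, not the paper's compound-hypothesis, convex-duality version.

```python
import numpy as np

def neyman_pearson_test(p0, p1, alpha):
    """Most powerful randomized test phi (0 <= phi <= 1) of size alpha
    for simple H0 with pmf p0 against simple H1 with pmf p1: reject
    where the likelihood ratio p1/p0 is largest, randomizing at the
    threshold so the size is exactly alpha."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    safe_p0 = np.where(p0 > 0, p0, 1.0)
    lr = np.where(p0 > 0, p1 / safe_p0, np.inf)
    phi = np.zeros_like(p0)
    size = 0.0
    for i in np.argsort(-lr):           # strongest evidence first
        if size + p0[i] <= alpha + 1e-12:
            phi[i] = 1.0                # reject outright
            size += p0[i]
        else:
            phi[i] = (alpha - size) / p0[i]   # randomize on the boundary
            break
    return phi
```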
186

The statistical tests on mean reversion properties in financial markets

Wong, Chun-mei, May., 王春美 January 1994 (has links)
published_or_final_version / Statistics / Master / Master of Philosophy
187

Essays on Optimal Control of Dynamic Systems with Learning

Alizamir, Saed January 2013 (has links)
This dissertation studies the optimal control of two different dynamic systems with learning: (i) diagnostic service systems, and (ii) green incentive policy design. In both cases, analytical models have been developed to improve our understanding of the system, and managerial insights are gained on its optimal management.

We first consider a diagnostic service system in a queueing framework, where the service takes the form of sequential hypothesis testing. The agent must dynamically weigh the benefit of performing an additional test on the current task, to improve the accuracy of her judgment, against the delay cost incurred by the accumulated workload. We analyze the accuracy/congestion tradeoff in this setting and fully characterize the structure of the optimal policy. Further, we allow for admission control (dismissing tasks from the queue without processing) and derive its implications for the structure of the optimal policy and the system's performance.

We then study Feed-in-Tariff (FIT) policies, incentive mechanisms used by governments to promote renewable energy technologies. We focus on two key network externalities that govern the evolution of a new technology in the market over time: (i) technological learning, and (ii) social learning. By developing an intertemporal model that captures these dynamics, we investigate how lawmakers should leverage such effects to make FIT policies more efficient. We contrast our findings with the current practice of FIT-implementing jurisdictions, and determine how FIT regimes should depend on specific technology and market characteristics. / Dissertation
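The sequential hypothesis testing at the core of the diagnostic service model follows the logic of Wald's sequential probability ratio test (SPRT), sketched below without the queueing and delay-cost layer the dissertation adds on top. The stopping thresholds come from the target error probabilities α and β.

```python
import math

def sprt(observations, loglik0, loglik1, alpha, beta):
    """Wald's SPRT: after each observation, accept H1 when the
    cumulative log-likelihood ratio crosses log((1-beta)/alpha),
    accept H0 when it falls below log(beta/(1-alpha)); otherwise
    take another sample."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for t, x in enumerate(observations, start=1):
        llr += loglik1(x) - loglik0(x)
        if llr >= upper:
            return "H1", t
        if llr <= lower:
            return "H0", t
    return "undecided", len(observations)
```

The accuracy/congestion tradeoff appears here in miniature: tightening α and β widens the thresholds and lengthens the expected test, which in the queueing setting raises the delay cost for waiting tasks.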
188

Improved critical values for extreme normalized and studentized residuals in Gauss-Markov models / Verbesserte kritische Werte für extreme normierte und studentisierte Verbesserungen in Gauß-Markov-Modellen

Lehmann, Rüdiger 06 August 2014 (has links) (PDF)
We investigate extreme studentized and normalized residuals as test statistics for outlier detection in the Gauss-Markov model, possibly not of full rank. We show how critical values (quantile values) of such test statistics are derived from the probability distribution of a single studentized or normalized residual by dividing the level of error probability by the number of residuals. This derivation neglects dependencies between the residuals. We suggest improving this by a procedure based on the Monte Carlo method for the numerical computation of such critical values up to arbitrary precision. Results for free leveling networks reveal significant differences from the values used so far. We also show how to compute those critical values for non-normal error distributions. The results prove that the critical values are very sensitive to the type of error distribution. / We investigate extreme studentized and normalized residuals as test statistics for outlier detection in the Gauss-Markov model, possibly not of full rank. We show how critical values (quantile values) of such test statistics are derived from the probability distribution of a single studentized or normalized residual by dividing the error probability by the number of residuals. This derivation neglects dependencies between the residuals. We propose improving this procedure by using the Monte Carlo method to compute such critical values to arbitrary precision. Results for free leveling networks show significant differences from the values used so far. We also show how to compute such values for non-normal error distributions. The results show that the critical values react very sensitively to the type of error distribution.
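A sketch of the proposed Monte Carlo computation next to the classical divide-the-error-probability (Bonferroni-type) value. For simplicity the residuals here are drawn i.i.d. standard normal, which is exactly the dependency-neglecting assumption the paper improves on; in practice one would simulate residuals with their actual covariance structure from the Gauss-Markov model, and with non-normal error laws.

```python
import numpy as np
from statistics import NormalDist

def mc_critical_value(n_resid, alpha, n_mc=100_000, rng=None):
    """Monte Carlo critical value for the maximum absolute normalized
    residual: the (1 - alpha) quantile of max|N(0,1)| over n_resid
    draws, estimated from n_mc simulated residual vectors."""
    rng = np.random.default_rng(0) if rng is None else rng
    maxima = np.max(np.abs(rng.standard_normal((n_mc, n_resid))), axis=1)
    return float(np.quantile(maxima, 1.0 - alpha))

def bonferroni_critical_value(n_resid, alpha):
    """Classical approximation: single-residual quantile with the
    error probability divided by the number of residuals."""
    return NormalDist().inv_cdf(1.0 - alpha / (2.0 * n_resid))
```

In the i.i.d. case the two values nearly coincide; the paper's point is that with correlated residuals (e.g. in free leveling networks) only the Monte Carlo value remains accurate.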
189

Aspects of Moment Testing when p > n

Wang, Zhizheng January 2018 (has links)
This thesis concerns statistical hypothesis testing for a mean vector, as well as testing for non-normality, in a high-dimensional setting governed by the so-called Kolmogorov condition. Since testing for a mean vector mainly involves the first two moments, while testing for non-normality uses the third and fourth moments, the thesis addresses a more general moment-testing problem. The setting is a data matrix with $p$ rows (the number of parameters) and $n$ columns (the sample size), where $p$ can exceed $n$, assuming that the ratio $\frac{p}{n}$ converges as both the number of parameters and the sample size increase. The first paper reviews Dempster's non-exact test for a mean vector, focusing on the one-sample case. We investigate its size and power properties compared with Hotelling's $T^2$ test and Srivastava's test, using Monte Carlo simulation. The second paper concerns testing for multivariate non-normality in high-dimensional data. We propose three test statistics based on marginal skewness and kurtosis, and carry out simulation studies to examine their size and power properties. / The thesis investigates hypothesis testing in high dimensions under the so-called Kolmogorov condition, which means that the number of parameters grows with the sample size at a constant rate. Multivariate analysis comprises the statistical methods that analyse samples from multidimensional distributions, in particular the multivariate normal distribution. For high-dimensional data, classical estimators of the covariance matrix do not work satisfactorily, because the complexity of estimating the inverse covariance matrix grows as the dimension grows. The first paper reviews Dempster's (non-exact) test, in which no estimate of the inverse covariance matrix is needed; the trace of a covariance matrix is used instead. The second paper tests the assumption of normality using third- and fourth-order moments. Three test statistics are proposed, and simulations compare how well the tests identify a non-normal distribution.
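A simplified sketch of moment-based non-normality testing from marginal skewness and kurtosis, pooling per-variable Jarque-Bera-type terms across the $p$ rows of a $p \times n$ data matrix. The pooled statistic and its scaling are illustrative assumptions, not the thesis's three proposed statistics, and cross-variable dependence is ignored.

```python
import numpy as np

def marginal_skew_kurt_stats(X):
    """Marginal skewness and excess kurtosis for each of the p rows of
    a p x n data matrix, plus a pooled statistic that sums the
    per-variable Jarque-Bera terms n/6*skew^2 + n/24*kurt^2.
    Under normality each term is roughly chi-square(2) for large n."""
    X = np.asarray(X, dtype=float)
    n = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)
    s = X.std(axis=1)                       # ddof=0, matching the moments below
    skew = (Xc ** 3).mean(axis=1) / s ** 3
    kurt = (Xc ** 4).mean(axis=1) / s ** 4 - 3.0
    pooled = float(np.sum(n / 6.0 * skew ** 2 + n / 24.0 * kurt ** 2))
    return skew, kurt, pooled
```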
190

Algoritmiese rangordebepaling van akademiese tydskrifte

Strydom, Machteld Christina 31 October 2007 (has links)
Summary: There is a need for an objective measure to assess and compare the quality of academic publications. This research determined, from citation data, the influence or reaction generated by a publication, using an iterative algorithm that assigns weights to citations. In the Internet environment this approach is already applied with great success, notably by the PageRank algorithm of the Google search engine. This and other algorithms from the Internet environment were studied in order to design an algorithm for academic articles. A variation of the PageRank algorithm was chosen that determines an Influence value. The algorithm was tested on case studies. The empirical study indicates that this variation reflects specialist researchers' intuitive judgment better than a bare citation count. Abstract: Rankings of journals are often used as an indicator of quality and are extensively used as a mechanism for determining promotion and funding. This research studied ways of extracting the impact, or influence, of a journal from citation data, using an iterative process that allocates a weight to the source of a citation. After evaluating and discussing, with specialist researchers, the characteristics that determine the quality and importance of research, a measure called the Influence factor was introduced, emulating the PageRank algorithm used by Google to rank web pages. The Influence factor can be seen as a measure of the reaction generated by a publication, based on the number of scientists who read and cited it. A good correlation was found between the rankings produced by the Influence factor and those given by specialist researchers. / Mathematical Sciences / M.Sc. (Operasionele Navorsing)
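The PageRank-style iteration behind such an Influence measure can be sketched as a power iteration on the citation graph. The damping constant and the dangling-node handling below follow the standard PageRank recipe and are assumptions for illustration, not details taken from the dissertation.

```python
import numpy as np

def influence_scores(citations, damping=0.85, tol=1e-12):
    """Power iteration for a PageRank-style 'Influence' score on a
    citation graph: citations[i][j] counts citations from item i to
    item j. A citation transfers weight from the citing item, so a
    citation from an influential source counts for more than a bare
    count. Items citing nothing are spread uniformly (dangling-node fix)."""
    C = np.asarray(citations, dtype=float)
    k = C.shape[0]
    rowsum = C.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; dangling rows become uniform.
    P = np.divide(C, rowsum, out=np.full_like(C, 1.0 / k), where=rowsum > 0)
    score = np.full(k, 1.0 / k)
    while True:
        new = (1.0 - damping) / k + damping * P.T @ score
        if np.abs(new - score).sum() < tol:
            return new
        score = new
```

The scores sum to one, so they rank items by share of total influence rather than by raw citation counts.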
