251.
Real Second-Order Freeness and Fluctuations of Random Matrices. REDELMEIER, CATHERINE EMILY ISKA. 09 September 2011.
We introduce real second-order freeness in second-order noncommutative probability spaces. We demonstrate that under this definition, independent ensembles of the three real models of random matrices which we consider, namely real Ginibre matrices, Gaussian orthogonal matrices, and real Wishart matrices, are asymptotically second-order free. These ensembles do not satisfy the complex definition of second-order freeness satisfied by their complex analogues. This definition may be used to calculate the asymptotic fluctuations of products of matrices in terms of the fluctuations of each ensemble.
We use a combinatorial approach to the matrix calculations similar to genus expansion, but in which nonorientable surfaces appear, demonstrating the commonality between the real ensembles and the distinction from their complex analogues, motivating this distinct definition. We generalize the description of graphs on surfaces in terms of the symmetric group to the nonorientable case.
In the real case we find, in addition to the terms appearing in the complex case, which correspond to annular spoke diagrams, an extra set of terms corresponding to annular spoke diagrams in which the two circles of the annulus are oppositely oriented and in which the matrix transpose appears. / Thesis (Ph.D., Mathematics & Statistics) -- Queen's University, 2011-09-09.
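For orientation, the fluctuations in question are covariances of traces. A sketch in notation of our own choosing (not quoted from the thesis), under the usual second-order framework:

\[
\alpha_{p,q} \;=\; \lim_{N \to \infty} \operatorname{Cov}\!\bigl(\operatorname{Tr}(A_N^{\,p}),\, \operatorname{Tr}(A_N^{\,q})\bigr),
\]

and the extra real terms described above amount, schematically, to additional contributions involving $\operatorname{Tr}\bigl((A_N^{T})^{q}\bigr)$ in place of $\operatorname{Tr}(A_N^{q})$, reflecting the oppositely oriented circles of the annulus.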
252.
Extensions of Skorohod’s almost sure representation theorem. Hernandez Ceron, Nancy. Unknown Date.
No description available.
253.
Evaluation of fully Bayesian disease mapping models in correctly identifying high-risk areas with an application to multiple sclerosis. Charland, Katia. January 2007.
Disease maps are geographical maps that display local estimates of disease risk. When the disease is rare, crude risk estimates can be highly variable, leading to extreme estimates in areas with low population density. Bayesian hierarchical models are commonly used to stabilize the disease map, making it more easily interpretable. By exploiting assumptions about the correlation structure in space and time, the statistical model stabilizes the map by shrinking unstable, extreme risk estimates to the risks in surrounding areas (local spatial smoothing) or to the risks at contiguous time points (temporal smoothing). Extreme estimates that are based on smaller populations are subject to a greater degree of shrinkage, particularly when the risks in adjacent areas or at contiguous time points do not support the extreme value and are more stable themselves. / A common goal in disease mapping studies is to identify areas of elevated risk. The objective of this thesis is to compare the accuracy of several fully Bayesian hierarchical models in discriminating between high-risk and background-risk areas. These models differ according to the spatial, temporal and space-time interaction terms that they include, which can greatly affect the smoothing of the risk estimates. This was accomplished with simulations based on cervical cancer rates and at-risk person-years for Kentucky's 120 counties from 1995 to 2002. High-risk areas were 'planted' in the generated maps that otherwise had background relative risks of one. The various disease mapping models were applied and their accuracy in correctly identifying high- and background-risk areas was compared by means of Receiver Operating Characteristic curve methodology. Using data on Multiple Sclerosis (MS) on the island of Sardinia, Italy, we apply the more successful models to identify areas of elevated MS risk.
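For orientation, a common convolution specification in this family of models (a Besag-York-Mollié-type model; the notation is ours, not necessarily the parameterisation used in the thesis) is

\[
O_i \sim \operatorname{Poisson}(E_i \theta_i), \qquad \log \theta_i = \alpha + u_i + v_i,
\]

where $O_i$ and $E_i$ are the observed and expected counts in area $i$, $u_i$ is a spatially structured (conditional autoregressive) effect that shrinks towards neighbouring areas, and $v_i$ is unstructured heterogeneity. Spatio-temporal variants add temporal and space-time interaction terms, which is exactly where the models compared in the thesis differ.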
254.
A bipolar theorem for $L^0_+(\Omega, \mathcal{F}, \mathbb{P})$. Brannath, Werner; Schachermayer, Walter. January 1999.
A consequence of the Hahn-Banach theorem is the classical bipolar theorem which states that the bipolar of a subset of a locally convex vector space equals its closed convex hull. The space $L^0$ of real-valued random variables on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ equipped with the topology of convergence in measure fails to be locally convex so that, a priori, the classical bipolar theorem does not apply. In this note we show an analogue of the bipolar theorem for subsets of the positive orthant $L^0_+$, if we place $L^0_+$ in duality with itself, the scalar product now taking values in $[0, \infty]$. In this setting the order structure of $L^0$ plays an important role and we obtain that the bipolar of a subset of $L^0_+$ equals its closed, convex and solid hull. In the course of the proof we show a decomposition lemma for convex subsets of $L^0_+$ into a "bounded" and "hereditarily unbounded" part, which seems interesting in its own right. (author's abstract) / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
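Concretely, the pairing described in the abstract can be sketched as follows (notation ours): for $C \subseteq L^0_+$, the polar is

\[
C^{\circ} = \bigl\{ g \in L^0_+ : \mathbb{E}[fg] \le 1 \ \text{for all } f \in C \bigr\},
\]

and the theorem asserts that the bipolar $C^{\circ\circ}$ is the smallest subset of $L^0_+$ containing $C$ that is convex, solid, and closed with respect to convergence in measure.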
255.
An introduction to the value-distribution theory of zeta-functions. MATSUMOTO, Kohji. January 2006.
No description available.
256.
Matrix Integrals: Calculating Matrix Integrals Using Feynman Diagrams. Friberg, Adam. January 2014.
In this project, we examine how integration over matrices is performed. We investigate and develop a method for calculating matrix integrals over the set of real square matrices. Matrix integrals are used for calculations in several different areas of physics and mathematics; for example, quantum field theory, string theory, quantum chromodynamics, and random matrix theory. Our method consists of ways to apply perturbative Taylor expansions to the matrix integrals, reducing each term of the resulting Taylor series to a combinatorial problem using Wick's theorem, and representing the terms of the Wick sum graphically with the help of Feynman diagrams and fat graphs. We use the method in a few examples that aim to clearly demonstrate how to calculate the matrix integrals.
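As a sketch of the central computational step, under a Gaussian weight on real $N \times N$ matrices (our normalisation, not necessarily the thesis's), the second moment of the entries is

\[
\bigl\langle M_{ij} M_{kl} \bigr\rangle
\;=\;
\frac{\int M_{ij} M_{kl}\, e^{-\frac{N}{2}\operatorname{Tr}(M M^{T})}\, dM}{\int e^{-\frac{N}{2}\operatorname{Tr}(M M^{T})}\, dM}
\;=\;
\frac{1}{N}\,\delta_{ik}\,\delta_{jl},
\]

and Wick's theorem reduces any higher even moment $\langle M_{i_1 j_1} \cdots M_{i_{2n} j_{2n}} \rangle$ to a sum over pairings of these two-point factors; each pairing is drawn as an edge of a fat graph, which is where the Feynman-diagram bookkeeping enters.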
257.
Estimation of the variation of prices using high-frequency financial data. Ysusi Mendoza, Carla Mariana. January 2005.
When high-frequency data are available, realised variance and realised absolute variation can be calculated from intra-day prices. In the context of a stochastic volatility model, realised variance and realised absolute variation can estimate the integrated variance and the integrated spot volatility, respectively. A central limit theory enables us to do filtering and smoothing using model-based and model-free approaches in order to improve the precision of these estimators. When the log-price process involves a finite activity jump process, realised variance estimates the quadratic variation of both continuous and jump components. Other consistent estimators of integrated variance can be constructed on the basis of realised multipower variation, i.e., realised bipower, tripower and quadpower variation. These objects are robust to jumps in the log-price process. Therefore, given adequate asymptotic assumptions, the difference between realised multipower variation and realised variance can provide a tool to test for jumps in the process. Realised variance becomes biased in the presence of market microstructure effects, whereas realised bipower, tripower and quadpower variation are more robust in such a situation. Nevertheless, there is always a trade-off between bias and variance; bias is due to market microstructure noise when sampling at high frequencies and variance is due to the asymptotic assumptions when sampling at low frequencies. By subsampling and averaging realised multipower variation, this effect can be reduced, thereby allowing for calculations with higher frequencies.
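For reference, with intra-day returns $y_i$ over $n$ subintervals of a day, the two central objects can be sketched (standard notation, assumed rather than quoted from the thesis) as

\[
\mathrm{RV} = \sum_{i=1}^{n} y_i^2 \;\xrightarrow{\;p\;}\; \int_0^1 \sigma_s^2\, ds + \sum_{0 < s \le 1} (\Delta J_s)^2,
\qquad
\mathrm{BV} = \mu_1^{-2} \sum_{i=2}^{n} |y_{i-1}|\,|y_i| \;\xrightarrow{\;p\;}\; \int_0^1 \sigma_s^2\, ds,
\]

where $\mu_1 = \sqrt{2/\pi}$ and $J$ is the finite-activity jump component. The difference $\mathrm{RV} - \mathrm{BV}$ then estimates the jump contribution, which is the basis of the jump test mentioned above.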
258.
Studies in the completeness and efficiency of theorem-proving by resolution. Kowalski, Robert Anthony. January 1970.
Inference systems T and search strategies E for T are distinguished from proof procedures β = (T,E). The completeness of procedures is studied by studying separately the completeness of inference systems and of search strategies. Completeness proofs for resolution systems are obtained by the construction of semantic trees. These systems include minimal α-restricted binary resolution, minimal α-restricted M-clash resolution and maximal pseudo-clash resolution. Certain refinements of hyper-resolution systems with equality axioms are shown to be complete and equivalent to refinements of the paramodulation method for dealing with equality. The completeness and efficiency of search strategies for theorem-proving problems is studied in sufficient generality to include the case of search strategies for path-search problems in graphs. The notion of theorem-proving problem is defined abstractly so as to be dual to that of an and/or tree. Special attention is given to resolution problems and to search strategies which generate simpler proofs before more complex ones. For efficiency, a proof procedure (T,E) requires an efficient search strategy E as well as an inference system T which admits both simple proofs and relatively few redundant and irrelevant derivations. The theory of efficient proof procedures outlined here is applied to proving the increased efficiency of the usual method for deleting tautologies and subsumed clauses. Counter-examples are exhibited for both the completeness and efficiency of alternative methods for deleting subsumed clauses. The efficiency of resolution procedures is improved by replacing the single operation of resolving a clash by the two operations of generating factors of clauses and of resolving a clash of factors. Several factoring methods are investigated for completeness. Of these, the m-factoring method is shown to be always more efficient than the Wos-Robinson method.
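To make the objects concrete, here is a minimal propositional sketch of binary resolution with the tautology and (forward) subsumption deletion discussed above. It is an illustration in our own notation, not Kowalski's refined first-order systems:

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all binary resolvents of two clauses (sets of literals).
    Literals are ints: p and -p are complementary."""
    return [(c1 - {lit}) | (c2 - {-lit}) for lit in c1 if -lit in c2]

def is_tautology(clause):
    return any(-lit in clause for lit in clause)

def subsumed(clause, clauses):
    # clause is subsumed if some existing clause is a subset of it
    return any(c <= clause for c in clauses)

def refute(clauses):
    """Saturation loop with tautology and forward subsumption deletion.
    Returns True iff the empty clause is derived (input unsatisfiable)."""
    clauses = {frozenset(c) for c in clauses if not is_tautology(frozenset(c))}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                r = frozenset(r)
                if not r:
                    return True          # empty clause: refutation found
                if is_tautology(r) or subsumed(r, clauses | new):
                    continue             # delete tautologies / subsumed clauses
                new.add(r)
        if not new:
            return False                 # saturated without refutation
        clauses |= new

# Example: {p, p -> q, not q} is unsatisfiable
print(refute([{1}, {-1, 2}, {-2}]))      # True
```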
259.
Mechanizing structural induction. Aubin, Raymond. January 1976.
This thesis proposes improved methods for the automatic generation of proofs by structural induction in a formal system. The main application considered is proving properties of programs. The theorem-proving problem divides into two parts: (1) a formal system, and (2) proof generating methods. A formal system is presented which allows for a typed language; thus, abstract data types can be naturally defined in it. Its main feature is a general structural induction rule using a lexicographic ordering based on the substructure ordering induced by type definitions. The proof generating system is introduced carefully in order to make the case for its consistency. It is meant to solve three problems. Firstly, it offers a method for generalizing only certain occurrences of a term in a theorem; this is achieved by associating generalization with the selection of induction variables. Secondly, it treats another generalization problem: that of terms occurring in the positions of arguments which vary within function definitions, besides recursion-controlling arguments. The method is called indirect generalization, since it uses specialization as a means of attaining generalization. Thirdly, it presents a sound strategy for using the general induction rule which takes into account all induction subgoals, and for each of them, all induction hypotheses. Only then are the hypotheses retained and instantiated, or rejected altogether, according to their potential usefulness. The system also includes a search mechanism for counter-examples to conjectures, and a fast simplification algorithm.
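As a concrete instance of the kind of rule involved, the familiar structural induction rule for finite lists can be written as follows (illustration only; the thesis's rule is more general, using a lexicographic extension of the substructure ordering):

\[
\frac{P(\mathrm{nil}) \qquad \forall x\,\forall l\;\bigl(P(l) \Rightarrow P(\mathrm{cons}(x,l))\bigr)}{\forall l\; P(l)}
\]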
260.
Using goal structure to direct search in a problem solver. Tate, Brian Austin. January 1975.
This thesis describes a class of problems in which interactions occur when plans to achieve members of a set of simultaneous goals are concatenated in the hope of achieving the whole goal. They will be termed "interaction problems". Several well-known problems fall into this class; swapping the values of two computer registers is a typical example. A very simple 3-block problem is used to illustrate the interaction difficulty, and to describe how a simple method can be employed to derive enough information from an interaction which has occurred to allow problem solving to proceed effectively. The method used to detect interactions and derive information from them, allowing problem solving to be re-directed, relies on an analysis of the goal and subgoal structure being considered by the problem solver. This goal structure will be called the "approach" taken by the system. It specifies the order in which individual goals are being attempted and any precedence relationships between them (say, because one goal is a precondition of an action to achieve another). We argue that the goal structure of a problem contains information which is simpler and more meaningful than the actual plan (sequence of actions) being considered. We then show how an analysis of the goal structure of a problem, and the correction of such a structure in the light of any interaction, can direct the search towards a successful solution. Interaction problems pose particular difficulties for most current problem solvers because they achieve each part of a composite goal independently and assume that the resulting plans can be concatenated to achieve the overall goal. This assumption is beneficial in that it can drastically reduce the search necessary in many problems; however, it does restrict the range of problems which can be tackled. The problem solver INTERPLAN, described as a result of this investigation, also assumes that subgoals can be solved independently, but when an interaction is detected it performs an analysis of the goal structure of the problem to re-direct the search. INTERPLAN is an efficient system which copes with this class of interaction problems. INTERPLAN uses a data structure called a "ticklist" as the basis of its mechanism for keeping track of the search it performs. The ticklist allows a very simple method to be employed for detecting and correcting for interactions by providing a summary of the goal structure of the problem being tried.
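INTERPLAN's actual machinery is richer than this, but the following minimal sketch (all names and encodings are ours, purely illustrative) shows the flavour of using a ticklist over goals to detect an interaction when independently formed plans are concatenated:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    pre: frozenset   # preconditions
    adds: frozenset  # facts added
    dels: frozenset  # facts deleted

def check_interactions(state, plan, goals):
    """Tick off goals as they become true while executing `plan`;
    report an interaction when an action undoes an already-ticked goal
    or finds its preconditions destroyed by an earlier step."""
    ticklist = {g: False for g in goals}          # summary of the goal structure
    interactions = []
    for act in plan:
        if not act.pre <= state:
            interactions.append((act.name, "precondition unmet"))
            break
        state = (state - act.dels) | act.adds
        for g in goals:
            if g in state:
                ticklist[g] = True
            elif ticklist[g]:                     # previously achieved, now undone
                interactions.append((act.name, f"clobbers goal {g}"))
                ticklist[g] = False
    return ticklist, interactions

# Sussman-anomaly-style illustration: achieve on(A,B) then on(B,C) by
# concatenating independently formed plans; the second plan's action
# needs clear(B), which stacking A on B has destroyed.
s0 = frozenset({"on(C,A)", "clear(C)", "clear(B)", "table(A)", "table(B)"})
plan = [
    Action("unstack(C,A)", frozenset({"on(C,A)", "clear(C)"}),
           frozenset({"table(C)", "clear(A)"}), frozenset({"on(C,A)"})),
    Action("stack(A,B)", frozenset({"clear(A)", "clear(B)"}),
           frozenset({"on(A,B)"}), frozenset({"clear(B)", "table(A)"})),
    Action("stack(B,C)", frozenset({"clear(B)", "clear(C)"}),
           frozenset({"on(B,C)"}), frozenset({"clear(C)", "table(B)"})),
]
ticks, problems = check_interactions(s0, plan, ["on(A,B)", "on(B,C)"])
print(problems)   # [('stack(B,C)', 'precondition unmet')]
```

The analysis of this failure (which goal's plan destroyed which precondition) is the information a system of this kind can use to re-order the approach, rather than blindly backtracking.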