1. Optimal Design of Experiments Subject to Correlated Errors. Pazman, Andrej; Müller, Werner. January 2000.
In this paper we consider the optimal design of experiments in the case of correlated observations, when no replications are possible. This situation is typical when observing a random process or random field with a known covariance structure. We present a theorem demonstrating that the computation of optimum exact designs corresponds to solving minimization problems in terms of design measures. Series: Forschungsberichte / Institut für Statistik.
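
The criterion involved can be made concrete with a minimal sketch, assuming a simple linear model and an exponential error covariance kernel (assumptions of this sketch, not the paper's setting): under generalized least squares the information matrix of an exact design is M = X'C^{-1}X, and a D-optimal exact design maximizes log det M over the choice of design points.

```python
# A minimal sketch: D-criterion of an exact design under correlated errors.
# The linear model and the exponential kernel are illustrative assumptions.
import numpy as np

def d_criterion(points, rho=1.0):
    X = np.column_stack([np.ones_like(points), points])  # intercept + slope
    # exponential covariance kernel; since no replications are allowed,
    # coincident design points are excluded (they would make C singular)
    C = np.exp(-np.abs(points[:, None] - points[None, :]) / rho)
    M = X.T @ np.linalg.solve(C, X)   # GLS information matrix X' C^{-1} X
    return np.linalg.slogdet(M)[1]

design = np.array([0.0, 0.35, 0.65, 1.0])  # an illustrative exact design
print("log det M =", d_criterion(design))
```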

2. Maximum Likelihood Identification of an Information Matrix Under Constraints in a Corresponding Graphical Model. Li, Nan. 22 January 2017.
We address the problem of identifying the neighborhood structure of an undirected graph whose nodes are labeled with the elements of a multivariate normal (MVN) random vector. A semidefinite program is given for estimating the information matrix under arbitrary constraints on its elements. More importantly, a closed-form expression is given for the maximum likelihood (ML) estimator of the information matrix under the constraint that the information matrix has pre-specified elements in a given pattern (e.g., in a principal submatrix). The results apply to the identification of dependency labels in a graphical model with neighborhood constraints. This neighborhood structure excludes nodes which are conditionally independent of a given node, and the graph is determined by the nonzero elements in the information matrix of the random vector. A cross-validation principle is given for determining whether the constrained information matrix returned by this procedure is an acceptable model for the information matrix, and consequently for the neighborhood structure of the Markov random field (MRF) identified with the MVN random vector.
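
For intuition, a minimal sketch of the unconstrained case (the thesis gives a closed form under pattern constraints, which is not reproduced here): the ML estimate of the information matrix is the inverse of the sample covariance, and the neighborhood structure is read off its nonzero off-diagonal entries. The threshold below is illustrative.

```python
# A minimal sketch: unconstrained ML information matrix and the graph it
# implies. K_true is tridiagonal, i.e. a chain graph over three nodes.
import numpy as np

rng = np.random.default_rng(0)
K_true = np.array([[2.0, 0.6, 0.0],
                   [0.6, 2.0, 0.6],
                   [0.0, 0.6, 2.0]])
x = rng.multivariate_normal(np.zeros(3), np.linalg.inv(K_true), size=5000)
S = x.T @ x / len(x)            # ML sample covariance (known zero mean)
K_hat = np.linalg.inv(S)        # unconstrained ML information matrix
edges = np.abs(K_hat) > 0.2     # illustrative threshold for "nonzero"
print(np.round(K_hat, 2))
print("estimated neighborhood structure:\n", edges.astype(int))
```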

3. Faster Optimal Design Calculations for Practical Applications. Strömberg, Eric. January 2011.
PopED is a software package developed by the Pharmacometrics Research Group at the Department of Pharmaceutical Biosciences, Uppsala University, written mainly in MATLAB. It uses pharmacometric population models to describe the pharmacokinetics and pharmacodynamics of a drug and then estimates an optimal design of a trial for that drug. Since the optimization calculations take a long time on average, it was desirable to increase the calculation speed of the software by parallelizing the serial calculation script. The goal of this project was to investigate different methods of parallelization and to implement the method best suited to the circumstances. The parallelization was implemented in C/C++ using Open MPI and tested on the UPPMAX Kalkyl high-performance computing cluster. Some alterations were made to the original MATLAB script to adapt PopED to the new parallel code. The parallelized methods included the Random Search and Line Search algorithms. Testing showed a significant performance increase, with effectiveness per active core ranging from 55% to 89%, depending on the model and the number of evaluated designs.
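
The master/worker pattern behind such a parallel design search can be sketched briefly; the following minimal sketch uses Python with mpi4py (the project itself used Open MPI from C/C++) and a stand-in objective rather than PopED's criterion.

```python
# A minimal sketch of parallel Random Search over candidate designs.
# The objective is a placeholder, not PopED's FIM-based criterion.
# Run with, e.g.: mpiexec -n 4 python random_search_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def objective(design):
    return -np.sum((design - 0.5) ** 2)   # stand-in design criterion

if rank == 0:
    rng = np.random.default_rng(0)
    candidates = rng.uniform(0, 1, size=(40, 3))  # random candidate designs
    chunks = np.array_split(candidates, size)     # one chunk per process
else:
    chunks = None

local = comm.scatter(chunks, root=0)      # distribute candidates
local_best = max(local, key=objective)    # each process evaluates its chunk
gathered = comm.gather(local_best, root=0)

if rank == 0:
    best = max(gathered, key=objective)
    print("best design:", best, "objective:", objective(best))
```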

4. Further aspects on an example of D-optimal designs in the case of correlated errors. Stehlik, Milan. January 2004.
The aim of this paper is to discuss particular aspects of the extension of a classic example in the design of experiments under the presence of correlated errors. This extension allows us to study the effect of the correlation range on the design. We discuss the dependence of the information gained by the D-optimum design on the covariance bandwidth, and we also concentrate on some technical aspects that arise in such settings. Series: Research Report Series / Department of Statistics and Mathematics.
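
The dependence studied here can be illustrated with a minimal sketch, assuming a fixed design and an exponential error kernel (illustrative assumptions, not the paper's exact example): as the correlation range grows, observations become increasingly redundant and the information yielded by the design changes accordingly.

```python
# A minimal sketch: D-criterion of a fixed design as a function of the
# correlation range rho of an assumed exponential covariance kernel.
import numpy as np

points = np.linspace(0.0, 1.0, 5)                 # a fixed 5-point design
X = np.column_stack([np.ones_like(points), points])
for rho in [0.01, 0.1, 0.5, 1.0, 5.0]:
    C = np.exp(-np.abs(points[:, None] - points[None, :]) / rho)
    M = X.T @ np.linalg.solve(C, X)
    print(f"rho = {rho:5}: log det M = {np.linalg.slogdet(M)[1]:8.3f}")
```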

5. Relay Selection for Multiple Source Communications and Localization. Perez-Ramirez, Javier. October 2013.
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV

Relay selection for optimal communication as well as multiple source localization is studied. We consider the use of dual-role nodes that can work both as relays and as anchors. The dual-role nodes and multiple sources are placed at fixed locations in a two-dimensional space. Each dual-role node estimates its distance to all the sources within its radius of action. Dual-role node selection is then performed considering all the measured distances and the total SNR of all source-to-destination channels, for optimal communication and multiple source localization. Bit error rate performance and the mean squared error of the proposed optimal dual-role node selection scheme are presented.
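
The joint criterion can be sketched compactly. A minimal sketch, with assumed names and an assumed weighting (not the paper's exact criterion): the range-measurement FIM at a source is the sum of outer products of the unit vectors from the source to the selected nodes, and the selection trades its log-determinant off against total link SNR.

```python
# A minimal sketch: brute-force selection of k dual-role nodes for joint
# localization (log det of the range FIM) and communication (total SNR).
# The tradeoff weight w is an illustrative assumption.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
nodes = rng.uniform(0, 100, size=(8, 2))  # candidate dual-role nodes
source = np.array([50.0, 50.0])
snr = rng.uniform(5, 15, size=8)          # per-node link SNRs

def loc_info(idx):
    diff = nodes[list(idx)] - source
    u = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    return np.linalg.slogdet(u.T @ u)[1]  # log det range FIM (unit noise)

k, w = 3, 0.1
best = max(combinations(range(len(nodes)), k),
           key=lambda idx: loc_info(idx) + w * snr[list(idx)].sum())
print("selected nodes:", best)
```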

6. An Opportunistic Relaying Scheme for Optimal Communications and Source Localization. Perez-Ramirez, Javier. October 2012.
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California

The selection of relay nodes (RNs) for optimal communication and source location estimation is studied. The RNs are randomly placed at fixed and known locations over a geographical area. A mobile source senses and collects data at various locations over the area and transmits the data to a destination node with the help of the RNs. The destination node needs not only the sensed data but also the location of the source where the data was collected. Hence, both high-quality data collection and the correct location of the source are needed. Using the measured distances between the relays and the source, the destination estimates the location of the source, so the selected RNs must be optimal for joint communication and source location estimation. We show in this paper how this joint optimization can be achieved. For practical decentralized selection, an opportunistic RN selection algorithm is used. Bit error rate performance as well as the mean squared error in location estimation are presented and compared with the optimal relay selection results.
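
The destination's distance-based location estimate can be written as a nonlinear least-squares problem; a minimal sketch with an assumed estimator (the paper's may differ):

```python
# A minimal sketch: multilateration of the source from noisy distances
# measured by relays at known positions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
relays = rng.uniform(0, 100, size=(5, 2))        # known relay positions
source_true = np.array([40.0, 60.0])
d_meas = np.linalg.norm(relays - source_true, axis=1) + rng.normal(0, 0.5, 5)

residual = lambda p: np.linalg.norm(relays - p, axis=1) - d_meas
est = least_squares(residual, x0=np.array([50.0, 50.0])).x
print("estimated source location:", est)
```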

7. Testing for Heteroskedasticity in Bivariate Probit Models. Thorn, Thomas. 28 June 2013.
Two score tests for heteroskedasticity in the errors of a bivariate probit model are developed, and numerous simulations are performed. These tests are based on an outer-product-of-the-gradient estimate of the information matrix and are constructed using an artificial regression. The empirical sizes of both tests are found to be well behaved, settling down to the nominal size under the asymptotic distribution as the sample size approaches 1000 observations. Similarly, the empirical powers of both tests increase quickly with sample size, with the largest improvement in power occurring as the sample size increases from 250 to 500. An application with health care data from the German Socioeconomic Panel is performed, and strong evidence of heteroskedasticity is detected. This suggests that the maximum likelihood estimator for the standard bivariate probit model will be inconsistent in this particular case.
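
The artificial-regression construction can be sketched compactly. A minimal sketch for a univariate probit (the thesis treats the harder bivariate case; the model and variable names here are illustrative): under the alternative P(y = 1 | x) = Phi(x'b / exp(z'g)) with null g = 0, the OPG form of the LM statistic is the explained sum of squares from regressing a vector of ones on the per-observation score contributions.

```python
# A minimal sketch of an OPG artificial-regression score test for
# heteroskedasticity in a univariate probit, under illustrative names.
import numpy as np
from scipy.stats import norm, chi2
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 1000
x = np.column_stack([np.ones(n), rng.normal(size=n)])
z = x[:, 1:]                              # variance-driving regressors
y = (x @ np.array([0.3, 1.0]) + rng.normal(size=n) > 0).astype(float)

def negloglik(b):                         # homoskedastic probit (the null)
    p = np.clip(norm.cdf(x @ b), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()
b_hat = minimize(negloglik, np.zeros(2), method="BFGS").x

xb = x @ b_hat                            # generalized residuals lambda_i
lam = norm.pdf(xb) * (y - norm.cdf(xb)) / (norm.cdf(xb) * (1 - norm.cdf(xb)))
G = np.column_stack([lam[:, None] * x,             # scores w.r.t. b
                     -(lam * xb)[:, None] * z])    # scores w.r.t. g at g = 0
ones = np.ones(n)
coef, *_ = np.linalg.lstsq(G, ones, rcond=None)
LM = ones @ G @ coef                      # explained sum of squares
print("LM =", LM, " p-value =", 1 - chi2.cdf(LM, df=z.shape[1]))
```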

8. Model-based analysis of stability in networks of neurons. Panas, Dagmara. January 2017.
Neurons, the building blocks of the brain, are an astonishingly capable type of cell. Collectively they can store, manipulate and retrieve biologically important information, allowing animals to learn and adapt to environmental changes. This universal adaptability is widely believed to be due to plasticity: the readiness of neurons to adjust their intrinsic properties and the strengths of their connections to other cells. It is through such modifications that associations between neurons can be made, giving rise to memory representations; for example, linking a neuron responding to the smell of pancakes with neurons encoding sweet taste and general gustatory pleasure.

However, this malleability inherent to neuronal cells poses a dilemma from the point of view of stability: how is the brain able to maintain stable operation while in a state of constant flux? First, won't purely technical problems occur, akin to short-circuiting or runaway activity? And second, if neurons are so easily plastic and changeable, how can they provide a reliable description of the environment? Of course, evidence abounds to testify to the robustness of brains, both from everyday experience and from scientific experiments. How does this robustness come about?

Firstly, many feedback control mechanisms are in place to ensure that neurons do not enter wild regimes of behaviour. These mechanisms are collectively known as homeostatic plasticity, since they ensure functional homeostasis through plastic changes. One well-known example is synaptic scaling, a type of plasticity ensuring that a single neuron does not get overexcited by its inputs: whenever learning occurs and connections between cells are strengthened, all of the neuron's inputs are subsequently downscaled to maintain a stable level of net incoming signal.

Secondly, as hinted at by other researchers and directly explored in this work, networks of neurons exhibit a property present in many complex systems called sloppiness: they produce very similar behaviour under a wide range of parameters. This principle appears to operate on many scales and is highly useful (perhaps even unavoidable), as it permits variation between individuals and robustness to mutations and developmental perturbations: since many combinations of parameters result in similar operational behaviour, a disturbance of a single parameter, or even several, need not lead to dysfunction. It is this same property that permits networks of neurons to reorganize flexibly and learn without becoming unstable. As an illustrative example, consider encountering maple syrup for the first time and associating it with pancakes; thanks to sloppiness, this new link can be added without causing the network to fire excessively. As previous experimental studies have found, consistent multi-neuron activity patterns arise across organisms, despite interindividual differences in the firing profiles of single cells and in the precise values of connection strengths. Such activity patterns, as has furthermore been shown, can be maintained despite pharmacological perturbation, as neurons compensate for the perturbed parameters by adjusting others; however, not all pharmacological perturbations can be thus amended.
In the present work, it is for the first time directly demonstrated that groups of neurons are, as a rule, sloppy; their collective parameter space is mapped to reveal which parameter combinations are sensitive and which are insensitive; and it is shown that the majority of spontaneous fluctuations over time primarily affect the insensitive parameters. To demonstrate the above, hippocampal neurons of the rat were grown in culture over multi-electrode arrays and recorded from for several days. Statistical models were then fit to the activity patterns of groups of neurons to obtain a mathematically tractable description of their collective behaviour at each time point. These models provide robust fits to the data and allow for a principled sensitivity analysis using information-theoretic tools. This analysis revealed that groups of neurons tend to be governed by a few leader units. Furthermore, it appears that it was the stability of these key neurons and their connections that ensured the stability of collective firing patterns across time; the remaining units, in turn, were free to undergo plastic changes without risking destabilizing the collective behaviour. Together with what has been observed by other researchers, the findings of the present work suggest that the impressively adaptable yet robust functioning of the brain is made possible by the interplay of feedback control of a few crucial properties of neurons and the generally sloppy design of networks. It has, in fact, been hypothesised that any complex system subject to evolution is bound to rely on such a design: in order to cope with natural selection under changing environmental circumstances, it would be difficult for a system to rely on tightly controlled parameters. It might be, therefore, that all life is just, by nature, sloppy.
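
The sensitivity analysis mentioned above has a compact illustration. A minimal sketch on a classic sloppy toy model, a sum of exponential decays, rather than the spiking-network models fit in the thesis: the eigenvalue spectrum of the Fisher information matrix spans many decades, separating a few stiff (sensitive) parameter combinations from many sloppy (insensitive) ones.

```python
# A minimal sketch of FIM-based sensitivity analysis on a sloppy toy model:
# y(t) = sum_k exp(-theta_k * t), observed at times t with unit-variance
# Gaussian noise, so the FIM is J'J with J the Jacobian w.r.t. theta.
import numpy as np

t = np.linspace(0.1, 5.0, 50)            # observation times
theta = np.array([0.5, 1.0, 2.0, 4.0])   # decay-rate parameters
J = -t[:, None] * np.exp(-np.outer(t, theta))  # d y(t_i) / d theta_k
fim = J.T @ J
eig = np.sort(np.linalg.eigvalsh(fim))[::-1]
print("FIM eigenvalues:", eig)
print("stiff-to-sloppy spread (decades):", np.log10(eig[0] / eig[-1]))
```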

9. Optimal sensing matrices. Achanta, Hema Kumari. 1 December 2014.
Location information is of extreme importance in every walk of life, ranging from commercial applications, such as location-based advertising and location-aware next-generation communication networks (e.g., 5G), to security applications such as threat localization and E-911 calling. In indoor and dense urban environments plagued by multipath effects, non-line-of-sight (NLOS) conditions usually prevent GPS-based localization. Wireless localization using sensor networks provides a cost-effective and accurate solution to the wireless source localization problem. Certain sensor geometries, however, show significantly degraded performance even in low-noise scenarios when triangulation-based localization methods are used. This motivates the design of an optimum sensor placement scheme for better performance in the source localization process.
The optimum sensor placement is the one that optimizes the underlying Fisher information matrix (FIM). This thesis presents a class of canonical optimum sensor placements that produce the optimum FIM for N-dimensional source localization (N ≥ 2) in the case where the source location has a radially symmetric probability density function within an N-dimensional sphere and the sensors all lie on or outside the surface of a concentric outer N-dimensional sphere. While the canonical solution designed for the 2D problem represents optimum spherical codes, the study of the three- and higher-dimensional designs provides great insight into the design of measurement matrices with equal-norm columns that have the smallest possible condition number. Such matrices are of importance in compressed-sensing applications.
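
Why geometry matters can be seen in a minimal 2D sketch (illustrative, not the thesis's canonical construction): for range-based localization the FIM is proportional to the sum of outer products of the source-to-sensor unit vectors, so equally spaced sensors on the circle give a perfectly conditioned FIM while clustered sensors give a nearly singular one.

```python
# A minimal sketch: FIM conditioning for sensors on a circle around the
# source, comparing equally spaced and clustered placements.
import numpy as np

def fim_cond(angles):
    u = np.column_stack([np.cos(angles), np.sin(angles)])
    return np.linalg.cond(u.T @ u)       # FIM ~ sum of u_i u_i'

equi = np.linspace(0, 2 * np.pi, 6, endpoint=False)   # equally spaced
clustered = np.linspace(0.0, 0.5, 6)                  # bunched together
print("condition number, equally spaced:", fim_cond(equi))
print("condition number, clustered:     ", fim_cond(clustered))
```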
This thesis also presents an optimum sensing matrix design for energy-efficient source localization in 2D. Specifically, the results address the worst-case scenario, in which the minimum number of sensors is active in the network. We also propose a distributed control law that guides the motion of the sensors on the circumference of the outer circle so that they achieve the optimum placement with minimum communication overhead.
The design of equal-norm-column sensing matrices has a variety of applications beyond optimum sensor placement for N-dimensional source localization. One such application is Fourier analysis in magnetic resonance imaging (MRI). Depending on the method used to acquire the MR image, one can choose an appropriate transform domain that renders the MR image sparse and hence compressible; such domains include the wavelet transform and the Fourier transform. The inherent sparsity of MR images in an appropriately chosen transform domain motivates another objective of this thesis: a method for designing a compressive sensing measurement matrix by choosing a subset of rows from the discrete Fourier transform (DFT) matrix. The design criterion is the spark of the matrix, defined as the smallest number of linearly dependent columns; the objective is to select a subset of rows from the DFT matrix so as to maximize the spark. The design procedure leads to an interesting study of coprimality conditions relating the chosen row indices to the size of the DFT matrix.
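
A brute-force check of the criterion is easy to sketch (illustrative, not the thesis's design procedure). For prime N, a known result (Chebotarev's theorem on the DFT matrix) guarantees that every row choice yields the maximum possible spark m + 1.

```python
# A minimal sketch: the spark of a row-subset of the DFT matrix, computed
# by brute force. Larger spark is better for compressed sensing.
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    m, n = A.shape
    for k in range(1, m + 2):            # any m+1 columns are dependent,
        for cols in combinations(range(n), k):  # so this always returns
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k                 # k dependent columns found

N = 7                                    # prime size: full spark expected
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix
rows = [0, 2, 3]                         # an illustrative row selection
print("spark =", spark(F[rows, :]))      # prints m + 1 = 4
```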

10. Neural Networks and the Natural Gradient. Bastian, Michael R. 1 May 2010.
Neural network training algorithms have always suffered from the problem of local minima. The advent of natural gradient algorithms promised to overcome this shortcoming by finding better local minima. However, they require additional training parameters and computational overhead. By using a new formulation of the natural gradient, an algorithm is described that uses less memory and processing time than previous algorithms while achieving comparable performance.
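
A generic natural gradient step, not the thesis's new formulation, can be sketched as preconditioning the ordinary gradient with the inverse Fisher information matrix, here estimated from outer products of per-example gradients for a small logistic regression.

```python
# A minimal sketch of damped natural gradient descent: w <- w - eta F^{-1} g,
# with F the (damped) empirical Fisher information matrix.
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = (1 / (1 + np.exp(-X @ w_true)) > rng.uniform(size=n)).astype(float)

w = np.zeros(d)
eta, damping = 0.5, 1e-3
for _ in range(100):
    p = 1 / (1 + np.exp(-X @ w))
    G = (p - y)[:, None] * X                 # per-example gradients
    grad = G.mean(axis=0)
    F = G.T @ G / n + damping * np.eye(d)    # empirical FIM, damped
    w -= eta * np.linalg.solve(F, grad)      # natural gradient step
print("estimate:", w, " truth:", w_true)
```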