1.
Correlated optimum design with parametrized covariance function. Justification of the Fisher information matrix and of the method of virtual noise. Pazman, Andrej. January 2004.
We consider observations of a random field (or a random process), which is modeled by a nonlinear regression with a parametrized mean (or trend) and a parametrized covariance function. In the first part we show that under the assumption that the errors are normal with small variances, even when the number of observations is small, the ML estimators of both parameters are approximately unbiased and uncorrelated, with variances given by the inverse of the Fisher information matrix. In the second part we extend the result of Pazman & Müller (2001) to the case of a parametrized covariance function; namely, we prove that the optimum designs with and without the presence of the virtual noise are identical. This in principle justifies the use of the method of virtual noise as a computational device in this case as well. (authors' abstract) / Series: Research Report Series / Department of Statistics and Mathematics
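For reference, the standard block form of the Fisher information matrix for a Gaussian nonlinear regression with parametrized mean and covariance (a generic form consistent with, but not copied from, the paper's setting) is:

```latex
% Model: y ~ N(eta(beta), Sigma(theta)); J = d eta / d beta is the
% Jacobian of the mean. The information matrix is block diagonal:
M(\beta, \theta) =
\begin{pmatrix}
J^{\top} \Sigma(\theta)^{-1} J & 0 \\
0 & M_{\theta\theta}
\end{pmatrix},
\qquad
\left[ M_{\theta\theta} \right]_{ij}
  = \frac{1}{2} \operatorname{tr}\!\left(
      \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_i}
      \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_j} \right).
```

The paper's first part shows that, with normal errors of small variance, the inverse of this matrix approximates the covariance of the ML estimators even for small samples.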
2.
Fisher Information in X-ray/Gamma-ray Imaging Instrumentation Design. Salcin, Esen. January 2015.
Signal formation in a photon-counting x-ray/gamma-ray imaging detector is a complex process resulting in detector signals governed by multiple random effects. Recovering the maximum possible information about event attributes of interest requires a systematic collection of calibration data and analysis grounded in estimation theory. In this context, a likelihood model describes the connection between the observed signals and the event attributes. A quantitative measure of how well the measured signals can be used to produce an estimate of the parameters is given by Fisher information analysis. In this work, we demonstrate several applications of the Fisher information matrix (FIM) as a powerful and practical tool for investigating and optimizing potential next-generation x-ray/gamma-ray detector designs, with an emphasis on medical-imaging applications. Using the FIM as a design tool means exploring the physical detector design choices that are related to the FIM through the likelihood function, determining how they are interrelated, and establishing whether any of them can be modified to yield or retain higher values of Fisher information. We begin by testing these ideas on a new type of semiconductor detector, a cadmium telluride (CdTe) detector with double-sided-strip geometry developed by our collaborators at the Japan Aerospace Exploration Agency (JAXA). The statistical properties of the detector signals as a function of 3D interaction position (x, y, z) are presented both as mathematical expressions and as experimental data from measurements using synchrotron radiation at the Advanced Photon Source at Argonne National Laboratory. We show the computation of the FIM for evaluating positioning performance and discuss how various detector parameters identified as affecting the FIM can be used in detector optimization. Next, we show the application of FIM analysis to a detector system based on multi-anode photomultiplier tubes coupled to a monolithic scintillator, in the design of smart electronic read-out strategies. We conclude by arguing that a detector system is expected to perform best when the hardware is optimized jointly with the estimation algorithm (simply referred to as the "software" in this context) that will be used with it. The results of this work lead to the idea of a detector development approach where the detector hardware platform is developed concurrently with the software and firmware in order to achieve optimal performance.
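As a sketch of what such a FIM computation for positioning performance can look like, the snippet below assumes a generic Gaussian signal model with a hypothetical strip-response function; it illustrates the F = J^T C^{-1} J calculation, not the thesis's actual detector likelihood.

```python
import numpy as np

def fim_gaussian_signals(mean_fn, x, cov, eps=1e-6):
    """Fisher information for parameters x (e.g. interaction position)
    when the detector signals are modeled as Gaussian with mean mean_fn(x)
    and fixed covariance cov: F = J^T C^{-1} J."""
    x = np.asarray(x, dtype=float)
    mu0 = np.asarray(mean_fn(x))
    # Numerical Jacobian of the mean signals w.r.t. the parameters.
    J = np.empty((mu0.size, x.size))
    for k in range(x.size):
        dx = np.zeros_like(x)
        dx[k] = eps
        J[:, k] = (np.asarray(mean_fn(x + dx)) - np.asarray(mean_fn(x - dx))) / (2 * eps)
    Cinv = np.linalg.inv(cov)
    return J.T @ Cinv @ J

# Hypothetical mean-signal model: four electrodes whose signal amplitudes
# fall off with distance from the 2-D interaction point.
strips = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mean_fn = lambda p: 1.0 / (1.0 + np.sum((strips - p) ** 2, axis=1))

F = fim_gaussian_signals(mean_fn, x=[0.4, 0.6], cov=0.01 * np.eye(4))
crlb = np.linalg.inv(F)            # lower bound on position-estimate covariance
print(np.sqrt(np.diag(crlb)))      # best achievable RMS position error per axis
```

Design parameters that change the mean-signal model or the noise covariance (strip pitch, electronics noise, charge sharing) show up directly in `F`, which is what makes the FIM usable as a design-comparison metric.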
3.
Analysing the information contributions and anatomical arrangement of neurons in population codes. Yarrow, Stuart James. January 2015.
Population coding—the transmission of information by the combined activity of many neurons—is a feature of many neural systems. Identifying the role played by individual neurons within a population code is vital for the understanding of neural codes. In this thesis I examine which stimuli are best encoded by a given neuron within a population and how this depends on the informational measure used, on commonly measured neuronal properties, and on the population size and the spacing between stimuli. I also show how correlative measures of topography can be used to test for significant topography in the anatomical arrangement of arbitrary neuronal properties.

The neurons involved in a population code are generally clustered together in one region of the brain, and moreover their response selectivity is often reflected in their anatomical arrangement within that region. Although such topographic maps are an often-encountered feature in the brains of many species, there are no standard, objective procedures for quantifying topography. Topography in neural maps is typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective statistical test for detecting topography would be advantageous. To address these issues, I assess seven measures by quantifying topography in simulated neural maps, and show that all but one of these are effective at detecting statistically significant topography even in weakly topographic maps.

The precision of the neural code is commonly investigated using two different families of statistical measures: (i) Shannon mutual information and derived quantities when investigating very small populations of neurons, and (ii) Fisher information when studying large populations. The Fisher information always predicts that neurons convey most information about stimuli coinciding with the steepest regions of the tuning curve, but it is known that information-theoretic measures can give very different predictions. Using a Monte Carlo approach to compute a stimulus-specific decomposition of the mutual information (the stimulus-specific information, or SSI) for populations up to hundreds of neurons in size, I address the following questions: (i) Under what conditions can Fisher information accurately predict the information transmitted by a neuron within a population code? (ii) What are the effects of the level of trial-to-trial variability (noise), correlations in the noise, and population size on the best-encoded stimulus? (iii) How does the type of task in a behavioural experiment (i.e. fine and coarse discrimination, classification) affect the best-encoded stimulus? I show that, for both unimodal and monotonic tuning curves, the shape of the SSI depends upon trial-to-trial variability, population size and stimulus spacing, in addition to the shape of the tuning curve. It is therefore important to take these factors into account when assessing which stimuli a neuron is informative about; just knowing the tuning curve may not be sufficient.
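The Fisher information referred to here has a simple closed form for a population of independent Poisson neurons, J(s) = Σ_i f_i'(s)² / f_i(s), where f_i is neuron i's tuning curve. The sketch below uses assumed Gaussian tuning curves (not the thesis's fitted models) to show where this quantity is large:

```python
import numpy as np

def fisher_info(s, centers, width=1.0, gain=10.0, baseline=0.5):
    """Fisher information of independent Poisson neurons with Gaussian
    tuning curves: J(s) = sum_i f_i'(s)^2 / f_i(s)."""
    f = baseline + gain * np.exp(-0.5 * ((s - centers) / width) ** 2)
    fprime = -gain * ((s - centers) / width ** 2) * np.exp(-0.5 * ((s - centers) / width) ** 2)
    return np.sum(fprime ** 2 / f)

centers = np.linspace(-5, 5, 21)            # preferred stimuli of 21 neurons
stimuli = np.linspace(-4, 4, 81)
J = [fisher_info(s, centers) for s in stimuli]
# For a single neuron, its contribution peaks on the flanks of the tuning
# curve (steepest slope), not at the preferred stimulus -- the Fisher
# prediction that the SSI analysis in the thesis puts to the test.
```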
4.
Protocol optimization of the filter exchange imaging (FEXI) sequence and implications on group sizes: a test-retest study. Lampinen, Björn. January 2012.
Diffusion weighted imaging (DWI) is a branch within the field of magnetic resonance imaging (MRI) that relies on the diffusion of water molecules for its contrast. Its clinical applications include the early diagnosis of ischemic stroke and mapping of the nerve tracts of the brain. The recent development of filter exchange imaging (FEXI) and the introduction of the apparent exchange rate (AXR) present a new DWI-based technique that uses the exchange of water between compartments as contrast. FEXI could offer new clinical possibilities in diagnosis, differentiation and treatment follow-up of conditions involving edema or altered membrane permeability, such as tumors, cerebral edema, multiple sclerosis and stroke. Necessary steps in determining the potential of AXR as a new biomarker include running comparative studies between controls and different patient groups, looking for conditions showing large AXR changes. However, before designing such studies, the experimental protocol of FEXI should be optimized to minimize the experimental variance. Such optimization would improve the data quality, shorten the scan time and keep the required study group sizes smaller.

Here, optimization was done using an active imaging approach and the Cramér-Rao lower bound (CRLB) of Fisher information theory. Three optimal protocols were obtained, each specialized for a different tissue type, and the CRLB method was verified by bootstrapping. A test-retest study of 18 volunteers was conducted in order to investigate the reproducibility of the AXR as measured by one of the protocols, adapted for the scanner. Required group sizes were calculated based on both the CRLB and the variability of the test-retest data, as well as choices in data analysis such as region of interest (ROI) size.

The results of this study are new protocols offering a reduction in the coefficient of variation (CV) of around 30% compared to previously presented protocols. Calculations of required group sizes showed that these protocols can be used to decide whether a patient group, in a given brain region, has large alterations of AXR using as few as four individuals per group, on average, while still keeping the scan time below 15 minutes. The test-retest study, however, showed a larger-than-expected variability and uncovered artifact-like changes in AXR between measurements. Reproducibility of AXR values ranged from modest to acceptable, depending on the brain region. Group size estimations based on the collected data showed that it is still possible to detect AXR differences larger than 50% in most brain regions using fewer than ten individuals. Limitations of this study include an imprecise knowledge of model priors and a possibly suboptimal modeling of the bias caused by weak signals. Future studies on FEXI methodology could improve the method further by addressing these matters and possibly also the unknown source of variability. For minimal variability, comparative studies of AXR in patient groups could use a protocol among those presented here, while choosing large ROI sizes and calculating the AXR based on averaged signals.
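To illustrate the CRLB machinery behind such protocol optimization, the sketch below computes the Fisher information for the AXR under a simplified FEXI signal model, S(t_m, b) = S0·exp(−b·ADC(t_m)) with ADC(t_m) = ADC_eq·(1 − σ·exp(−AXR·t_m)). The model form, parameter values, sampling scheme and Gaussian-noise assumption are all illustrative, not the thesis's exact implementation:

```python
import numpy as np

def fexi_signal(theta, tm, b):
    """Simplified FEXI model (assumed form): theta = (S0, ADC_eq, sigma, AXR)."""
    S0, adc_eq, sigma, axr = theta
    adc = adc_eq * (1.0 - sigma * np.exp(-axr * tm))
    return S0 * np.exp(-b * adc)

def crlb(theta, tm, b, noise_sd=0.01, eps=1e-8):
    """CRLB for all parameters under i.i.d. Gaussian noise: F = J^T J / sd^2."""
    theta = np.asarray(theta, dtype=float)
    J = np.empty((tm.size, theta.size))
    for k in range(theta.size):
        d = np.zeros_like(theta)
        d[k] = eps
        J[:, k] = (fexi_signal(theta + d, tm, b) - fexi_signal(theta - d, tm, b)) / (2 * eps)
    F = J.T @ J / noise_sd ** 2
    return np.linalg.inv(F)

# Candidate protocol: mixing times tm (s) and diffusion weightings b (s/mm^2);
# the values are chosen purely for illustration.
tm = np.tile([0.05, 0.2, 0.4], 4)
b = np.repeat([0.0, 250.0, 500.0, 1000.0], 3)
theta = (1.0, 1.0e-3, 0.3, 1.0)   # S0, ADC_eq (mm^2/s), sigma, AXR (1/s)

cov = crlb(theta, np.asarray(tm, float), np.asarray(b, float))
print("CV lower bound for AXR:", np.sqrt(cov[3, 3]) / theta[3])
```

Protocol optimization then amounts to searching over (tm, b) combinations, subject to a scan-time budget, for the design minimizing this CV bound.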
5.
Faster Optimal Design Calculations for Practical Applications. Strömberg, Eric. January 2011.
PopED is a software package developed by the Pharmacometrics Research Group at the Department of Pharmaceutical Biosciences, Uppsala University, written mainly in MATLAB. It uses pharmacometric population models to describe the pharmacokinetics and pharmacodynamics of a drug and then estimates an optimal design of a trial for that drug. With optimization calculations on average taking a very long time, it was desirable to increase the calculation speed of the software by parallelizing the serial calculation script. The goal of this project was to investigate different methods of parallelization and implement the method which seemed best for the circumstances. The parallelization was implemented in C/C++ using Open MPI and tested on the UPPMAX Kalkyl high-performance computing cluster. Some alterations were made to the original MATLAB script to adapt PopED to the new parallel code. The methods which were parallelized included the Random Search and Line Search algorithms. The testing showed a significant performance increase, with effectiveness per active core ranging from 55% to 89% depending on the model and the number of evaluated designs.
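Random Search parallelizes naturally because each candidate design is evaluated independently. The thesis's implementation was C/C++ with Open MPI; the sketch below shows the same idea in Python with mpi4py, with a hypothetical objective function standing in for PopED's design criterion:

```python
# A minimal sketch of parallel Random Search over candidate designs.
# Run with e.g.: mpirun -np 4 python random_search.py
import numpy as np
from mpi4py import MPI

def objective(design):
    # Placeholder for the design criterion (e.g. log det of the FIM).
    return -np.sum((design - 0.5) ** 2)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_designs, dim = 10_000, 8
rng = np.random.default_rng(seed=rank)      # independent stream per rank
local_n = n_designs // size                 # split the work evenly

best_val, best_design = -np.inf, None
for _ in range(local_n):
    d = rng.uniform(0.0, 1.0, size=dim)     # random candidate design
    v = objective(d)
    if v > best_val:
        best_val, best_design = v, d

# Gather each rank's local optimum and let rank 0 pick the overall best.
results = comm.gather((best_val, best_design), root=0)
if rank == 0:
    val, design = max(results, key=lambda r: r[0])
    print("best criterion value:", val)
```

Since the only communication is the final gather, the efficiency per core stays high, consistent with the 55-89% figures reported above.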
6.
On the Fisher Information of Discretized Data. Pötzelberger, Klaus; Felsenstein, Klaus. January 1991.
In this paper we study the loss of Fisher information in approximating a continuous distribution by a multinomial distribution coming from a partition of the sample space into a finite number of intervals. We describe and characterize the Fisher information as a function of the partition chosen, especially for location parameters. For a small number of intervals the consequences of the choice are demonstrated by instructive examples. For an increasing number of intervals we give the asymptotically optimal partition. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
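For a location family, the Fisher information retained by a partition {I_1, …, I_k} with cell probabilities p_i(θ) is I_part(θ) = Σ_i p_i'(θ)² / p_i(θ). A minimal numerical sketch for the standard normal location model (where the full-data information is 1) follows; the cut points are illustrative:

```python
import numpy as np
from scipy.stats import norm

def discretized_info(cuts, theta=0.0):
    """Fisher information about a location parameter theta retained by
    partitioning the real line at the given cut points (standard normal
    model). For a cell (a, b): p = Phi(b - theta) - Phi(a - theta) and
    dp/dtheta = phi(a - theta) - phi(b - theta)."""
    edges = np.concatenate(([-np.inf], np.sort(cuts), [np.inf]))
    a, b = edges[:-1] - theta, edges[1:] - theta
    p = norm.cdf(b) - norm.cdf(a)
    dp = norm.pdf(a) - norm.pdf(b)
    return np.sum(dp ** 2 / p)

# Full-data Fisher information is 1; see how much a few intervals retain.
print(discretized_info([0.0]))                   # 2 cells: ~0.64 (= 2/pi)
print(discretized_info([-0.98, 0.0, 0.98]))      # 4 cells: ~0.88
print(discretized_info(np.linspace(-3, 3, 15)))  # 16 cells: close to 1
```

This makes the paper's theme concrete: even a handful of well-placed intervals recovers most of the information, and the loss vanishes as the partition refines.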
7.
Relay Selection for Multiple Source Communications and Localization. Perez-Ramirez, Javier.
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / Relay selection for optimal communication as well as multiple-source localization is studied. We consider the use of dual-role nodes that can work both as relays and as anchors. The dual-role nodes and multiple sources are placed at fixed locations in a two-dimensional space. Each dual-role node estimates its distance to all the sources within its radius of action. Dual-role node selection is then obtained by considering all the measured distances and the total SNR of all source-to-destination channels, for optimal communication and multiple-source localization. Bit error rate performance as well as mean squared error of the proposed optimal dual-role node selection scheme are presented.
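For the localization half of such a design, the Fisher information about a source position from independent Gaussian range measurements to a set of anchors has a standard closed form, J(x) = Σ_i u_i u_iᵀ / σ_i², with u_i the unit vector from anchor i to the source. A small sketch (the anchor layout and noise level are illustrative, not from the paper):

```python
import numpy as np

def localization_fim(source, anchors, range_sd):
    """FIM for a 2-D source position from range measurements with
    independent Gaussian errors: J = sum_i u_i u_i^T / sigma_i^2."""
    J = np.zeros((2, 2))
    for a, sd in zip(anchors, range_sd):
        diff = source - a
        u = diff / np.linalg.norm(diff)   # unit vector anchor -> source
        J += np.outer(u, u) / sd ** 2
    return J

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 4.0])

J = localization_fim(source, anchors, range_sd=[0.5] * 4)
crlb = np.linalg.inv(J)
print("RMS position error bound:", np.sqrt(np.trace(crlb)))
```

The geometry dependence of J is what couples anchor (dual-role node) selection to localization accuracy: poorly spread anchors yield a near-singular J and a large error bound.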
8.
An Opportunistic Relaying Scheme for Optimal Communications and Source Localization. Perez-Ramirez, Javier.
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California / The selection of relay nodes (RNs) for optimal communication and source location estimation is studied. The RNs are randomly placed at fixed and known locations over a geographical area. A mobile source senses and collects data at various locations over the area and transmits the data to a destination node with the help of the RNs. The destination node needs to collect not only the sensed data but also the location of the source where the data were collected; hence, both high-quality data collection and the correct location of the source are needed. Using the measured distances between the relays and the source, the destination estimates the location of the source. The selected RNs must be optimal for joint communication and source location estimation. We show in this paper how this joint optimization can be achieved. For practical decentralized selection, an opportunistic RN selection algorithm is used. Bit error rate performance as well as mean squared error in location estimation are presented and compared to the optimal relay selection results.
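One way to picture the joint optimization is to scalarize the two objectives, combining a communication metric (total link SNR) with a localization metric (log-determinant of the range-based FIM), and search over relay subsets. The sketch below does this by brute force as a reference point; the weighting and the exhaustive search are illustrative, not the paper's opportunistic algorithm:

```python
import numpy as np
from itertools import combinations

def loc_fim(source, relays, sd=0.5):
    """Range-measurement FIM: J = sum_i u_i u_i^T / sd^2."""
    J = np.zeros((2, 2))
    for r in relays:
        u = (source - r) / np.linalg.norm(source - r)
        J += np.outer(u, u) / sd ** 2
    return J

def joint_score(source, relays, snrs, weight=1.0):
    # Scalarized joint objective: total SNR plus localization quality
    # (log det of the FIM). The weight is an illustrative design choice.
    sign, logdet = np.linalg.slogdet(loc_fim(source, relays))
    return np.sum(snrs) + weight * (logdet if sign > 0 else -np.inf)

rng = np.random.default_rng(0)
relays = rng.uniform(0, 10, size=(8, 2))   # candidate relay positions
snrs = rng.uniform(5, 20, size=8)          # per-relay link SNRs
source = np.array([4.0, 6.0])

# Exhaustive search over 3-relay subsets (feasible at this scale); a
# decentralized opportunistic rule would approximate this selection.
best = max(combinations(range(8), 3),
           key=lambda idx: joint_score(source, relays[list(idx)], snrs[list(idx)]))
print("selected relays:", best)
```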
9.
Model-based analysis of stability in networks of neurons. Panas, Dagmara. January 2017.
Neurons, the building blocks of the brain, are an astonishingly capable type of cell. Collectively they can store, manipulate and retrieve biologically important information, allowing animals to learn and adapt to environmental changes. This universal adaptability is widely believed to be due to plasticity: the readiness of neurons to manipulate and adjust their intrinsic properties and the strengths of their connections to other cells. It is through such modifications that associations between neurons can be made, giving rise to memory representations; for example, linking a neuron responding to the smell of pancakes with neurons encoding sweet taste and general gustatory pleasure. However, this malleability inherent to neuronal cells poses a dilemma from the point of view of stability: how is the brain able to maintain stable operation while in a state of constant flux? First of all, won't purely technical problems occur, akin to short-circuiting or runaway activity? And secondly, if the neurons are so easily plastic and changeable, how can they provide a reliable description of the environment? Of course, evidence abounds to testify to the robustness of brains, both from everyday experience and from scientific experiments. How does this robustness come about? Firstly, many feedback control mechanisms are in place to ensure that neurons do not enter wild regimes of behaviour. These mechanisms are collectively known as homeostatic plasticity, since they ensure functional homeostasis through plastic changes. One well-known example is synaptic scaling, a type of plasticity ensuring that a single neuron does not get overexcited by its inputs: whenever learning occurs and connections between cells get strengthened, all of the neuron's inputs subsequently get downscaled to maintain a stable level of net incoming signals. And secondly, as hinted at by other researchers and directly explored in this work, networks of neurons exhibit a property present in many complex systems called sloppiness. That is, they produce very similar behaviour under a wide range of parameters. This principle appears to operate on many scales and is highly useful (perhaps even unavoidable), as it permits variation between individuals and robustness to mutations and developmental perturbations: since there are many combinations of parameters resulting in similar operational behaviour, a disturbance of a single, or even several, parameters need not lead to dysfunction. It is also that same property that permits networks of neurons to flexibly reorganize and learn without becoming unstable. As an illustrative example, consider encountering maple syrup for the first time and associating it with pancakes; thanks to sloppiness, this new link can be added without causing the network to fire excessively. As has been found in previous experimental studies, consistent multi-neuron activity patterns arise across organisms, despite interindividual differences in the firing profiles of single cells and in the precise values of connection strengths. Such activity patterns, as has furthermore been shown, can be maintained despite pharmacological perturbation, as neurons compensate for the perturbed parameters by adjusting others; however, not all pharmacological perturbations can be thus amended.
In the present work, it is for the first time directly demonstrated that groups of neurons are, as a rule, sloppy; their collective parameter space is mapped to reveal which parameter combinations are sensitive and which are insensitive; and it is shown that the majority of spontaneous fluctuations over time primarily affect the insensitive parameters. In order to demonstrate the above, hippocampal neurons of the rat were grown in culture over multi-electrode arrays and recorded from for several days. Subsequently, statistical models were fit to the activity patterns of groups of neurons to obtain a mathematically tractable description of their collective behaviour at each time point. These models provide robust fits to the data and allow for a principled sensitivity analysis with the use of information-theoretic tools. This analysis revealed that groups of neurons tend to be governed by a few leader units. Furthermore, it appears that it was the stability of these key neurons and their connections that ensured the stability of collective firing patterns across time. The remaining units, in turn, were free to undergo plastic changes without risking destabilizing the collective behaviour. Together with what has been observed by other researchers, the findings of the present work suggest that the impressively adaptable yet robust functioning of the brain is made possible by the interplay of feedback control of a few crucial properties of neurons and the generally sloppy design of networks. It has, in fact, been hypothesised that any complex system subject to evolution is bound to rely on such design: in order to cope with natural selection under changing environmental circumstances, it would be difficult for a system to rely on tightly controlled parameters. It might be, therefore, that all life is just, by nature, sloppy.
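The sensitivity analysis described above can be illustrated with a generic sketch: estimate a model's Fisher information matrix numerically and read off stiff (sensitive) versus sloppy (insensitive) parameter combinations from its eigendecomposition. The toy model below is a stand-in, not the maximum-entropy models fitted in the thesis:

```python
import numpy as np

def fim_from_score(score, theta, n_samples, sampler):
    """Empirical Fisher information: average outer product of the
    per-sample score (gradient of the log-likelihood) at theta."""
    F = np.zeros((len(theta), len(theta)))
    for _ in range(n_samples):
        g = score(sampler(), theta)
        F += np.outer(g, g)
    return F / n_samples

# Toy model: x ~ N(mu, exp(logsd)^2), theta = (mu, logsd). The score has
# a closed form here; for the thesis's models it would come from the
# fitted distribution over spike patterns instead.
def score(x, theta):
    mu, logsd = theta
    sd = np.exp(logsd)
    return np.array([(x - mu) / sd ** 2, ((x - mu) ** 2) / sd ** 2 - 1.0])

rng = np.random.default_rng(1)
theta = np.array([0.0, np.log(2.0)])
F = fim_from_score(score, theta, 5000, lambda: rng.normal(0.0, 2.0))

eigvals, eigvecs = np.linalg.eigh(F)
# Large eigenvalues mark stiff directions (collective behaviour is
# sensitive to them); small eigenvalues mark sloppy directions, which
# can drift over time without much change in behaviour.
print("FIM eigenvalues:", eigvals)
```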
10.
On the Design of Methods to Estimate Network Characteristics. Ribeiro, Bruno F. May 2010.
Social and computer networks permeate our lives. Large networks, such as the Internet, the World Wide Web (WWW), and wireless smartphones, have indisputable economic and social importance. These networks have non-trivial topological features, i.e., features that do not occur in simple networks such as lattices or random networks. Estimating characteristics of these networks from incomplete (sampled) data is a challenging task. This thesis provides two frameworks within which common measurement tasks are analyzed and new, principled measurement methods are designed. The first framework focuses on sampling directly observable network characteristics; it is applied to design a novel multidimensional random walk to efficiently sample loosely connected networks. The second framework focuses on the design of measurement methods to estimate indirectly observable network characteristics; it is applied to design two new, principled estimators of flow size distributions over Internet routers using (1) randomly sampled IP packets and (2) a data stream algorithm.
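A minimal sketch of the multidimensional (multi-walker) random-walk idea follows: m coupled walkers explore the graph, the next walker to move is chosen with probability proportional to the degree of its current node, and samples are reweighted by 1/degree to correct the walk's bias toward high-degree nodes. The graph and the estimated quantity are illustrative, and this is a sketch in the spirit of the thesis's sampler rather than its exact algorithm:

```python
import random

def multidim_random_walk(adj, m=4, steps=2000, seed=0):
    """m coupled walkers on an undirected graph {node: [neighbors]}.
    Each step, one walker is chosen with probability proportional to the
    degree of its current node, then moves to a uniform random neighbor.
    Returns the sequence of visited nodes (excluding start positions)."""
    rng = random.Random(seed)
    walkers = rng.sample(list(adj), m)
    visited = []
    for _ in range(steps):
        degs = [len(adj[v]) for v in walkers]
        i = rng.choices(range(m), weights=degs)[0]  # degree-proportional pick
        walkers[i] = rng.choice(adj[walkers[i]])
        visited.append(walkers[i])
    return visited

# Toy graph: two dense clusters joined by one bridge ("loosely connected").
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
sample = multidim_random_walk(adj, m=2)

# The walk visits nodes roughly in proportion to degree, so node-level
# averages are estimated with 1/degree importance weights.
w = [1.0 / len(adj[v]) for v in sample]
est = sum(wi for v, wi in zip(sample, w) if v <= 2) / sum(w)
print("estimated fraction of nodes in cluster {0,1,2}:", est)  # true value: 0.5
```

Multiple coupled walkers reduce the chance of the sampler getting trapped in one dense region, which is exactly the failure mode of a single random walk on loosely connected networks.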