New statistical methods to derive functional connectivity from multiple spike trains
Masud, Mohammad Shahed, January 2011
Analysis of functional connectivity of simultaneously recorded multiple spike trains is one of the major issues in neuroscience, yet progress on statistical methods for this analysis has been relatively slow. In this thesis two statistical techniques are presented for the analysis of functional connectivity of multiple spike trains. The first is the modified correlation grid (MCG), based on the calculation of the cross-correlation function of all possible pairs of spike trains. The second is the Cox method, based on the modulated renewal process (MRP). The original paper applying the Cox method to neuroscience data (Borisyuk et al., 1985) analysed only pairs and triplets of spike trains; the method is further developed in this thesis to support any set of simultaneously recorded spike trains. A probabilistic model, itself based on the MRP, is developed to test the Cox method; because the model and the method share a common probabilistic basis, the model is a convenient testing tool. A new technique based on pair-wise Cox analysis, the Cox metric, is presented to find groups of coupled spike trains. Another new technique, motif analysis, based on triplet-wise Cox analysis, is introduced to identify interconnections among spike trains. All these methods are applied to several sets of spike trains generated by the enhanced leaky integrate-and-fire (ELIF) model, and the results suggest that they are successful in analysing functional connectivity of simultaneously recorded multiple spike trains. The methods are also applied to experimental data recorded from cat visual cortex.
The connection matrix derived from the experimental data by the Cox method is further analysed with graph-theoretical methods.
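The pair-wise cross-correlation at the heart of a correlation-grid analysis can be sketched as follows. This is an illustrative sketch, not the thesis's MCG code; the spike times, bin width, and lag window are made-up values.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_size=0.005, max_lag=0.1):
    """Histogram of time lags (b - a) between all spike pairs within max_lag.

    A flat correlogram suggests independence; a clear peak at some lag
    suggests a functional connection between the two trains.
    """
    lags = []
    for t in spikes_a:
        near = spikes_b[(spikes_b >= t - max_lag) & (spikes_b <= t + max_lag)]
        lags.extend(near - t)
    edges = np.arange(-max_lag, max_lag + bin_size, bin_size)
    counts, _ = np.histogram(lags, bins=edges)
    return counts, edges

# Synthetic pair where train b tends to fire about 10 ms after train a.
rng = np.random.default_rng(0)
a = np.sort(rng.uniform(0, 10, 200))
b = np.sort(a + 0.010 + rng.normal(0, 0.001, 200))

counts, edges = cross_correlogram(a, b)
i = int(np.argmax(counts))
peak_lag = 0.5 * (edges[i] + edges[i + 1])  # peak near +10 ms
```

In a correlation-grid approach this computation would be repeated for every pair of recorded trains, and a summary statistic of each correlogram arranged into a matrix.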
Perceptual Image Quality Prediction Using Region of Interest Based Reduced Reference Metrics Over Wireless Channel
R V Krishnam Raju, Kunadha Raju, January 2016
With the rapid growth of wireless communications, the demand for multimedia services is also increasing. Transmitted data suffer distortions through source encoding and transmission over error-prone channels, degrading the quality of the content, so service providers need to deliver a certain Quality of Experience (QoE) to the end user, and several methods for better QoE are being developed by network providers. Human attention focuses mainly on the Region of Interest (ROI), where distortions are perceived as more annoying than in the Background (BG). On this basis, the main aim of this thesis is an accurate prediction metric that measures image quality over the ROI and the BG independently. Reduced Reference Image Quality Assessment (RRIQA), a reduced-reference metric in which only partial information about the reference image is available, is chosen for this purpose. The quality metric is measured independently over the ROI and the BG, and the two estimates are pooled into an ROI-aware metric that predicts the Mean Opinion Score (MOS) of the image. In this thesis, the ROI-aware quality metric is used to measure the quality of distorted images generated over a wireless channel; the resulting MOS values are validated against the MOS obtained from a database [1]. The proposed image quality assessment method provides better results than the traditional approach and performs well over a wide variety of distortions. The results show that impairments in the ROI are perceived as more annoying than those in the BG.
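The final pooling step described above can be sketched as a weighted combination of the two independent scores. This is a hedged illustration only: the linear form and the weight value are assumptions, not the thesis's actual pooling rule.

```python
def roi_aware_quality(q_roi, q_bg, w_roi=0.7):
    """Pool quality scores measured independently over ROI and background.

    w_roi > 0.5 reflects the finding that ROI distortions are perceived
    as more annoying; the value 0.7 is purely illustrative.
    """
    return w_roi * q_roi + (1.0 - w_roi) * q_bg

# A heavily distorted ROI drags the pooled score down more than an
# equally distorted background does.
score_bad_roi = roi_aware_quality(q_roi=2.0, q_bg=4.0)
score_bad_bg = roi_aware_quality(q_roi=4.0, q_bg=2.0)
```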
Sparse distance metric learning
Choy, Tze Leung, January 2014
A good distance metric can improve the accuracy of a nearest-neighbour classifier. Xing et al. (2002) proposed distance metric learning to find a linear transformation of the data so that observations of different classes are better separated. For high-dimensional problems where many uninformative variables are present, it is attractive to select a sparse distance metric, both to increase predictive accuracy and to aid interpretation of the result. In this thesis, we investigate three different types of sparsity assumption for distance metric learning and show that sparse recovery is possible under each with an appropriate choice of L1-type penalty: a lasso penalty promotes a transformation matrix with many zero entries, a group lasso penalty recovers a transformation matrix with zero rows/columns, and a trace norm penalty allows us to learn a low-rank transformation matrix. The regularization allows us to consider a large number of covariates, and we apply the technique to an expanded basis set, the rule ensemble, to allow a more flexible fit. Finally, we illustrate an application of the metric learning problem via a document retrieval example and discuss how similarity-based information can be applied to learn a classifier.
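The way an L1 penalty produces a transformation matrix with many zero entries can be illustrated with its proximal (soft-thresholding) operator, the shrinkage step used inside proximal-gradient solvers. This is a generic sketch of the mechanism, not the thesis's algorithm; the matrix and penalty level are made-up values.

```python
import numpy as np

def soft_threshold(L, lam):
    """Proximal operator of the lasso penalty lam * sum_ij |L_ij|:
    shrinks every entry toward zero and sets entries with magnitude
    below lam exactly to zero, which is what makes the learned
    transformation matrix sparse."""
    return np.sign(L) * np.maximum(np.abs(L) - lam, 0.0)

L = np.array([[1.2, 0.05],
              [-0.3, 0.8]])
sparse_L = soft_threshold(L, 0.1)  # -> [[1.1, 0.0], [-0.2, 0.7]]
```

A group lasso penalty would instead apply this kind of shrinkage to whole rows or columns at once, zeroing out entire variables rather than individual entries.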
Designing for Economic Success: A 50-State Analysis of the Genuine Progress Indicator
Fox, Mairi-Jane Venesky, 01 January 2017
The use of Gross Domestic Product (GDP) as the primary measure of economic progress has arguably led to unintended consequences of environmental degradation and socially skewed outcomes. The Genuine Progress Indicator (GPI) was designed to reveal the trade-offs associated with conventional economic growth and to assess the broader impact of economic benefits and costs on sustainable human welfare. Although originally designed for use at the national scale, interest has developed in the United States in state-level uptake of the GPI to inform and guide policy. However, questions exist about the quality and legitimacy of the GPI as a composite indicator, including concerns about its underlying assumptions, the monetary weights and variables used, statistical rigor, the magnitude of data collection required, and the lack of a transparent governance mechanism for the metric. This study addresses these issues and explores the GPI through a design-thinking lens as both a design artifact and an intervention.
The leading paper in this dissertation offers the first GPI accounting for all 50 U.S. states. State GPI results are introduced and compared to Gross State Product (GSP), and an analysis of the GPI's components reveals which of them drive the differences in outcomes, including the sustainability aspects of the state-level results. The second paper investigates the quality of the GPI as a composite indicator by testing its sensitivity to numerical assumptions and to the relative magnitudes of components, with particular attention to possible unintended policy consequences of the design. The third paper addresses both efficiency (data parsimony) and effectiveness (comparison with other indicators) by analysing correlations among GPI components and with other state-level indicators such as the Gallup Well-Being Indicator, the Ecological Footprint, and the UN Human Development Index. To garner insight about possible GPI improvements, goals, and governance gaps in the informal U.S. GPI network, the final paper examines processes, outputs, and outcomes from the community of practice as revealed through a facilitated U.S. GPI workshop.
Approaching the Singularity in Gowdy Universes
Edmonds, Bartlett Douglas, Jr., 01 January 2006
It has been shown that the cosmic censorship conjecture holds for polarized Gowdy spacetimes. In the more general, unpolarized case, however, the question remains open. It is known that cylindrically symmetric dust can collapse to form a naked singularity. Since Gowdy universes comprise gravitational waves that are locally cylindrically symmetric, perhaps these waves can collapse onto a symmetry axis and create a naked singularity. It is known that in the case of cylindrical symmetry, event horizons will not form under gravitational collapse, so the formation of a singularity on the symmetry axis would be a violation of the cosmic censorship conjecture. To search for cosmic censorship violation in Gowdy spacetimes, we must have a better understanding of their singularities. It is known that far from the symmetry axes, the spacetimes are asymptotically velocity term dominated, but this property is not known to hold near the axes. In this thesis, we take the first steps toward understanding on- and near-axis behavior of Gowdy spacetimes with space-sections that have the topology of the three-sphere. Null geodesic behavior on the symmetry axes is studied, and it is found that in some cases, a photon will wrap around the universe infinitely many times on its way back toward the initial singularity.
Correlation Between Bioassessments of Macroinvertebrates and Fishes and Natural Land Cover in Virginia Coastal Plain Watersheds
Smigo, Warren Hunter, 01 January 2005
Twenty-five first- through third-order streams in the Coastal Plain of Virginia were sampled for benthic macroinvertebrates and fishes to determine whether a predictable relationship between areas of Unfragmented Natural Land Cover (UNLC) and biotic integrity could be established. I hypothesized that as the area of UNLC increased in a watershed at either the whole-catchment or riparian scale, biotic indices measuring stream water and habitat quality would increase. Biotic integrity was measured through the scores from the Coastal Plain Macroinvertebrate Index (CPMI) for benthic macroinvertebrates and the VCU Index of Biotic Integrity (IBI) for fishes. Using GIS, the percentage of UNLC at the catchment and riparian scales was calculated for each stream's watershed. Physicochemical parameters, habitat metrics and other environmental data were also analyzed to determine if relationships existed between those parameters and biotic integrity. Unfragmented Natural Land Cover ranged from 7% to 82% at the catchment scale and 10% to 96% in the riparian area. There were no significant correlations between the biological assessment scores for either the benthic macroinvertebrate or the fish communities and UNLC at either scale. Analyses of physicochemical parameters and habitat metrics did show some significant correlations between those variables and biotic metrics. Dissolved oxygen (DO) and pH were positively correlated with the CPMI, and DO was positively correlated with the IBI scores. Several habitat metrics were significantly correlated with the CPMI, including pool variability, which was positively correlated with the CPMI, and bank stability, sediment deposition, and channel flow status, which were negatively correlated with the CPMI.
The results of this study indicated that streams with unconstrained channels score significantly lower on the CPMI and have significantly lower DO concentrations than streams with constrained channels despite some streams with unconstrained channels having high percentages of UNLC in the watershed. Although there were other biotic and abiotic factors that may have introduced variability into the study, such as severe weather, beaver activity, and changing land use, it is likely that the CPMI was not an appropriate bioassessment tool for swampy Coastal Plain streams. It is therefore imperative from assessment and management perspectives for state agencies and researchers to develop appropriate bioassessment indices for Coastal Plain streams that have limiting water quality influenced by natural processes.
Metrické indexování, Multimediální explorace, PM-Strom, Cut-region / Using Metric Indexes For Effective and Efficient Multimedia Exploration
Čech, Přemysl, January 2014
The exponential growth of multimedia data challenges the effectiveness and efficiency of state-of-the-art retrieval techniques. In this thesis, we focus on browsing large datasets using exploration approaches, for cases where a query cannot be precisely expressed or where a general overview of the dataset is needed. More specifically, we study exploration scenarios utilizing the metric access method PM-Tree, which natively creates a hierarchy of nested metric regions. We enhance the PM-Tree for exploration purposes and define different traversing and querying strategies. Further, we investigate range multi-query approaches (range queries defined by multiple objects). We propose a new, effective and efficient cut-region range query and compare it with other approaches for efficient multi-query processing. Finally, we implement all new methods and strategies for the PM-Tree in a multimedia exploration framework and test the browsing algorithms on a game-like demo application.
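The pruning idea behind metric access methods such as the PM-Tree can be sketched with a single pivot. This is an illustrative sketch of triangle-inequality filtering, not the enhanced PM-Tree of the thesis; in a real index the pivot-to-object distances would be precomputed and the metric regions nested hierarchically.

```python
import math

def range_query(points, query, radius, dist, pivot=None):
    """Metric range query: return all points within `radius` of `query`.

    With a pivot p, the triangle inequality gives the lower bound
    |d(q, p) - d(p, x)| <= d(q, x), so a candidate x can be discarded
    before the (potentially expensive) distance d(q, x) is computed.
    """
    if pivot is None:
        return [x for x in points if dist(query, x) <= radius]
    d_qp = dist(query, pivot)
    out = []
    for x in points:
        lower = abs(d_qp - dist(pivot, x))  # cheap lower bound on d(q, x)
        if lower <= radius and dist(query, x) <= radius:
            out.append(x)
    return out

pts = [(0, 0), (1, 0), (3, 4), (10, 10)]
hits = range_query(pts, (0, 0), 1.5, math.dist, pivot=(3, 4))  # [(0, 0), (1, 0)]
```

A multi-query (a range query defined by several objects) can be processed by running such a filtered query per object and merging the results, which is where shared pruning across the query objects pays off.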
Harnack's inequality in spaces of homogeneous type
Silwal, Sharad Deep, January 1900
Doctor of Philosophy / Department of Mathematics / Diego Maldonado
Originally introduced in 1887 by Carl Gustav Axel Harnack [36] in the context of harmonic functions in R^2, the so-called Harnack inequality has since been established for solutions to a wide variety of different partial differential equations (PDEs) by mathematicians at different times of its historical development. Among them, Moser's iterative scheme [47-49] and Krylov-Safonov's probabilistic method [43, 44] stand out as pioneering theories, both in terms of their originality and their impact on the study of regularity of solutions to PDEs. Caffarelli's work [12] in 1989 greatly simplified Krylov-Safonov's theory and established Harnack's inequality in the context of fully non-linear elliptic PDEs. In this scenario, Caffarelli and Gutiérrez's study of the linearized Monge-Ampère equation [15, 16] in 2002-2003 served as a motivation for axiomatizations of Krylov-Safonov-Caffarelli theory [3, 25, 57]. The main work in this dissertation is a new axiomatization of Krylov-Safonov-Caffarelli theory.
Our axiomatic approach to Harnack's inequality in spaces of homogeneous type has some distinctive features. It sheds more light on the role of the so-called critical density property, a property which is at the heart of the techniques developed by Krylov and Safonov. Our structural assumptions become more natural, and thus our theory better suited, in the context of variational PDEs. We base our method on the theory of Muckenhoupt's A_p weights. The dissertation also gives an application of our axiomatic approach to Harnack's inequality in the context of infinite graphs, providing an alternate proof of Harnack's inequality for harmonic functions on graphs originally proved in [21].
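For readers unfamiliar with the inequality being generalized, the classical statement for harmonic functions can be written as follows (a standard textbook formulation, not quoted from the dissertation):

```latex
% Classical Harnack inequality: if u is nonnegative and harmonic on a
% domain containing the ball B_{2r}(x_0), then there is a constant
% C = C(n), depending only on the dimension n, such that
\[
  \sup_{B_r(x_0)} u \;\le\; C(n)\, \inf_{B_r(x_0)} u .
\]
```

The axiomatic theories mentioned above replace "harmonic" by solutions of broad classes of PDEs, and Euclidean balls by the quasi-metric balls of a space of homogeneous type.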
Decoding of block and convolutional codes in rank metric / Décodage des codes en bloc et des codes convolutifs en métrique rang
Wachter-Zeh, Antonia, 04 October 2013
Rank-metric codes have recently attracted a lot of attention due to their possible applications to network coding, cryptography, space-time coding and distributed storage. An optimal-cardinality algebraic code construction in rank metric was introduced some decades ago by Delsarte, Gabidulin and Roth. This Reed–Solomon-like code class is based on the evaluation of linearized polynomials and is nowadays called Gabidulin codes. This dissertation considers block and convolutional codes in rank metric with the objective of designing and investigating efficient decoding algorithms for both code classes. After a brief introduction to codes in rank metric and their properties, we first derive sub-quadratic-time algorithms for operations with linearized polynomials and state a new bounded minimum distance decoding algorithm for Gabidulin codes, which directly outputs the linearized evaluation polynomial of the estimated codeword by means of the (fast) linearized Euclidean algorithm. Second, we present a new interpolation-based algorithm for unique and (not necessarily polynomial-time) list decoding of interleaved Gabidulin codes; this algorithm decodes most error patterns of rank greater than half the minimum rank distance by efficiently solving two linear systems of equations. Third, we investigate the possibilities of polynomial-time list decoding of rank-metric codes in general and Gabidulin codes in particular, deriving three bounds on the list size. These bounds show that the behavior of the list size for both Gabidulin codes and rank-metric block codes in general differs significantly from that of Reed–Solomon codes and of block codes in Hamming metric. Among other things, they imply that there exists no polynomial upper bound on the list size in rank metric analogous to the Johnson bound in Hamming metric, which depends only on the length and the minimum distance of the code.
Finally, we introduce a special class of convolutional codes in rank metric and propose an efficient decoding algorithm for these codes. These convolutional codes are (partial) unit memory codes built upon rank-metric block codes; this structure is crucial in the decoding process, since we exploit the efficient decoders of the underlying block codes in order to decode the convolutional code.
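The rank metric itself is easy to state: the distance between two m x n matrices is the rank of their difference. A minimal sketch over the reals follows; Gabidulin codes actually live over finite extension fields, which plain numpy does not model, so this only illustrates the metric.

```python
import numpy as np

def rank_distance(A, B):
    """Rank distance d_R(A, B) = rank(A - B).

    Computed here over the reals for simplicity; in rank-metric coding
    the matrices are over a finite field and rank is computed there.
    """
    return int(np.linalg.matrix_rank(np.asarray(A) - np.asarray(B)))

A = np.array([[1, 0], [0, 1]])
B = np.array([[1, 0], [0, 0]])
d = rank_distance(A, B)  # difference [[0, 0], [0, 1]] has rank 1
```

An error of rank t corrupts a transmitted codeword matrix by adding a rank-t matrix, which is why decoders are characterized by the rank weight of correctable error patterns.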
Distância, na matemática e no cotidiano / Distance, in math and everyday life
Approbato, Daví Carlos Uehara, 07 June 2019
This work discusses the formal concept of distance in mathematics and then presents examples of the concept in everyday situations; in general, we want the less familiar reader to understand the importance of the mathematical concept of distance. Distance is much more than the length of the segment between two points, and this is shown in each chapter. The subject was inspired by the book Encyclopedia of Distances (Deza and Deza, 2009), which presents metric spaces, metrics in different areas, and their applications. In the second chapter, the definition of metric spaces is presented.
In the third chapter some examples of metrics are presented: the most common ones, the Euclidean and maximum metrics on R and R^2, together with the generalization of each to R^n. The fourth chapter presents the study of normed spaces, since through these concepts one can analyze distances between vectors and between matrices; we will see that the relevance of these distances helps, for example, in understanding approximations to solutions of systems. The chapter on distances between functions gives a brief account of Fourier series, regarding approximation through the decomposition of periodic functions: to analyze how closely the trigonometric approximations converge, the concept of distance between functions is used, and as the approximations improve, this "error" distance between them tends to zero. In coding theory, it is necessary to introduce the concept of distance between "words", which makes it possible to check whether a transmitted code word was altered by interference or noise along the way; in some situations the code can correct and recover the sent word even though it was altered in transit. This is the setting of the Hamming metric. The Hausdorff metric, proposed by the mathematician of the same name, makes it possible to compute the distance between closed and bounded sets; it can be used, for example, in facial recognition studies, where images of faces are turned into clouds of points. In addition, Dijkstra's algorithm gives the distance between the vertices of a graph; among the many applications of distances on graphs is minimizing the cost of travel between, say, a carrier and a delivery location. To close the discussion of the importance of the concept of distance, a distance between genes is presented.
The key scientist in this area was Thomas Morgan, whose studies produced the first genetic map and related the concept of distance between genes to the rate of gene recombination. Finally, an activity was carried out with high-school students with the objective of analyzing their knowledge of distance. The activity was also important for the students to understand the need to formalize this concept mathematically and, above all, to motivate them through the presentation of applications of distance in different settings.
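Two of the metrics surveyed above are compact enough to sketch directly; the words and point sets below are made-up examples, and the Hausdorff computation is the brute-force discrete version.

```python
def hamming(u, v):
    """Number of positions in which two equal-length words differ."""
    if len(u) != len(v):
        raise ValueError("words must have equal length")
    return sum(a != b for a, b in zip(u, v))

def hausdorff(X, Y, dist):
    """Discrete Hausdorff distance between two finite point sets:
    the largest distance from a point of one set to the other set."""
    d_xy = max(min(dist(x, y) for y in Y) for x in X)
    d_yx = max(min(dist(x, y) for x in X) for y in Y)
    return max(d_xy, d_yx)

abs_diff = lambda a, b: abs(a - b)

d1 = hamming("10110", "10011")                    # differs in 2 positions
d2 = hausdorff([0.0, 1.0], [0.0, 5.0], abs_diff)  # 5.0 is far from {0, 1}
```

A code with minimum Hamming distance d can correct up to floor((d - 1) / 2) symbol errors, which is exactly the error-correction property the chapter on coding theory relies on.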