91 |
Deviating time-to-onset in predictive models : detecting new adverse effects from medicines. Wärn, Caroline. January 2015 (has links)
Identifying previously unknown adverse drug reactions becomes more important as the number of drugs and the extent of their use increase. The aim of this Master’s thesis project was to evaluate the performance of a novel approach for highlighting potential adverse drug reactions, also known as signal detection. The approach was based on deviating time-to-onset patterns and was implemented as a two-sample Kolmogorov-Smirnov test for non-vaccine data in the safety report database VigiBase. The method was outperformed by both disproportionality analysis and the multivariate predictive model vigiRank. The performance estimates indicate that deviating time-to-onset patterns are not a suitable basis for signal detection for non-vaccine data in VigiBase.
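The core statistical step described in this abstract is a standard two-sample Kolmogorov-Smirnov comparison. A minimal sketch is shown below, assuming two made-up arrays of time-to-onset values in days; the actual analysis runs on VigiBase reports, which are not reproduced here.

```python
# Minimal sketch of the two-sample Kolmogorov-Smirnov step described above.
# The onset arrays below are synthetic placeholders, not VigiBase data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical time-to-onset (days) for one drug-event pair vs. a background set
onset_drug_event = rng.exponential(scale=30.0, size=200)
onset_background = rng.exponential(scale=90.0, size=5000)

result = ks_2samp(onset_drug_event, onset_background)
print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.2e}")
# A small p-value flags a deviating time-to-onset pattern as a potential signal.
```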
|
92 |
Nouvelles méthodes de traitement de signaux multidimensionnels par décomposition suivant le théorème de Superposition de Kolmogorov / Novel processing methods for multidimensional signals using decompositions by the Kolmogorov superposition theorem. Leni, Pierre-Emmanuel. 23 November 2010 (has links)
Le traitement de signaux multidimensionnels reste un problème délicat lorsqu’il s’agit d’utiliser des méthodes conçues pour traiter des signaux monodimensionnels. Il faut alors étendre les méthodes monodimensionnelles à plusieurs dimensions, ce qui n’est pas toujours possible, ou bien convertir les signaux multidimensionnels en signaux 1D. Dans ce cas, l’objectif est de conserver le maximum des propriétés du signal original. Dans ce contexte, le théorème de superposition de Kolmogorov fournit un cadre théorique prometteur pour la conversion de signaux multidimensionnels. En effet, en 1957, Kolmogorov a démontré que toute fonction multivariée pouvait s’écrire comme sommes et compositions de fonctions monovariées. Notre travail s’est focalisé sur la décomposition d’images suivant le schéma proposé par le théorème de superposition, afin d’étudier les applications possibles de cette décomposition au traitement d’image. Pour cela, nous avons tout d’abord étudié la construction des fonctions monovariées. Ce problème a fait l’objet de nombreuses études, et récemment, deux algorithmes ont été proposés. Sprecher a proposé dans [Sprecher, 1996; Sprecher, 1997] un algorithme dans lequel il décrit explicitement la méthode pour construire exactement les fonctions monovariées, tout en introduisant des notions fondamentales à la compréhension du théorème. Par ailleurs, Igelnik et Parikh ont proposé dans [Igelnik and Parikh, 2003; Igelnik, 2009] un algorithme pour approximer les fonctions monovariées par un réseau de splines. Nous avons appliqué ces deux algorithmes à la décomposition d’images. Nous nous sommes ensuite focalisés sur l'étude de l’algorithme d’Igelnik, qui est plus facilement modifiable et offre une représentation analytique des fonctions, pour proposer deux applications originales répondant à des problématiques classiques de traitement de l’image : pour la compression : nous avons étudié la qualité de l’image reconstruite par un réseau de splines généré avec seulement une partie des pixels de l’image originale. Pour améliorer cette reconstruction, nous avons proposé d’effectuer cette décomposition sur des images de détails issues d’une transformée en ondelettes. Nous avons ensuite combiné cette méthode à JPEG 2000, et nous montrons que nous améliorons ainsi le schéma de compression JPEG 2000, même à bas bitrates. Pour la transmission progressive : en modifiant la génération du réseau de splines, l’image peut être décomposée en une seule fonction monovariée. Cette fonction peut être transmise progressivement, ce qui permet de reconstruire l’image en augmentant progressivement sa résolution. De plus, nous montrons qu’une telle transmission est résistante à la perte d’information. / The processing of multidimensional signals remains difficult when using monodimensional-based methods. Therefore, it is either required to extend monodimensional methods to several dimensions, which is not always possible, or to convert the multidimensional signals into 1D signals. In this case, the priority is to preserve most of the properties of the original signal. In this context, the Kolmogorov Superposition Theorem offers a promising theoretical framework for multidimensional signal conversion. In 1957, Kolmogorov demonstrated that any multivariate function can be written as sums and compositions of monovariate functions. We have focused on the image decomposition according to the superposition theorem scheme, to study the possible applications of this decomposition to image processing.
We have first studied the construction of the monovariate functions. Various studies have dealt with this problem, and recently, two algorithms have been proposed. Sprecher has proposed in [Sprecher, 1996; Sprecher, 1997] an algorithm in which the method to exactly build the monovariate functions is described, as well as fundamental notions for the understanding of the theorem. Igelnik and Parikh have proposed in [Igelnik and Parikh, 2003; Igelnik, 2009] an algorithm to approximate the monovariate functions by a spline network. We have applied both algorithms to image decomposition. We have then chosen to use Igelnik’s algorithm, which is easier to modify and provides an analytic representation of the functions, to propose two novel applications for classical problems in image processing. For compression: we have studied the quality of a reconstructed image using a spline network built with only a fraction of the pixels of the original image. To improve this reconstruction, we have proposed to apply this decomposition to detail images obtained by wavelet transform. We have then combined this method with JPEG 2000, and we show that the JPEG 2000 compression scheme is improved, even at low bitrates. For progressive transmission: by modifying the spline network construction, the image can be decomposed into a single monovariate function. This function can be transmitted progressively, which allows the image to be reconstructed at progressively increasing resolution. Moreover, we show that such a transmission is resilient to information loss.
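For reference, the superposition result both versions of the abstract invoke is the standard statement of Kolmogorov's 1957 theorem; the thesis works with Sprecher's and Igelnik's constructive variants of it.

```latex
% Kolmogorov superposition theorem (1957), standard statement:
% any continuous f : [0,1]^n -> R admits the representation
\[
  f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),
\]
% where the outer functions \Phi_q and the inner functions \phi_{q,p}
% are continuous functions of a single variable.
```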
|
93 |
Possible Difficulties in Evaluating University Performance Based on Publications Due to Power Law Distributions : Evidence from Sweden. Sadric, Haroon; Zia, Sarah. January 2023 (has links)
Measuring the research performance of a university is important to the universities themselves, governments, and students alike. Among other metrics, the number of publications is easy to obtain, and given the large number of publications each university produces in a year, it appears to be an accurate metric. However, the number of publications depends largely on the size of the institution, suggesting, if not addressed, that larger universities are better. Thus, one might intuitively try to normalize by size and use publications per researcher instead. A better institution would allow individual researchers to have more publications each year. However, publications, like many other things, might follow a power-law distribution, where most researchers have few, and only a few researchers have very many publications. These power-law distributions violate the assumptions the central limit theorem makes, for example, having a well-defined mean or variance. Specifically, one cannot normalize or use averages from power-law distributed data, making the comparison of university publications impossible if they indeed follow a power-law distribution. While it has been shown that some scientific domains or universities show this power-law distribution, it is not known whether Swedish universities also show this phenomenon. Thus, here we collect publication data for Swedish universities and determine whether or not they are power-law distributed. Interestingly, if they are, one might use the slope of the power-law distribution as a proxy for research output. If the slope is steep, the ratio between highly published authors and those with few publications is small, whereas a flatter slope suggests that a university has more highly published authors than a university with a steeper slope. Thus, the second objective here is to assess whether, or to what extent, the slope of the distribution can be determined. This study shows that eight of the fifteen Swedish universities considered follow a power-law distribution (Kolmogorov-Smirnov statistic < 0.05), while the remaining seven do not. The key determinant is the total number of publications. The difficulty is that the total number of publications is often so small that one can neither reject a power-law distribution nor determine the slope of the distribution with any accuracy. While this study suggests that, in principle, the slopes of the power-law distributions can be used as a comparative metric, it also shows that for half of Sweden’s universities the data is insufficient for this type of analysis.
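The fitting-and-testing step the abstract describes is, in spirit, the standard maximum-likelihood power-law fit combined with a Kolmogorov-Smirnov goodness-of-fit distance. A rough sketch under that assumption is given below, on synthetic counts rather than the thesis's per-university data, and using the continuous approximation for simplicity.

```python
# Sketch of a Clauset-Shalizi-Newman style fit for per-author publication counts:
# continuous MLE for the exponent plus the KS distance to the fitted tail.
# The input array is a synthetic placeholder, not the thesis's scraped data.
import numpy as np

def fit_power_law(counts, x_min):
    """Return (alpha, ks_distance) for the tail counts >= x_min."""
    tail = np.sort(counts[counts >= x_min]).astype(float)
    n = tail.size
    alpha = 1.0 + n / np.sum(np.log(tail / x_min))          # MLE exponent
    empirical_cdf = np.arange(1, n + 1) / n                  # empirical CDF of the tail
    model_cdf = 1.0 - (tail / x_min) ** (1.0 - alpha)        # fitted power-law CDF
    ks_distance = np.max(np.abs(empirical_cdf - model_cdf))
    return alpha, ks_distance

rng = np.random.default_rng(1)
publications = np.ceil(rng.pareto(a=1.5, size=3000) + 1).astype(int)  # synthetic counts
alpha, ks = fit_power_law(publications, x_min=1)
print(f"alpha = {alpha:.2f}, KS distance = {ks:.3f}")   # compare KS against the 0.05 cutoff
```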
|
94 |
Segmenting Observed Time Series Using Comovement and Complexity Measures / Segmentering av Observerade Tidsserier med hjälp av Comovement- och Komplexitetsmått. Norgren, Lee. January 2019 (has links)
Society depends on unbiased, efficient and replicable measurement tools to tell us more truthfully what is happening when our senses would otherwise fool us. A new approach is made to consistently detect the start and end of historic recessions as defined by the US Federal Reserve. To do this, three measures, correlation (Spearman and Pearson), Baur comovement and Kolmogorov complexity, are used to quantify market behaviour to detect recessions. To compare the effectiveness of each measure, the normalized correct Area Under Curve (AUC) fraction is introduced. It is found that for all three measures the performance depends mostly on the type of data, and that financial market data does not perform as well as fundamental economic data in detecting recessions. Furthermore, comovement is found to be the most efficient individual measure, and also the most efficient measure when compared against several measures merged together. / Samhället är beroende av förväntningsriktiga, effektiva och replikerbara mätverktyg för att mer sanningsenligt informera oss om vad som händer när våra sinnen lurar oss. Ett nytt tillvägagångssätt utvecklas för att konsekvent upptäcka början och slut av historiska lågkonjunkturer så som de definierats av US Federal Reserve. För att göra detta används tre mätmetoder, korrelation (Spearman och Pearson), Baur comovement och Kolmogorovkomplexitet, för att kvantifiera marknadsbeteendet i avsikt att upptäcka lågkonjunkturer. För att jämföra effektiviteten hos varje metod introduceras den normaliserade correct Area Under Curve (AUC)-fraktionen. Det konstateras att effektiviteten hos alla tre metoder främst beror på vilken typ av data som används och att finansiell data inte fungerar lika bra som realekonomisk data för att upptäcka lågkonjunkturer. Vidare visas att comovement är den mest effektiva individuella mätmetoden och även den mest effektiva metoden jämfört med sammanslagna metoder.
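Kolmogorov complexity itself is uncomputable, so in practice it is usually approximated by a compression-based proxy. The sketch below illustrates one common choice (zlib-compressed size of a binarized series) on synthetic data; it is not necessarily the exact estimator used in the thesis.

```python
# Sketch of a compression-based proxy for Kolmogorov complexity, applied to a
# binarized return series. True K-complexity is uncomputable; compressed size
# per symbol is the usual stand-in. The series below are synthetic examples.
import zlib
import numpy as np

def complexity_proxy(series):
    """Compressed bits per symbol of the binarized first differences."""
    symbols = (np.diff(series) > 0).astype(np.uint8)   # 1 = up move, 0 = down move
    compressed = zlib.compress(symbols.tobytes(), level=9)
    return 8.0 * len(compressed) / len(symbols)

rng = np.random.default_rng(2)
random_walk = np.cumsum(rng.standard_normal(5000))     # noisy, hard to compress
trending = np.linspace(0.0, 10.0, 5000)                # regular, easy to compress
print(f"random walk : {complexity_proxy(random_walk):.2f} bits/symbol")
print(f"trend       : {complexity_proxy(trending):.2f} bits/symbol")
```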
|
95 |
A Tree-based Framework for Difference Summarization. Li, Rong. 19 April 2012 (has links)
No description available.
|
96 |
Chaos and Learning in Discrete-Time Neural Networks. Banks, Jess M. 27 October 2015 (has links)
No description available.
|
97 |
Chaos in Pulsed Laminar Flow. Kumar, Pankaj. 01 September 2010 (has links)
Fluid mixing is a challenging problem in laminar flow systems. Chaotic advection can play an important role in enhancing mixing in such flows. In this thesis, different approaches are used to enhance fluid mixing in two laminar flow systems.
In the first system, chaos is generated in a flow between two closely spaced parallel circular plates by pulsed operation of fluid extraction and reinjection through singularities in the domain. A singularity through which fluid is injected (or extracted) is called a source (or a sink). In a bounded domain, one source and one sink with equal strength operate together as a source-sink pair to conserve the fluid volume. Fluid flow between two closely spaced parallel plates is modeled as Hele-Shaw flow, with the depth-averaged velocity proportional to the gradient of the pressure. So, with the depth-averaged velocity, the flow between the parallel plates can effectively be modeled as two-dimensional potential flow. This thesis discusses pulsed source-sink systems with two source-sink pairs operating alternately to generate zig-zag trajectories of fluid particles in the domain. For reinjection purposes, fluid extracted through a sink-type singularity can either be relocated to a source-type one, or the same sink-type singularity can be activated as a source to reinject it without relocation. Relocation of fluid can be accomplished using either a "first out first in" or a "last out first in" scheme. Both relocation methods add delay to the pulse time of the system. This thesis analyzes mixing in pulsed source-sink systems both with and without fluid relocation. It is shown that a pulsed source-sink system with the "first out first in" scheme generates comparatively more complex fluid flow than pulsed source-sink systems with the "last out first in" scheme. It is also shown that a pulsed source-sink system without fluid relocation can generate complex fluid flow.
In the second system, mixing and transport are analyzed in a two-dimensional Stokes flow system. Appropriate periodic motions of three rods or periodic points in a two-dimensional flow are determined using the Thurston-Nielsen Classification Theorem (TNCT), which also predicts a lower bound on the complexity generated in the fluid flow. This thesis extends the TNCT-based framework by demonstrating that, in a perturbed system with no lower-order fixed points, almost invariant sets are natural objects on which to apply the TNCT. In addition, a method is presented to compute line stretching by tracking appropriate motion of finite-size rods. This method accounts for the effect of the rod size in computing the complexity generated in the fluid flow. The last section verifies the existence of almost invariant sets in a two-dimensional flow at finite Reynolds number. The almost invariant set structures move with appropriate periodic motion, validating the application of the TNCT to predict a lower bound on the complexity generated in the fluid flow. / Ph. D.
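The pulsed source-sink mechanism of the first system can be illustrated with a toy potential-flow computation. The sketch below advects one tracer particle under two alternately activated source-sink pairs in an unbounded plane, with made-up positions, strengths, pulse times and capture radius; the thesis works in a bounded Hele-Shaw cell, which adds image singularities not modelled here.

```python
# Rough sketch of pulsed source-sink advection in an unbounded 2D potential flow.
# Positions, strengths and pulse durations are illustrative only.
import numpy as np

def pair_velocity(z, z_source, z_sink, strength):
    """Velocity (complex u + i*v) induced at z by one source-sink pair."""
    dw_dz = strength / (2 * np.pi) * (1.0 / (z - z_source) - 1.0 / (z - z_sink))
    return np.conj(dw_dz)   # dw/dz = u - i*v for a 2D potential flow

pairs = [(-1.0 - 1.0j, 1.0 + 1.0j), (-1.0 + 1.0j, 1.0 - 1.0j)]   # two source-sink pairs
z = 0.3 + 0.1j                 # tracer particle start
dt, steps_per_pulse = 0.01, 200

for pulse in range(10):
    z_source, z_sink = pairs[pulse % 2]        # pulsed (alternating) operation
    for _ in range(steps_per_pulse):
        if abs(z - z_sink) < 0.05:             # particle extracted at the sink...
            z = z_source + 0.05                # ...and reinjected near the source
            continue
        z = z + dt * pair_velocity(z, z_source, z_sink, strength=1.0)
print(f"tracer position after 10 pulses: ({z.real:.3f}, {z.imag:.3f})")
```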
|
98 |
Turbulence modelling of shallow water flows using Kolmogorov approach. Pu, Jaan H. 20 March 2015 (links)
This study uses an improved k–ε coupled shallow water equations (SWE) model, equipped with numerical computation of the velocity fluctuation terms, to investigate the turbulence structures of open channel flows. We adapted the Kolmogorov K41 scaling model into the k–ε equations to calculate the turbulence intensities and Reynolds stresses of the SWE model. The presented model was also numerically improved by a recently proposed surface gradient upwind method (SGUM) to allow better accuracy in simulating the combined source terms from both the SWE and k–ε equations, as proven in recent studies. The proposed model was first tested on flows induced by multiple obstructions to investigate the k–ε and SGUM approaches used in the model. Laboratory experiments were also conducted under non-uniform flow conditions, where the velocities, total kinetic energies (TKE) and turbulence intensities simulated by the proposed model were compared with the measurements under different flow non-uniformity conditions. Lastly, the proposed numerical simulation was compared with a standard Boussinesq model to investigate its capability to simulate the measured Reynolds stress. The comparison outcomes showed that the proposed Kolmogorov k–ε SWE model can capture the flow turbulence characteristics reasonably well in all the investigated flows. / The Major State Basic Research Development Program (973 program) of China (No. 2013CB036402)
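For context, the K41 scaling referred to above is the classical inertial-range result; the thesis's specific adaptation of it into the k–ε closure is not reproduced here.

```latex
% Kolmogorov (1941) inertial-range scaling of the energy spectrum:
\[
  E(k) \;=\; C_K \,\varepsilon^{2/3} k^{-5/3},
  \qquad \eta \ll \ell \ll L ,
\]
% where E(k) is the energy spectrum, \varepsilon the dissipation rate,
% C_K the Kolmogorov constant (approximately 1.5), and the relation holds for
% scales \ell between the dissipative scale \eta and the integral scale L.
```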
|
99 |
Nelinearna dinamička analiza fizičkih procesa u životnoj sredini / Nonlinear dynamical analysis of the physical processes in the environment. Mimić, Gordan. 29 September 2016 (links)
Ispitivan je spregnut sistem jednačina za prognozu temperature na površini i u dubljem sloju zemljišta. Računati su Ljapunovljevi eksponenti, bifurkacioni dijagram, atraktor i analiziran je domen rešenja. Uvedene su nove informacione mere bazirane na Kolmogorovljevoj kompleksnosti za kvantifikaciju stepena nasumičnosti u vremenskim serijama. Nove mere su primenjene na razne serije dobijene merenjem fizičkih faktora životne sredine i pomoću klimatskih modela. / A coupled system of prognostic equations for the ground surface temperature and the deeper-layer temperature was examined. Lyapunov exponents, bifurcation diagrams, the attractor and the domain of solutions were analyzed. Novel information measures, based on Kolmogorov complexity and used for the quantification of randomness in time series, were presented. The novel measures were tested on various time series obtained by measuring physical factors of the environment or produced as climate model outputs.
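The thesis's coupled surface/deep-soil temperature equations are not given in the abstract, but the Lyapunov-exponent computation it mentions can be illustrated on a stand-in system. The sketch below applies the standard Benettin-style two-trajectory estimate to an illustrative coupled logistic map, which is not the thesis's model.

```python
# Sketch of a Benettin-style estimate of the largest Lyapunov exponent for a
# two-variable map. The coupled logistic map is a stand-in system only.
import numpy as np

def step(state, r=3.9, c=0.1):
    """One iteration of an illustrative coupled logistic map."""
    x, y = state
    return np.array([
        (1 - c) * r * x * (1 - x) + c * r * y * (1 - y),
        (1 - c) * r * y * (1 - y) + c * r * x * (1 - x),
    ])

def largest_lyapunov(state, n_steps=20000, eps=1e-9):
    shadow = state + np.array([eps, 0.0])       # nearby companion trajectory
    total = 0.0
    for _ in range(n_steps):
        state, shadow = step(state), step(shadow)
        d = np.linalg.norm(shadow - state)
        total += np.log(d / eps)                # accumulate local stretching rates
        shadow = state + (shadow - state) * (eps / d)   # renormalize the separation
    return total / n_steps

print(f"largest Lyapunov exponent ~ {largest_lyapunov(np.array([0.4, 0.6])):.3f}")
```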
|
100 |
Méthodes Combinatoires et Algébriques en Complexité de la Communication / Combinatorial and Algebraic Methods in Communication Complexity. Kaplan, Marc. 28 September 2009 (links) (PDF)
Communication complexity was introduced in 1979 by Andrew Chi-Chih Yao and has since become one of the most studied models of computation. Its goal is to study problems whose inputs are distributed among several players, by quantifying the communication they must exchange. We first use Kolmogorov complexity, an algorithmic characterization of randomness, to prove lower bounds on communication complexity. Our method is a generalization of the incompressibility method. The advantage of this approach is that it highlights the combinatorial nature of the proofs. We then study the simulation of causal probability distributions using communication. This model generalizes traditional communication complexity and includes, in particular, quantum distributions. For this problem we prove both lower and upper bounds. In the case of Boolean functions, the lower bound we propose is equivalent to the factorization norms, a powerful method introduced by Linial and Shraibman in 2006. Finally, we study non-local box complexity. This resource was introduced by Popescu and Rohrlich to study non-locality. The problem is to quantify the number of boxes necessary and sufficient to compute a function or simulate a distribution. We again give lower and upper bounds for these problems, as well as applications to secure function evaluation, a very important cryptographic problem.
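The Kolmogorov-complexity and factorization-norm lower-bound methods developed in the thesis are not reproduced here; as a simpler illustration of how matrix-based lower bounds work in communication complexity, the sketch below computes the classical log-rank bound for the n-bit equality function.

```python
# Simple illustration of a matrix-based lower bound in communication complexity:
# the log-rank bound D(f) >= log2(rank(M_f)), a classical, simpler cousin of the
# factorization-norm bounds mentioned above, shown here on n-bit equality.
import itertools
import numpy as np

n = 4
inputs = list(itertools.product([0, 1], repeat=n))
# Communication matrix of EQUALITY: rows are Alice's inputs, columns are Bob's.
M = np.array([[1 if x == y else 0 for y in inputs] for x in inputs])

rank = np.linalg.matrix_rank(M)
print(f"rank(M_EQ) = {rank}, so D(EQ) >= log2(rank) = {np.log2(rank):.0f} bits")
# The matrix is the identity of size 2^n, so it has full rank 2^n, recovering the
# well-known n-bit lower bound on deterministic communication for equality.
```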
|