1 |
Análise de complexidade aplicada à antecipação de crises no mercado de bens minerais. / Complexity analysis applied to the anticipation of crises in the mineral commodities market. Dompieri, Mauricio 20 October 2014 (has links)
O objetivo deste estudo foi o de investigar as oportunidades de aplicação da análise de complexidade como método de análise da Economia Mineral de um bem mineral, utilizando o níquel como estudo de caso. Para tanto foram estudadas as particularidades do mercado de commodities, com maior profundidade no caso do níquel, os fatores que nele influem e alguns modelos desenvolvidos para simulação, compreensão e predição do comportamento do sistema composto por este mercado. Foram verificadas as condições para que o mercado de bens minerais tenha sido considerado um sistema complexo. No caso do níquel foi também analisado o estado atual da tecnologia de extração, incluindo os desenvolvimentos mais recentes. Passou-se então à descrição do método utilizado na análise de complexidade que define a complexidade de um sistema como uma grandeza quantificável em função de sua topologia, representada pela estrutura das correlações entre suas variáveis, e da entropia total do sistema. A entropia total do sistema é a integração das entropias de Shannon das variáveis que participam de sua estrutura e é uma medida da sua incerteza. Neste método, o cálculo das correlações entre as variáveis não é feito estatisticamente, mas sim por meio do cálculo da entropia mútua. A vantagem deste método é que revela correlações entre pares de variáveis que apresentam relações não lineares ou até mesmo bifurcações, clustering e outras patologias de difícil tratamento estatístico. Desta forma, evita-se o termo correlação, que remete ao tratamento estatístico, preferindo-se acoplamento em seu lugar, para identificar a dependência entre duas variáveis. A seguir, foram abordadas as duas modalidades de análise de complexidade utilizadas: estática e dinâmica. A análise estática revela, por meio de um mapa cognitivo, a estrutura do sistema e as forças de acoplamento entre seus componentes, bem como os índices de complexidade, compostos das complexidades crítica, operacional e mínima, da entropia e da robustez. O índice de maior destaque é o da robustez, que mede a resiliência do sistema por meio da diferença entre as complexidades crítica e operacional, e é um indicador de sua sustentabilidade. A análise dinâmica revela, para sistemas que variam com o tempo, a evolução dos indicadores de complexidade ao longo do tempo. O interesse nesse tipo de análise é que o criador do método identificou experimentalmente que o colapso de um sistema é quase sempre precedido de um aumento brusco em sua complexidade. Esta característica é então explorada na análise do mercado do níquel para procurar antecipar crises. Na parte experimental pode-se então revelar a estrutura de acoplamentos de uma cesta de metais e do mercado específico do níquel, usando-se a análise estática. A seguir, passou-se a investigar a evolução dos indicadores de complexidade ao longo do tempo, tendo sido possível identificar as situações de crise no mercado pelo aumento de complexidade e entropia e, no caso específico da crise de 2008-2009 foi possível perceber o aumento significativo da complexidade e entropia antes mesmo da instalação da crise, fornecendo assim um aviso prévio do evento. / This study aimed at investigating the opportunities for application of complexity analysis as a method of analysis of mineral commodities economics, using nickel as case study. 
With that intention, the particularities of commodity markets were studied, in greater depth in the case of nickel, along with their influencing factors and some models that have been developed for simulating, understanding and predicting the behavior of this market system. The conditions under which the mineral commodities market can be considered a complex system were verified. In the case of nickel, the current state of extraction technology, including the latest developments, was also analyzed. The focus then turns to the description of the method used in the complexity analysis, where the complexity of a system is defined as a measurable quantity based on its topology, represented by the structure of the correlations between its variables, and on the total entropy of the system. The total entropy of the system is the integration of the Shannon entropies of the variables that participate in its structure and is a measure of the system's uncertainty, i.e., its departure from deterministic operation. In this method, the correlations between variables are not computed statistically, but by calculating the mutual entropy between each pair of variables. The advantage of this method is that it reveals correlations between pairs of variables that exhibit nonlinear relationships or even bifurcations, clustering and other pathologies that are difficult to treat statistically. For this reason, the term correlation, which refers to statistical treatment, is avoided, and coupling is used instead to denote the dependence between two variables. The two types of complexity analysis used were then addressed: static and dynamic. Static analysis reveals, by means of a cognitive map, the system structure and the strength of the couplings between its components, as well as the complexity indices, consisting of the critical, operational and minimum complexities, the entropy and the robustness. Robustness is the most prominent index, as it measures the resilience of the system through the difference between the critical and operational complexities, and is an indicator of its sustainability. The dynamic analysis reveals, for time-varying systems, the evolution of the complexity indicators over time. The interest in this type of analysis is that the method's developer has experimentally identified that the collapse of a system is almost always preceded by a sharp increase in its complexity. This feature is then exploited in the analysis of the nickel market in order to anticipate crises. In the experimental part, the coupling structures of a basket of metals and of the specific nickel market were revealed using static analysis. Finally, the evolution of the complexity indicators over time was investigated, which made it possible to identify crisis situations in the market through the increase in complexity and entropy; in the particular case of the 2008-2009 crisis it was also possible to observe a significant increase in complexity and entropy just before the onset of the crisis itself, providing an early warning of the event.
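For illustration, the sketch below estimates the Shannon entropy of a series and the mutual entropy (mutual information) used as the coupling measure described above, using simple histogram binning. It is a minimal, assumed reading of the coupling computation (the thesis applies an existing method whose exact implementation is not reproduced here); the series names, bin counts and synthetic data are made up for the example.

```python
import numpy as np

def shannon_entropy(x, bins=10):
    """Shannon entropy (in bits) of a series, estimated from a histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_entropy(x, y, bins=10):
    """Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y), from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log2(pxy[pxy > 0]))
    return hx + hy - hxy

# Hypothetical monthly series: a nickel price, a coupled stock level, and unrelated noise.
rng = np.random.default_rng(0)
nickel_price = np.cumsum(rng.normal(0, 1, 500))
stock_level = -0.8 * nickel_price + rng.normal(0, 2, 500)   # coupled to the price
noise = rng.normal(0, 1, 500)                                # uncoupled

print("H(nickel price)          =", round(shannon_entropy(nickel_price), 3))
print("coupling(price, stocks)  =", round(mutual_entropy(nickel_price, stock_level), 3))
print("coupling(price, noise)   =", round(mutual_entropy(nickel_price, noise), 3))
```

The coupled pair yields a markedly higher mutual entropy than the uncoupled pair, even when the dependence is not linear, which is the property the abstract highlights.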
|
3 |
Distributed calculations using mobile agents / Calculs Distribués par des Agents Mobiles. Abbas, Shehla 15 December 2008 (has links)
Cette thèse traite de l'utilisation des agents mobiles dans le domaine des algorithmes distribués en les déplaçant de manière aléatoire dans le réseau. Initialement, k agents mobiles ayant des identités uniques sont placés dans le réseau. On décrit un algorithme distribué pour calculer un arbre couvrant dans les réseaux dynamiques en utilisant les agents mobiles. Les agents marquent les noeuds sur lesquels ils arrivent. Ils utilisent deux techniques différentes : le clonage, dans lequel un agent crée son propre clone pour effectuer certaines tâches, et le marquage sur le tableau de bord (un espace mémoire sur les noeuds). Ces techniques sont utilisées dans des applications comme l'arbre couvrant, le rassemblement et la collecte d'information. Chacun des agents détient une information partielle. Quand deux ou plusieurs agents se rencontrent sur un noeud, ils fusionnent en un seul agent. On s'intéresse alors au temps nécessaire pour que tous les k agents fusionnent en un seul et unique agent. On présente une chaîne de Markov pour le comportement des agents, et on montre comment on peut utiliser cette technique pour calculer la borne supérieure. On étudie le même problème quand les agents mobiles commencent la marche aléatoire sous un régime stationnaire. On a aussi étudié le problème de Handshake et on l'a analysé en utilisant les agents mobiles. / This thesis deals with the use of mobile agents in distributed algorithms by having them perform random walks in the network. Initially, k mobile agents with unique identities are placed in the network. We describe a distributed algorithm for computing spanning trees in dynamic networks using mobile agents. The agents mark the nodes on which they arrive. They use two different techniques: in one problem they use cloning, in which an agent creates its own clone to carry out some assigned task; in the second, the mobile agents mark the whiteboard (a memory location on the nodes). These techniques are used in applications such as spanning tree construction, gathering and collecting information. The mobile agents have limited knowledge; hence, they are not intelligent and do not have computational capabilities. When two or more agents meet at a node of the underlying graph, they merge into a single agent. The parameter of interest is the expected time for all the agents to merge into a single agent. We present a Markov chain modelling the agents' behavior, and show how it can be used to upper bound the expected time for all k agents to merge into a single agent. We study the same problem when the mobile agents start their walk directly under the stationary regime. The handshake problem is also studied and analyzed using mobile agents.
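As a rough illustration of the quantity studied above, the expected time for all k agents to merge, the sketch below simulates agents performing lazy random walks on a small graph and merging whenever they land on the same node. It is an assumed toy model (the cycle graph, the value of k and the lazy-walk policy are illustrative choices), not the Markov-chain analysis developed in the thesis.

```python
import random

def ring_graph(n):
    """Adjacency lists of a cycle on n nodes (a toy stand-in for the network)."""
    return {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}

def merge_time(adj, k, rng):
    """Steps until k agents, walking lazily at random and merging on meeting, become one."""
    agents = rng.sample(list(adj), k)                 # distinct starting nodes
    steps = 0
    while len(agents) > 1:
        # lazy step: stay put with probability 1/2, otherwise move to a random neighbour
        agents = [v if rng.random() < 0.5 else rng.choice(adj[v]) for v in agents]
        agents = list(set(agents))                    # co-located agents merge
        steps += 1
    return steps

rng = random.Random(42)
adj = ring_graph(30)
trials = [merge_time(adj, k=4, rng=rng) for _ in range(300)]
print("estimated expected merge time on a 30-cycle:", sum(trials) / len(trials))
```

Averaging over repeated trials gives an empirical estimate of the expected merge time that an analytical Markov-chain bound, as in the thesis, would upper bound.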
|
4 |
Rūšiavimo algoritmų vizualizavimas ir sudėtingumo analizė / Visualization and Complexity Analysis of Sorting Algorithms. Saročka, Gediminas 02 July 2012 (has links)
Rūšiavimo algoritmų sudėtingumo analizių galima atrasti be problemų, todėl pagrindinė šio darbo idėja buvo sukurti rūšiavimo algoritmų vizualizavimą. Šiame darbe buvo sukurtas trijų paprastųjų rūšiavimo algoritmų (įterpimo, burbulo ir išrinkimo), bei dviejų greitųjų rūšiavimo algoritmų (Šelo ir sąlajos) vizualizavimas. Darbe taip pat galima skaičiuoti rūšiavimo algoritmų rūšiuojamą laiką. / Complexity analyses of sorting algorithms can be found without difficulty, so the main idea of this work was to create a visualization of sorting algorithms. The work produced visualizations of three simple sorting algorithms (insertion sort, bubble sort and selection sort) and two fast sorting algorithms (Shell sort and merge sort). The program can also measure the sorting time of each algorithm for further complexity analysis of the sorting algorithms.
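A minimal sketch of the kind of measurement described above, timing different sorting algorithms on the same input for later complexity analysis, might look as follows; the implementations, input size and timing harness are illustrative only.

```python
import random
import time

def insertion_sort(a):
    """O(n^2) insertion sort on a copy of the input list."""
    a = a[:]
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]          # shift larger elements one slot to the right
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """O(n log n) top-down merge sort returning a new sorted list."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):      # merge the two sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [random.randint(0, 10**6) for _ in range(3000)]   # illustrative input size
for name, sort in [("insertion", insertion_sort), ("merge", merge_sort)]:
    start = time.perf_counter()
    result = sort(data)
    elapsed = time.perf_counter() - start
    assert result == sorted(data)
    print(f"{name:9s} sort: {elapsed:.4f} s")   # quadratic vs. linearithmic growth shows up clearly
```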
|
5 |
Analyzing the Computational Complexity of Abstract Dialectical Frameworks via Approximation Fixpoint Theory. Straß, Hannes, Wallner, Johannes Peter 22 January 2014 (has links) (PDF)
Abstract dialectical frameworks (ADFs) have recently been proposed as a versatile generalization of Dung's abstract argumentation frameworks (AFs). In this paper, we present a comprehensive analysis of the computational complexity of ADFs. Our results show that while ADFs are one level up in the polynomial hierarchy compared to AFs, there is a useful subclass of ADFs which is as complex as AFs while arguably offering more modeling capacities. As a technical vehicle, we employ the approximation fixpoint theory of Denecker, Marek and Truszczyński, thus showing that it is also a useful tool for complexity analysis of operator-based semantics.
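As a concrete, if naive, illustration of the operator-based view mentioned above, the sketch below computes the grounded interpretation of a tiny ADF by iterating a three-valued operator: a statement becomes accepted (or rejected) once its acceptance condition holds (or fails) under every two-valued completion of the currently undecided parents. The example ADF and the brute-force enumeration of completions are assumptions made for illustration, not the paper's formal construction; the exhaustive enumeration also hints at the extra computational cost relative to AFs.

```python
from itertools import product

# A toy ADF: each statement has parents and an acceptance condition over them.
# Acceptance conditions are plain predicates taking a dict parent -> bool.
parents = {
    "a": [],
    "b": ["a"],
    "c": ["a", "b"],
}
accept = {
    "a": lambda env: True,                       # a is unconditionally acceptable
    "b": lambda env: not env["a"],               # b is accepted only if a is not
    "c": lambda env: env["a"] and not env["b"],  # c needs a and the absence of b
}

def grounded(parents, accept):
    """Kleene iteration of a three-valued operator (naive, exponential sketch)."""
    value = {s: "u" for s in parents}            # start from the least informative interpretation
    changed = True
    while changed:
        changed = False
        for s, ps in parents.items():
            if value[s] != "u":
                continue
            undecided = [p for p in ps if value[p] == "u"]
            outcomes = set()
            for bits in product([False, True], repeat=len(undecided)):
                env = {p: (value[p] == "t") for p in ps if value[p] != "u"}
                env.update(dict(zip(undecided, bits)))
                outcomes.add(accept[s](env))     # evaluate one two-valued completion
            if outcomes == {True}:
                value[s], changed = "t", True    # accepted under every completion
            elif outcomes == {False}:
                value[s], changed = "f", True    # rejected under every completion
    return value

print(grounded(parents, accept))   # expected: {'a': 't', 'b': 'f', 'c': 't'}
```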
|
6 |
Low complexity differential geometric computations with applications to human activity analysis. January 2012 (has links)
abstract: In this thesis, we consider the problem of fast and efficient indexing techniques for time sequences which evolve on manifold-valued spaces. Using manifolds is a convenient way to work with complex features that often do not live in Euclidean spaces. However, computing standard notions of geodesic distance, mean, etc. can get very involved due to the underlying non-linearity associated with the space. As a result, a complex task such as manifold sequence matching would require a very large number of computations, making it hard to use in practice. We believe that one can devise smart approximation algorithms for several classes of such problems which take into account the geometry of the manifold and maintain the favorable properties of the exact approach. This problem has several applications in the areas of human activity discovery and recognition, where several features and representations are naturally studied in a non-Euclidean setting. We propose a novel solution to the problem of indexing manifold-valued sequences by proposing an intrinsic approach to map sequences to a symbolic representation. This is shown to enable the deployment of fast and accurate algorithms for activity recognition, motif discovery, and anomaly detection. Toward this end, we present generalizations of key concepts of piece-wise aggregation and symbolic approximation for the case of non-Euclidean manifolds. Experiments show that one can replace expensive geodesic computations with much faster symbolic computations with little loss of accuracy in activity recognition and discovery applications. The proposed methods are ideally suited for real-time systems and resource-constrained scenarios. / Dissertation/Thesis / M.S. Electrical Engineering 2012
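For the Euclidean baseline that the thesis generalizes, piece-wise aggregation followed by symbolic approximation can be sketched as below (a SAX-style toy with made-up segment counts and breakpoints); the manifold-valued version described above would replace segment means and thresholds with intrinsic notions such as Fréchet means and geodesic distances.

```python
import numpy as np

def paa(series, n_segments):
    """Piece-wise aggregate approximation: the mean of each (roughly) equal-length segment."""
    segments = np.array_split(np.asarray(series, dtype=float), n_segments)
    return np.array([seg.mean() for seg in segments])

def symbolize(values, breakpoints, alphabet="abcd"):
    """Map each aggregated value to a symbol according to fixed breakpoints."""
    return "".join(alphabet[np.searchsorted(breakpoints, v)] for v in values)

rng = np.random.default_rng(3)
signal = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.1 * rng.normal(size=200)
z = (signal - signal.mean()) / signal.std()            # normalize before symbolizing

reduced = paa(z, n_segments=8)
word = symbolize(reduced, breakpoints=[-0.67, 0.0, 0.67])  # 4-symbol alphabet
print("PAA values   :", np.round(reduced, 2))
print("symbolic word:", word)   # cheap string comparisons replace costly distance computations
```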
|
7 |
MARRT Pipeline: Pipeline for Markerless Augmented Reality Systems Based on Real-Time Structure from Motion. Paulo Gomes Neto, Severino 31 January 2009 (has links)
Nowadays, with the increase in computational power and the studies on usability, real-time systems and photorealism, the requirements of any computer system are more complex and sophisticated.
Augmented reality systems are no exception in their attempt to solve the user's real-life problems with a reduced level of risk, time spent or learning complexity. Such systems can be classified as marker-based or markerless.
The essential role of markerless augmented reality is to avoid the unnecessary and undesirable use of markers in applications.
To meet the demand for robust and non-intrusive augmented reality technologies, this dissertation proposes an execution pipeline for the development of markerless augmented reality applications, especially those based on the technique of real-time structure from motion.
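One core step of a real-time structure-from-motion pipeline, recovering the relative camera pose between two frames from feature matches, might be sketched with OpenCV as below. This is an assumed, simplified illustration, not the MARRT pipeline itself; the feature detector, matcher settings and intrinsic matrix are placeholders.

```python
import cv2
import numpy as np

def relative_pose(frame_a, frame_b, K):
    """Estimate relative camera rotation/translation between two grayscale frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # Essential-matrix estimation with RANSAC rejects outlier matches; the pose is then decomposed.
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)
    return R, t   # rotation and unit-scale translation between the frames

# Hypothetical usage with two consecutive video frames and a known intrinsic matrix K:
# K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1]], dtype=float)
# R, t = relative_pose(gray_frame_0, gray_frame_1, K)
```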
|
8 |
The Lifted Heston Stochastic Volatility Model. Broodryk, Ryan 04 January 2021 (has links)
Can we capture the explosive nature of the volatility skew observed in the market without resorting to non-Markovian models? We show that, in terms of skew, the Heston model cannot match the market at both long and short maturities simultaneously. We introduce the Lifted Heston model of Abi Jaber (2019) and explain how to price options with it using both the cosine method and standard Monte Carlo techniques. This allows us to back out implied volatilities and compute the skew for both models, confirming that the Lifted Heston model nests the standard Heston model. We then produce and analyze the skew for Lifted Heston models with a varying number N of mean-reverting terms, and give an empirical study of the time complexity of increasing N. We observe a weak increase in the convergence speed of the cosine method for increased N, and comment on the number of factors to implement for practical use.
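A minimal Monte Carlo sketch for the standard Heston model (full-truncation Euler scheme, with illustrative, uncalibrated parameters) is given below; the Lifted Heston model discussed above would replace the single variance factor with N mean-reverting factors, and the cosine method is not reproduced here.

```python
import numpy as np

def heston_call_mc(s0, v0, kappa, theta, xi, rho, r, T, K,
                   n_steps=200, n_paths=100_000, seed=0):
    """European call price under Heston via full-truncation Euler Monte Carlo."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)  # correlated shocks
        v_pos = np.maximum(v, 0.0)                    # full truncation keeps the variance usable
        s *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    payoff = np.maximum(s - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Illustrative parameters only (not calibrated to any market data).
price = heston_call_mc(s0=100, v0=0.04, kappa=1.5, theta=0.04, xi=0.5,
                       rho=-0.7, r=0.02, T=1.0, K=100)
print("Heston MC call price:", round(price, 3))
```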
|
10 |
Program analysis for quantitative-reachability properties. Liu, Jiawen 06 September 2024 (has links)
Program analysis studies the execution behaviors of computer programs, including a program's safety behavior, privacy behavior, resource usage, etc. Analysis of the safety behavior of a program involves determining whether a particular line of code leaks a secret and how much secret is leaked by that line of code. When studying the resource usage of a program, certain program analyses focus on whether a piece of code consumes a certain resource and how much of that resource is used by the piece of code. Yet another kind of program analysis studies a program's privacy behavior by analyzing whether a specific piece of private data depends on other data and how many times they are dependent over multiple executions. We notice that when studying the aforementioned behaviors, there are two dominant program properties being analyzed – “How Much” and “Whether”, namely quantitative properties and reachability properties. In other words, we are analyzing the kind of program property that contains two sub-properties – quantitative and reachability. A property is a hyper-property if it has two or more sub-properties. For the class of properties that has quantitative and reachability sub-properties, I refer to them as quantitative-reachability hyper-properties. Most existing program analysis methods can analyze only one sub-property of a program's quantitative-reachability hyper-property. For example, reachability analysis methods only tell us whether some code pieces are executed, whether confidential data is leaked, whether certain data relies on other data, etc., which are only the reachability sub-properties. These analysis methods do not address how many times or how long these properties hold with respect to some particular code or data. Quantitative analysis methods, such as program complexity analysis, resource cost analysis, execution time estimation, etc., only tell us an upper bound on the overall quantity, i.e., the quantitative sub-property. However, these quantities are not associated with a specific piece of code, program location or private data, which is what the reachability sub-properties concern. This thesis presents a new program analysis methodology for analyzing two representative quantitative-reachability properties. The new methodology mitigates the limitations of both reachability analysis methods and quantitative analysis methods and helps to control a program's execution behaviors at a finer granularity. The effectiveness of the new analysis method is validated through prototype implementations and experimental evaluations.
The first noteworthy quantitative-reachability property I look into is the adaptivity in programs that implement certain adaptive data analyses. Data analyses are usually designed to identify some properties of the population from which the data are drawn, generalizing beyond the specific data sample. For this reason, data analyses are often designed in a way that guarantees that they produce a low generalization error. An adaptive data analysis can be seen as a process composed of multiple queries interrogating some data, where the choice of which query to run next may rely on the results of previous queries. The generalization error of each individual query/analysis can be controlled by using an array of well-established statistical techniques. However, when queries are arbitrarily composed, the different errors can propagate through the chain of queries and result in a high generalization error. To address this issue, data analysts are designing several techniques that not only guarantee bounds on the generalization errors of single queries, but also guarantee bounds on the generalization error of the composed analyses. The choice of which of these techniques to use often depends on the chain of queries that an adaptive data analysis can generate, intuitively the adaptivity level of the analysis. To help analysts identify which technique to use to control their generalization error, we consider adaptive data analyses implemented as while-like programs, and we design a program analysis framework. In this framework, we first formalize the intuitive notion of adaptivity as a quantitative-reachability property, which is a key measure for choosing the appropriate technique. Then we design a program analysis algorithm that estimates a sound upper bound on the adaptivity of the program that implements an adaptive data analysis. We also implement this program analysis and show that it can help to analyze the adaptivity of several concrete data analyses with different adaptivity structures.
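To make the notion of adaptivity concrete, the toy analysis below issues a first round of queries against a data sample and then chooses a follow-up query based on the first round's answers; the longest chain of such dependencies (here, two) is the adaptivity that the framework described above is designed to bound. The query mechanism and names are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(0, 1, size=(1000, 5))        # data sample with 5 features

def query(data, f):
    """An empirical-mean query; a generalization-preserving mechanism could add noise here."""
    return f(data).mean()

# Round 1: one query per feature (these queries do not depend on each other).
round1 = [query(sample, lambda d, j=j: d[:, j]) for j in range(5)]

# Round 2: the next query is *chosen* from the round-1 answers, which is what makes it adaptive.
best = int(np.argmax(np.abs(round1)))
round2 = query(sample, lambda d: d[:, best] ** 2)

# The dependency chain is round 1 -> round 2, so this analysis has adaptivity 2;
# the five round-1 queries alone, being mutually independent, would have adaptivity 1.
print("round-1 answers:", np.round(round1, 3), " adaptive follow-up:", round(round2, 3))
```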
As a continuation of the previous work, to get a more precise bound on a program's adaptivity level, I look at another quantitative-reachability hyper-property – the number of times a given location inside a procedure is visited during the program execution. The upper bound on this hyper-property is referred to as the reachability-bound. It can help to improve program analysis results when studying other program features. For example, the reachability-bound on each program location can be used by some resource cost analysis techniques to compute a precise bound on a program's worst-case resource consumption. When we analyze the adaptivity of an adaptive data analysis program as discussed above, the accuracy of the analysis result can also be improved through a tight reachability-bound on every program location. Some existing program complexity analysis methods can be repurposed to analyze and estimate the reachability-bound. However, these methods focus only on the overall quantity and ignore the path sensitivity in the program. For this reason, the reachability-bounds of locations in different sub-procedures are usually over-approximated. As far as we know, there is no general analysis algorithm that computes the reachability-bound for every program location directly and path-sensitively. To this end, I present a path-sensitive reachability-bound algorithm, which exploits path sensitivity to compute a precise reachability-bound for every program location. We implement this path-sensitive reachability-bound algorithm in a prototype, and report on an experimental comparison with state-of-the-art tools over four different sets of benchmarks.
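As a small illustration of the reachability-bound, the instrumented toy program below counts visits to one particular location; a path-insensitive product of loop bounds would report n * m visits, while a path-sensitive reachability-bound recognizes that the guarding branch is entered only once and reports m. The program is invented for illustration and is unrelated to the benchmarks mentioned above.

```python
def program(n, m):
    """Toy program with a location L whose visit count depends on the path taken."""
    visits_at_L = 0          # instrumentation: counts visits to location L
    reset = True
    for i in range(n):       # outer loop: up to n iterations
        if reset:            # this branch is taken only on the first iteration
            for j in range(m):
                visits_at_L += 1      # <-- location L
            reset = False
        # ... other work that does not reach L ...
    return visits_at_L

# A path-insensitive bound multiplies the loop bounds and reports n * m visits to L;
# a path-sensitive reachability-bound recognizes the branch is entered once and reports m.
for n, m in [(10, 4), (100, 4), (1000, 4)]:
    print(f"n={n:4d}, m={m}: observed visits to L = {program(n, m)}  (naive bound n*m = {n * m})")
```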
|