21 |
Efficient Algorithms for Calculating the System Matrix and the Kleene Star Operator for Systems Defined by Directed Acyclic Graphs over Dioids. Bahalkeh, Esmaeil. January 2015 (has links)
No description available.
|
22 |
Arguing NP = PSPACE: On the Coverage and Soundness of the Horizontal Compression Algorithm. Robinson Callou de M Brasil Filho. 12 September 2024 (has links)
This work is an elaboration, with examples, and an evolution of the Horizontal Compression (HC) algorithm and its set of Compression Rules. It presents a proof, formalized in the Lean Interactive Theorem Prover, that the HC algorithm can obtain a Compressed Derivation, represented by a Directed Acyclic Graph, from any Tree-Like Natural Deduction Derivation in Minimal Purely Implicational Logic. Finally, from the Coverage and Soundness of the HC algorithm, one can argue that NP = PSPACE.
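The abstract does not reproduce the HC algorithm itself, but the underlying idea, representing a tree-like derivation as a DAG by sharing repeated subderivations, can be sketched with plain hash-consing. The tuple encoding and labels below are illustrative assumptions, not the thesis's formalization.

```python
def compress(tree, table=None):
    """Turn a tree into a DAG by sharing structurally identical subtrees.

    A tree is a nested tuple (label, child, child, ...). Structurally
    equal subtrees are mapped to a single shared node (hash-consing).
    This is not the thesis's Horizontal Compression algorithm; it is a
    minimal illustration of why a DAG representation of a derivation
    can be exponentially smaller than the tree-like one.
    """
    if table is None:
        table = {}
    label, *children = tree
    shared = tuple(compress(c, table) for c in children)
    key = (label, shared)
    if key not in table:
        table[key] = (label, *shared)
    return table[key]

def tree_size(t):
    """Number of nodes when the structure is read as a tree."""
    return 1 + sum(tree_size(c) for c in t[1:])

def dag_size(node, seen=None):
    """Number of distinct shared nodes when read as a DAG."""
    if seen is None:
        seen = set()
    if id(node) in seen:
        return 0
    seen.add(id(node))
    return 1 + sum(dag_size(c, seen) for c in node[1:])

# A derivation-shaped full binary tree of depth 10: 2^11 - 1 tree nodes,
# but only 11 distinct subtrees, so the compressed DAG has 11 nodes.
t = ('a',)
for _ in range(10):
    t = ('imp', t, t)
```

The exponential-to-linear size drop is the reason a compressed derivation can fit in polynomial space even when its tree-like expansion cannot.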
|
23 |
Graphical representation of independence structures. Sadeghi, Kayvan. January 2012 (has links)
In this thesis we describe subclasses of a class of graphs with three types of edges, called loopless mixed graphs (LMGs). The class of LMGs contains almost all known classes of graphs used in the literature on graphical Markov models. We focus in particular on the subclass of ribbonless graphs (RGs), which include as special cases undirected graphs, bidirected graphs, and directed acyclic graphs, as well as ancestral graphs and summary graphs. We define a unifying interpretation of independence structure for LMGs and pairwise and global Markov properties for RGs, discuss their maximality, and, in particular, prove the equivalence of the pairwise and global Markov properties for graphoids defined over the nodes of RGs. Three subclasses of LMGs (MC, summary, and ancestral graphs) capture the modified independence model obtained after marginalisation over unobserved variables and conditioning on selection variables, starting from variables whose independence restrictions are represented by a directed acyclic graph (DAG). We derive algorithms to generate these graphs from a given DAG or from a graph of a specific subclass, and we study the relationships between these classes of graphs. Finally, a manual and code are provided that explain the methods and functions in R for implementing and generating the various graphs studied in this thesis.
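RGs and general LMGs require the thesis's machinery, but the baseline notion they unify, separation in a DAG, can be sketched with the standard moralization criterion. This is textbook material, not the thesis's R code, and the dict-based graph encoding is an assumption.

```python
from collections import defaultdict, deque

def d_separated(dag, x, y, z):
    """Test whether x and y are d-separated given the set z in a DAG.

    dag: dict mapping node -> set of its parents. Uses the moralization
    criterion: restrict to the ancestral set of {x, y} | z, marry
    co-parents, drop directions, then check undirected separation.
    """
    # ancestral closure of {x, y} and z
    anc, stack = set(), [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(dag.get(n, ()))
    # moralize: parent-child edges plus edges between co-parents
    und = defaultdict(set)
    for child in anc:
        ps = [p for p in dag.get(child, ()) if p in anc]
        for p in ps:
            und[p].add(child); und[child].add(p)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                und[ps[i]].add(ps[j]); und[ps[j]].add(ps[i])
    # undirected reachability from x, blocked by z
    seen, q = {x}, deque([x])
    while q:
        n = q.popleft()
        if n == y:
            return False
        for m in und[n]:
            if m not in seen and m not in z:
                seen.add(m); q.append(m)
    return True
```

The collider case (conditioning on a common effect opens a path) is the behaviour that bidirected edges in LMGs generalize.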
|
24 |
Enumerating functional substructures of genome-scale metabolic networks: stories, precursors and organisations. Vieira Milreu, Paulo. 19 December 2012 (has links)
In this thesis, we presented three different methods for enumerating special subnetworks contained in a metabolic network: metabolic stories, minimal precursor sets and chemical organisations. For each of the three methods, we gave theoretical results, and for the first two, we further provided an illustration of how to apply them in order to study the metabolic behaviour of living organisms. Metabolic stories are defined as maximal directed acyclic graphs whose sets of sources and targets are restricted to a subset of the nodes. The initial motivation for this definition was to analyse metabolomics experimental data, but the method was also explored in a different context. Metabolic precursor sets are minimal sets of nutrients that are able to produce metabolites of interest. We present three different methods for enumerating minimal precursor sets and we illustrate their application in a study of the metabolic exchanges in a symbiotic system. Chemical organisations are sets of metabolites that are simultaneously closed and self-maintaining, which captures a stability property in the sense that no new metabolite can be produced and that none of the metabolites already present in the system can disappear.
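As a toy illustration of the precursor-set notion (not the thesis's enumeration algorithms, which target genome-scale networks), one can brute-force the inclusion-minimal nutrient sets on a small reaction list; the reaction encoding below is an assumption.

```python
from itertools import combinations

def producible(reactions, sources):
    """Forward closure: all metabolites reachable from the sources.
    reactions: list of (substrate_set, product_set) pairs."""
    avail = set(sources)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions:
            if subs <= avail and not prods <= avail:
                avail |= prods
                changed = True
    return avail

def minimal_precursor_sets(reactions, nutrients, targets):
    """Enumerate inclusion-minimal nutrient subsets producing all targets.
    Exhaustive over subsets, so suitable for toy networks only."""
    hits = [set(c) for r in range(len(nutrients) + 1)
            for c in combinations(sorted(nutrients), r)
            if targets <= producible(reactions, c)]
    return [s for s in hits if not any(o < s for o in hits)]
```

On a three-nutrient example where T is made either directly from C or from B plus an intermediate derived from A, the minimal sets are {C} and {A, B}.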
|
26 |
Predictive Resource Management for Scientific Workflows. Witt, Carl Philipp. 21 July 2020 (has links)
Scientific experiments produce data at unprecedented volumes and resolutions. For the extraction of insights from large sets of raw data, complex analysis workflows are necessary. Scientific workflows enable such data analyses at scale. To achieve scalability, most workflow management systems are designed as an additional layer on top of distributed resource managers, such as batch schedulers or distributed data processing frameworks. However, like distributed resource managers, they do not automatically determine the amount of resources required for executing individual tasks in a workflow. The status quo is that workflow management systems delegate the challenge of estimating resource usage to the user. This limits the performance and ease-of-use of scientific workflow management systems, as users often lack the time, expertise, or incentives to estimate resource usage accurately.
This thesis is an investigation of how to learn and predict resource usage during workflow execution. In contrast to prior work, an integrated perspective on prediction and scheduling is taken, which introduces various challenges, such as quantifying the effects of prediction errors on system performance.
The main contributions are:
1. A survey of peak memory usage prediction in batch processing environments. It provides an overview of prior machine learning approaches, commonly used features, evaluation metrics, and data sets.
2. A static workflow scheduling method that uses statistical methods to predict which scheduling decisions can be improved.
3. A feedback-based approach to scheduling and predictive resource allocation, which is extensively evaluated using simulation. The results provide insights into the desirable characteristics of scheduling heuristics and prediction models.
4. A prediction model that reduces memory wastage. The design takes into account the asymmetric costs of overestimation and underestimation, as well as the follow-up costs of prediction errors.
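Contribution 4 rests on the asymmetry between over- and underestimation. A minimal sketch of that idea, not the thesis's model: under a piecewise-linear cost, the cost-minimizing allocation over a sample of observed peak usages is a sample quantile determined by the cost ratio. The costs below are invented for illustration.

```python
def pinball_cost(alloc, peaks, over_cost, under_cost):
    """Asymmetric linear cost: over_cost per unit of wasted memory,
    under_cost per unit of shortfall (task failure and retry)."""
    return sum(over_cost * (alloc - p) if alloc >= p
               else under_cost * (p - alloc)
               for p in peaks)

def best_allocation(peaks, over_cost=1.0, under_cost=9.0):
    """Allocation minimizing total asymmetric cost over observed peaks.

    For this loss the minimizer is the q-th sample quantile with
    q = under_cost / (over_cost + under_cost), so a high penalty for
    underestimation pushes the allocation toward the upper tail.
    Candidate allocations are taken from the sample itself.
    """
    return min(sorted(peaks),
               key=lambda a: pinball_cost(a, peaks, over_cost, under_cost))
```

With the default 1:9 cost ratio the chosen allocation sits near the 90th percentile of past peaks; flipping the ratio drops it to the lower tail.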
|
27 |
Finite element modeling of electromagnetic radiation and induced heat transfer in the human body. Kim, Kyungjoo. 24 September 2013 (has links)
This dissertation develops adaptive hp-Finite Element (FE) technology and a parallel sparse direct solver enabling the accurate modeling of the absorption of Electro-Magnetic (EM) energy in the human head. With a large and growing number of cell phone users, the adverse health effects of EM fields have raised public concerns. Most research that attempts to explain the relationship between exposure to EM fields and its harmful effects on the human body identifies temperature changes due to the EM energy as the dominant source of possible harm. The research presented here focuses on determining the temperature distribution within the human body exposed to EM fields with an emphasis on the human head. Major challenges in accurately determining the temperature changes lie in the dependence of EM material properties on the temperature. This leads to a formulation that couples the BioHeat Transfer (BHT) and Maxwell equations. The mathematical model is formed by the time-harmonic Maxwell equations weakly coupled with the transient BHT equation. This choice of equations reflects the relevant time scales. With a mobile device operating at a single frequency, EM fields arrive at a steady-state in the micro-second range. The heat sources induced by EM fields produce a transient temperature field converging to a steady-state distribution on a time scale ranging from seconds to minutes; this necessitates the transient formulation. Since the EM material properties depend upon the temperature, the equations are fully coupled; however, the coupling is realized weakly due to the different time scales for Maxwell and BHT equations. The BHT equation is discretized in time with a time step reflecting the thermal scales. After multiple time steps, the temperature field is used to determine the EM material properties and the time-harmonic Maxwell equations are solved. The resulting heat sources are recalculated and the process continued. 
Due to the weak coupling of the problems, the corresponding numerical models are established separately. The BHT equation is discretized with H¹ conforming elements, and Maxwell equations are discretized with H(curl) conforming elements. The complexity of the human head geometry naturally leads to the use of tetrahedral elements, which are commonly employed by unstructured mesh generators. The EM domain, including the head and a radiating source, is terminated by a Perfectly Matched Layer (PML), which is discretized with prismatic elements. The use of high order elements of different shapes and discretization types has motivated the development of a general 3D hp-FE code. In this work, we present new generic data structures and algorithms to perform adaptive local refinements on a hybrid mesh composed of different shaped elements. A variety of isotropic and anisotropic refinements that preserve conformity of discretization are designed. The refinement algorithms support one-irregular meshes with the constrained approximation technique. The algorithms are experimentally proven to be deadlock-free. A second contribution of this dissertation lies in a new parallel sparse direct solver that targets linear systems arising from hp-FE methods. The new solver interfaces to the hierarchy of a locally refined mesh to build an elimination ordering for the factorization that reflects the h-refinements. By following mesh refinements, not only the computation of element matrices but also their factorization is restricted to new elements and their ancestors. The solver is parallelized by exploiting two-level task parallelism: tasks are first generated from a parallel post-order tree traversal on the assembly tree; next, those tasks are further refined by using algorithms-by-blocks to gain fine-grained parallelism. The resulting fine-grained tasks are asynchronously executed after their dependencies are analyzed.
This approach effectively reduces scheduling overhead and increases flexibility to handle irregular tasks. The solver outperforms the conventional general sparse direct solver for a class of problems formulated by high order FEs. Finally, numerical results for a 3D coupled BHT with Maxwell equations are presented. The solutions of this Maxwell code have been verified using the analytic Mie series solutions. Starting with simple spherical geometry, parametric studies are conducted on realistic head models for a typical frequency band (900 MHz) of mobile phones.
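The staggered coupling described above can be caricatured with scalar surrogates. Everything here (the coefficients, the lumped perfusion term, and re-solving the "EM" problem every thermal step rather than after several steps) is invented for illustration and is not the dissertation's finite element model.

```python
def coupled_loop(t0=37.0, dt=1.0, steps=200):
    """Toy staggered Maxwell/BioHeat coupling with scalar surrogates.

    em_source stands in for solving the time-harmonic Maxwell problem
    with temperature-dependent material properties; the update below is
    a forward-Euler step of a lumped bioheat balance. All coefficients
    are made up; the point is only the weak-coupling iteration pattern.
    """
    def em_source(temp):
        # effective tissue conductivity drifts weakly with temperature
        sigma = 0.9 + 0.004 * (temp - 37.0)
        return 0.5 * sigma            # absorbed EM power (SAR-like term)

    temp = t0
    for _ in range(steps):
        q = em_source(temp)           # re-solve "EM" at current temperature
        perfusion = 0.1 * (temp - 37.0)   # blood-perfusion cooling
        temp += dt * (q - perfusion)      # transient BHT step
    return temp
```

The iteration converges to the temperature where EM heating balances perfusion cooling, mirroring the steady-state distribution the dissertation computes on the full head model.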
|
28 |
Average case analysis of algorithms for the maximum subarray problem. Bashar, Mohammad Ehsanul. January 2007 (has links)
The Maximum Subarray Problem (MSP) is to find the contiguous array portion that maximizes the sum of the array elements in it. The goal is to locate the most useful and informative array segment that associates two parameters involved in the data in a 2D array. It is an efficient data mining method which gives an accurate pattern or trend of the data with respect to some associated parameters. Distance Matrix Multiplication (DMM) is at the core of MSP; moreover, DMM and MSP have worst-case complexities of the same order, so any improvement to the algorithm for DMM also improves MSP. The complexity of conventional DMM is O(n³). In the average case, the All Pairs Shortest Path (APSP) problem can be adapted as a fast engine for DMM and solved in O(n² log n) expected time. Using this result, MSP can be solved in O(n² log² n) expected time. MSP can be extended to K-MSP; to incorporate DMM into K-MSP, DMM needs to be extended to K-DMM as well. In this research we show how DMM can be extended to K-DMM using the K-Tuple Approach to solve K-MSP in O(Kn² log² n log K) time when K ≤ n/log n. We also present the Tournament Approach, which solves K-MSP in O(n² log² n + Kn²) time and outperforms the K-Tuple Approach.
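The thesis's DMM-based machinery is not shown in the abstract; for orientation, the plain problem can be sketched with textbook algorithms: Kadane's O(n) scan in 1D, reused per row-band for a simple O(rows² · cols) 2D version. These are the baseline methods, not the improved expected-time algorithms described above.

```python
def max_subarray_1d(a):
    """Kadane's O(n) algorithm for the 1D maximum subarray sum."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)   # extend the run or restart at x
        best = max(best, cur)
    return best

def max_subarray_2d(m):
    """Simple 2D maximum subarray: fix a top and bottom row, collapse
    each column of the band to a single sum, and run Kadane on that
    1D array. O(rows^2 * cols) overall."""
    rows, cols = len(m), len(m[0])
    best = m[0][0]
    for top in range(rows):
        col = [0] * cols
        for bottom in range(top, rows):
            for j in range(cols):
                col[j] += m[bottom][j]
            best = max(best, max_subarray_1d(col))
    return best
```

The row-band collapse is exactly where DMM enters in the faster algorithms: the band sums can be organized as a distance-matrix product, which is why speeding up DMM speeds up MSP.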
|
29 |
Spatial analysis of invasive alien plant distribution patterns and processes using Bayesian network-based data mining techniques. Dlamini, Wisdom Mdumiseni Dabulizwe. 03 1900 (has links)
Invasive alien plants have widespread ecological and socioeconomic impacts throughout many parts of the world, including Swaziland where the government declared them a national disaster. Control of these species requires knowledge on the invasion ecology of each species including how they interact with the invaded environment. Species distribution models are vital for providing solutions to such problems including the prediction of their niche and distribution. Various modelling approaches are used for species distribution modelling albeit with limitations resulting from statistical assumptions, implementation and interpretation of outputs.
This study explores the usefulness of Bayesian networks (BNs) due to their ability to model stochastic, nonlinear inter-causal relationships and uncertainty. Data-driven BNs were used to explore patterns and processes influencing the spatial distribution of 16 priority invasive alien plants in Swaziland. Various BN structure learning algorithms were applied within the Weka software to build models from a set of 170 variables incorporating climatic, anthropogenic, topo-edaphic and landscape factors. While all the BN models produced accurate predictions of alien plant invasion, the globally scored networks, particularly those learned with hill-climbing algorithms, performed relatively well. However, when considering the probabilistic outputs, the constraint-based Inferred Causation algorithm, which attempts to generate a causal BN structure, performed better.
The learned BNs reveal that the main pathways of alien plants into new areas are ruderal areas such as road verges and riverbanks whilst humans and human activity are key driving factors and the main dispersal mechanism. However, the distribution of most of the species is constrained by climate particularly tolerance to very low temperatures and precipitation seasonality. Biotic interactions and/or associations among the species are also prevalent. The findings suggest that most of the species will proliferate by extending their range resulting in the whole country being at risk of further invasion.
The ability of BNs to express uncertain, rather complex conditional and probabilistic dependencies and to combine multisource data makes them an attractive technique for species distribution modeling, especially as joint invasive species distribution models (JiSDM). Suggestions for further research are provided including the need for rigorous invasive species monitoring, data stewardship and testing more BN learning algorithms. / Environmental Sciences / D. Phil. (Environmental Science)
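The Weka learners used in the study are not reproduced here, but the hill-climbing family the abstract singles out can be sketched in a bare-bones form: greedy edge additions scored by BIC on binary data, with no deletions, reversals, or restarts. Variable names and the data layout are invented for illustration.

```python
from itertools import product
from math import log

def bic(data, var, parents):
    """BIC contribution of one binary node given its parents.
    data: list of dicts mapping variable name -> 0/1."""
    n = len(data)
    score = 0.0
    for combo in product((0, 1), repeat=len(parents)):
        rows = [r for r in data if tuple(r[p] for p in parents) == combo]
        ones = sum(r[var] for r in rows)
        for k in (ones, len(rows) - ones):
            if k:
                score += k * log(k / len(rows))   # log-likelihood term
    return score - 0.5 * log(n) * 2 ** len(parents)  # complexity penalty

def reaches(parents, a, b):
    """True if a directed path a -> ... -> b already exists."""
    stack, seen = list(parents[b]), set()
    while stack:
        node = stack.pop()
        if node == a:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(parents[node])
    return False

def hill_climb(data, variables):
    """Greedy structure search: repeatedly add the single edge u -> v
    with the largest BIC gain, keeping the graph acyclic. The BIC score
    decomposes per node, so only v's local score needs recomputing."""
    parents = {v: [] for v in variables}
    while True:
        best = None
        for u, v in product(variables, repeat=2):
            if u == v or u in parents[v] or reaches(parents, v, u):
                continue   # skip self-loops, duplicates, and cycles
            gain = bic(data, v, parents[v] + [u]) - bic(data, v, parents[v])
            if gain > 1e-9 and (best is None or gain > best[0]):
                best = (gain, u, v)
        if best is None:
            return parents
        parents[best[2]].append(best[1])
```

On strongly correlated binary data the learner recovers a single edge between the two variables; with symmetric data the direction is decided by iteration order, which is one reason the abstract distinguishes globally scored networks from causal-structure algorithms.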
|
30 |
Designing a Novel RPL Objective Function & Testing RPL Objective Functions Performance. Mardini, Khalil; Abdulsamad, Emad. January 2023 (has links)
The use of Internet of Things (IoT) systems has increased to meet the need for smart systems in various fields, such as smart homes, intelligent industries, medical systems, agriculture, and the military. IoT networks are expanding daily to include hundreds and thousands of IoT devices, which transmit information through other linked devices to reach the network sink or gateway. The information follows different routes to the network sink. Finding an ideal routing solution is a big challenge due to several factors, such as power, computation, storage, and memory limitations of IoT devices. In 2011, a new standardized routing protocol for low-power and lossy networks (RPL) was released by the Internet Engineering Task Force (IETF). The IETF adopted a distance-vector routing algorithm for the RPL protocol. RPL utilizes objective functions (OFs) to select paths depending on different metrics. These OFs and their metrics must be evaluated and tested to develop the best routing solution. This project aims to test the performance of the standardized RPL objective functions in a simulation environment. Afterwards, a new objective function with a new metric will be implemented and tested under the same environmental conditions. The performance results of the standard objective functions and the newly implemented objective function will be analyzed and compared to evaluate whether the standard objective functions or the new objective function is the better routing solution for an IoT device network.
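The abstract does not define a concrete objective function, but the general shape of an ETX-based OF in the MRHOF style can be sketched: each node adopts the parent that minimizes its path cost (rank) toward the root. This is heavily simplified (no hysteresis, no trickle timers, none of the RFC 6550 signalling) and reduces to Dijkstra from the root; the link values are invented.

```python
import heapq

def build_dodag(links, root):
    """Greedy MRHOF-flavoured parent selection over link ETX.

    Each node's rank is the minimum over neighbours of
    (neighbour rank + link ETX), and its preferred parent is the
    neighbour achieving that minimum. links: {(a, b): etx}, treated
    as bidirectional. Returns (rank, parent) dicts.
    """
    nbrs = {}
    for (a, b), etx in links.items():
        nbrs.setdefault(a, []).append((b, etx))
        nbrs.setdefault(b, []).append((a, etx))
    rank, parent = {root: 0.0}, {root: None}
    heap = [(0.0, root)]
    while heap:
        r, node = heapq.heappop(heap)
        if r > rank.get(node, float('inf')):
            continue   # stale queue entry
        for m, etx in nbrs.get(node, []):
            if r + etx < rank.get(m, float('inf')):
                rank[m] = r + etx
                parent[m] = node
                heapq.heappush(heap, (rank[m], m))
    return rank, parent
```

A new objective function of the kind the thesis proposes would amount to swapping the ETX metric (or the rank update rule) for another link or node metric while keeping this parent-selection skeleton.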
|