21

Graphical representation of independence structures

Sadeghi, Kayvan January 2012 (has links)
In this thesis we describe subclasses of a class of graphs with three types of edges, called loopless mixed graphs (LMGs). The class of LMGs contains almost all known classes of graphs used in the literature of graphical Markov models. We focus in particular on the subclass of ribbonless graphs (RGs), which include, as special cases, undirected graphs, bidirected graphs, and directed acyclic graphs, as well as ancestral graphs and summary graphs. We define a unifying interpretation of independence structure for LMGs and pairwise and global Markov properties for RGs, discuss their maximality, and, in particular, prove the equivalence of the pairwise and global Markov properties for graphoids defined over the nodes of RGs. Three subclasses of LMGs (MC, summary, and ancestral graphs) capture the modified independence model after marginalisation over unobserved variables and conditioning on selection variables, for variables satisfying independence restrictions represented by a directed acyclic graph (DAG). We derive algorithms to generate these graphs from a given DAG or from a graph of a specific subclass, and we study the relationships between these classes of graphs. Finally, a manual and code are provided that explain the methods and functions in R for implementing and generating the various graphs studied in this thesis.
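Editor's note: the following is a minimal sketch, not taken from the thesis, of the kind of statement a global Markov property makes in the simplest (undirected) special case: if a set S separates nodes a and b in the graph, the corresponding variables are conditionally independent given S. The separation test here is plain graph reachability in Python; the unified separation criterion the thesis defines for LMGs and RGs is more general.

```python
from collections import deque

def separates(adj, a, b, s):
    """Return True if every path between a and b in the undirected graph
    `adj` (dict: node -> set of neighbours) passes through the set `s`,
    i.e. if `s` separates `a` and `b`."""
    if a in s or b in s:
        return True
    seen, queue = {a}, deque([a])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr in s or nbr in seen:
                continue
            if nbr == b:
                return False              # found a path avoiding s
            seen.add(nbr)
            queue.append(nbr)
    return True

# Chain graph 1 - 2 - 3: node 2 separates 1 from 3.
adj = {1: {2}, 2: {1, 3}, 3: {2}}
print(separates(adj, 1, 3, {2}))    # True  -> read off X1 independent of X3 given X2
print(separates(adj, 1, 3, set()))  # False -> no marginal independence implied
```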
22

Enumerating functional substructures of genome-scale metabolic networks : stories, precursors and organisations / Énumération de sous-structures fonctionnelle dans des réseaux métaboliques complets : Histoires métaboliques, précurseurs et organisations chimiques

Vieira Milreu, Paulo 19 December 2012 (has links)
Dans cette thèse, nous avons présenté trois méthodes différentes pour l'énumération de sous-réseaux particuliers d'un réseau métabolique : les histoires métaboliques, les ensembles minimaux de précurseurs et les organisations chimiques. Pour chacune de ces trois méthodes, nous avons présenté des résultats théoriques, et pour les deux premières, nous avons en outre fourni une illustration sur comment les appliquer afin d'étudier le comportement métabolique des organismes vivants. Les histoires métaboliques sont définies comme des graphes acycliques dirigés maximaux dont les ensembles de sources et de cibles sont limités à un sous-ensemble des noeuds. La motivation initiale de cette définition était d'analyser des données expérimentales de métabolomique, mais la méthode a également été explorée dans un contexte différent. Les ensembles de précurseurs métaboliques sont des ensembles minimaux de nutriments qui permettent de produire des métabolites d'intérêt. Nous présentons trois méthodes différentes pour l'énumération de tels ensembles minimaux de précurseurs, et nous illustrons leur application dans une étude des échanges métaboliques dans un système symbiotique. Les organisations chimiques sont des ensembles de métabolites qui à la fois sont fermés et s'auto-maintiennent, ce qui reflète des caractéristiques de stabilité dans le sens où aucun nouveau métabolite ne peut être produit et qu'aucun des métabolites déjà présents dans le système ne peut disparaître. / In this thesis, we presented three different methods for enumerating special subnetworks contained in a metabolic network: metabolic stories, minimal precursor sets and chemical organisations. For each of the three methods, we gave theoretical results, and for the first two, we further provided an illustration of how to apply them in order to study the metabolic behaviour of living organisms. Metabolic stories are defined as maximal directed acyclic graphs whose sets of sources and targets are restricted to a subset of the nodes. The initial motivation for this definition was to analyse metabolomics experimental data, but the method was also explored in a different context. Metabolic precursor sets are minimal sets of nutrients that are able to produce metabolites of interest. We present three different methods for enumerating minimal precursor sets and we illustrate their application in a study of the metabolic exchanges in a symbiotic system. Chemical organisations are sets of metabolites that are simultaneously closed and self-maintaining, which reflects a stability property in the sense that no new metabolite can be produced and none of the metabolites already present in the system can disappear.
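As an editorial illustration (not from the thesis, whose enumeration algorithms are far more efficient), the notion of a minimal precursor set can be sketched with a brute-force forward-propagation check in Python: a nutrient set is a precursor set if iteratively firing every reaction whose substrates are available eventually produces all target metabolites, and it is minimal if no proper subset does. The reactions and metabolite names below are invented.

```python
from itertools import combinations

# Toy reaction list: (set of substrates, set of products); names are invented.
reactions = [
    ({"A"}, {"C"}),
    ({"B", "C"}, {"D"}),
    ({"E"}, {"D"}),
]

def producible(seeds, reactions):
    """Metabolites reachable by repeatedly firing every reaction whose
    substrates are all available (the 'scope' of the seed set)."""
    avail = set(seeds)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions:
            if subs <= avail and not prods <= avail:
                avail |= prods
                changed = True
    return avail

def minimal_precursor_sets(candidates, targets, reactions):
    """Enumerate inclusion-minimal subsets of `candidates` whose scope
    contains all `targets` (brute force, viable only for tiny examples)."""
    found = []
    for k in range(len(candidates) + 1):
        for combo in combinations(sorted(candidates), k):
            if targets <= producible(combo, reactions) and \
               not any(set(f) <= set(combo) for f in found):
                found.append(combo)
    return found

print(minimal_precursor_sets({"A", "B", "E"}, {"D"}, reactions))
# [('E',), ('A', 'B')]: either E alone, or A and B together, suffice to make D
```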
24

Predictive Resource Management for Scientific Workflows

Witt, Carl Philipp 21 July 2020 (has links)
Um Erkenntnisse aus großen Mengen wissenschaftlicher Rohdaten zu gewinnen, sind komplexe Datenanalysen erforderlich. Scientific Workflows sind ein Ansatz zur Umsetzung solcher Datenanalysen. Um Skalierbarkeit zu erreichen, setzen die meisten Workflow-Management-Systeme auf bereits existierende Lösungen zur Verwaltung verteilter Ressourcen, etwa Batch-Scheduling-Systeme. Die Abschätzung der Ressourcen, die zur Ausführung einzelner Arbeitsschritte benötigt werden, wird dabei immer noch an die Nutzer:innen delegiert. Dies schränkt die Leistung und Benutzerfreundlichkeit von Workflow-Management-Systemen ein, da den Nutzer:innen oft die Zeit, das Fachwissen oder die Anreize fehlen, den Ressourcenverbrauch genau abzuschätzen. Diese Arbeit untersucht, wie die Ressourcennutzung während der Ausführung von Workflows automatisch erlernt werden kann. Im Gegensatz zu früheren Arbeiten werden Scheduling und Vorhersage von Ressourcenverbrauch in einem engeren Zusammenhang betrachtet. Dies bringt verschiedene Herausforderungen mit sich, wie die Quantifizierung der Auswirkungen von Vorhersagefehlern auf die Systemleistung. Die wichtigsten Beiträge dieser Arbeit sind: 1. Eine Literaturübersicht aktueller Ansätze zur Vorhersage von Spitzenspeicherverbrauch mittels maschinellen Lernens im Kontext von Batch-Scheduling-Systemen. 2. Ein Scheduling-Verfahren, das statistische Methoden verwendet, um vorherzusagen, welche Scheduling-Entscheidungen verbessert werden können. 3. Ein Ansatz zur Nutzung von zur Laufzeit gemessenem Spitzenspeicherverbrauch in Vorhersagemodellen, die die fortwährende Optimierung der Ressourcenallokation erlauben. Umfangreiche Simulationsexperimente geben Einblicke in Schlüsseleigenschaften von Scheduling-Heuristiken und Vorhersagemodellen. 4. Ein Vorhersagemodell, das die asymmetrischen Kosten überschätzten und unterschätzten Speicherverbrauchs berücksichtigt, sowie die Folgekosten von Vorhersagefehlern einbezieht. / Scientific experiments produce data at unprecedented volumes and resolutions. For the extraction of insights from large sets of raw data, complex analysis workflows are necessary. Scientific workflows enable such data analyses at scale. To achieve scalability, most workflow management systems are designed as an additional layer on top of distributed resource managers, such as batch schedulers or distributed data processing frameworks. However, like distributed resource managers, they do not automatically determine the amount of resources required for executing individual tasks in a workflow. The status quo is that workflow management systems delegate the challenge of estimating resource usage to the user. This limits the performance and ease-of-use of scientific workflow management systems, as users often lack the time, expertise, or incentives to estimate resource usage accurately. This thesis is an investigation of how to learn and predict resource usage during workflow execution. In contrast to prior work, an integrated perspective on prediction and scheduling is taken, which introduces various challenges, such as quantifying the effects of prediction errors on system performance. The main contributions are: 1. A survey of peak memory usage prediction in batch processing environments. It provides an overview of prior machine learning approaches, commonly used features, evaluation metrics, and data sets. 2. A static workflow scheduling method that uses statistical methods to predict which scheduling decisions can be improved. 3. 
A feedback-based approach to scheduling and predictive resource allocation, which is extensively evaluated using simulation. The results provide insights into the desirable characteristics of scheduling heuristics and prediction models. 4. A prediction model that reduces memory wastage. The design takes into account the asymmetric costs of overestimation and underestimation, as well as follow up costs of prediction errors.
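A hedged sketch of the asymmetry idea behind the final contribution (illustrative only; the thesis's actual prediction model is not reproduced here): if an underestimated peak-memory allocation causes task failure and re-execution while an overestimate merely wastes memory, the allocation can be chosen to minimise an asymmetric loss over past observations. The 3:1 cost ratio and the sample values below are assumptions.

```python
import numpy as np

def asymmetric_memory_loss(pred, actual, under_penalty=3.0, over_penalty=1.0):
    """Toy asymmetric loss: underestimating peak memory (job failure plus
    re-execution) is assumed to cost more than overestimating it (wasted
    reservation). The 3:1 ratio is an illustrative assumption."""
    diff = actual - pred
    return np.where(diff > 0, under_penalty * diff, over_penalty * (-diff))

# Pick the allocation minimising the expected loss over observed peaks.
observed_peaks = np.array([3.1, 3.4, 2.9, 6.0, 3.2])       # GB, hypothetical history
candidates = np.linspace(2.0, 8.0, 121)
expected = [asymmetric_memory_loss(c, observed_peaks).mean() for c in candidates]
best = candidates[int(np.argmin(expected))]
print(f"allocate ~{best:.2f} GB")   # ~3.40 GB: above the median peak, below the worst case
```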
25

Finite element modeling of electromagnetic radiation and induced heat transfer in the human body

Kim, Kyungjoo 24 September 2013 (has links)
This dissertation develops adaptive hp-Finite Element (FE) technology and a parallel sparse direct solver enabling the accurate modeling of the absorption of Electro-Magnetic (EM) energy in the human head. With a large and growing number of cell phone users, the adverse health effects of EM fields have raised public concerns. Most research that attempts to explain the relationship between exposure to EM fields and its harmful effects on the human body identifies temperature changes due to the EM energy as the dominant source of possible harm. The research presented here focuses on determining the temperature distribution within the human body exposed to EM fields with an emphasis on the human head. Major challenges in accurately determining the temperature changes lie in the dependence of EM material properties on the temperature. This leads to a formulation that couples the BioHeat Transfer (BHT) and Maxwell equations. The mathematical model is formed by the time-harmonic Maxwell equations weakly coupled with the transient BHT equation. This choice of equations reflects the relevant time scales. With a mobile device operating at a single frequency, EM fields arrive at a steady-state in the micro-second range. The heat sources induced by EM fields produce a transient temperature field converging to a steady-state distribution on a time scale ranging from seconds to minutes; this necessitates the transient formulation. Since the EM material properties depend upon the temperature, the equations are fully coupled; however, the coupling is realized weakly due to the different time scales for Maxwell and BHT equations. The BHT equation is discretized in time with a time step reflecting the thermal scales. After multiple time steps, the temperature field is used to determine the EM material properties and the time-harmonic Maxwell equations are solved. The resulting heat sources are recalculated and the process continued. Due to the weak coupling of the problems, the corresponding numerical models are established separately. The BHT equation is discretized with H¹ conforming elements, and Maxwell equations are discretized with H(curl) conforming elements. The complexity of the human head geometry naturally leads to the use of tetrahedral elements, which are commonly employed by unstructured mesh generators. The EM domain, including the head and a radiating source, is terminated by a Perfectly Matched Layer (PML), which is discretized with prismatic elements. The use of high order elements of different shapes and discretization types has motivated the development of a general 3D hp-FE code. In this work, we present new generic data structures and algorithms to perform adaptive local refinements on a hybrid mesh composed of different shaped elements. A variety of isotropic and anisotropic refinements that preserve conformity of discretization are designed. The refinement algorithms support one-irregular meshes with the constrained approximation technique. The algorithms are experimentally proven to be deadlock free. A second contribution of this dissertation lies in a new parallel sparse direct solver that targets linear systems arising from hp-FE methods. The new solver interfaces to the hierarchy of a locally refined mesh to build an elimination ordering for the factorization that reflects the h-refinements. By following mesh refinements, not only the computation of element matrices but also their factorization is restricted to new elements and their ancestors.
The solver is parallelized by exploiting two-level task parallelism: tasks are first generated from a parallel post-order tree traversal on the assembly tree; next, those tasks are further refined by using algorithms-by-blocks to gain fine-grained parallelism. The resulting fine-grained tasks are asynchronously executed after their dependencies are analyzed. This approach effectively reduces scheduling overhead and increases flexibility to handle irregular tasks. The solver outperforms the conventional general sparse direct solver for a class of problems formulated by high order FEs. Finally, numerical results for a 3D coupled BHT with Maxwell equations are presented. The solutions of this Maxwell code have been verified using the analytic Mie series solutions. Starting with simple spherical geometry, parametric studies are conducted on realistic head models for a typical frequency band (900 MHz) of mobile phones.
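A schematic Python sketch of the weak-coupling loop described above, under made-up 1D numbers: the transient heat equation is advanced with a frozen heat source, and the source (standing in for the EM-induced term, which in the dissertation comes from re-solving the time-harmonic Maxwell equations with updated material properties) is only re-evaluated every few hundred thermal steps. No Maxwell solve, FE discretization, or realistic tissue data is attempted here.

```python
import numpy as np

# Schematic weak-coupling loop: a 1D transient heat equation is stepped in
# time while a stand-in temperature-dependent source term (playing the role
# of the EM-induced heat source) is only re-evaluated every n_inner steps,
# mimicking the separation of time scales between the time-harmonic Maxwell
# problem and the transient BHT equation. All numbers are illustrative.
nx, dx, dt = 50, 1e-3, 0.05            # grid points, spacing [m], time step [s]
alpha, rho_c = 1.4e-7, 3.6e6           # thermal diffusivity [m^2/s], rho*c [J/(m^3 K)]
T = np.full(nx, 37.0)                  # initial tissue temperature [C]
n_inner, n_outer = 200, 10

def em_heat_source(temp):
    # Stand-in for re-solving Maxwell with temperature-dependent properties;
    # a real SAR-derived volumetric source [W/m^3] would replace this.
    return 5e3 * (1.0 + 0.01 * (temp - 37.0))

for _ in range(n_outer):               # outer coupling iterations
    q = em_heat_source(T)              # "solve Maxwell", then freeze the source
    for _ in range(n_inner):           # inner transient heat-transfer steps
        lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt * (alpha * lap + q[1:-1] / rho_c)
        # end nodes are never updated, i.e. held at 37 C (Dirichlet ends)

print(f"peak temperature after coupling loop: {T.max():.2f} C")  # ~37.1 C here
```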
26

Average case analysis of algorithms for the maximum subarray problem

Bashar, Mohammad Ehsanul January 2007 (has links)
The Maximum Subarray Problem (MSP) is to find the consecutive array portion that maximizes the sum of the array elements in it. The goal is to locate the most useful and informative array segment that associates two parameters involved in the data in a 2D array. It is an efficient data mining method that gives an accurate pattern or trend of the data with respect to some associated parameters. Distance Matrix Multiplication (DMM) is at the core of MSP, and DMM and MSP have worst-case complexities of the same order, so improving the algorithm for DMM also improves MSP. The complexity of conventional DMM is O(n³). In the average case, the All Pairs Shortest Path (APSP) problem can be adapted as a fast engine for DMM, which can then be solved in O(n² log n) expected time. Using this result, MSP can be solved in O(n² log² n) expected time. MSP can be extended to K-MSP; to incorporate DMM into K-MSP, DMM needs to be extended to K-DMM as well. In this research we show how DMM can be extended to K-DMM using the K-Tuple Approach to solve K-MSP in O(Kn² log² n log K) time complexity when K ≤ n/log n. We also present the Tournament Approach, which solves K-MSP in O(n² log² n + Kn²) time complexity and outperforms the K-Tuple Approach.
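For orientation (an editorial sketch, not the thesis's algorithm), the conventional cubic-time baseline for the 2D maximum subarray can be written by collapsing every pair of top and bottom rows into column sums and running Kadane's one-dimensional algorithm on the result; the expected-time improvements discussed in the abstract go below this bound.

```python
def max_subarray_1d(a):
    """Kadane's algorithm: maximum sum over all consecutive runs of `a`."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def max_subarray_2d(m):
    """Maximum-sum rectangular subarray of the 2D array `m`.
    O(rows^2 * cols), i.e. O(n^3) for an n x n array: for every pair of
    top/bottom rows, collapse the strip into column sums and run Kadane."""
    rows, cols = len(m), len(m[0])
    best = m[0][0]
    for top in range(rows):
        col_sums = [0] * cols
        for bottom in range(top, rows):
            for c in range(cols):
                col_sums[c] += m[bottom][c]
            best = max(best, max_subarray_1d(col_sums))
    return best

grid = [[ 1, -2,  3],
        [-1,  4, -5],
        [ 2, -1,  2]]
print(max_subarray_2d(grid))   # 4, e.g. the 2x2 block spanning rows 1-2, columns 0-1
```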
27

Spatial analysis of invasive alien plant distribution patterns and processes using Bayesian network-based data mining techniques

Dlamini, Wisdom Mdumiseni Dabulizwe 03 1900 (has links)
Invasive alien plants have widespread ecological and socioeconomic impacts throughout many parts of the world, including Swaziland where the government declared them a national disaster. Control of these species requires knowledge on the invasion ecology of each species including how they interact with the invaded environment. Species distribution models are vital for providing solutions to such problems including the prediction of their niche and distribution. Various modelling approaches are used for species distribution modelling, albeit with limitations resulting from statistical assumptions, implementation and interpretation of outputs. This study explores the usefulness of Bayesian networks (BNs) due to their ability to model stochastic, nonlinear inter-causal relationships and uncertainty. Data-driven BNs were used to explore patterns and processes influencing the spatial distribution of 16 priority invasive alien plants in Swaziland. Various BN structure learning algorithms were applied within the Weka software to build models from a set of 170 variables incorporating climatic, anthropogenic, topo-edaphic and landscape factors. While all the BN models produced accurate predictions of alien plant invasion, the globally scored networks, particularly the hill climbing algorithms, performed relatively well. However, when considering the probabilistic outputs, the constraint-based Inferred Causation algorithm, which attempts to generate a causal BN structure, performed relatively better. The learned BNs reveal that the main pathways of alien plants into new areas are ruderal areas such as road verges and riverbanks whilst humans and human activity are key driving factors and the main dispersal mechanism. However, the distribution of most of the species is constrained by climate particularly tolerance to very low temperatures and precipitation seasonality. Biotic interactions and/or associations among the species are also prevalent. The findings suggest that most of the species will proliferate by extending their range resulting in the whole country being at risk of further invasion. The ability of BNs to express uncertain, rather complex conditional and probabilistic dependencies and to combine multisource data makes them an attractive technique for species distribution modeling, especially as joint invasive species distribution models (JiSDM). Suggestions for further research are provided including the need for rigorous invasive species monitoring, data stewardship and testing more BN learning algorithms. / Environmental Sciences / D. Phil. (Environmental Science)
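As a hedged sketch of the score-based structure learning mentioned above (not the Weka implementation used in the study), the quantity a hill-climbing search maximises can be illustrated with a decomposable BIC score over discrete records; the variable names and data below are invented.

```python
import math
from collections import Counter

def bic_score(data, parents, node_values):
    """Decomposable BIC score of a candidate DAG over discrete records.
    `data` is a list of dicts (one record per site), `parents` maps each
    variable to the tuple of its parents, `node_values` maps each variable
    to its set of possible values. A hill-climbing search would add, delete
    or reverse arcs so as to maximise this score."""
    n = len(data)
    score = 0.0
    for x, pa in parents.items():
        joint = Counter((tuple(r[p] for p in pa), r[x]) for r in data)
        marg = Counter(tuple(r[p] for p in pa) for r in data)
        loglik = sum(c * math.log(c / marg[cfg]) for (cfg, _), c in joint.items())
        n_params = (len(node_values[x]) - 1) * \
                   math.prod(len(node_values[p]) for p in pa)
        score += loglik - 0.5 * math.log(n) * n_params
    return score

# Invented records: does 'invaded' track 'roadside' or 'rainfall'?
data = [{"roadside": r, "rainfall": w, "invaded": r} for r in (0, 1) for w in (0, 1)] * 10
vals = {"roadside": {0, 1}, "rainfall": {0, 1}, "invaded": {0, 1}}
g1 = {"roadside": (), "rainfall": (), "invaded": ("roadside",)}
g2 = {"roadside": (), "rainfall": (), "invaded": ("rainfall",)}
print(bic_score(data, g1, vals) > bic_score(data, g2, vals))  # True: roadside -> invaded scores higher
```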
28

Designing a Novel RPL Objective Function & Testing RPL Objective Functions Performance

Mardini, Khalil, Abdulsamad, Emad January 2023 (has links)
The use of Internet of Things (IoT) systems has increased to meet the need for smart systems in various fields, such as smart homes, intelligent industries, medical systems, agriculture, and the military. IoT networks are expanding daily to include hundreds and thousands of IoT devices, which transmit information through other linked devices to reach the network sink or gateway. The information follows different routes to the network sink, and finding an ideal routing solution is a big challenge due to several factors, such as the power, computation, storage, and memory limitations of IoT devices. In 2011, a new standardized routing protocol for low-power and lossy networks (RPL) was released by the Internet Engineering Task Force (IETF). The IETF adopted a distance-vector routing algorithm for the RPL protocol. RPL utilizes objective functions (OFs) to select paths depending on different metrics. These OFs, with their different metrics, must be evaluated and tested to develop the best routing solution. This project aims to test the performance of the standardized RPL objective functions in a simulation environment. Afterwards, a new objective function with a new metric will be implemented and tested under the same environmental conditions. The performance results of the standard objective functions and the newly implemented objective function will be analyzed and compared to evaluate whether the standard objective functions or the new objective function is better as a routing solution for the IoT device network.
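As an editorial sketch of what an RPL objective function does (not the objective function designed in this thesis), the snippet below turns a link metric into parent selection and node rank, roughly in the spirit of MRHOF with ETX; the ETX-to-rank scaling and the neighbour values are assumptions, and real implementations add hysteresis and other rules.

```python
# Simplified sketch of how an RPL objective function turns a link metric into
# parent selection and node rank, in the spirit of MRHOF with ETX. Real
# implementations add hysteresis and other rules that are omitted here, and
# the ETX-to-rank scaling below is an illustrative assumption.

MIN_HOP_RANK_INCREASE = 256            # default MinHopRankIncrease from RFC 6550

def rank_increase(etx):
    """Map the ETX of a candidate link to a rank increase."""
    return int(etx * MIN_HOP_RANK_INCREASE)

def select_parent(candidates):
    """candidates: list of (neighbour_id, neighbour_rank, link_etx).
    Choose the neighbour that yields the lowest resulting rank."""
    best = min(candidates, key=lambda c: c[1] + rank_increase(c[2]))
    return best[0], best[1] + rank_increase(best[2])

# A node hears three potential parents:
neighbours = [("A", 256, 1.2), ("B", 512, 1.0), ("C", 256, 2.5)]
parent, rank = select_parent(neighbours)
print(parent, rank)   # A 563: best combination of parent rank and link quality
```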
29

Structure learning of Bayesian networks via data perturbation / Aprendizagem estrutural de Redes Bayesianas via perturbação de dados

Gross, Tadeu Junior 29 November 2018 (has links)
Structure learning of Bayesian Networks (BNs) is an NP-hard problem, and the use of sub-optimal strategies is essential in domains involving many variables. One such strategy is to generate multiple approximate structures and then reduce the ensemble to a single representative structure. The occurrence frequency (over the ensemble of structures) can be used as the criterion for accepting a dominant directed edge between two nodes and thus obtaining that single structure. In this doctoral research, an analogy was made with an adapted one-dimensional random walk to analytically deduce an appropriate decision threshold for this occurrence frequency. The resulting closed-form expression was validated on benchmark datasets using the Matthews Correlation Coefficient as the performance metric. In experiments on a recent medical dataset, the BN obtained with the analytical cutoff frequency captured the expected associations among nodes and also achieved better prediction performance than BNs learned with thresholds neighbouring the computed one. In the literature, the feature counted across the perturbed structures has been the (undirected) edges rather than the directed edges (arcs), as in this thesis. This modified strategy was also applied to an elderly dataset to identify potential relationships between variables of medical interest, but using a threshold higher than the one predicted by the proposed formula; such prudence is due to the possible social implications of the finding. The motivation behind this application is that, although the proportion of elderly individuals in the population has increased substantially in the last few decades, the risk factors that should be managed in advance to ensure a natural process of mental decline due to ageing remain unknown. In the learned structural model, the probabilistic dependence mechanism between two variables of medical interest was investigated graphically: the suspected risk factor known as Metabolic Syndrome and the indicator of mental decline referred to as Cognitive Impairment. This investigation employed the concept known in the context of BNs as D-separation. Results of the study revealed that the dependence between Metabolic Syndrome and the cognitive variables indeed exists and depends on both Body Mass Index and age. / O aprendizado da estrutura de uma Rede Bayesiana (BN) é um problema NP-difícil, e o uso de estratégias sub-ótimas é essencial em domínios que envolvem muitas variáveis. Uma delas consiste em gerar várias estruturas aproximadas e depois reduzir o conjunto a uma estrutura representativa. É possível usar a frequência de ocorrência (no conjunto de estruturas) como critério para aceitar um arco dominante entre dois nós e assim obter essa estrutura única. Nesta pesquisa de doutorado, foi feita uma analogia com um passeio aleatório unidimensional adaptado para deduzir analiticamente um limiar de decisão apropriado para essa frequência de ocorrência. A expressão de forma fechada obtida foi validada usando bases de dados de referência e aplicando o Coeficiente de Correlação de Matthews como métrica de desempenho. Nos experimentos utilizando dados médicos recentes, a BN resultante da frequência de corte analítica capturou as associações esperadas entre os nós e também obteve melhor desempenho de predição do que as BNs aprendidas com limiares vizinhos ao calculado.
Na literatura, a característica contabilizada ao longo das estruturas perturbadas tem sido as arestas e não as arestas direcionadas (arcos) como nesta tese. Essa estratégia modificada ainda foi aplicada a um conjunto de dados de idosos para identificar potenciais relações entre variáveis de interesse médico, mas usando um limiar aumentado em vez do previsto pela fórmula proposta - essa cautela deve-se às possíveis implicações sociais do achado. A motivação por trás dessa aplicação é que, apesar da proporção de idosos na população ter aumentado substancialmente nas últimas décadas, os fatores de risco que devem ser controlados com antecedência para garantir um processo natural de declínio mental devido ao envelhecimento permanecem desconhecidos. No modelo estrutural aprendido, investigou-se graficamente o mecanismo de dependência probabilística entre duas variáveis de interesse médico: o fator de risco suspeito conhecido como Síndrome Metabólica e o indicador de declínio mental denominado Comprometimento Cognitivo. Nessa investigação, empregou-se o conceito conhecido no contexto de BNs como D-separação. Esse estudo revelou que a dependência entre Síndrome Metabólica e Variáveis Cognitivas de fato existe e depende tanto do Índice de Massa Corporal quanto da idade.
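An editorial sketch of the ensemble-and-threshold idea described above (the analytically derived cutoff itself is not reproduced; here the threshold is simply a parameter): directed edges are counted across structures learned from perturbed data, and only arcs whose occurrence frequency clears the threshold enter the consensus structure. The toy ensemble is invented.

```python
from collections import Counter

def consensus_arcs(structures, threshold):
    """Given an ensemble of DAGs learned from perturbed/bootstrapped data
    (each represented as a set of directed edges), keep the arcs whose
    occurrence frequency reaches `threshold`. The thesis derives this
    threshold analytically via a random-walk argument; here it is simply
    a parameter."""
    counts = Counter(arc for dag in structures for arc in dag)
    n = len(structures)
    return {arc for arc, c in counts.items() if c / n >= threshold}

# Toy ensemble of 5 learned structures over nodes X, Y, Z (invented).
ensemble = [
    {("X", "Y"), ("Y", "Z")},
    {("X", "Y"), ("Z", "Y")},
    {("X", "Y"), ("Y", "Z")},
    {("X", "Y")},
    {("Y", "X"), ("Y", "Z")},
]
print(consensus_arcs(ensemble, threshold=0.6))
# {('X', 'Y'), ('Y', 'Z')}: X->Y appears in 4/5 structures, Y->Z in 3/5
```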
