41
Spatial analysis of invasive alien plant distribution patterns and processes using Bayesian network-based data mining techniques
Dlamini, Wisdom Mdumiseni Dabulizwe, 03 1900
Invasive alien plants have widespread ecological and socioeconomic impacts throughout many parts of the world, including Swaziland, where the government declared them a national disaster. Control of these species requires knowledge of the invasion ecology of each species, including how it interacts with the invaded environment. Species distribution models are vital for providing solutions to such problems, including prediction of a species' niche and distribution. Various modelling approaches are used for species distribution modelling, albeit with limitations arising from their statistical assumptions, implementation and the interpretation of outputs.
This study explores the usefulness of Bayesian networks (BNs), given their ability to model stochastic, nonlinear inter-causal relationships and uncertainty. Data-driven BNs were used to explore the patterns and processes influencing the spatial distribution of 16 priority invasive alien plants in Swaziland. Various BN structure learning algorithms were applied within the Weka software to build models from a set of 170 variables incorporating climatic, anthropogenic, topo-edaphic and landscape factors. While all the BN models produced accurate predictions of alien plant invasion, the globally scored networks, particularly those learned by hill-climbing algorithms, performed relatively well. However, when considering the probabilistic outputs, the constraint-based Inferred Causation algorithm, which attempts to recover a causal BN structure, performed better.
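As an illustrative sketch of the kind of score-based BN structure learning described above (the study itself used Weka; the Python pgmpy library and all column names below are assumptions, not the thesis's actual workflow):

```python
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

# Hypothetical grid-cell table: discretised predictors plus presence/absence
# of one invasive species (all column names are illustrative).
data = pd.read_csv("invasion_grid.csv")
# e.g. columns: min_temp, precip_seasonality, dist_to_roads, land_cover, species_present

# Score-based structure learning with hill climbing, analogous in spirit to
# the globally scored algorithms applied in Weka in the study.
hc = HillClimbSearch(data)
model = hc.estimate(scoring_method=BicScore(data))

print(sorted(model.edges()))  # learned dependency structure among the variables
```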
The learned BNs reveal that the main pathways of alien plants into new areas are ruderal areas such as road verges and riverbanks, whilst humans and human activity are the key driving factors and the main dispersal mechanism. However, the distribution of most of the species is constrained by climate, particularly tolerance to very low temperatures and precipitation seasonality. Biotic interactions and/or associations among the species are also prevalent. The findings suggest that most of the species will proliferate by extending their range, putting the whole country at risk of further invasion.
The ability of BNs to express uncertain and rather complex conditional and probabilistic dependencies, and to combine multisource data, makes them an attractive technique for species distribution modelling, especially as joint invasive species distribution models (JiSDM). Suggestions for further research are provided, including the need for rigorous invasive species monitoring, data stewardship and the testing of more BN learning algorithms. / Environmental Sciences / D. Phil. (Environmental Science)
42
Noções de grafos dirigidos, cadeias de Markov e as buscas do Google [Notions of directed graphs, Markov chains and Google searches]
Oliveira, José Carlos Francisco de, 30 August 2014
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / The main purpose of this work is to highlight some of the mathematical concepts behind the ranking produced by a query on the most widely used search engine in the world: Google. We begin with a brief review of some high-school topics, such as matrices, linear systems and probability. We then present basic notions of directed graphs and discrete-time Markov chains, with emphasis on the steady-state vector, since it guarantees long-term predictions. These concepts are central to this work because they are used to explain the mathematics behind the Google search engine. We then detail how Google ranks the pages returned by a search, that is, how the results of a query are classified and presented sequentially in order of relevance. Finally we arrive at PageRank, the algorithm that builds the so-called Google matrix and ranks the pages of a search. We close with a brief account of the history of web search engines, from their founders to the rise and hegemony of Google.
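As a minimal worked sketch of the ideas summarised above (the four-page toy web and the damping factor 0.85 are illustrative assumptions, not taken from the dissertation), the PageRank vector is the steady-state vector of the Google matrix and can be approximated by power iteration:

```python
import numpy as np

# Toy web of four pages; column j holds the out-links of page j,
# normalised so that each column sums to 1 (a column-stochastic link matrix).
A = np.array([
    [0.0, 0.5, 0.0, 0.0],
    [1/3, 0.0, 0.0, 0.5],
    [1/3, 0.0, 0.0, 0.5],
    [1/3, 0.5, 1.0, 0.0],
])

n = A.shape[0]
d = 0.85                                      # damping factor (illustrative choice)
G = d * A + (1.0 - d) / n * np.ones((n, n))   # the "Google matrix"

# Power iteration: the PageRank vector is the steady-state vector of G.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = G @ r

print(np.round(r, 4))  # limiting probabilities, i.e. the ranking scores
```

Because G is column-stochastic with strictly positive entries, the iteration converges to a unique steady-state vector regardless of the starting distribution.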
43
[en] EVOCATIVE METHODOLOGY FOR CAUSAL MAPPING AND ITS PERSPECTIVE IN THE OPERATIONS MANAGEMENT WITH INTERNET-BASED APPLICATIONS FOR SUPPLY CHAIN MANAGEMENT AND SERVICE MANAGEMENT
25 August 2004
[en] The understanding of present-day productive processes is essential at a moment when knowledge has become an important creator of value. A holistic view of the knowledge that is spread out and dispersed among practitioners, consultants and academics is necessary for the synthesis of new theories of production. Operations management researchers often use causal mapping as a key tool for building and communicating theory, particularly in support of empirical research. The most widely used approaches for capturing cognitive data for a causal map are informal brainstorming and interviews, which are time-consuming and costly to implement. This dissertation aims at creating a methodology (Evocative Causal Mapping Methodology - ECMM) intended for use in operations management research, for collecting and structuring dispersed data, spread out as practical and research knowledge and experience, contained in the opinions of a large number of demographically and geographically scattered specialists. This is accomplished by evoking opinions, encoding them into variables and reducing the resulting set to concepts and relationships. A special concern is to achieve this goal in a feasible time and in a cost-efficient way. ECMM consists of two or three rounds of Delphi-like, Internet-based asynchronous data collection, followed by a data analysis that uses a coding panel of experts, hierarchical cluster analysis and multidimensional scaling to identify concepts in the form of cognitive maps. Applications illustrate ECMM and demonstrate its feasibility. They were developed in supply chain management (SCM) and service management (SM), involving about 1,300 respondents from companies and universities in about 100 countries. Among possible directions for future research, this dissertation proposes applying ECMM in SCM and SM with the aim of unifying them into a single topic: service supply chain management.
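A hedged sketch of the data-analysis stage described above, using hierarchical clustering and multidimensional scaling on coded opinion data; the synthetic data, the Jaccard distance and the SciPy/scikit-learn calls are assumptions made for illustration, not the dissertation's actual tooling:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical coded data: rows are evoked variables, columns are respondents;
# True means the respondent's opinion was coded as mentioning that variable.
rng = np.random.default_rng(0)
coded = rng.integers(0, 2, size=(30, 200)).astype(bool)

# Pairwise dissimilarity between variables (Jaccard distance on mention patterns).
dist = pdist(coded, metric="jaccard")

# Hierarchical cluster analysis groups variables into candidate concepts.
Z = linkage(dist, method="average")
concepts = fcluster(Z, t=6, criterion="maxclust")

# Multidimensional scaling lays the variables out as a 2-D cognitive-map sketch.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(squareform(dist))

print(concepts)     # concept label assigned to each variable
print(coords[:3])   # 2-D positions of the first three variables
```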
44
Structural Similarity: Applications to Object Recognition and Clustering
Curado, Manuel, 03 September 2018
In this thesis we propose several developments in the context of Structural Similarity, addressing both node (local) similarity and graph (global) similarity. Concerning node similarity, we focus on improving the diffusive process used to compute it (e.g. Commute Times) by modifying or rewiring the structure of the graph (Graph Densification); some advances in Laplacian-based ranking are also included. Graph Densification is a particular case of what we call graph rewiring, a novel field (analogous to image processing) in which input graphs are rewired to be better conditioned for subsequent pattern recognition tasks (e.g. clustering). We contribute a scalable and effective method driven by Dirichlet processes, proposing both a completely unsupervised and a semi-supervised approach to Dirichlet densification. We also contribute new random walkers (Return Random Walks) that serve as structural filters and as asymmetry detectors in directed brain networks, used for early prediction of Alzheimer's disease (AD).
Graph similarity is addressed by designing structural information channels as a means of measuring the Mutual Information between graphs. To this end, we first embed the graphs by means of Commute Times. Commute-time embeddings have good properties for Delaunay triangulations (the typical representation for Graph Matching in computer vision), so they can act as encoders in the channel as well as decoders (since they are invertible). Structural noise can then be modelled by the deformation introduced in one of the manifolds to fit the other. This methodology leads to a highly discriminative similarity measure, since the Mutual Information is measured on the manifolds (vectorial domain) through copulas and bypass entropy estimators. It is consistent with decoupling the measurement of graph similarity into two steps: (a) linearizing the Quadratic Assignment Problem (QAP) by means of the embedding trick, and (b) measuring similarity in vector spaces.
The QAP is also investigated in this thesis. More precisely, we analyze the behaviour of m-best Graph Matching methods. These methods usually start from a couple of best solutions and then expand the search space locally by excluding previously clamped variables. The next variable to clamp is usually selected at random, but we show that this degrades performance when structural noise (outliers) arises. We therefore propose several heuristics for spanning the search space and evaluate all of them, showing that they are usually better than random selection; they are particularly interesting because they exploit the structure of the affinity matrix, and efficiency is improved as well. Concerning application domains, this thesis focuses on object recognition (graph similarity), clustering (rewiring), compression/decompression of graphs (links with Extremal Graph Theory), 3D shape simplification (sparsification) and early prediction of AD. / Ministerio de Economía, Industria y Competitividad (Referencia TIN2012-32839 BES-2013-064482)
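As a hedged illustration of the commute-time machinery referred to above (a generic sketch on a toy graph, not the thesis's implementation), commute times follow from the pseudoinverse of the graph Laplacian, and an embedding whose squared Euclidean distances equal commute times can be read off its eigendecomposition:

```python
import numpy as np

# Toy undirected graph given by its adjacency matrix (illustrative).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

deg = A.sum(axis=1)
L = np.diag(deg) - A          # combinatorial graph Laplacian
L_pinv = np.linalg.pinv(L)    # Moore-Penrose pseudoinverse of L
vol = deg.sum()               # graph volume (twice the number of edges)

# Commute time between nodes i and j: CT(i, j) = vol * (L+_ii + L+_jj - 2 L+_ij).
d = np.diag(L_pinv)
CT = vol * (d[:, None] + d[None, :] - 2.0 * L_pinv)

# Commute-time embedding: scale the eigenvectors of L+ so that squared
# Euclidean distances between embedded nodes equal commute times.
w, U = np.linalg.eigh(L_pinv)
emb = np.sqrt(vol) * U * np.sqrt(np.clip(w, 0.0, None))

print(np.round(CT, 2))
print(round(float(((emb[0] - emb[1]) ** 2).sum()), 2), round(float(CT[0, 1]), 2))  # these match
```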