11

The Application of Classification Trees to Pharmacy School Admissions

Karpen, Samuel C., Ellis, Steve C. 01 September 2018 (has links)
In recent years, the American Association of Colleges of Pharmacy (AACP) has encouraged the application of big data analytic techniques to pharmaceutical education. Indeed, the 2013-2014 Academic Affairs Committee Report included a "Learning Analytics in Pharmacy Education" section that reviewed the potential benefits of adopting big data techniques.1 Likewise, the 2014-2015 Argus Commission Report discussed uses for big data analytics in the classroom, practice, and admissions.2 While both of these reports were thorough, neither discussed specific analytic techniques. Consequently, this commentary will introduce classification trees, with a particular emphasis on their use in admissions. With electronic applications, pharmacy schools and colleges now have access to detailed applicant records containing thousands of observations. With declining applications nationwide, admissions analytics may be more important than ever.3
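For readers unfamiliar with the technique, the sketch below shows what a classification tree for admissions might look like in practice. It is a minimal illustration, not the commentary's analysis: the applicant features (GPA, PCAT percentile, interview rating), the synthetic data, and the outcome definition are all assumptions.

```python
# Minimal sketch: fit and print a shallow classification tree on synthetic
# admissions-style data. All features, data, and the outcome are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(3.2, 0.4, n),   # undergraduate GPA
    rng.normal(70, 15, n),     # PCAT composite percentile
    rng.integers(1, 6, n),     # interview rating, 1-5
])
# Toy outcome: on-time completion loosely tied to GPA and PCAT, plus noise.
y = ((X[:, 0] > 3.0) & (X[:, 1] > 60) | (rng.random(n) < 0.15)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# A shallow tree keeps the decision rules readable for a committee.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
tree.fit(X_train, y_train)
print(export_text(tree, feature_names=["gpa", "pcat", "interview"]))
print("held-out accuracy:", tree.score(X_test, y_test))
```

The printed rules ("if gpa > 3.0 and pcat > 60 then ...") are what make trees attractive for admissions: unlike a table of regression coefficients, they can be read directly as an admissions policy.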
12

Decision Tree Pruning Using Expert Knowledge

Cai, Jingfeng January 2006 (has links)
No description available.
13

Characterization of Performance Anomalies in Hadoop

Gupta, Puja Makhanlal 20 May 2015 (has links)
No description available.
14

決策樹形式知識整合之研究 The Research on Decision-Tree-Based Knowledge Integration

馬芳資, Ma, Fang-tz Unknown Date (has links)
隨著知識經濟時代的來臨，掌握知識可幫助組織提昇其競爭力，因此對於知識的產生、儲存、應用和整合，已成為熱烈討論的議題，本研究針對知識整合議題進行探討；而在知識呈現方式中，決策樹(Decision Tree)形式知識為樹狀結構，可以用圖形化的方式來呈現，它的結構簡單且易於瞭解，本研究針對決策樹形式知識來探討其知識整合的課題。 本研究首先提出一個合併選擇決策樹方法MODT(Merging Optional Decision Tree)，主要是在原始決策樹結構中增加一個選擇連結(Option Link)，來結合具有相同祖先(Ancestor)的兩個子樹；而結合方式是以兩兩合併的方式，由上而下比對兩棵決策樹的節點(Node)，利用接枝(Grafting)技術來結合兩棵樹的知識。再者利用強態法則(Strong Pattern Rule)概念來提昇合併樹的預測能力。 其次，由於MODT方法在合併兩棵不同根節點的決策樹時，會形成環狀連結的情形而破壞了原有的樹形結構，以及新增的選擇連結會增加儲存空間且不易維護，因此本研究提出決策樹合併修剪方法DTBMPA(Decision-Tree-Based Merging-Pruning Approach)方法，來改善MODT方法的問題，並且增加修剪程序來簡化合併樹。此方法包括三個主要程序：決策樹合併、合併樹修剪和決策樹驗證。其做法是先將兩棵原始樹經由合併程序結合成一棵合併樹，再透過修剪程序產生修剪樹，最後由驗證程序來評估修剪樹的準確度。本研究提出的DTBMPA方法藉由合併程序來擴大樹的知識，再利用修剪程序來取得更精簡的合併樹。 本研究利用實際信用卡客戶的信用資料來進行驗證。在MODT方法的實驗上，合併樹的準確度同時大於或等於兩棵原始樹的比例為79.5%；並且針對兩者的準確度進行統計檢定，我們發現合併樹的準確度是有顯著大於原始樹。而在DTBMPA方法的實驗中，合併樹的準確度優於原始一棵樹的比率有90%，而修剪樹的準確度大於或等於合併樹的比率有80%。在統計檢定中，合併樹和修剪樹的準確度優於一棵樹的準確度達顯著差異。且修剪樹的節點數較合併樹的節點數平均減少約15%。綜合上述，本研究所提之MODT方法和DTBMPA方法皆能使得合併樹的準確度優於一棵樹的準確度，而其中DTBMPA方法可以取得更精簡的合併樹。 此外，就決策樹形式知識整合的應用而言，本研究提出一個決策樹形式知識發掘預測系統架構，其主要的目在於提供一個Web-Based的知識發掘預測系統，以輔助企業進行知識學習、知識儲存、知識整合、知識流通和知識應用等知識管理的功能。期能藉由使用這套系統來發掘企業內部隱含的重要知識，並運用此發掘的知識進行分類和預測工作。它包含三個主要子系統，即知識學習子系統、合併決策樹子系統和線上預測子系統，其中合併決策樹子系統就是應用本研究所提出之決策樹形式知識整合方法來進行知識整合處理。 有關後續研究方面，可針對下列議題進行研究： 一、就決策樹形式知識整合架構中，探討決策樹形式知識清理單元，即前置處理部份的功能設計，期能讓合併樹結合有一定質量的決策樹形式知識。 二、就綜合多個預測值部份，可加入模糊邏輯理論，處理判定結果值之灰色地帶，以提昇合併樹的預測準確度。 三、就決策樹本身而言，可進一步探討結合選取多個屬性來進行往下分群的決策樹。針對分類性屬性的分支數目不同或可能值不同時的合併處理方法；以及數值性屬性選取不同的分割點時的合併處理方法。 四、探討分類性屬性的分支數目不同或可能值不同時之合併處理方法，以及數值性屬性選取不同的分割點時之合併處理方法。 五、對於合併樹的修剪方法，可考量利用額外修剪例子集來進行修剪的處理方法，並比較不同修剪法之修剪效果及準確度評估。 六、探討多次合併修剪後的決策樹之重整課題，期能藉由調整樹形結構來提昇其使用時的運作效率，且期能讓合併樹順應環境變化而進行其知識調整，並進一步觀察合併樹的樹形結構之變化情形。 七、就實際應用而言，可與廠商合作來建置決策樹形式知識發掘預測系統，配合該廠商的產業特性及業務需求來設計此系統，並導入此系統於企業內部的營運，期能藉此累積該企業的知識且輔助管理者決策的制定。 / In the knowledge economy era, mastering knowledge can improve an organization's competitive abilities. Therefore, knowledge creation, retention, application, and integration have become some of the most discussed themes nowadays. Our research focuses on knowledge integration and related subjects. Decision trees are one of the most common methods of knowledge representation. They show knowledge structure in a tree-shaped graph, and are simple and easily understood; thus we focus on decision-tree-based knowledge in connection with the theme of knowledge integration. First, this research proposes a method called MODT (Merging Optional Decision Tree), which merges two knowledge trees at a time and adds an option link to combine subtrees that have the same ancestor. In MODT, we compare the corresponding nodes of the two trees using top-down traversal. When the nodes are the same, we recount the number of samples and recalculate the degree of purity. When the nodes are not the same, we add the node of the second tree and its descendants to the first tree by the grafting technique. This yields a completely merged decision tree. The Strong Pattern Rule is used to strengthen the forecast accuracy of the merged decision trees. Secondly, when the MODT method merges two trees with different roots, the merged tree has a cyclic link at the root and is no longer a tree structure, so we propose another approach, called DTBMPA (Decision-Tree-Based Merging-Pruning Approach), to solve this problem. There are three steps in this approach. In the merging step, the first step, two primitive decision trees are merged into a merged tree to enlarge the knowledge of the primitive trees.
In the pruning step, the second step, the merged tree from the first step is pruned into a pruned tree, cutting off the biased branches of the merged tree. In the validating step, the last step, the accuracy of the pruned tree from the second step is assessed. We took real credit-card user data as our sample data. In the MODT experiments, the merged tree was at least as accurate as both primitive trees in 79.5% of cases, and a statistical test showed the merged tree to be significantly more accurate than the primitive trees. This result supports our proposition that the merged-decision-tree method can achieve a better outcome with regard to knowledge integration and accumulation. In the DTBMPA experiments, the accuracy of the merged tree was greater than or equal to that of a single primitive tree in 90% of cases, and the accuracy of the pruned tree was greater than or equal to that of the merged tree in 80% of cases. Statistical tests showed that both the merged tree and the pruned tree were significantly more accurate than a single tree, and the pruned tree had on average about 15% fewer nodes than the merged tree. In sum, both the MODT and DTBMPA methods make the merged tree more accurate than a single tree, and the DTBMPA method additionally produces a more compact merged tree. Finally, with respect to the application of decision-tree-based knowledge integration, this research proposes an on-line decision-tree-based knowledge discovery and prediction system architecture. It can help businesses discover, store, and integrate their knowledge, and apply that knowledge to decision making. It contains three components: a knowledge learning system, a decision-tree merging system, and an on-line prediction system; the decision-tree merging system is designed around the DTBMPA method. Future directions of research are as follows. 1. Designing the decision-tree preprocessing (cleaning) unit of our decision-tree-based knowledge integration architecture. 2. Using fuzzy theory to improve the accuracy of the merged tree when combining multiple predictions. 3. Extending the merge to more complex decision trees, such as model trees, linear decision trees, oblique decision trees, regression trees, and fuzzy trees. 4. Handling the merge of trees whose non-numeric attributes have different possible values, or whose numeric attributes use different cut points. 5. Comparing the performance of other pruning methods with ours. 6. Studying the reconstruction of merged trees after many merges, their adaptation to a changing environment, and the evolution of merged trees produced at different times. 7. Implementing the on-line decision-tree-based knowledge discovery system in a real business environment.
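To make the merging step concrete, here is a toy sketch of the top-down node-matching and grafting idea described above. The Node structure and the matching rule are assumptions for illustration; the thesis's MODT and DTBMPA algorithms additionally handle option links, differing roots, strong-pattern weighting, and pruning.

```python
# Toy sketch of top-down decision-tree merging: walk two trees from the root,
# pool sample counts at matching nodes, and graft unmatched branches of the
# second tree onto the first. Simplified relative to MODT/DTBMPA.
from dataclasses import dataclass, field

@dataclass
class Node:
    test: str                  # splitting test, e.g. "income>50k"; "" at leaves
    label: str = ""            # class label if this node is a leaf
    samples: int = 0           # training samples that reached this node
    children: dict = field(default_factory=dict)   # branch value -> Node

def merge(a: Node, b: Node) -> Node:
    """Merge tree b into tree a, assuming both roots apply the same test."""
    if a.test != b.test:
        raise ValueError("different roots: needs the option-link/DTBMPA case")
    a.samples += b.samples
    for value, b_child in b.children.items():
        a_child = a.children.get(value)
        if a_child is None:
            a.children[value] = b_child        # graft the whole subtree
        elif a_child.test == b_child.test:
            merge(a_child, b_child)            # same test: recurse deeper
        # Conflicting subtrees on the same branch would need an option link;
        # this sketch simply keeps tree a's version in that case.
    return a
```

A pruning pass would then walk the merged tree and collapse branches whose pooled sample counts no longer justify the split, which is the role of the DTBMPA pruning step.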
15

The role of classifiers in feature selection: number vs nature

Chrysostomou, Kyriacos January 2008 (has links)
Wrapper feature selection approaches are widely used to select a small subset of relevant features from a dataset. However, Wrappers suffer from the fact that they only use a single classifier when selecting the features. The problem with using a single classifier is that each classifier is of a different nature and has its own biases, so each classifier will select a different feature subset. To address this problem, this thesis investigates the effects of using different classifiers for Wrapper feature selection; more specifically, the effects of using different numbers of classifiers and classifiers of different natures. This aim is achieved by proposing a new data mining method called Wrapper-based Decision Trees (WDT). The WDT method can combine multiple classifiers from four different families, including Bayesian Network, Decision Tree, Nearest Neighbour and Support Vector Machine, to select relevant features and visualise the relationships among the selected features using decision trees. Specifically, the WDT method is applied to investigate three research questions of this thesis: (1) the effects of the number of classifiers on feature selection results; (2) the effects of the nature of classifiers on feature selection results; and (3) which of the two (i.e., number or nature of classifiers) has more of an effect on feature selection results. Two types of user preference datasets derived from Human-Computer Interaction (HCI) are used with WDT to help answer these three research questions. The results revealed that both the number and the nature of classifiers greatly affect feature selection results. In terms of the number of classifiers, few classifiers selected many relevant features whereas many classifiers selected few relevant features; in addition, using three classifiers resulted in highly accurate feature subsets. In terms of the nature of classifiers, the Decision Tree, Bayesian Network and Nearest Neighbour classifiers caused significant differences in both the number of features selected and the accuracy levels of the features. A comparison of the results revealed that the number of classifiers has more of an effect on feature selection than their nature. The thesis makes contributions to three communities: data mining, feature selection, and HCI. For the data mining community, it proposes the WDT method, which integrates multiple classifiers for feature selection with decision trees to effectively select and visualise the most relevant features within a dataset. For the feature selection community, the results show that the number and nature of classifiers can truly affect the feature selection process, and the suggestions based on these results provide useful insight about classifiers when performing feature selection. For the HCI community, the thesis shows the usefulness of feature selection for identifying a small number of highly relevant features for determining the preferences of different users.
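As a rough illustration of the multi-classifier wrapper idea (not the thesis's WDT method itself), the sketch below runs a greedy wrapper with one classifier from each of the four families named above and combines the per-classifier subsets by majority vote. GaussianNB stands in for the Bayesian Network family, and the dataset is a stock scikit-learn example.

```python
# Sketch: wrapper feature selection with classifiers of different natures,
# combined by majority vote. Classifier choices and vote rule are assumptions.
from collections import Counter
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
families = {
    "bayes": GaussianNB(),                                # Bayesian family
    "tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "knn": KNeighborsClassifier(),                        # nearest neighbour
    "svm": SVC(kernel="linear"),                          # support vectors
}
votes = Counter()
for name, clf in families.items():
    sfs = SequentialFeatureSelector(clf, n_features_to_select=5, cv=3)
    sfs.fit(X, y)
    selected = sorted(sfs.get_support(indices=True))
    votes.update(selected)
    print(name, "selected:", selected)

# Keep features chosen by a majority of the classifier families.
majority = len(families) // 2 + 1
print("consensus:", sorted(i for i, v in votes.items() if v >= majority))
```

Each classifier typically nominates a somewhat different subset, which is exactly the bias problem the thesis studies; the majority vote is one simple way to reconcile them.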
16

Making Sense of the Noise: Statistical Analysis of Environmental DNA Sampling for Invasive Asian Carp Monitoring Near the Great Lakes

Song, Jeffery W. 01 May 2017 (has links)
Sensitive and accurate detection methods are critical for monitoring and managing the spread of aquatic invasive species, such as invasive Silver Carp (SC; Hypophthalmichthys molitrix) and Bighead Carp (BH; Hypophthalmichthys nobilis) near the Great Lakes. A new detection tool called environmental DNA (eDNA) sampling, the collection and screening of water samples for the presence of the target species' DNA, promises improved detection sensitivity compared to conventional surveillance methods. However, the application of eDNA sampling for invasive species management has been challenging due to the potential for false positives, i.e., detecting a species' eDNA in the absence of live organisms. In this dissertation, I study the sources of error and uncertainty in eDNA sampling and develop statistical tools to show how eDNA sampling should be used for monitoring and managing invasive SC and BH in the United States. In chapter 2, I investigate the environmental and hydrologic variables, e.g., reverse flow, that may be contributing to positive eDNA sampling results upstream of the electric fish dispersal barrier in the Chicago Area Waterway System (CAWS), where live SC are not expected to be present. A beta-binomial regression model showed that reverse flow volume across the barrier had a statistically significant positive relationship with the probability of SC eDNA detection upstream of the barrier from 2009 to 2012, while other covariates, such as water temperature, season, and chlorophyll concentration, did not. This offers a potential alternative explanation for why SC eDNA has been detected upstream of the barrier while intact SC have not. In chapter 3, I develop and parameterize a statistical model to evaluate how changes made to the US Fish and Wildlife Service (USFWS) eDNA sampling protocols for invasive BH and SC monitoring from 2013 to 2015 have influenced their sensitivity. The model shows that changes to the protocol have caused the sensitivity to fluctuate. Overall, when assuming that eDNA is randomly distributed, the sensitivity of the current protocol is higher for BH eDNA detection and similar for SC eDNA detection compared to the original protocol used from 2009 to 2012. When assuming that eDNA is clumped, the sensitivity of the current protocol is slightly higher for BH eDNA detection but worse for SC eDNA detection. In chapter 4, I apply the model developed in chapter 3 to estimate the BH and SC eDNA concentration distributions in two pools of the Illinois River where BH and SC are considered present, one pool where they are absent, and upstream of the electric barrier in the CAWS, given eDNA sampling data and knowledge of the eDNA sampling protocol used in 2014. The results show that the estimated mean eDNA concentrations in the Illinois River are highest in the invaded pools (La Grange; Marseilles) and lower in the uninvaded pool (Brandon Road). The estimated eDNA concentrations in the CAWS are much lower than the concentrations in the Marseilles pool, which indicates that the few eDNA detections in the CAWS (3% of samples positive for SC and 0.4% positive for BH) do not signal the presence of live BH or SC. The model shows that >50% of samples would need to be positive for BH or SC eDNA to infer Asian carp (AC) presence in the CAWS, i.e., estimated concentrations similar to those found in the Marseilles pool.
Finally, in chapter 5, I develop a decision tree model to evaluate the value of the information that monitoring provides for making decisions about BH and SC prevention strategies near the Great Lakes. The optimal prevention strategy depends on prior beliefs about the expected damage of an AC invasion, the probability of invasion, and whether or not BH and SC have already invaded the Great Lakes (which is informed by monitoring). Given no monitoring, the optimal strategy is to stay with the status quo of operating electric barriers in the CAWS for low probabilities of invasion and low expected invasion costs. However, if the probability of invasion is greater than 30% and the cost of invasion is greater than $100 million a year, the optimal strategy changes to installing an additional barrier in the Brandon Road pool. Greater risk aversion (i.e., aversion to monetary losses) makes less prevention (e.g., the status quo instead of additional barriers) preferable. Given monitoring, the model shows that monitoring provides value for this decision only if the monitoring tool has perfect specificity (false positive rate = 0%).
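The chapter-5 comparison can be sketched as a simple expected-cost calculation. The structure below follows the abstract's framing (status quo vs an additional Brandon Road barrier, varying invasion probability and annual damage), but the barrier cost and risk-reduction factor are illustrative placeholders, not the dissertation's calibrated values, so the crossover points will differ.

```python
# Back-of-envelope decision-tree comparison: expected annual cost of two
# prevention strategies. Barrier cost and risk reduction are placeholders.
def expected_cost(p_invasion, damage_per_year, strategy):
    barrier_cost = 25e6      # hypothetical annualized cost of a new barrier
    risk_reduction = 0.8     # hypothetical fraction of invasion risk removed
    if strategy == "status_quo":
        return p_invasion * damage_per_year
    if strategy == "add_barrier":
        return barrier_cost + (1 - risk_reduction) * p_invasion * damage_per_year
    raise ValueError(strategy)

for p in (0.1, 0.3, 0.5):
    for damage in (50e6, 100e6, 500e6):
        best = min(("status_quo", "add_barrier"),
                   key=lambda s: expected_cost(p, damage, s))
        print(f"p={p:.0%}, damage=${damage / 1e6:.0f}M/yr -> {best}")
```

Monitoring enters this tree as an information node: it updates the invasion probability before the strategy is chosen, which is why its value collapses when false positives can corrupt that update.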
17

Customer Churn Prediction Using Big Data Analytics

TANNEEDI, NAREN NAGA PAVAN PRITHVI January 2016 (has links)
Customer churn is always a grievous issue for the telecom industry, as customers do not hesitate to leave if they don't find what they are looking for. They want competitive pricing, value for money and, above all, high-quality service. Customer churn is directly related to customer satisfaction, and since the cost of customer acquisition is far greater than the cost of customer retention, retention is a crucial business priority. There is no standard model that accurately addresses the churn of global telecom service providers. Big Data analytics with machine learning was found to be an efficient way of identifying churn. This thesis aims to predict customer churn using Big Data analytics, namely a J48 decision tree in the Java-based benchmark tool WEKA. Three datasets from different sources were considered: the first contains a telecom operator's six-month aggregate data-usage volumes for active and churned users; the second contains globally surveyed data; and the third comprises individual weekly data-usage records of 22 Android customers, along with their average quality, annoyance and churn scores from accompanying theses. Statistical analyses and J48 decision trees were produced for the three datasets. In the statistics of the normalized volumes, autocorrelations were small; confidence intervals were reliable but overlapping and close together, so no significant effects or strong trends could be observed. The decision-tree analyses achieved accuracies of 52%, 70% and 95% for the three data sources, respectively. Data preprocessing, data normalization and feature selection proved highly influential. Monthly data volumes showed little predictive power. Average quality, churn risk and, to some extent, annoyance scores may point out a probable churner. Weekly data volumes together with a customer's recent history and attributes such as age, gender, tenure, bill, contract and data plan are pivotal for churn prediction.
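J48 is WEKA's implementation of the C4.5 algorithm. As a rough stand-in outside WEKA, the sketch below trains an entropy-based scikit-learn tree on synthetic churn-style features; the column names, data, and label rule are illustrative, not the thesis's datasets.

```python
# Sketch of a J48-like (C4.5-style) churn tree on synthetic data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 1000
weekly_volume = rng.gamma(2.0, 500, n)      # MB per week
tenure_months = rng.integers(1, 60, n)
quality_score = rng.uniform(1, 5, n)        # average quality, 1-5
annoyance = rng.uniform(1, 5, n)            # annoyance score, 1-5
X = np.column_stack([weekly_volume, tenure_months, quality_score, annoyance])
# Toy label: low quality plus high annoyance makes churn more likely.
churn = ((quality_score < 2.5) & (annoyance > 3.5)
         | (rng.random(n) < 0.05)).astype(int)

j48_like = DecisionTreeClassifier(criterion="entropy",   # C4.5-style splits
                                  min_samples_leaf=25)   # crude pruning
print("CV accuracy:", cross_val_score(j48_like, X, churn, cv=5).mean())
```

scikit-learn's tree is CART-based rather than a faithful C4.5 port (binary splits only, no rule post-pruning), so treat it as an approximation of what WEKA's J48 produces.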
18

Alokační model projektu Miss Sport / Allocation model of Miss Sport project

Kyselý, Ondřej January 2010 (has links)
The goal of this diploma thesis is to create an allocation model for the Miss Sport project, a platform that enables the effective pairing of sponsors with the female athletes who are members of the project. The result is a decision tree whose biggest advantages are transparency and speed of decision making. One objective is to analyze the most important criteria needed to segment the athletes. One part of the thesis is a list of aspects that are important for sponsorship but are not directly included in the allocation model. The research focuses on evaluating the attractiveness of the athletes as one of the criteria important to potential sponsors.
19

Classificação da exatidão de coordenadas obtidas com a fase da portadora L1 do GPS / Accuracy's classification of GPS L1 carrier phase obtained coordinates

Menzori, Mauro 20 December 2005 (has links)
A fixação das duplas diferenças de ambigüidades no processamento dos dados da fase da portadora do Sistema de Posicionamento Global (GPS), é um dos pontos cruciais no posicionamento relativo estático. Esta fixação também é utilizada como um indicador de qualidade e fornece maior segurança quanto ao resultado do posicionamento. No entanto, ela é uma informação puramente estatística baseada na precisão da medida e dissociada da exatidão das coordenadas geradas na solução. A informação sobre a exatidão das coordenadas de pontos medidos através de um vetor simples, é sempre inacessível, independente de a solução ser fixa ou “float". Além disso, existe um risco maior em assumir um resultado de solução “float", mesmo que ele tenha uma boa, porém, desconhecida exatidão. Por estes motivos a solução “float" não é aceita por muitos contratantes de serviços GPS, feitos com a fase da portadora, que exigem uma nova coleta de dados, com o conseqüente dispêndio de tempo e dinheiro. Essa tese foi desenvolvida no sentido de encontrar um procedimento que melhore esta situação. Para tanto, se investigou o comportamento da exatidão em medidas obtidas com a fase da portadora L1 do GPS, monitorando os fatores variáveis presentes neste tipo de medição, o que tornou possível a classificação da exatidão de resultados. Inicialmente, a partir de um conjunto de dados GPS, coletados ao longo dos anos de 2003, 2004 e 2005 em duas bases de monitoramento contínuo da USP, se fez uma análise sistemática do comportamento das variáveis contidas nos dados. A seguir se estruturou um banco de dados, que foi usado como referência na indução de uma árvore de decisão adotada como paradigma. Por último, a partir desta árvore se pôde inferir a exatidão de soluções de posicionamento obtidas com o uso da portadora L1. A validação do procedimento foi feita através da classificação da exatidão de resultados de várias linhas base, coletadas em diferentes condições e locais do estado de São Paulo e do Brasil / The most crucial step in relative static positioning with Global Positioning System (GPS) carrier phase data is fixing the ambiguities to integer values. The integer ambiguity solution is also used as a quality indicator, lending confidence to the positioning results. Despite this, the fixed-ambiguity solution is purely statistical information, based on the precision of the measurements and completely detached from the accuracy of the resulting coordinates. In single-baseline processing, the accuracy of the positioning coordinates is always inaccessible, whether the final solution is float or fixed. In fact, there is a greater risk in accepting a float solution, even though it may have good, albeit unknown, accuracy. That is probably why several GPS contractors reject float solutions and require new data collection, with the consequent loss of time and money. This research was developed to improve that situation by investigating the accuracy of measurements obtained with the GPS L1 carrier phase. By monitoring the variable factors present in this kind of measurement, it became possible to classify the accuracy of the results.
The investigation was developed in three steps. It started with a systematic analysis of a set of L1 observation data collected during 2003, 2004 and 2005; continued with the construction of a structured data bank, from which a decision tree was induced to serve as the paradigm for classifying the accuracy of any measurement made with the GPS L1 carrier phase; and ended with the validation of the procedure, through the accuracy classification of several baselines collected under different conditions and at different places in the state of São Paulo and across Brazil.
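The paradigm tree the thesis induces can be pictured with a small sketch: classify the expected accuracy of an L1 baseline solution from the variable factors of the survey. The features below (session duration, baseline length, fixed vs float solution) and the toy labelling rule are assumptions for illustration, not the thesis's data bank schema.

```python
# Sketch: induce a decision tree that classifies baseline solutions into
# accuracy classes from processing factors. Features and labels are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 600
session_min = rng.uniform(10, 120, n)       # observation session, minutes
baseline_km = rng.uniform(0.1, 50, n)       # baseline length, km
fixed = (rng.random(n) < 0.7).astype(int)   # 1 = fixed, 0 = float solution
X = np.column_stack([session_min, baseline_km, fixed])
# Toy rule echoing the thesis's point: accuracy can be good even for float
# solutions when the session is long and the baseline short.
good = ((session_min > 40) & (baseline_km < 20)).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, good)
print(export_text(clf, feature_names=["session_min", "baseline_km", "fixed"]))
```

Once induced from real processing results, such a tree lets a contractor read off an expected accuracy class for a new baseline instead of rejecting float solutions out of hand.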
20

Metodologias para mapeamento de suscetibilidade a movimentos de massa / Methodologies for mapping mass movement susceptibility

Riffel, Eduardo Samuel January 2017 (has links)
O mapeamento de áreas com predisposição à ocorrência de eventos adversos, que resultam em ameaça e danos a sociedade, é uma demanda de elevada importância, principalmente pelo papel que exerce em ações de planejamento, gestão ambiental, territorial e de riscos. Diante disso, este trabalho busca contribuir na qualificação de metodologias e parâmetros morfométricos para mapeamento de suscetibilidade a movimentos de massa através de SIG e Sensoriamento Remoto, um dos objetivos é aplicar e comparar metodologias de suscetibilidade a movimentos de massa, entre elas o Shalstab, e a Árvore de Decisão que ainda é pouco utilizada nessa área. Buscando um consenso acerca da literatura, fez-se necessário organizar as informações referentes aos eventos adversos através de classificação, para isso foram revisados os conceitos relacionados com desastres, tais como suscetibilidade, vulnerabilidade, perigo e risco. Também foi realizado um estudo no município de Três Coroas – RS, onde foram relacionadas as ocorrências de movimentos de massa e as zonas de risco da CPRM. A partir de parâmetros morfométricos, foram identificados padrões de ocorrência de deslizamentos, e a contribuição de fatores como uso, ocupação e declividade. Por fim, foram comparados dois métodos de mapeamento de suscetibilidade, o modelo Shalstab e a Árvore de Decisão. Como dado de entrada dos modelos foram utilizados parâmetros morfométricos, extraídos de imagens SRTM, e amostras de deslizamentos, identificadas por meio de imagens de satélite de alta resolução espacial. A comparação das metodologias e a análise da acurácia obteve uma resposta melhor para a Árvore de Decisão. A diferença, entretanto, foi pouco significativa e ambos podem representar de forma satisfatória o mapa de suscetibilidade. No entanto, o Shalstab apresentou mais limitações, devido à necessidade de dados de maior resolução espacial. A aplicação de metodologias utilizando SIG e Sensoriamento Remoto contribuíram com uma maior qualificação em relação à prevenção de danos ocasionados por movimentos de massa. Ressalta-se, entretanto, a necessidade de inventários consistentes, para obter uma maior confiabilidade na aplicação dos modelos. / Mapping areas predisposed to adverse events that threaten and damage society is a demand of great importance, mainly for the role such maps play in planning and in environmental, territorial and risk management. This work therefore seeks to contribute to the qualification of methodologies and morphometric parameters for mapping mass movement susceptibility with GIS and remote sensing. One of the objectives is to apply and compare susceptibility mapping methodologies, among them Shalstab and the Decision Tree, the latter still little used in this area. Seeking consensus with the literature, it was necessary to organize the information on adverse events through classification; to this end, concepts related to disasters, such as susceptibility, vulnerability, hazard and risk, were reviewed. A study was also carried out in the municipality of Três Coroas - RS, relating the occurrences of mass movements to the CPRM risk zones. From morphometric parameters, patterns of landslide occurrence were identified, along with the contribution of factors such as land use, occupation and slope. Finally, two susceptibility mapping methods, the Shalstab model and the Decision Tree, were compared.
Morphometric parameters extracted from SRTM images, together with landslide samples identified in high-spatial-resolution satellite images, were used as input data for the models. The comparison of the methodologies and the accuracy analysis showed better results for the Decision Tree; the difference, however, was small, and both methods can represent the susceptibility map satisfactorily. Shalstab nevertheless showed more limitations, owing to its need for data of higher spatial resolution. The application of methodologies using GIS and remote sensing contributed to better-qualified prevention of the damage caused by mass movements. The need for consistent inventories, however, is emphasized, in order to obtain greater reliability in the application of the models.
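For readers unfamiliar with Shalstab, the sketch below computes a common form of its stability index, log(q/T), for a single grid cell, assuming the standard infinite-slope plus steady-state hydrology formulation after Montgomery and Dietrich: lower values mean less steady-state recharge is needed to trigger failure. The soil parameters are illustrative placeholders; the calibration used for Três Coroas may differ.

```python
# Sketch: Shalstab-style stability index for one DEM cell. Soil parameters
# (densities, friction angle) are illustrative placeholders.
import math

def shalstab_log10_q_over_T(slope_rad, spec_catchment_area_m,
                            rho_s=1600.0, rho_w=1000.0, phi_deg=33.0):
    """Return log10(q_cr / T); more negative = more susceptible."""
    wetness_needed = (rho_s / rho_w) * (
        1.0 - math.tan(slope_rad) / math.tan(math.radians(phi_deg)))
    if wetness_needed <= 0.0:
        return float("-inf")    # unconditionally unstable cell
    if wetness_needed >= 1.0:
        return float("inf")     # unconditionally stable cell
    q_over_T = math.sin(slope_rad) / spec_catchment_area_m * wetness_needed
    return math.log10(q_over_T)

# Example: a 30-degree cell with specific catchment area a/b = 100 m.
print(shalstab_log10_q_over_T(math.radians(30), 100.0))
```

A decision tree, by contrast, learns its thresholds on slope, catchment area, land use and the other morphometric parameters directly from the landslide inventory, which is why inventory quality matters so much for that method.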
