61

Automatic detection of gullies using geographic object-based image analysis (GEOBIA)

Utsumi, Alex Garcez. January 2019 (has links)
Advisor: Teresa Cristina Tarlé Pissarra / Co-advisor: David Luciano Rosalen / Committee: Luiz Henrique da Silva Rotta / Committee: Marcílio Vieira Martins Filho / Committee: Rejane Ennes Cicerelli / Committee: Newton La Scala Junior / Abstract: The gully is the most advanced stage of water erosion, causing numerous damages to the environment and to man. Due to the extent of this phenomenon and the difficulty of access in the field, automatic gully detection techniques have attracted interest, especially through Geographic Object-Based Image Analysis (GEOBIA). The objective of this work was to map gullies using GEOBIA from RapidEye images and SRTM data in two regions located in Uberaba, Minas Gerais. To this end, the Segmentation Evaluation Index (SEI) was applied in the image segmentation stage. Rule sets for gully detection were created empirically, in the InterIMAGE software, and automatically, using a decision tree algorithm. The accuracy assessment was based on concordance coefficients extracted from the confusion matrix and, additionally, on overlap with manually digitized reference data. The SEI index allowed the creation of objects similar to real gullies, enabling the extraction of attributes specific to these targets. The empirical rule sets detected gullies in both study areas, even though these features occupy only a small portion of the scene. The empirical models achieved very good results: a Kappa index of 0.74 and an F-measure of 53.46% in area 1, and a Kappa index of 0.73 and an F-measure of 55.95% in area 2. Altimetric information proved to be an important parameter for gully detection, since removing slope from the empirical models reduced the F-measure by 34.90% in area 1 and 28.65% in are... (Complete abstract: click electronic access below) / Doctorate
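The Kappa and F-measure figures above are standard confusion-matrix statistics. A minimal sketch of how they can be computed for a binary gully/background map (the pixel counts below are hypothetical, not taken from the thesis):

```python
import numpy as np

def kappa_and_f_measure(cm: np.ndarray):
    """Cohen's Kappa and positive-class F-measure from a 2x2 confusion
    matrix laid out as [[TN, FP], [FN, TP]]."""
    tn, fp, fn, tp = cm.ravel()
    n = cm.sum()
    observed = (tp + tn) / n  # overall agreement
    # chance agreement from the row/column marginals
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (observed - expected) / (1 - expected)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return kappa, f_measure

# hypothetical pixel counts for a scene dominated by non-gully background
cm = np.array([[9400, 150],
               [ 200, 250]])
print(kappa_and_f_measure(cm))
```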
62

Classification techniques for hyperspectral remote sensing image data

Jia, Xiuping, Electrical Engineering, Australian Defence Force Academy, UNSW January 1996 (has links)
Hyperspectral remote sensing image data, such as that recorded by AVIRIS with 224 spectral bands, provides rich information on ground cover types. However, it presents new problems in machine-assisted interpretation, mainly long processing times and the difficulty of class training due to the low ratio of training samples to spectral bands. This thesis investigates feasible and efficient feature reduction and image classification techniques which are appropriate for hyperspectral image data. The study is reported in three parts. The first concerns a deterministic approach for hyperspectral data interpretation. Multigroup and multiple threshold spectral coding procedures, and associated techniques for spectral matching and classification, are proposed and tested. By coding on subgroups of bands using one or three thresholds, spectral searching and matching becomes simple, fast and free of the need for radiometric correction. Modifications of existing statistical techniques are proposed in the second part of the investigation. A block-based maximum likelihood classification technique is developed. Several subgroups are formed from the complete set of spectral bands in the data, based on the properties of global correlation among the bands. Subgroups which are poorly correlated with each other are treated independently using conventional maximum likelihood classification. Experimental results demonstrate that, when using appropriate subgroup sizes, the new method provides a compromise among classification accuracy, processing time and available training pixels. Furthermore, a segmented, and possibly multi-layer, principal components transformation is proposed as a possible feature reduction technique prior to classification, and for effective colour display. The transformation is performed efficiently on each of the highly correlated subgroups of bands independently. Selected features from each transformed subgroup can then be transformed again to achieve a satisfactory data reduction ratio and to generate the three most significant components for colour display. Classification accuracy is improved and high quality colour image display is achieved in experiments using two AVIRIS data sets.
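A rough sketch of the segmented principal components transformation described above: each subgroup of bands is transformed independently. Splitting into contiguous subgroups stands in for the thesis's correlation-based grouping (adjacent hyperspectral bands are typically highly correlated); sizes and names are illustrative.

```python
import numpy as np

def segmented_pca(X, n_groups=4, keep=3):
    """Split the band dimension of X (pixels x bands) into contiguous
    subgroups, run PCA within each, and keep the leading components."""
    features = []
    for bands in np.array_split(np.arange(X.shape[1]), n_groups):
        Xg = X[:, bands] - X[:, bands].mean(axis=0)
        # eigendecomposition of the subgroup covariance matrix
        eigvals, eigvecs = np.linalg.eigh(np.cov(Xg, rowvar=False))
        order = np.argsort(eigvals)[::-1][:keep]
        features.append(Xg @ eigvecs[:, order])
    # the selected features can be transformed again for display or
    # classification, as the abstract describes
    return np.hstack(features)

X = np.random.rand(1000, 224)   # hypothetical AVIRIS-sized scene, flattened
print(segmented_pca(X).shape)   # (1000, n_groups * keep)
```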
63

The Decision making processes of semi-commercial farmers: a case study of technology adoption in Indonesia

Sambodo, Leonardo Adypurnama Alias Teguh January 2007 (has links)
An exploration of the creation and use of farmers' commonly used "rules of thumb" is required to conceptualize farmers' decision making processes. While farmers face complex situations, particularly when subsistence is an issue, they do appear to use simple rules in their decision making. To date, inadequate attention has been given to understanding their reasoning processes in creating the rules, so this study traces the origins of farmers' beliefs, and extracts the decisive and dynamic elements in their decision making systems to provide this understanding. The analysis was structured by using a model based on the Theory of Planned Behaviour (TPB). Modifications included recognizing a bargaining process (BP) and other decision stimuli to represent socio-cultural influences and sources of perception, respectively. Two analyses based on the Personal Construct Theory (PCT) and the Ethnographic Decision Tree Modelling (EDTM) were also applied to help elaborate the farmers' cognitive process and actual decision criteria. The method involved interviews in two villages in Lamongan Regency in East Java Province of Indonesia, where the farmers adopted an improved paddy-prawn system ("pandu"). The results highlighted that farmers use rational strategies, and that socio-cultural factors influence decision making. This was represented by interactions between the farmers' perceptions, their bargaining effort, and various background factors. The TPB model revealed that the farmers' perceptions about the potential of "pandu", and the interaction with their "significant others", influenced their intention to adopt "pandu". The farmers appeared to prefer a steady income and familiar practices at the same time as obtaining new information, mainly from their peers. When "pandu" failed to show sufficiently profitable results, most farmers decided to ignore or discontinue "pandu". This became the biggest disincentive to a wide and sustainable adoption. However, the PCT analysis showed that part of this problem also stemmed from the farmers' lack of resources and knowledge. The farmers' restrictive conditions also led them to seek socio-cultural and practical support for their actions. This was highlighted by a bargaining process (BP) that integrated what the farmers had learned, and believed, into their adoption behaviour. The BP also captured the farmers' communication strategies when dealing with "pandu" as its adoption affected resource allocation within the family and required cooperation with neighbours. The PCT and EDTM analyses also confirmed how the BP accommodated different sets of decision criteria to form different adoption behaviours. Such a process indicated the importance of considering the adoption decision and the relevant changes resulting from the farmers' cognition. This provided a more dynamic and realistic description of the farmers' decision-making process than has previously been attempted. Overall, the results suggested that semi-commercial farmers need to know, and confirm, that a new technology is significantly superior to the existing system, and can provide a secure income. The introduction of a new technology should use a participatory approach allowing negotiation, conflict mitigation and the creation of consensus among the relevant parties. This can be supported through better access to knowledge, information and financing. A specific and well-targeted policy intervention may also be needed to accommodate the diversity in the farmers' ways of learning and making decisions. Ways to improve the current analytical approaches are also suggested.
64

An approach to boosting from positive-only data

Mitchell, Andrew, Computer Science & Engineering, Faculty of Engineering, UNSW January 2004 (has links)
Ensemble techniques have recently been used to enhance the performance of machine learning methods. However, current ensemble techniques for classification require both positive and negative data to produce a result that is both meaningful and useful. Negative data is, however, sometimes difficult, expensive or impossible to obtain. This thesis describes a learning framework with a very close relationship to boosting. Within this framework a method is described which bears remarkable similarities to boosting stumps and does not rely on negative examples. This is surprising, since learning from positive-only data has traditionally been difficult. An empirical methodology is described and deployed for testing positive-only learning systems using commonly available multiclass datasets, comparing these learning systems with each other and with multiclass learning systems. Empirical results show that our positive-only boosting-like method, using stumps as a base learner, learns successfully from positive data alone, and does not pay too heavy a price in accuracy compared to learners that have access to both positive and negative data. We also describe methods for using positive-only learners on multiclass learning tasks and vice versa, and empirically demonstrate the superiority of our boosting-like positive-only method over a traditional multiclass learner converted to learn from positive-only data. Finally, we examine some alternative frameworks, such as when additional unlabelled training examples are given. Some theoretical justifications of the results and methods are also provided.
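For reference, the conventional boosting-stumps baseline that the positive-only method is compared against can be sketched with scikit-learn. This is standard AdaBoost over both classes on synthetic data, not the thesis's positive-only variant:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# AdaBoost's default base learner is a depth-1 decision tree, i.e. a stump,
# so this is boosting stumps in the conventional two-class setting
model = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(model.score(X_te, y_te))
```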
65

A Feasibility Study of Setting-up New Production Line : Either Partly Outsource a process or Fully Produce In-House

Cheepweasarash, Piansiri, Pakapongpan, Sarinthorn January 2008 (has links)
This paper presents a feasibility study of setting up a new potting tray production line based on two alternatives: partly outsourcing a process in the production line, or making all processes in-house. Both qualitative and quantitative approaches are used to analyze and compare the make-or-buy decision. The nature of business in Thailand, particularly for SMEs, is also presented, as it has certain characteristics that influence business practice and decisions, especially in supply chain management. The literature on forecasting techniques, outsourcing decision frameworks, inventory management, and investment analysis is reviewed and applied to the empirical findings. As this production line is not yet in place, monthly sales volumes are forecast over a five-year time frame. Based on the forecast sales volumes, simulations are run to model the demand distribution and project the demand required for each month. The projected demand is used as a baseline to determine the required safety stock of materials, inventory cost, time between production runs and resource utilization for each option. Finally, in the quantitative analysis, the five-year forecast sales volume is used as a framework, and several decision-making techniques such as break-even analysis, cash flow analysis and decision trees are employed to produce results in financial terms.
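A minimal sketch of the break-even part of such a quantitative analysis, comparing the two alternatives; all cost figures are hypothetical, not taken from the paper:

```python
# In-house production has a higher fixed cost (machine investment) but a
# lower variable cost per tray than the partial-outsourcing option.
fixed_inhouse, var_inhouse = 900_000, 12.0  # fixed cost, cost per tray
fixed_outsrc, var_outsrc = 250_000, 18.0

# volume at which total costs are equal:
#   fixed_inhouse + var_inhouse * q = fixed_outsrc + var_outsrc * q
q_breakeven = (fixed_inhouse - fixed_outsrc) / (var_outsrc - var_inhouse)
print(f"In-house is cheaper beyond {q_breakeven:,.0f} trays per period")
```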
66

An applied approach to numerically imprecise decision making

Idefeldt, Jim January 2007 (has links)
Despite the fact that unguided decision making might lead to inefficient and non-optimal decisions, decisions made at organizational levels seldom utilise decision-analytical tools. Several gaps between the decision-makers and the computer-based decision tools exist, and a main problem in managerial decision-making involves the lack of information and precise objective data, i.e. uncertainty and imprecision may be inherent in the decision situation. We believe that this problem might be overcome by providing computer-based decision tools capable of handling the uncertainty inherent in real-life decision-making. At present, nearly all decision analytic software is only able to handle precise input, and no known software is capable of handling full-scale imprecision, i.e. imprecise probabilities, values and weights, in the form of interval and comparative statements. There are, however, some theories which are able to handle some kind of uncertainty, and which deal with computational and implementational issues, but if they are never actually operationalised, they are of little real use for a decision-maker. Therefore, a natural question is how a reasonable decision analytical framework can be built based on prevailing interval methods, thus dealing with the problems of uncertain and imprecise input? Further, will the interval approach actually prove useful? The framework presented herein handles theoretical foundations for, and implementations of, imprecise multi-level trees, multi-criteria, risk analysis, together with several different evaluation options. The framework supports interval probabilities, values, and criteria weights, as well as comparative statements, also allowing for mixing probabilistic and multi-criteria decisions. The framework has also been field tested in a number of studies, proving the usefulness of the interval approach.
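The kind of interval evaluation described can be illustrated by bounding an expected value over interval probabilities with linear programming. A sketch under assumed bounds and values, not the framework's own implementation:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical interval probabilities for three outcomes and their values;
# none of these numbers come from the thesis.
lo = np.array([0.2, 0.1, 0.3])            # lower probability bounds
hi = np.array([0.5, 0.4, 0.6])            # upper probability bounds
values = np.array([100.0, -20.0, 40.0])   # outcome values

A_eq, b_eq = np.ones((1, 3)), [1.0]       # probabilities must sum to one
bounds = list(zip(lo, hi))

# linprog minimizes, so minimize v.p for the lower bound of the expected
# value, and minimize -v.p (then negate) for the upper bound
low = linprog(values, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
high = -linprog(-values, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
print(f"expected value lies in [{low:.1f}, {high:.1f}]")
```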
67

Analysis of quality in a web panel : A study of the web panel members and their response patterns

Tran, Vuong, Öhgren, Sebastian January 2013 (has links)
During 2012, the employer of this essay carried out a telephone survey with 18,000 participants and a web panel survey with 708 participants. Those who took part in the telephone survey were given the choice to join the web panel. The purpose of this work is to study the participants of the telephone survey and see if they reflect the Swedish population with regard to several socio-demographic factors. We also investigate whether the propensity to join the web panel differs among participants of the telephone survey with different socio-demographic affiliations. It is also of interest to study whether the response pattern differs between participants of the telephone survey who were willing to join the web panel and those who declined. A comparison of response patterns between the telephone survey and the web panel survey has also been made, to see if any differences exist between these two kinds of surveys. The statistical methods used in this essay are descriptive statistics, multiple logistic regression and decision trees. The conclusions drawn from these methods are that the participants of the telephone survey do reflect the Swedish population regarding certain socio-demographic factors, and that the propensity to join the web panel differs slightly for people with dissimilar socio-demographic affiliations. It has also been found that there is a slight difference in response pattern between participants who would or would not like to join the web panel, and that differences in response pattern also exist between the telephone survey and the web panel survey.
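A minimal sketch of the multiple-logistic-regression step, modelling propensity to join the panel from socio-demographic predictors; the data and variable names are invented for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Invented socio-demographic data; joined = 1 means agreed to join the panel.
df = pd.DataFrame({
    "age":    [34, 67, 45, 23, 56, 41, 72, 29],
    "female": [ 1,  0,  1,  1,  0,  0,  1,  0],
    "urban":  [ 1,  1,  0,  1,  0,  1,  0,  1],
    "joined": [ 1,  0,  1,  1,  0,  1,  0,  1],
})
X, y = df[["age", "female", "urban"]], df["joined"]

model = LogisticRegression().fit(X, y)
# odds ratios show how each factor shifts the propensity to join
print(dict(zip(X.columns, np.exp(model.coef_[0]).round(2))))
```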
69

Empirical Evaluations of Different Strategies for Classification with Skewed Class Distribution

Ling, Shih-Shiung 09 August 2004 (has links)
Existing classification analysis techniques (e.g., decision tree induction) generally exhibit satisfactory classification effectiveness when dealing with data with a non-skewed class distribution. However, real-world applications (e.g., churn prediction and fraud detection) often involve highly skewed decision outcomes. Such a highly skewed class distribution, if not properly addressed, imperils the resulting learning effectiveness. In this study, we empirically evaluate three approaches, namely under-sampling, over-sampling and the multi-classifier committee, for addressing classification with a highly skewed class distribution. Due to its popularity, C4.5 is selected as the underlying classification analysis technique. Based on 10 datasets with highly skewed class distributions, our empirical evaluations suggest that the multi-classifier committee generally outperformed the under-sampling and over-sampling approaches, using recall rate, precision rate and F1-measure as the evaluation criteria. Furthermore, for applications aiming at a high recall rate, the over-sampling approach is suggested. On the other hand, if the precision rate is the primary concern, the classification model induced directly from the original datasets is recommended.
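The under- and over-sampling strategies evaluated here can be sketched as simple random resampling; this is a generic illustration, and the study's own protocol may differ:

```python
import numpy as np

def resample_skewed(X, y, minority_label=1, mode="under", seed=None):
    """Random under-sampling of the majority class, or random over-sampling
    (with replacement) of the minority class, to balance a skewed dataset."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)
    if mode == "under":
        majority = rng.choice(majority, size=minority.size, replace=False)
    else:  # "over": replicate minority cases up to the majority size
        minority = rng.choice(minority, size=majority.size, replace=True)
    keep = rng.permutation(np.concatenate([minority, majority]))
    return X[keep], y[keep]
```

A multi-classifier committee, by contrast, would train one classifier per balanced subsample and combine their votes rather than resampling once.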
70

Applications of Data Mining on Drug Safety: Predicting Proper Dosage of Vancomycin for Patients with Renal Insufficiency and Impairment

Yon, Chuen-huei 24 August 2004 (has links)
Drug misuse results in wasted medical resources and significant societal costs. Due to the narrow therapeutic range of vancomycin, an appropriate vancomycin dosage is difficult to determine; when an inappropriate dosage is used, side effects such as toxic reactions or drug resistance may occur. Clinically, medical professionals adjust vancomycin protocols based on Therapeutic Drug Monitoring (TDM) results. TDM is usually defined as the clinical use of drug blood concentration measurements as an aid in dosage finding and adjustment. However, TDM cannot be applied to first-time treatments, and in such cases dosage decisions must rely on medical professionals' clinical experience and judgment. Data mining has been applied in various medical and healthcare applications. In this study, we employ decision-tree induction (specifically, C4.5) and a backpropagation neural network technique for predicting the appropriateness of vancomycin usage for patients with renal insufficiency and impairment. In addition, we evaluate whether the boosting and bagging algorithms improve predictive accuracy. Our empirical evaluation results suggest that they can. Specifically, C4.5 in conjunction with the AdaBoost algorithm achieves an overall accuracy of 79.65%, significantly improving on existing practice, which records an accuracy rate of 41.38%. With respect to the appropriateness category ("Y") and the inappropriateness category ("N"), C4.5 with AdaBoost achieves recall rates of 78.75% and 80.25%, respectively. Hence, incorporating data mining techniques into decision support would enhance drug safety, which in turn would improve patient safety and reduce medical resource waste.
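A sketch in the spirit of the C4.5-with-AdaBoost setup and its per-class recall evaluation; scikit-learn's CART trees stand in for C4.5, and the data is synthetic rather than patient records:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: the features would be patient covariates (renal
# function, weight, age, ...) and the label whether a dosage was appropriate.
X, y = make_classification(n_samples=600, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

model = AdaBoostClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
pred = model.predict(X_te)
for label in (1, 0):  # recall for the "Y" and "N" categories separately
    print(label, round(recall_score(y_te, pred, pos_label=label), 3))
```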
