111

Co-Location Decision Tree for Enhancing Decision-Making of Pavement Maintenance and Rehabilitation

Zhou, Guoqing 02 March 2011 (has links)
A pavement management system (PMS) is a valuable tool and one of the critical elements of the highway transportation infrastructure. Because a vast amount of pavement data is continuously collected, updated, and exchanged in response to rapidly deteriorating road conditions, increased traffic loads, and shrinking funds, large pavement databases accumulate rapidly, and knowledge-based expert systems (KBESs) have been developed to solve various transportation problems. This dissertation presents the theory and algorithm for a new decision tree induction method, the co-location-based decision tree (CL-DT), intended to enhance the decision-making of pavement maintenance personnel and their rehabilitation strategies. The idea stems from shortcomings of traditional decision tree induction algorithms when applied to pavement treatment strategies. The proposed algorithm exploits the co-location (co-occurrence) characteristics of spatial attribute data in the pavement database: one distinct event occurrence can be associated with two or more attribute values that occur simultaneously in the spatial and temporal domains. The dissertation describes the proposed CL-DT algorithm and the steps needed to realize it. First, it details the co-location mining algorithm, including spatial attribute data selection in pavement databases, determination of candidate co-locations, determination of table instances of candidate co-locations, pruning of non-prevalent co-locations, and induction of co-location rules. In this step, a hybrid constraint is developed, consisting of a spatial geometric distance constraint and a distinct event-type constraint. The spatial geometric distance constraint is a neighborhood-relationship-based spatial join of the table instances of multiple prevalent co-locations with one prevalent co-location; the distinct event-type constraint is the Euclidean distance between a set of attributes and its corresponding cluster center. The research also develops a spatial feature pruning method using a multi-resolution pruning criterion, in which a cross-correlation criterion of spatial features removes non-prevalent co-locations from the candidate prevalent co-location set under a given threshold. The core of the research is the co-location decision tree (CL-DT) algorithm itself, which covers non-spatial attribute data selection in the pavement management database, co-location algorithm modeling, node merging criteria, and co-location decision tree induction; here, co-location mining rules guide decision tree generation and induce decision rules. For each step, the dissertation gives detailed flowcharts, including co-location decision tree induction, the co-location/co-occurrence decision tree (CL-DT) algorithm, and an outline of the Sequential Feature Selection (SFS) algorithm. Finally, a pavement database covering four counties, provided by the North Carolina Department of Transportation (NCDOT), is used to verify and test the proposed method, comparing the rehabilitation treatments proposed by NCDOT, by the traditional DT induction algorithm, and by the proposed method.
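To make the pruning steps above concrete, here is a minimal, hedged sketch of pairwise co-location mining with a spatial distance constraint and a participation-based prevalence threshold. The event types, coordinates, and thresholds are invented for illustration and are not drawn from the NCDOT database or the dissertation's exact algorithm.

```python
# Sketch of co-location candidate mining with a spatial distance constraint
# and prevalence pruning. Event types, coordinates, and thresholds are
# illustrative assumptions, not the dissertation's data.
from itertools import combinations
from math import dist

# Each instance: (event_type, (x, y)), e.g. pavement distress observed
# at a road-segment location.
instances = [
    ("rutting", (0.0, 0.0)), ("cracking", (0.5, 0.2)),
    ("rutting", (5.0, 5.0)), ("roughness", (0.4, 0.1)),
    ("cracking", (5.2, 5.1)), ("roughness", (9.0, 9.0)),
]
MAX_DIST = 1.0        # spatial geometric distance constraint
MIN_PREVALENCE = 0.5  # participation-ratio threshold for pruning

def table_instances(pair):
    """Neighborhood-based spatial join: pairs of instances of two distinct
    event types whose locations lie within MAX_DIST of each other."""
    a, b = pair
    return [(p, q) for p in instances if p[0] == a
                   for q in instances if q[0] == b
                   if dist(p[1], q[1]) <= MAX_DIST]

def participation_index(pair, rows):
    """Minimum fraction of each event type's instances that take part in
    the join -- the usual prevalence measure for co-locations."""
    ratios = []
    for i, etype in enumerate(pair):
        total = sum(1 for p in instances if p[0] == etype)
        part = len({r[i] for r in rows})
        ratios.append(part / total)
    return min(ratios)

event_types = sorted({e for e, _ in instances})
for pair in combinations(event_types, 2):   # distinct event-type constraint
    rows = table_instances(pair)
    if rows and participation_index(pair, rows) >= MIN_PREVALENCE:
        print("prevalent co-location:", pair)
```

The prevalent pairs found here would then feed node merging and rule induction in the CL-DT step.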
Findings and conclusions include: (1) traditional DT technology can make consistent decisions for road maintenance and rehabilitation strategy under the same road conditions, i.e., with less interference from human factors; (2) traditional DT technology can increase the speed of decision-making, because it automatically generates a decision tree and rules once the expert knowledge is given, saving time and expense for the PMS; and (3) integrating DT and GIS can give the PMS the ability to graphically display treatment decisions, visualize attribute and non-attribute data, and link data and information to geographical coordinates. However, traditional DT induction methods are not as intelligent as one might expect, so post-processing and refinement are necessary. Moreover, traditional DT induction methods for pavement M&R strategies use only non-spatial attribute data; this research demonstrates that spatial data is very useful for improving decision-making processes for pavement treatment strategies. In addition, the decision trees are based on knowledge acquired from pavement management engineers for strategy selection, so different decision trees can be built if the requirements change. / Ph. D.
112

Handling imprecision in decision tree induction / Tratamento de imprecisão na geração de árvores de decisão

Lopes, Mariana Vieira Ribeiro 03 March 2016 (has links)
Inductive decision trees (DTs) are mechanisms based on the symbolic paradigm of machine learning whose main characteristics are easy interpretability and low computational cost. Although widely used, DTs can only represent problems with discrete or continuous variables, and for some problems the variables are not well represented in those formats. To improve on this, fuzzy decision trees (FDTs) were developed, adding to inductive decision trees the ability to deal with fuzzy variables, making them capable of handling imprecise knowledge. This text presents a new algorithm for fuzzy decision tree induction whose fuzzification method is applied during induction and is inspired by C4.5's partitioning method for continuous attributes. The proposed algorithm was tested on 20 datasets from the UCI repository (LICHMAN, 2013) and compared with three algorithms that implement different solutions to the classification problem: C4.5, which induces an inductive decision tree; FURIA, which induces a rule-based fuzzy system; and FuzzyDT, which induces a fuzzy decision tree in which fuzzification is done before tree induction. The results are presented in Chapter 4.
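As a rough illustration of fuzzification performed during induction, the sketch below picks a C4.5-style split threshold by information gain and then replaces the crisp cut with overlapping "low"/"high" memberships. The ramp width, data, and function names are assumptions, not the thesis's algorithm.

```python
# Hedged sketch: choose a C4.5-style threshold by information gain, then
# fuzzify it with a triangular overlap region instead of a crisp cut.
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_threshold(x, y):
    """C4.5-style: try midpoints between sorted values, maximize gain."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    base, best = entropy(y), (None, -1.0)
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue
        t = (x[i] + x[i - 1]) / 2
        left, right = y[x <= t], y[x > t]
        gain = base - (len(left) * entropy(left) +
                       len(right) * entropy(right)) / len(y)
        if gain > best[1]:
            best = (t, gain)
    return best[0]

def fuzzify(x, t, width=0.5):
    """Membership in 'low'; 'high' is its complement. A linear ramp of the
    given width around threshold t replaces the crisp cut."""
    return np.clip((t + width / 2 - x) / width, 0.0, 1.0)

x = np.array([1.0, 1.5, 2.0, 3.0, 3.5, 4.0])
y = np.array([0, 0, 0, 1, 1, 1])
t = best_threshold(x, y)
print("threshold:", t, "mu_low:", fuzzify(x, t))
```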
113

Carbon Intensity Estimation of Publicly Traded Companies / Uppskattning av koldioxidintensitet hos börsnoterade bolag

Ribberheim, Olle January 2021 (has links)
The purpose of this master thesis is to develop a model to estimate the carbon intensity, i.e. the carbon emissions relative to economic activity, of publicly traded companies that do not report their carbon emissions. Using statistical and machine learning models, the core of the thesis is to develop and compare different methods and models with regard to accuracy, robustness, and explanatory value when estimating carbon intensity. Both discrete variables, such as the region and sector the company operates in, and continuous variables, such as revenue and capital expenditures, are used in the estimation. Six methods were compared: two statistically derived and four machine learning methods. The thesis consists of three parts: data preparation, model implementation, and model comparison. The comparison indicates that the boosted decision tree is both the most accurate and the most robust model. Lastly, the strengths and weaknesses of the methodology are discussed, as well as the suitability and legitimacy of the boosted decision tree for estimating carbon intensity.
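A minimal sketch of the kind of boosted-tree pipeline the comparison points to, mixing one-hot-encoded discrete features with pass-through continuous ones. The column names, units, and synthetic rows are assumptions for illustration; the thesis's feature set and tuning are not reproduced here.

```python
# Hedged sketch: boosted decision tree on mixed discrete (region, sector)
# and continuous (revenue, capex) features predicting carbon intensity.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "region":  ["EU", "US", "EU", "APAC"],
    "sector":  ["energy", "tech", "tech", "energy"],
    "revenue": [5.0e9, 1.2e9, 8.0e8, 3.1e9],
    "capex":   [7.0e8, 9.0e7, 5.0e7, 4.0e8],
    "carbon_intensity": [410.0, 35.0, 28.0, 390.0],  # tCO2e/$M, assumed unit
})
pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["region", "sector"])],
    remainder="passthrough")  # continuous columns pass through unchanged
model = Pipeline([("pre", pre),
                  ("gbt", GradientBoostingRegressor(n_estimators=200))])
X, y = df.drop(columns="carbon_intensity"), df["carbon_intensity"]
model.fit(X, y)
print(model.predict(X.head(1)))
```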
114

Automatic Analysis of Peer Feedback using Machine Learning and Explainable Artificial Intelligence / Automatisk analys av Peer feedback med hjälp av maskininlärning och förklarig artificiell Intelligence

Huang, Kevin January 2023 (has links)
Peer assessment is a process where learners evaluate and provide feedback on one another's performance, which is critical to the student learning process. Earlier research has shown that it can improve student learning outcomes in various settings, including engineering education, where collaborative teaching and learning activities are common. Peer assessment activities in computer-supported collaborative learning (CSCL) settings are becoming more and more common. When digital technologies are used for these activities, much student data (e.g., peer feedback text entries) is generated automatically. These large data sets can be analyzed, through computational methods for example, and used to improve our understanding of how students regulate their learning in CSCL settings, so that their conditions for learning can be improved by, for instance, providing timely feedback. Yet there is currently a need to automate the coding of these large volumes of student text data, since it is a very time- and resource-consuming task. In this regard, recent developments in machine learning could prove beneficial. To understand how the affordances of machine learning can be harnessed to classify student text data, this thesis examines the application of five models to a data set containing peer feedback from 231 students in a large technical university course. The models evaluated are the traditional models Multi Layer Perceptron (MLP) and Decision Tree, and the transformer-based models BERT, RoBERTa, and DistilBERT. Cohen's κ, accuracy, and F1-score were used as evaluation metrics. The data was preprocessed by removing stopwords, and it was examined whether removing them improved model performance. The results showed that this preprocessing improved performance only for the Decision Tree and decreased it for all other models. RoBERTa performed best on the dataset on all metrics used. Explainable artificial intelligence (XAI) was applied to RoBERTa as the best-performing model, and it was found that the words considered stopwords made a difference to the predictions.
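A hedged sketch of the evaluation loop described above: score a simple text classifier with and without stopword removal using Cohen's κ, accuracy, and F1. The toy feedback snippets and labels are invented, and a Decision Tree stands in for the full model set.

```python
# Sketch: compare metrics with and without stopword removal. Data is
# invented; the thesis's peer-feedback corpus is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score
from sklearn.tree import DecisionTreeClassifier

train_texts = ["great structure, clear argument", "missing citations, vague",
               "clear and well organized", "too vague and unsupported"]
train_y = ["praise", "critique", "praise", "critique"]
test_texts = ["well organized and clear", "vague, no citations"]
test_y = ["praise", "critique"]

for stop in (None, "english"):          # without vs. with stopword removal
    vec = TfidfVectorizer(stop_words=stop)
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(vec.fit_transform(train_texts), train_y)
    pred = clf.predict(vec.transform(test_texts))
    print(f"stop_words={stop}: "
          f"acc={accuracy_score(test_y, pred):.2f} "
          f"kappa={cohen_kappa_score(test_y, pred):.2f} "
          f"f1={f1_score(test_y, pred, pos_label='critique'):.2f}")
```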
115

Consensus Algorithms in Blockchain : A survey to create decision trees for blockchain applications / Konsensusalgoritmer i Blockchain : En undersökning för att skapa beslutsträd för blockchain-applikationer

Zhu, Xinlin January 2023 (has links)
Blockchain is a decentralized database distributed among a computer network. To enable a smooth decision-making process without any central authority, different blockchain applications use their own consensus algorithms. The problem is that a new blockchain application has limited aid in deciding which algorithm to implement. Selecting a consensus algorithm is crucial because reaching consensus is the fundamental issue of a decentralized system. Different algorithms are designed with their own advantages and limitations, making it complex to navigate through a list of consensus algorithms. This thesis attempts to contribute to solving this problem by surveying the consensus algorithms used in 15 existing cryptocurrencies' blockchain applications and producing a decision tree as an aid for algorithm selection. The top five algorithms from each of the categories Proof of Work (PoW), Proof of Stake (PoS), and hybrid PoW + PoS are selected. The research method is qualitative. The study shows that different consensus algorithms often share some properties, but they are usually built to solve the issues of another algorithm, which means they also have their own distinctive advantages. The decision tree therefore reveals how these algorithms are logically connected and which key properties blockchain consensus algorithms possess. Based on the results of this thesis, further research can include more algorithms to make the decision tree more comprehensive, and these algorithms can be implemented in similar network setups to experiment with their claimed properties. The decision tree can also be sent to industry for further feedback.
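To show the shape of such a selection aid, here is a small decision tree encoded as nested questions. The questions and recommendations are invented to illustrate the structure, not the tree actually derived in the thesis.

```python
# Illustrative sketch of a consensus-selection decision tree as nested
# questions. Questions and outcomes are hypothetical stand-ins.
def recommend(energy_budget_low: bool, stake_distribution_exists: bool,
              needs_battle_tested_security: bool) -> str:
    if needs_battle_tested_security and not energy_budget_low:
        return "PoW (e.g. Nakamoto consensus)"
    if energy_budget_low:
        if stake_distribution_exists:
            return "PoS"
        return "hybrid PoW + PoS (bootstrap stake via mining)"
    return "hybrid PoW + PoS"

print(recommend(energy_budget_low=True,
                stake_distribution_exists=True,
                needs_battle_tested_security=False))   # -> PoS
```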
116

Categorization of Swedish e-mails using Supervised Machine Learning / Kategorisering av svenska e-postmeddelanden med användning av övervakad maskininlärning

Mann, Anna, Höft, Olivia January 2021 (has links)
Society today is becoming more digitalized, and a common way of communicating is to send e-mails. The company Auranest currently has a filtering method for categorizing e-mails, but the method is a few years old. The filter classifies e-mails valuable to jobseekers, through which employers can make contact. The company wants to know whether the categorization can be performed with a different, improved method. This degree project aims to investigate whether the categorization can be performed with higher accuracy using machine learning. Three supervised machine learning algorithms, Naïve Bayes, Support Vector Machine (SVM), and Decision Tree, were examined, and the algorithm with the best results was compared with Auranest's existing filter. Accuracy, precision, recall, and F1 score were used to determine which algorithm performed best, both among the three and in comparison with Auranest's filter. The results showed that SVM achieved the best results on all metrics and outperformed Auranest's existing filter on every calculated metric, with an accuracy of 99.5% for SVM versus 93.03% for Auranest's filter. Accuracy was the only metric on which the two produced similar results; for the other metrics there was a noticeable difference.
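A minimal sketch of the three-way comparison described above: Naïve Bayes, SVM, and Decision Tree on TF-IDF features of e-mail text, scored by cross-validated macro-F1. The e-mails and labels are invented stand-ins for Auranest's data.

```python
# Sketch: compare three classifiers on a toy e-mail categorization task.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

emails = ["We would like to interview you next week",
          "Weekly newsletter: 10 job hunting tips",
          "Your application caught our attention, call us",
          "Unsubscribe from these notifications here"] * 5
labels = ["valuable", "other", "valuable", "other"] * 5

for name, clf in [("NB", MultinomialNB()),
                  ("SVM", LinearSVC()),
                  ("DT", DecisionTreeClassifier(random_state=0))]:
    pipe = make_pipeline(TfidfVectorizer(), clf)
    f1 = cross_val_score(pipe, emails, labels, cv=5,
                         scoring="f1_macro").mean()
    print(f"{name}: macro-F1 = {f1:.2f}")
```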
117

Predicting the threshold grade for university admission through Machine Learning Classification Models / Förutspå tröskelvärdet för universitetsantagningsbetyg genom klassificeringsmodeller inom maskininlärning

Almawed, Anas, Victorin, Anton January 2023 (has links)
Higher education is in high demand these days, which can create very high admission thresholds for popular programmes at certain universities. To know what grade will be needed for admission to a programme, a student can look at the thresholds from previous years. We explored whether it is possible to generate accurate predictions of future thresholds, using well-established machine learning classification models and 14 years of admission data covering all applicants to the Computer Science and Engineering Program at KTH Royal Institute of Technology. We found that the models are good at correctly classifying data from past years, but they cannot meaningfully predict future thresholds based solely on the grades of past applicants.
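The temporal evaluation implied above can be sketched as follows: train a classifier on past admission rounds and test on the most recent year. The features and "threshold movement" labels here are synthetic noise, which makes the in-sample versus next-year gap visible by construction; the thesis's actual features and models are not reproduced.

```python
# Sketch of a year-based train/test split for threshold prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
years = np.repeat(np.arange(2009, 2023), 50)   # 14 admission rounds
X = rng.normal(size=(len(years), 3))           # e.g. grade statistics (assumed)
y = rng.integers(0, 3, size=len(years))        # threshold up / same / down

train, test = years < 2022, years == 2022      # hold out the latest year
clf = RandomForestClassifier(random_state=0).fit(X[train], y[train])
print("in-sample:", clf.score(X[train], y[train]))
print("next year:", clf.score(X[test], y[test]))  # near chance on noise
```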
118

Loss Given Default Estimation with Machine Learning Ensemble Methods / Estimering av förlust vid fallissemang med ensembelmetoder inom maskininlärning

Velka, Elina January 2020 (has links)
This thesis evaluates the performance of three machine learning methods in predicting Loss Given Default (LGD). LGD can be seen as the opposite of the recovery rate, i.e. the ratio of an outstanding loan that the loan issuer would not be able to recover if the customer were to default. The methods investigated are decision trees, random forest, and boosted methods. All of the investigated methods performed well in predicting the cases where the loan is not recovered at all, LGD = 1 (100%), or is totally recovered, LGD = 0 (0%). When the models were evaluated on a dataset with the LGD = 1 observations removed, a significant decrease in performance was observed. The random forest model built on an unbalanced training dataset performed better on the test dataset that included LGD = 1 values, while the random forest model built on a balanced training dataset performed better on the test set with the LGD = 1 observations removed. The boosted models evaluated in this study gave less accurate predictions than the other methods. Overall, the random forest models performed slightly better than the decision tree models, but the computational time (the cost) was considerably longer when running the random forest models. Decision tree models are therefore suggested for predicting Loss Given Default.
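A hedged sketch of the decision-tree versus random-forest trade-off reported above: comparable error on LGD-like targets, very different fit time. The synthetic LGD distribution (point masses at 0 and 1 plus a middle bulk) is an assumption, not the thesis's portfolio data.

```python
# Sketch: accuracy vs. computational cost for DT and RF on synthetic LGD.
import time
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 5))
# Mixture: ~40% full recovery (LGD=0), ~30% total loss (LGD=1), rest between.
u = rng.uniform(size=n)
y = np.where(u < 0.4, 0.0, np.where(u < 0.7, 1.0, rng.beta(2, 2, n)))

for model in (DecisionTreeRegressor(max_depth=6, random_state=0),
              RandomForestRegressor(n_estimators=300, random_state=0)):
    t0 = time.perf_counter()
    model.fit(X[:4000], y[:4000])
    pred = model.predict(X[4000:])
    print(type(model).__name__,
          f"MAE={mean_absolute_error(y[4000:], pred):.3f}",
          f"fit+predict={time.perf_counter() - t0:.2f}s")
```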
119

Automated dust storm detection using satellite images : development of a computer system for the detection of dust storms from MODIS satellite images and the creation of a new dust storm database

El-Ossta, Esam Elmehde Amar January 2013 (has links)
Dust storms are natural hazards whose frequency has increased in recent years over the Sahara desert, Australia, the Arabian Desert, Turkmenistan, and northern China, worsening over the last decade. They increase air pollution, damage human health and communication facilities, lower temperatures, and reduce visibility, delaying both road and air traffic and affecting urban areas, rural areas, and farms. It is therefore important to understand the causes, movement, and radiative effects of dust storms, and monitoring and forecasting efforts are growing to help governments reduce their negative impact. Satellite remote sensing is the most common monitoring method, but its use over sandy ground is still limited because dust and sand share similar characteristics. Approaches based on true-colour images, estimates of aerosol optical thickness (AOT), and algorithms such as the deep blue algorithm also have limitations for identifying dust storms. Many researchers have studied daytime dust storm detection in regions including China, Australia, America, and North Africa using a variety of satellite data, but fewer studies have focused on detecting dust storms at night. The key elements of this study are to use data from the Moderate Resolution Imaging Spectroradiometers (MODIS) on the Terra and Aqua satellites to develop a more effective automated method for detecting dust storms during both day and night, and to generate a MODIS dust storm database.
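One widely used day/night-capable indicator from the dust-detection literature is the split-window brightness temperature difference (BTD): airborne dust tends to drive the 11 µm minus 12 µm BTD negative. The sketch below applies that idea; the threshold and synthetic scene are illustrative, and this is not necessarily the thesis's algorithm.

```python
# Hedged sketch of split-window dust flagging on MODIS-like brightness
# temperatures. The -0.5 K threshold and the 3x3 "scene" are assumptions.
import numpy as np

def dust_mask(bt11: np.ndarray, bt12: np.ndarray,
              btd_threshold: float = -0.5) -> np.ndarray:
    """Boolean mask: True where BTD(11um - 12um) suggests airborne dust."""
    return (bt11 - bt12) < btd_threshold

# Synthetic 3x3 scene (Kelvin): the left column mimics a dust plume.
bt11 = np.array([[288.0, 292.0, 293.0],
                 [291.0, 294.0, 295.0],
                 [289.0, 293.0, 296.0]])
bt12 = np.array([[289.2, 291.5, 292.4],
                 [292.1, 293.6, 294.3],
                 [290.3, 292.5, 295.2]])
print(dust_mask(bt11, bt12))
```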
120

An Integrative Approach for Examining the Determinants of Abnormal Returns: The Cases of Internet Security Breach and Ecommerce Initiative

Andoh-Baidoo, Francis Kofi 01 January 2006 (has links)
Researchers in various business disciplines use the event study methodology to assess the market value of firms through capital market reaction to news in the public media about the firm's activities. Capital market reaction is assessed based on cumulative abnormal return (the sum of abnormal returns over the event window). In this study, the event study methodology is used to assess the impact that two important information technology activities, Internet security breach and ecommerce initiative, have on the market value of firms. While prior research on the relationship between these business activities and cumulative abnormal return used regression analysis, this study uses both decision tree induction and regression. For the Internet security breach study, negative cumulative abnormal return is used as a surrogate for damage to the breached firm. In contrast to what has been reported in the research literature, our results suggest that the relationship between cumulative abnormal return and the independent variables, for both the Internet security breach and ecommerce initiative studies, is complex, often involving conditional interactions between the independent variables. We report that incomplete contract theory is unable to effectively explain the relationship between cumulative abnormal return and the organizational variables, while other ecommerce theories support the findings from our analysis. We show that both attack and firm characteristics are determinants of damage to breached firms. Our results reveal that decision tree induction offers insight beyond that provided by regression models. We illustrate that there is value in using data mining techniques to study the market value of ecommerce initiatives and Internet security breaches, that the approach has applicability in other domains, and that decision tree induction can enhance the event study methodology. We demonstrate that decision tree induction can be used for both theory building and theory testing; specifically, we employ it to test and enhance ecommerce theories and to develop a theoretical model for cumulative abnormal return and ecommerce. We also present theoretical models for Internet security breach and damage to the breached firm. These models can be used by decision makers in formulating and implementing Internet security and ecommerce investment strategies.
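The cumulative abnormal return at the heart of the methodology can be sketched directly: fit a market model over an estimation window, then sum the abnormal returns over the event window. The return series below are synthetic; a real study would use actual stock and index returns.

```python
# Sketch of the market-model CAR computation underlying an event study.
import numpy as np

rng = np.random.default_rng(42)
market = rng.normal(0.0005, 0.01, 260)              # daily market returns
stock = 0.0002 + 1.2 * market + rng.normal(0, 0.01, 260)
stock[250:253] -= 0.02                              # simulated breach impact

est, event = slice(0, 250), slice(250, 253)         # estimation/event windows
beta, alpha = np.polyfit(market[est], stock[est], 1)  # market-model OLS fit
expected = alpha + beta * market[event]
abnormal = stock[event] - expected                  # AR_t = R_t - E[R_t]
car = abnormal.sum()                                # cumulative abnormal return
print(f"alpha={alpha:.5f} beta={beta:.3f} CAR={car:.4f}")
```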
