1 |
Identifying the Vulnerability of Earthen Levees to Slump Slides using Geotechnical and Geomorphological Parameters
Sehat, Sona (13 December 2014)
The main goal of this research is to investigate the vulnerability of levees to future slump slides. In the first part, polarimetric synthetic aperture radar (PolSAR) imagery is used as input to an automated classification system that characterizes areas on the levee exhibiting anomalies. In addition, a set of in-situ soil data is collected to provide detailed soil properties over the study area. The in-situ soil properties of the classes produced by the classifier are analyzed to determine the similarities between different areas. In the second part, a database of 34 slump slides that occurred in the lower Mississippi River levee system over a period of two years is used. The impacts of rainfall, as well as several spatial, geometrical, and geomorphological variables (including channel width, river sinuosity index, riverbank erosion, channel shape condition, and distance to the river), are analyzed, tested for significance, and used to develop a logistic regression model.
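The kind of model described above can be sketched as follows. This is a minimal, library-free illustration, not the thesis' actual model: the feature names (sinuosity index, normalized distance to river) and all data values are invented stand-ins for the geomorphological predictors, and the fitting is plain batch gradient descent on the log-loss.

```python
# Hypothetical sketch: logistic regression relating levee-reach predictors
# to slide occurrence. Features and data are illustrative, not the thesis'.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Batch gradient descent on the log-loss; returns (weights, bias)."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j in range(n_feat):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict_proba(w, b, x):
    """Probability that a slide occurs at a reach with features x."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Toy data: [sinuosity index, normalized distance to river]; 1 = slide occurred.
X = [[1.1, 0.9], [1.2, 0.8], [1.8, 0.2], [2.0, 0.1], [1.3, 0.7], [1.9, 0.15]]
y = [0, 0, 1, 1, 0, 1]
w, b = fit_logistic(X, y)
```

In practice one would also test each coefficient for significance, as the abstract describes, before keeping a predictor in the model.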
|
2 |
Large-scale detection and characterization of seismological phenomena for the monitoring of traditional seismic events and systematic data-mining of rare phenomena
Langet, Nadège (9 December 2014)
The quantity of available seismological data keeps increasing as seismic networks multiply. Manual processing is long and tedious, so automating the detection, location, and classification of seismic events has become necessary, both to help the observatories that continuously monitor seismicity and, from a more scientific standpoint, to search for and characterize rarer or poorly known phenomena. The work is divided into two main directions: (1) the detection and location of seismic events with the Waveloc software, for which we improved the pre-existing tools, added new functions for a more detailed analysis of the seismicity, and validated the code on data from the Piton de la Fournaise volcano; (2) their classification, where, after computing the attributes that best describe the signals, we demonstrated the efficiency and reliability of two supervised learning methods (logistic regression and SVM) for Piton de la Fournaise, highlighted the difficulties of a more complex case (the Kawah Ijen volcano), and tried new strategies.
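The attribute-computation step preceding classification can be sketched as below. This is a hedged illustration only: the thesis' actual attribute set is not reproduced here, and the "seismogram" is a synthetic decaying sinusoid rather than real waveform data.

```python
# Illustrative attribute extraction from one event waveform (not the
# thesis' attribute set): energy, kurtosis, and peak amplitude are common
# scalar descriptors fed to supervised classifiers.
import math

def attributes(signal, dt):
    """Return a dict of simple waveform attributes for one event."""
    n = len(signal)
    energy = sum(s * s for s in signal) * dt
    mean = sum(signal) / n
    var = sum((s - mean) ** 2 for s in signal) / n
    # Kurtosis helps separate impulsive (earthquake-like) from emergent signals.
    kurt = (sum((s - mean) ** 4 for s in signal) / n) / (var ** 2) if var else 0.0
    peak = max(abs(s) for s in signal)
    return {"energy": energy, "kurtosis": kurt, "peak": peak}

# Synthetic impulsive event: decaying 5 Hz sinusoid sampled at 100 Hz.
dt = 0.01
sig = [math.exp(-2.0 * i * dt) * math.sin(2 * math.pi * 5 * i * dt)
       for i in range(500)]
feats = attributes(sig, dt)
```

Vectors of such attributes, one per detected event, would then serve as input to the logistic-regression and SVM classifiers the abstract mentions.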
|
3 |
Automated Classification of Steel Samples: An Investigation using Convolutional Neural Networks
Ahlin, Björn; Gärdin, Marcus (January 2017)
Automated image recognition software has previously been used for various analyses in the steelmaking industry. In this study, the possibility of applying such software to classify Scanning Electron Microscope (SEM) images of two steel samples was investigated. The two samples were of the same steel grade but had been treated with calcium for different lengths of time. To enable automated image recognition, a Convolutional Neural Network (CNN) was built. The software was constructed with open-source code provided by the Keras documentation, ensuring an easily reproducible program. The network was trained, validated, and tested, first on non-binarized images and then on binarized images. Binarized images were used to ensure that the network's prediction considers only the inclusion information and not the substrate. The non-binarized images gave a classification accuracy of 99.99%; for the binarized images, the accuracy obtained was 67.9%. The results show that it is possible to classify steel samples using CNNs. One interesting implication of this success is that further studies on CNNs could enable automated classification of inclusions.
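The binarization step, which removes the substrate so that only inclusion information remains, can be illustrated with one common technique, Otsu's global threshold. This is a library-free sketch under the assumption of 8-bit grey levels; the thesis' exact preprocessing pipeline is not reproduced here.

```python
# Sketch of image binarization via Otsu's method (an assumption; the study
# does not specify its thresholding algorithm). Bright inclusions become 1,
# the darker substrate becomes 0.

def otsu_threshold(pixels, levels=256):
    """Return the grey level that maximizes between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, t):
    return [1 if p > t else 0 for p in pixels]

# Toy "image": dark substrate around grey level 30, bright inclusions near 200.
img = [30, 32, 28, 31, 200, 205, 29, 198, 33, 202]
t = otsu_threshold(img)
mask = binarize(img, t)
```

The resulting binary mask is what a CNN would then classify, forcing it to base its prediction on inclusion morphology alone.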
|
4 |
An exploration of learning tool log data in CS1: how to better understand student behaviour and learning
Estey, Anthony (2 February 2017)
The overall goal of this work is to support student success in computer science. First, I introduce BitFit, an ungraded practice programming tool built to provide students with a pressure-free environment to practice and build confidence working through weekly course material. BitFit was used in an introductory programming course (CSC 110) at the University of Victoria for 5 semesters in 2015 and 2016.
The contributions of this work are a number of studies analyzing the log data collected by BitFit over those years. First, I explore whether patterns in the log data can differentiate successful from unsuccessful students, with a specific focus on identifying students at risk of failure within the first few weeks of the semester. Next, I separate out the students who struggle early in the semester and examine how their programming behaviour changes over time. The goal of this second study is to differentiate between transient and sustained struggling, in an attempt to better understand why successful students are able to overcome early struggles. Finally, I combine survey data with log data to explore whether students can judge whether their own study habits are likely to lead to success.
Overall, this work provides insight into the factors contributing to behavioural change in an introductory programming course. I hope this information can aid educators in providing supportive intervention aimed at guiding struggling students towards more productive learning strategies.
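The early-warning idea in the first study can be sketched as a simple heuristic over weekly activity counts. This is purely hypothetical: the thresholds, the feature (attempt counts), and the rule itself are invented for illustration, and the thesis' actual analysis is considerably richer.

```python
# Hypothetical early-warning flag from practice-tool log data: a student
# with too little activity in the first weeks is marked at-risk.
# All thresholds here are illustrative assumptions.

def at_risk(weekly_attempts, weeks=3, min_attempts=5):
    """Flag a student whose total attempts in the first `weeks` weeks
    fall below `min_attempts`."""
    return sum(weekly_attempts[:weeks]) < min_attempts

students = {
    "A": [12, 9, 14, 10],   # steady practice from week one
    "B": [1, 0, 2, 8],      # little early activity, picks up late
}
flags = {name: at_risk(w) for name, w in students.items()}
```

A rule this crude would conflate transient and sustained struggling, which is exactly the distinction the second study sets out to make.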
|
5 |
Automated classification of bibliographic data using SVM and Naive Bayes
Nordström, Jesper (January 2018)
Classification of scientific bibliographic data is an important and increasingly time-consuming task in a "publish or perish" paradigm where the number of scientific publications grows steadily. Apart from being resource-intensive, manual classification has also been shown to be performed with a rather high degree of inconsistency. Since many bibliographic databases contain a large number of already classified records, supervised machine learning for automated classification might be a solution for handling the increasing volumes of published scientific articles. In this study, automated classification of bibliographic data based on two machine learning methods, Naive Bayes and Support Vector Machine (SVM), was evaluated. The data were collected from the Swedish research database SwePub, and the features used to train the classifiers were based on the abstracts and titles of the bibliographic records. The accuracy achieved ranged from a low of 0.54 to a high of 0.84. The SVM classifiers consistently received higher scores than the Naive Bayes classifiers. Classification at the second level of the hierarchical classification system clearly resulted in lower scores than classification at the first level. Using abstracts as the basis for feature extraction yielded overall better results than using titles, although the differences were very small.
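The Naive Bayes side of this comparison can be sketched compactly. This is a library-free multinomial Naive Bayes with add-one smoothing; the documents, class labels, and tokens below are invented (the thesis used SwePub records and also evaluated SVM classifiers, which are not shown).

```python
# Multinomial Naive Bayes over title/abstract tokens, with Laplace
# (add-one) smoothing. Training records are invented stand-ins.
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns per-class priors and
    per-word log-likelihoods."""
    class_docs = defaultdict(int)
    class_words = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_docs[label] += 1
        class_words[label].update(tokens)
        vocab.update(tokens)
    n_docs = sum(class_docs.values())
    model = {}
    for label in class_docs:
        total = sum(class_words[label].values())
        model[label] = (
            math.log(class_docs[label] / n_docs),
            {w: math.log((class_words[label][w] + 1) / (total + len(vocab)))
             for w in vocab},
            math.log(1 / (total + len(vocab))),  # fallback for unseen words
        )
    return model

def classify(model, tokens):
    def score(label):
        prior, loglik, unseen = model[label]
        return prior + sum(loglik.get(w, unseen) for w in tokens)
    return max(model, key=score)

docs = [
    ("protein folding cell biology".split(), "life-science"),
    ("gene expression cell".split(), "life-science"),
    ("deep learning neural network".split(), "computing"),
    ("support vector machine classification".split(), "computing"),
]
model = train_nb(docs)
```

Swapping titles for abstracts simply changes which tokens enter `train_nb`, which is how the abstract-versus-title comparison in the study can be framed.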
|
6 |
Information Transmission Analysis and Automated Classification Design for New Media in a Disaster Event – Case Study of Typhoon Morakot
Shih, Shiuh Feng (date unknown)
When a disaster event occurs, information analysis and transmission must happen in real time for the information to serve disaster prevention and relief. With the spread of network infrastructure, the public internet media have joined the providers of disaster information, yet information retrieved through search engines alone cannot reflect the current state of a progressing disaster. Traditional channels such as disaster response centers have limited reporting capacity and are often overwhelmed by the sudden burst of information, which is usually beyond the ability of human collection, filtering, and processing. New tools are therefore needed to quickly and automatically classify new-media channel information, provide reliable input to disaster response systems, and assist government decision-making.
In this study, we use data collected from five different channels during Typhoon Morakot. After text processing and content classification by experts, we observe the differences between these datasets through frequency distributions, classification structures, and word co-occurrence networks. Without considering part of speech or grammar, we use the vector space model to train an OAO-SVM (one-against-one SVM) classification model and evaluate the performance of the automated classification.
From the results, we found that after a disaster, internet information unfolds in distinguishable stages over time, so the progress of the disaster can be followed through each channel. The word co-occurrence networks show that, compared with lay writing, rescue experts use less repetitive and more heterogeneous vocabulary. In the OAO-SVM training results, channels written by rescue experts yield better classification performance than lay-written channels, and cross-comparison of the classifiers shows better performance on channels of the same nature. By merging datasets with the same properties for training, we found that when the quality of the training data is good enough, the classifier performs well; when it is not, performance can be improved by increasing the amount of training data. The classification methods and information-exploration techniques developed in this research can be used to design more efficient and accurate social sensors in the future.
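The one-against-one (OAO) decomposition itself is worth illustrating: one binary classifier per unordered class pair, combined by majority vote. In the sketch below the SVMs are swapped for trivial nearest-centroid binary classifiers so the code stays self-contained; the class names and "term vectors" are invented stand-ins for the study's document vectors.

```python
# OAO (one-against-one) multi-class scheme with majority voting. The
# pairwise classifier here is nearest-centroid, a stand-in for the SVMs
# actually used; data and labels are illustrative.
from itertools import combinations
from collections import Counter

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_oao(X, y):
    """One binary model (a centroid pair) per unordered class pair."""
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    return {(a, b): (centroid(by_class[a]), centroid(by_class[b]))
            for a, b in combinations(sorted(by_class), 2)}

def predict_oao(models, x):
    votes = Counter()
    for (a, b), (ca, cb) in models.items():
        votes[a if sq_dist(x, ca) <= sq_dist(x, cb) else b] += 1
    return votes.most_common(1)[0][0]

# Toy 2-D "term vectors" for three hypothetical message categories.
X = [[0, 1], [1, 1], [5, 5], [6, 5], [0, 9], [1, 8]]
y = ["rescue", "rescue", "weather", "weather", "damage", "damage"]
models = train_oao(X, y)
```

With k classes, OAO trains k(k-1)/2 binary models, which is why it scales well when individual binary classifiers (like SVMs) are cheap to train on small class subsets.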
|
7 |
New Paradigms for Automated Classification of Pottery
Hörr, Christian; Lindinger, Elisabeth; Brunnett, Guido (14 September 2009)
This paper describes how feature extraction on ancient pottery can be combined with recent developments in artificial intelligence to draw up an automated but still flexible classification system. These features include, for instance, several dimensions of the vessel's body, ratios thereof, an abstract representation of the overall shape, the shape of vessel segments, and the number and type of attachments such as handles, lugs and feet. While most traditional approaches to classification are based on statistical analysis or the search for fuzzy clusters in high-dimensional spaces, we apply machine learning techniques such as decision tree algorithms and neural networks. These methods allow for an objective and reproducible classification process. Conclusions about the "typability" of data, the evolution of types and the diagnostic attributes of the types themselves can be drawn as well.
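The decision-tree idea over such vessel features can be sketched at its smallest scale: choosing the single feature threshold with the highest information gain (a depth-1 "stump"; real tree algorithms recurse on each side of the split). The feature values and type labels below are invented for illustration, not taken from the paper's corpus.

```python
# Entropy-based best-split search over vessel features, the core step of
# decision-tree induction. Features and labels are illustrative.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_stump(X, y):
    """Return the (feature index, threshold) pair with maximal information
    gain over a single split."""
    base = entropy(y)
    best = (None, None, -1.0)
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            left = [yi for xi, yi in zip(X, y) if xi[f] <= t]
            right = [yi for xi, yi in zip(X, y) if xi[f] > t]
            if not left or not right:
                continue
            gain = base - (len(left) * entropy(left)
                           + len(right) * entropy(right)) / len(y)
            if gain > best[2]:
                best = (f, t, gain)
    return best[0], best[1]

# Toy features per vessel: [height/width ratio, number of handles].
X = [[0.6, 2], [0.7, 2], [1.8, 0], [2.0, 1], [0.5, 2], [1.9, 0]]
y = ["bowl", "bowl", "amphora", "amphora", "bowl", "amphora"]
f, t = best_stump(X, y)
```

One appeal of trees for archaeology, as the paper argues, is that the learned splits are readable: the chosen feature and threshold are themselves candidate diagnostic attributes of a type.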
|