21

Intelligent Decision Support Systems for Compliance Options : A Systematic Literature Review and Simulation

Patta, Siva Venkata Prasad January 2019 (has links)
The project revolves around logistics and its adaptation to new rules. The objective is to reduce data tampering to the lowest level possible. To achieve this, a decision support system and simulation were used. To gain clear insight into how these can be implemented, a systematic literature review (including a case study) was conducted, followed by interviews with personnel at Kakinada port to understand the real-time complications in the field. A simulated experiment using real-time data from Kakinada port was then conducted to achieve the set goals and improve the level of transparency on all sides, i.e., shipper, port, and terminal.
22

Wheel-track adhesion evaluation using spectral imaging

Nicodeme, Claire 04 July 2018 (has links)
The advantage of the train, since its creation, is its low rolling resistance, due to the iron-on-iron contact of the wheel on the rail; this same contact, however, yields low adhesion. This low adhesion is a major drawback: being dependent on environmental conditions, it is easily degraded when the rail is polluted (vegetation, grease, water, etc.). Nowadays, the measures taken to cope with degraded adhesion directly impact the performance of the system and lead in particular to a loss of transport capacity. The objective of the project is to use new spectral imaging technology to identify areas of reduced adhesion on the rails, and their cause, in order to raise alerts and adapt the train's behaviour quickly. The study strategy took the following three points into account: • The detection system, installed on board commercial trains, must be independent of the train. • The detection and identification process must not interact with the pollution, so as not to invalidate the measurement; a Non-Destructive Testing approach was therefore chosen. • Spectral imaging technology makes it possible to work both in the spatial domain (distance measurement, object detection) and in the spectral domain (material detection and recognition by analysis of spectral signatures). Within the three years allotted to the thesis, we focused on validating the concept through laboratory studies and analyses carried out on the premises of SNCF Ingénierie & Projets.
The key steps were the construction of an evaluation bench and the choice of the vision system, the creation of a library of reference spectral signatures, and the development of supervised and unsupervised pixel classification algorithms. This work led to the filing of a patent and to papers published at IEEE conferences.
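The abstract names supervised and unsupervised pixel classification over a spectral-signature library without giving the classifiers. As a rough illustration of the supervised case only, the sketch below labels each pixel's reflectance spectrum with an SVM trained against reference signatures; the band count, class names, and all data are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC

# Each pixel is a reflectance spectrum over `n_bands` wavelength bands;
# the classes stand in for rail-surface states. All values are synthetic.
rng = np.random.default_rng(3)
n_bands, n_train = 64, 600
classes = ["clean_rail", "vegetation", "grease", "water"]

# Synthetic "library": one reference signature per class, plus noise.
refs = rng.random((len(classes), n_bands))
X_train = np.vstack([refs[i] + 0.05 * rng.normal(size=(n_train // 4, n_bands))
                     for i in range(len(classes))])
y_train = np.repeat(np.arange(len(classes)), n_train // 4)

clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)

# Classify a new "image" of 1000 pixels and flag low-adhesion classes.
pixels = refs[rng.integers(0, 4, 1000)] + 0.05 * rng.normal(size=(1000, n_bands))
pred = clf.predict(pixels)
low_adhesion = np.isin(pred, [classes.index("vegetation"),
                              classes.index("grease"),
                              classes.index("water")])
print(f"{low_adhesion.mean():.1%} of pixels flagged as reduced adhesion")
```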
23

Efficient Kernel Methods For Large Scale Classification

Asharaf, S 07 1900 (has links)
Classification algorithms have been widely used in many application domains. Most of these domains deal with massive collections of data and hence demand classification algorithms that scale well with the size of the data sets involved. A classification algorithm is said to be scalable if there is no significant increase in its time and space requirements (without compromising generalization performance) when the training set size increases. The Support Vector Machine (SVM) is one of the most celebrated kernel-based classification methods used in machine learning. An SVM capable of handling large scale classification problems would be an ideal candidate in many real world applications. SVM training is usually formulated as a Quadratic Programming (QP) problem, and the existing solution strategies for this problem have time and space complexity that is (at least) quadratic in the number of training points. This makes SVM training very expensive even on classification problems with a few thousand training examples. This thesis addresses the scalability of the training algorithms involved in both two-class and multiclass Support Vector Machines, proposing efficient training schemes that reduce the space and time requirements of the SVM training process. The classification schemes discussed in the thesis for handling large scale two-class classification problems are (a) two selective-sampling-based training schemes for scaling non-linear SVMs and (b) clustering-based approaches for handling unbalanced data sets with the Core Vector Machine. To handle large scale multiclass classification problems, the thesis proposes the Multiclass Core Vector Machine (MCVM), a scalable SVM-based multiclass classifier. In MCVM, the multiclass SVM problem is shown to be equivalent to a Minimum Enclosing Ball (MEB) problem and is then solved using a fast approximate MEB-finding algorithm. Experimental studies were done with several large real world data sets, such as the IJCNN1 and Acoustic data sets from the LIBSVM page, the Extended USPS data set from the CVM page, and the DARPA network intrusion detection data sets used in the KDD 99 contest. The empirical results show that the proposed classification schemes achieve good generalization performance at low time and space cost, and scalability experiments with large training data sets demonstrate that the proposed schemes scale well. A novel soft clustering scheme called Rough Support Vector Clustering (RSVC), employing the idea of the Soft Minimum Enclosing Ball (SMEB) problem, is another contribution of this thesis. Experiments with a synthetic data set and the real world Iris data set show that RSVC finds meaningful soft cluster abstractions.
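Fast approximate MEB solvers of the kind used by Core Vector Machines are built on a core-set iteration. The sketch below shows the simple Bădoiu–Clarkson update underlying such solvers, which yields a (1+ε)-approximation in O(1/ε²) iterations; it is a generic illustration, not the thesis's MCVM implementation.

```python
import numpy as np

def approx_meb(points: np.ndarray, epsilon: float = 0.01):
    """(1+epsilon)-approximate Minimum Enclosing Ball via the
    Badoiu-Clarkson iteration: repeatedly pull the centre towards
    the point currently furthest from it."""
    center = points[0].copy()
    # O(1/epsilon^2) iterations suffice for a (1+epsilon)-approximation.
    for k in range(1, int(np.ceil(1.0 / epsilon**2)) + 1):
        dists = np.linalg.norm(points - center, axis=1)
        furthest = points[np.argmax(dists)]
        center += (furthest - center) / (k + 1)  # shrinking step size
    radius = np.linalg.norm(points - center, axis=1).max()
    return center, radius

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(10_000, 5))
    c, r = approx_meb(pts, epsilon=0.05)
    print("centre:", np.round(c, 3), "radius:", round(r, 3))
```

Because each iteration touches only the furthest point, the ball is determined by a small core set, which is what keeps the time and space cost low compared with a quadratic QP solve.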
24

Analysis And Classification Of Spelling Paradigm Eeg Data And An Attempt For Optimization Of Channels Used

Yildirim, Asil 01 December 2010 (has links) (PDF)
Brain Computer Interfaces (BCIs) are systems developed to control devices using only brain signals. In BCI systems, different mental activities performed by the user are associated with different actions on the device being controlled. The Spelling Paradigm is a BCI application that constructs words by detecting letters using P300 signals recorded via electrodes attached to various points on the scalp. Reducing letter detection error rates and increasing letter detection speed are crucial for the Spelling Paradigm, as they allow disabled people to express their needs more easily with the application. In this thesis, two different methods, the Support Vector Machine (SVM) and AdaBoost, are used for classification, with Classification and Regression Trees (CART) as the weak classifier of AdaBoost. Time-frequency domain characteristics of P300 evoked potentials are analyzed in addition to time domain characteristics, using the Wigner-Ville Distribution to transform time domain signals into the time-frequency domain; classification results are observed to be better in the time domain. Furthermore, an optimal subset of channels that models P300 signals with minimum error rate is sought, and a channel selection method that uses both SVM and AdaBoost is proposed; 12 channels are selected in the time domain with this method. The effect of dimension reduction is also analyzed using Principal Component Analysis (PCA) and AdaBoost.
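As a rough sketch of the classifier setup described (AdaBoost with a CART weak learner, alongside an SVM), the following uses scikit-learn (≥1.2 for the `estimator` keyword) on a stand-in feature matrix; the data shapes, channel count, and parameters are illustrative assumptions, not the thesis's.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for epoched P300 data: one row per flash epoch,
# flattened as channels x time samples. Real data would come from EEG.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 12 * 64))   # 12 channels, 64 samples each
y = rng.integers(0, 2, size=400)      # 1 = target flash, 0 = non-target

# AdaBoost with a shallow CART weak learner, compared against a linear SVM.
ada = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                         n_estimators=100)
svm = SVC(kernel="linear", C=1.0)

for name, clf in [("AdaBoost+CART", ada), ("linear SVM", svm)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```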
25

Discovering Discussion Activity Flows in an On-line Forum Using Data Mining Techniques

Hsieh, Lu-shih 22 July 2008 (has links)
In the Internet era, more and more courses are taught through a course management system (CMS) or learning management system (LMS). In an asynchronous virtual learning environment, an instructor needs to be aware of the progress of discussions in forums and may intervene if necessary to facilitate students' learning. This research proposes a discussion forum activity flow tracking system, called FAFT (Forum Activity Flow Tracer), to automatically monitor the discussion activity flow of threaded forum postings in a CMS/LMS. As CMS/LMS platforms become popular for facilitating learning activities, the proposed FAFT can help instructors identify students' interaction types in discussion forums. FAFT adopts modern data/text mining techniques to discover the patterns of forum discussion activity flows, which instructors can use to facilitate online learning activities. FAFT consists of two subsystems: activity classification (AC) and activity flow discovery (AFD). A posting can be perceived as a type of announcement, questioning, clarification, interpretation, conflict, or assertion. AC adopts a cascade model to classify the various activity types of posts in a discussion thread. An empirical evaluation of the classified types, on a repository of postings from earth science online courses in a senior high school, shows that AC can effectively facilitate the coding process and that the cascade model can deal with the imbalanced distribution of discussion postings. AFD adopts a hidden Markov model (HMM) to discover the activity flows. A discussion activity flow can be presented as an HMM diagram that an instructor can use to predict which discussion activity flow type a thread may follow. The empirical results of the HMM, from an online earth science forum in a senior high school, show that FAFT can effectively predict the type of a discussion activity flow. Thus, the proposed FAFT can be embedded in a course management system to automatically predict the activity flow type of a discussion thread and, in turn, reduce teachers' load in managing online discussion forums.
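The abstract does not publish the learned HMM, so the toy Viterbi decoder below only illustrates the kind of inference AFD performs: recovering the most likely hidden activity-flow states from a thread's observed posting types. The states, transition and emission probabilities, and observation symbols are invented for the example.

```python
import numpy as np

# Illustrative states and observation symbols; not the thesis's coding scheme.
STATES = ["questioning", "clarification", "assertion"]
OBS = {"question_post": 0, "clarify_post": 1, "assert_post": 2}

start = np.array([0.6, 0.2, 0.2])
trans = np.array([[0.3, 0.5, 0.2],   # P(next state | current state)
                  [0.2, 0.4, 0.4],
                  [0.3, 0.3, 0.4]])
emit = np.array([[0.7, 0.2, 0.1],    # P(observed post type | state)
                 [0.1, 0.7, 0.2],
                 [0.1, 0.2, 0.7]])

def viterbi(obs_seq):
    """Most likely hidden state sequence for an observed posting sequence."""
    n, T = len(STATES), len(obs_seq)
    logp = np.full((T, n), -np.inf)
    back = np.zeros((T, n), dtype=int)
    logp[0] = np.log(start) + np.log(emit[:, obs_seq[0]])
    for t in range(1, T):
        for j in range(n):
            scores = logp[t - 1] + np.log(trans[:, j])
            back[t, j] = np.argmax(scores)
            logp[t, j] = scores[back[t, j]] + np.log(emit[j, obs_seq[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):       # trace back through the pointers
        path.append(back[t, path[-1]])
    return [STATES[s] for s in reversed(path)]

thread = [OBS[p] for p in
          ["question_post", "clarify_post", "clarify_post", "assert_post"]]
print(viterbi(thread))
```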
26

A cloud-based intelligent and energy efficient malware detection framework : a framework for cloud-based, energy efficient, and reliable malware detection in real-time based on training SVM, decision tree, and boosting using specified heuristics anomalies of portable executable files

Mirza, Qublai K. A. January 2017 (has links)
The continuing financial and other related losses due to cyber-attacks prove the substantial growth of malware and the lethality of its proliferation techniques. Every successful malware attack highlights weaknesses in the defence mechanisms responsible for securing the targeted computer or network. Recent cyber-attacks reveal sophistication and intelligence in malware behaviour: the ability to conceal code and operate within the system autonomously. Conventional detection mechanisms not only fall short in their malware detection capabilities, they also consume a large amount of resources while scanning the system for malicious entities. Many recent reports have highlighted this issue, along with the challenges faced by alternate solutions and studies conducted in the same area. There is an unprecedented need for a resilient and autonomous solution that takes a proactive approach against modern malware with stealth behaviour. This thesis proposes a multi-aspect solution comprising an intelligent malware detection framework and an energy efficient hosting model. The malware detection framework is a combination of conventional and novel malware detection techniques. The proposed framework incorporates comprehensive feature heuristics of files, generated by a bespoke static feature extraction tool. These heuristics are used to train the machine learning algorithms (Support Vector Machine, Decision Tree, and Boosting) to differentiate between clean and malicious files. The two techniques, feature heuristics and machine learning, are combined to form a two-factor detection mechanism. The thesis also presents a cloud-based, energy efficient, and scalable hosting model, which combines multiple infrastructure components of Amazon Web Services to host the malware detection framework. The hosting model follows a client-server architecture, where the client is a lightweight service running on the host machine and the server is based in the cloud. The proposed framework and hosting model were evaluated, individually and in combination, by specifically designed experiments using separate repositories of clean and malicious files. The experiments were designed to evaluate malware detection capability and energy efficiency while operating within a system. The proposed malware detection framework and hosting model showed significant improvement in malware detection while consuming low CPU resources during operation.
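The bespoke extraction tool and its exact heuristics are not specified in the abstract. Under that caveat, the sketch below shows what static PE heuristics plus the three named learners might look like using the `pefile` and scikit-learn libraries; every feature choice here is an assumption, not the thesis's feature set.

```python
import pefile  # pip install pefile
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier

def pe_heuristics(path: str) -> list:
    """A few illustrative static heuristics of a portable executable."""
    pe = pefile.PE(path, fast_load=True)
    sections = pe.sections
    return [
        len(sections),                                   # section count
        float(pe.OPTIONAL_HEADER.AddressOfEntryPoint),   # entry point RVA
        float(pe.OPTIONAL_HEADER.SizeOfImage),           # declared image size
        max(s.get_entropy() for s in sections),          # peak section entropy
        sum(s.SizeOfRawData == 0 for s in sections),     # zero-size sections
    ]

def build_detector() -> VotingClassifier:
    """Combine the three learners named in the abstract; train with
    fit(X, y) on one heuristic row per file, y: 0 = clean, 1 = malicious."""
    return VotingClassifier(
        estimators=[("svm", SVC(probability=True)),
                    ("tree", DecisionTreeClassifier(max_depth=8)),
                    ("boost", GradientBoostingClassifier())],
        voting="soft")
```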
27

Classification of electroencephalogram signals using support vector machines

Chagas, Sandro Luiz das 27 August 2009 (has links)
The electroencephalogram (EEG) is a clinical exam widely used to study brain function and neurological disorders. The EEG is a temporal data series that records the electrical activity of the brain. EEG monitoring systems generate a huge amount of data, so complete visual analysis of the EEG is not feasible in practice. Because of this, there is a strong demand for computational methods able to analyse EEG records automatically and extract information useful for supporting diagnosis. This requires a way of extracting the diagnostically relevant features of an EEG record and a way of classifying the EEG based on those features. Statistics computed over wavelet coefficients have been used successfully to extract features from many kinds of temporal data series, including EEG signals. Support Vector Machines (SVMs) are a machine learning technique with high generalization ability that has been applied successfully to classification problems by several researchers. This dissertation analyses the influence of feature vectors based on wavelet coefficients on the classification of EEG signals using different implementations of SVMs.
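As an illustration of the feature recipe described (statistics over wavelet coefficients feeding an SVM), here is a minimal sketch using PyWavelets and scikit-learn; the wavelet family, decomposition level, and chosen statistics are assumptions, and the data are synthetic.

```python
import numpy as np
import pywt  # pip install PyWavelets
from sklearn.svm import SVC

def wavelet_features(signal: np.ndarray,
                     wavelet: str = "db4", level: int = 5) -> np.ndarray:
    """Summary statistics over the coefficients of a discrete wavelet
    decomposition; a common EEG feature recipe."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:  # approximation band + one detail band per level
        feats += [c.mean(), c.std(), np.mean(np.abs(c)), np.sum(c**2)]
    return np.array(feats)

# Illustrative use on synthetic "EEG" epochs of 256 samples each.
rng = np.random.default_rng(7)
epochs = rng.normal(size=(200, 256))
labels = rng.integers(0, 2, size=200)
X = np.vstack([wavelet_features(e) for e in epochs])
clf = SVC(kernel="rbf").fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```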
28

A comparison of three brain atlases for MCI prediction

Ota, Kenichi 23 March 2015 (has links)
Kyoto University / Doctor of Medical Science
30

Development of new data fusion techniques for improving snow parameters estimation

De Gregorio, Ludovica 26 November 2019 (has links)
Water stored in snow is a critical contribution to the world's available freshwater supply and is fundamental to the sustenance of natural ecosystems, agriculture and human societies. The importance of snow for the natural environment and for many socio-economic sectors in several mid- to high-latitude mountain regions around the world leads scientists to continuously develop new approaches to monitor and study snow and its properties. The need for new monitoring methods arises from the limitations of in situ measurements, which are pointwise, only possible in accessible and safe locations, and do not allow continuous monitoring of the evolution of the snowpack and its characteristics. These limitations have been overcome by the increasingly used methods of remote monitoring with space-borne sensors, which can capture the wide spatial and temporal variability of the snowpack. Snow models, based on modeling the physical processes that occur in the snowpack, are an alternative to remote sensing for studying snow characteristics. However, the literature makes it evident that both remote sensing and snow models suffer from limitations, while also having significant strengths that are worth exploiting jointly to obtain improved snow products. Accordingly, the main objective of this thesis is the development of novel methods for snow parameter estimation that exploit the complementary properties of remote sensing and snow model data. In particular, the following specific novel contributions are presented (a sketch of contribution i follows this summary): i. A novel data fusion technique for improving snow cover mapping. The proposed method exploits the snow cover maps derived from the AMUNDSEN snow model and from the MODIS product, together with their quality layers, in a decision-level fusion approach by means of a machine learning technique, namely the Support Vector Machine (SVM). ii. A new approach for improving the snow water equivalent (SWE) product obtained from AMUNDSEN model simulations. The proposed method exploits auxiliary information from optical remote sensing and from the topographic characteristics of the study area in an approach that differs from classical data assimilation: the AMUNDSEN error with respect to ground data is estimated through a k-NN algorithm. The new product has been validated against ground measurements and by comparison with MODIS snow cover maps. In a second step, the contribution of information derived from X-band SAR imagery acquired by the COSMO-SkyMed constellation has been evaluated, exploiting simulations from a theoretical model to enlarge the dataset.
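A minimal sketch of contribution i's decision-level fusion, assuming per-pixel inputs of the two binary snow decisions plus their quality layers, with a reference snow label (e.g. from ground stations) as the target; all variable names, the feature layout, and the data below are illustrative assumptions, not the thesis's actual layers.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic per-pixel stand-ins for the two snow-cover products and
# their quality layers; real inputs would be co-registered map pixels.
rng = np.random.default_rng(1)
n_pixels = 5_000
model_snow = rng.integers(0, 2, n_pixels)    # AMUNDSEN snow / no-snow
modis_snow = rng.integers(0, 2, n_pixels)    # MODIS snow / no-snow
model_quality = rng.random(n_pixels)         # model quality in [0, 1]
modis_quality = rng.random(n_pixels)         # MODIS quality in [0, 1]
reference = rng.integers(0, 2, n_pixels)     # reference snow label

# Decision-level fusion: the SVM learns which source to trust, per pixel,
# from the two decisions and their quality flags.
X = np.column_stack([model_snow, modis_snow, model_quality, modis_quality])
fused = SVC(kernel="rbf").fit(X[:4000], reference[:4000])
print("held-out agreement:", fused.score(X[4000:], reference[4000:]))
```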
