  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
901

Computational and Imaging Methods for Studying Neuronal Populations during Behavior

Han, Shuting January 2019 (has links)
One of the central questions in neuroscience is how the nervous system generates and regulates behavior. To understand the neural code for any behavior, an ideal experiment would entail (i) quantitatively defining that behavior, (ii) recording neuronal activity in relevant brain regions to identify the underlying neuronal circuits, and eventually (iii) manipulating them to test their function. Novel methods in neuroscience have greatly advanced our ability to conduct such experiments but are still insufficient. In this thesis, I develop methods toward these three goals. In Chapter 2, I describe an automatic behavior identification and classification method for the cnidarian Hydra vulgaris using machine learning. In Chapter 3, I describe a fast volumetric two-photon microscope with dual-color laser excitation that can image, in 3D, the activity of populations of neurons in the visual cortex of awake mice. In Chapter 4, I present a machine learning method that identifies cortical ensembles and pattern completion neurons in mouse visual cortex using two-photon calcium imaging data. These methods advance current technologies, providing opportunities for new discoveries.
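To make the ensemble-identification idea in Chapter 4 concrete, the sketch below shows one generic way to group co-active neurons from calcium imaging into candidate ensembles (correlation-based clustering). It is an illustration only, with synthetic data and an arbitrary cluster count; it is not the specific method developed in the thesis.

```python
# Generic sketch: cluster neurons by pairwise co-activity to suggest ensembles.
# Data, ensemble structure and the choice of k-means are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_neurons, n_frames = 60, 1000
activity = (rng.random((n_neurons, n_frames)) < 0.02).astype(float)  # binarized events

# plant two groups of neurons that fire together on some frames
for members in (range(0, 15), range(15, 30)):
    frames = rng.choice(n_frames, 40, replace=False)
    activity[np.ix_(list(members), frames)] = 1.0

corr = np.corrcoef(activity)                                   # neuron-by-neuron co-activity
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(corr)
for k in range(3):
    print(f"candidate ensemble {k}: neurons {np.where(labels == k)[0][:10]} ...")
```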
902

Deep Learning for Action Understanding in Video

Shou, Zheng January 2019 (has links)
Action understanding is key to automatically analyzing video content and is therefore important for many real-world applications such as autonomous driving, robot-assisted care, etc. Consequently, action understanding has been one of the fundamental research topics in computer vision. Most conventional methods for action understanding are based on hand-crafted features. As with the recent advances in image classification, object detection and image captioning, deep learning has become a popular approach for action understanding in video. However, several important research challenges remain in developing deep learning based methods for understanding actions. This thesis focuses on the development of effective deep learning methods for addressing three major challenges.

Action detection at fine granularities in time: Previous work on deep learning based action understanding mainly focuses on exploring various backbone networks designed for the video-level action classification task. These did not explore fine-grained temporal characteristics and thus failed to produce temporally precise estimates of action boundaries. In order to understand actions more comprehensively, it is important to detect actions at finer granularities in time. In Part I, we study both segment-level and frame-level action detection. Segment-level action detection is usually formulated as the temporal action localization task, which requires not only recognizing action categories for the whole video but also localizing the start time and end time of each action instance. To this end, we propose an effective multi-stage framework called Segment-CNN consisting of three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments that may contain actions; (2) a classification network learns a one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes the learned classification network to localize each action instance. In a complementary approach, frame-level action detection is formulated as a per-frame action labeling task. We combine two reverse operations (i.e. convolution and deconvolution) into a joint Convolutional-De-Convolutional (CDC) filter, which simultaneously conducts downsampling in space and upsampling in time to jointly model high-level semantics and temporal dynamics. We design a novel CDC network to predict actions at the frame level, and the frame-level predictions can further be used to detect precise segment boundaries for the temporal action localization task. Our method not only improves the state-of-the-art mean Average Precision (mAP) on THUMOS'14 from 41.3% to 44.4% for the per-frame labeling task, but also improves mAP for the temporal action localization task from 19.0% to 23.3% on THUMOS'14 and from 16.4% to 23.8% on ActivityNet v1.3.

Action detection in constrained scenarios: The usual training process for deep learning models requires both supervision and data, which are not always available in practice. In Part II, we consider the scenarios of incomplete supervision and incomplete data. For incomplete supervision, we focus on the weakly-supervised temporal action localization task and propose AutoLoc, the first framework that can directly predict the temporal boundary of each action instance with only video-level annotations available during training. To enable the training of such a boundary prediction model, we design a novel Outer-Inner-Contrastive (OIC) loss to discover segment-level supervision, and we prove that the OIC loss is differentiable with respect to the underlying boundary prediction model. Our method significantly improves mAP on THUMOS'14 from 13.7% to 21.2% and mAP on ActivityNet from 7.4% to 27.3%. For the scenario of incomplete data, we formulate a novel task called Online Detection of Action Start (ODAS) in streaming videos, which aims to detect the action start time on the fly as an action begins in a live video. ODAS is important in many applications such as generating early alerts to allow timely security or emergency response. Specifically, we propose three novel methods to address the challenges in training ODAS models: (1) generating hard negative samples with a Generative Adversarial Network (GAN) to distinguish ambiguous background, (2) explicitly modeling the temporal consistency between data around the action start and data following the action start, and (3) an adaptive sampling strategy to handle the scarcity of training data.

Action understanding in the compressed domain: The mainstream action understanding methods, including the techniques described above, require first decoding the compressed video into RGB image frames, which can incur significant storage and computation costs. Recently, researchers have started to investigate how to perform action understanding directly in the compressed domain in order to achieve high efficiency while maintaining state-of-the-art action detection accuracy. The key research challenge is developing effective backbone networks that can directly take data in the compressed domain as input. Our baseline is to take models developed for action understanding in the decoded domain and adapt them to the same tasks in the compressed domain. In Part III, we address two important issues in developing backbone networks that operate exclusively in the compressed domain. First, compressed videos may be produced by different encoders or encoding parameters, but it is impractical to train a separate compressed-domain action understanding model for each format. We experimentally analyze the effect of video encoder variation and develop a simple yet effective training data preparation method to alleviate the sensitivity to encoder variation. Second, motion cues have been shown to be important for action understanding, but the motion vectors in compressed video are often very noisy and not discriminative enough for accurate action understanding. We develop a novel and highly efficient framework called DMC-Net that learns to predict discriminative motion cues from the noisy motion vectors and residual errors in the compressed video streams. On three action recognition benchmarks, namely HMDB-51, UCF101 and a subset of Kinetics, we demonstrate that DMC-Net significantly narrows the performance gap between state-of-the-art compressed-video methods with and without optical flow, while being two orders of magnitude faster than the methods that use optical flow.

By addressing the three major challenges above, we develop more robust models for video action understanding and improve performance along several dimensions: (1) temporal precision, (2) required level of supervision, (3) live video analysis ability, and (4) efficiency in processing compressed video. Our research has contributed to advancing the state of the art of video action understanding and expanding the foundation for comprehensive semantic understanding of video content.
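The joint space-time behavior described for the CDC filter (downsampling in space while upsampling in time) can be illustrated with a short sketch. The PyTorch block below is a hedged illustration of that operation, not the exact CDC architecture from the thesis; kernel sizes, channel counts and input shapes are assumptions chosen only to make the shape arithmetic visible.

```python
# Minimal sketch of a convolution that halves spatial resolution while a
# transposed convolution doubles temporal resolution, so per-frame predictions
# can be recovered from a spatially pooled representation.
import torch
import torch.nn as nn

class CDCBlock(nn.Module):
    def __init__(self, in_ch=256, out_ch=256):
        super().__init__()
        # spatial downsampling: stride 2 in H and W, stride 1 in time
        self.spatial_down = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 3),
                                      stride=(1, 2, 2), padding=(1, 1, 1))
        # temporal upsampling: stride 2 in time, stride 1 in H and W
        self.temporal_up = nn.ConvTranspose3d(out_ch, out_ch, kernel_size=(4, 3, 3),
                                              stride=(2, 1, 1), padding=(1, 1, 1))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                          # x: (batch, channels, T, H, W)
        x = self.relu(self.spatial_down(x))        # halves H and W
        x = self.relu(self.temporal_up(x))         # doubles T
        return x

# Example: 8 frames at 28x28 -> 16 frames at 14x14
feat = torch.randn(2, 256, 8, 28, 28)
print(CDCBlock()(feat).shape)                      # torch.Size([2, 256, 16, 14, 14])
```

A frame-level classifier placed on top of such an upsampled feature map is what allows temporally precise boundary estimates in the per-frame labeling formulation described above.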
903

Data-driven temporal information extraction with applications in general and clinical domains

Filannino, Michele January 2016 (has links)
The automatic extraction of temporal information from written texts is pivotal for many Natural Language Processing applications such as question answering, text summarisation and information retrieval. However, Temporal Information Extraction (TIE) is a challenging task because of the variety of expression types (durations, frequencies, times, dates) and their high morphological variability and ambiguity. Most existing approaches are rule-based, while data-driven ones remain under-explored. This thesis introduces a novel domain-independent, data-driven TIE strategy. The identification strategy is based on machine learning sequence labelling classifiers over features selected through an extensive exploration. Results are further optimised using an a posteriori label-adjustment pipeline. The normalisation strategy is rule-based and builds on a pre-existing system. The methodology has been applied to both a specific (clinical) and a general domain, and has been officially benchmarked at the i2b2/2012 and TempEval-3 challenges, ranking 3rd and 1st respectively. The results show the TIE task to be more challenging in the clinical domain (overall accuracy 63%) than in the general domain (overall accuracy 69%). Finally, this thesis also presents two applications of TIE. One introduces the concept of the temporal footprint of a Wikipedia article and uses it to mine the life spans of persons. In the other, TIE techniques are used to improve pre-existing information retrieval systems by filtering out temporally irrelevant results.
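The identification step described above is a sequence labelling problem. The following is a hedged toy sketch of that setup: BIO tagging of temporal expressions with per-token features and an off-the-shelf classifier. The feature set, training sentences and choice of classifier are illustrative assumptions, not the thesis's actual feature selection or labelling model.

```python
# Toy BIO sequence labelling of temporal expressions (TIMEX spans).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(tokens, i):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "is_title": tok.istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# Each token is labelled B/I/O with respect to a temporal expression.
sentences = [
    (["Admitted", "on", "12", "March", "2010", "."], ["O", "O", "B", "I", "I", "O"]),
    (["Symptoms", "lasted", "three", "weeks", "."],   ["O", "O", "B", "I", "O"]),
]
X = [token_features(toks, i) for toks, tags in sentences for i in range(len(toks))]
y = [tag for _, tags in sentences for tag in tags]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)

test = ["Discharged", "on", "5", "April", "2010"]
print(model.predict([token_features(test, i) for i in range(len(test))]))
```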
904

Machine learning algorithms for the analysis and detection of network attacks

Unknown Date (has links)
The Internet and computer networks have become an important part of our organizations and everyday life. With the increase in our dependence on computers and communication networks, malicious activities have become increasingly prevalent. Network attacks are an important problem in today’s communication environments. Network traffic must be monitored and analyzed to detect malicious activities and attacks, to ensure reliable functionality of the networks and the security of users’ information. Recently, machine learning techniques have been applied to the detection of network attacks. Machine learning models are able to extract similarities and patterns in the network traffic, and unlike signature-based methods, there is no need for manual analysis to extract attack patterns: machine learning algorithms can automatically build predictive models for the detection of network attacks. This dissertation reports an empirical analysis of the use of machine learning methods for the detection of network attacks. For this purpose, we study the detection of three common attacks in computer networks: SSH brute force, Man In The Middle (MITM) and application layer Distributed Denial of Service (DDoS) attacks. Using outdated and non-representative benchmark data, such as the DARPA dataset, in the intrusion detection domain has caused a practical gap between building detection models and their actual deployment in a real computer network. To alleviate this limitation, we collect representative network data from a real production network for each attack type. Our analysis of each attack includes a detailed study of the use of machine learning methods for its detection. This includes the motivation behind the proposed machine learning based detection approach, the data collection process, feature engineering, building predictive models and evaluating their performance. We also investigate the application of feature selection in building detection models for network attacks. Overall, this dissertation presents a thorough analysis of how machine learning techniques can be used to detect network attacks. We not only study a broad range of network attacks, but also study the application of different machine learning methods including classification, anomaly detection and feature selection for their detection at the host level and the network level. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2017. / FAU Electronic Theses and Dissertations Collection
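A compact, generic sketch of the workflow described above (supervised detection on labelled flow features plus feature selection) is given below. The feature matrix, labels and model choice are placeholders, not the dissertation's collected production-network data or its specific detection models.

```python
# Generic supervised attack-detection sketch: feature selection + classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# placeholder flow features (e.g. packet counts, byte counts, durations, flag counts)
X = rng.normal(size=(2000, 20))
y = rng.integers(0, 2, size=2000)          # 0 = benign, 1 = attack (e.g. SSH brute force)
X[y == 1, :3] += 1.0                       # give "attack" flows a weak synthetic signature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
detector = make_pipeline(
    SelectKBest(f_classif, k=10),                       # keep the 10 most informative features
    RandomForestClassifier(n_estimators=200, random_state=0),
)
detector.fit(X_tr, y_tr)
print(classification_report(y_te, detector.predict(X_te)))
```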
905

Investigation on Bayesian Ying-Yang learning for model selection in unsupervised learning. / CUHK electronic theses & dissertations collection / Digital dissertation consortium

January 2005 (has links)
Model selection is a critical issue in unsupervised learning. Conventionally, model selection is implemented in two phases using a statistical model selection criterion such as Akaike's information criterion (AIC), Bozdogan's consistent Akaike's information criterion (CAIC), Schwarz's Bayesian inference criterion (BIC), which formally coincides with the minimum description length (MDL) criterion, or the cross-validation (CV) criterion. These methods are very time intensive and may become problematic when the sample size is small. Recently, Bayesian Ying-Yang (BYY) harmony learning has been developed as a unified framework with new mechanisms for model selection and regularization. In this thesis we make a systematic investigation of BYY learning as well as several typical model selection criteria for model selection on factor analysis models, Gaussian mixture models, and factor analysis mixture models. / For factor analysis models, we develop an improved BYY harmony data smoothing learning criterion, BYY-HDS, by taking into account the dependence between the factors and observations. We make empirical comparisons of the BYY harmony empirical learning criterion BYY-HEC, BYY-HDS, the BYY automatic model selection method BYY-AUTO, AIC, CAIC, BIC, and CV for selecting the number of factors, not only on simulated data sets with different sample sizes, noise variances, data dimensions and factor numbers, but also on two real data sets, air pollution data and sports track records. / The most remarkable finding of our study is that BYY-HDS is superior to its counterparts, especially when the sample size is small. AIC, BYY-HEC, BYY-AUTO and CV have a risk of overestimating, while BIC and CAIC have a risk of underestimating in most cases. BYY-AUTO is superior to the other methods from a computational cost point of view; the cross-validation method requires the highest computing cost. (Abstract shortened by UMI.) / Hu Xuelei. / "November 2005." / Adviser: Lei Xu. / Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3899. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 131-142). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
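For readers unfamiliar with the conventional two-phase procedure the thesis compares against, the sketch below fits factor analysis models with different numbers of factors and scores each with AIC and BIC. It is a hedged illustration only: the free-parameter count is a simplified approximation, and this is not the BYY harmony criterion itself.

```python
# Two-phase model selection for the number of factors via AIC/BIC.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, d, true_k = 300, 10, 3
W = rng.normal(size=(d, true_k))
X = rng.normal(size=(n, true_k)) @ W.T + 0.5 * rng.normal(size=(n, d))

scores = {}
for k in range(1, 7):
    fa = FactorAnalysis(n_components=k).fit(X)
    loglik = fa.score(X) * n              # score() returns the average log-likelihood per sample
    n_params = d * k + d                  # loadings + noise variances (approximate count)
    aic = -2 * loglik + 2 * n_params
    bic = -2 * loglik + np.log(n) * n_params
    scores[k] = (aic, bic)

for k, (aic, bic) in scores.items():
    print(f"k={k}: AIC={aic:.1f}  BIC={bic:.1f}")
print("AIC choice:", min(scores, key=lambda k: scores[k][0]),
      "| BIC choice:", min(scores, key=lambda k: scores[k][1]))
```

Running many such fits for every candidate model is exactly the computational burden (and small-sample fragility) that motivates the one-phase BYY approaches discussed above.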
906

Artificial intelligence system for continuous affect estimation from naturalistic human expressions

Abd Gaus, Yona Falinie January 2018 (has links)
The analysis and automatic estimation of affect from human expressions has been acknowledged as an active research topic in the computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined acted expressions under very controlled conditions, so they are not robust enough for real-life recognition tasks with subject variation, varying acoustic surroundings and illumination changes. In this thesis, an artificial intelligence system is proposed to continuously (along a continuum, e.g. from -1 to +1) estimate affect in terms of latent dimensions (e.g. arousal and valence) from naturalistic human expressions. To tackle these issues, feature representation and machine learning strategies are addressed.

In feature representation, human expression is represented by modalities such as audio, video, physiological signals and text. Hand-crafted features are extracted from each modality per frame in order to match the consecutive affect labels. However, the extracted features may be missing information due to factors such as background noise or lighting conditions. The Haar Wavelet Transform is employed to determine whether a noise cancellation mechanism in feature space should be considered in the design of the affect estimation system. In addition to hand-crafted features, deep learning features are also analysed layer-wise, at the convolutional and fully connected layers. Convolutional Neural Networks such as AlexNet, VGGFace and ResNet are selected as deep learning architectures for feature extraction from facial expression images. A multimodal fusion scheme is then applied, fusing deep learning and hand-crafted features to improve performance.

In machine learning strategies, a two-stage regression approach is introduced. In the first stage, baseline regression methods such as Support Vector Regression are applied to estimate each affect dimension frame by frame. In the second stage, a subsequent model such as a Time Delay Neural Network, Long Short-Term Memory network or Kalman Filter is used to model the temporal relationships between consecutive estimates of each affect dimension. In doing so, the temporal information employed by the subsequent model is not biased by the high variability present in consecutive frames, and at the same time it allows the network to exploit the slowly changing emotional dynamics more efficiently. Building on the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is then elaborated. Continuous emotion recognition in-the-wild is addressed by investigating mathematical modelling for each emotion dimension: Linear Regression, Exponent Weighted Decision Fusion and Multi-Gene Genetic Programming are implemented to quantify the relationship between the modalities.

In summary, the research presented in this thesis develops an approach to automatically and continuously estimate affect values from naturalistic human expression. The proposed system, which consists of feature smoothing, deep learning features, a two-stage regression framework and fusion across modalities using mathematical equations, is demonstrated. It offers a strong basis for the development of artificial intelligence systems for continuous affect estimation, and more broadly for building real-time emotion recognition systems for human-computer interaction.
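The two-stage structure described above can be illustrated compactly: a frame-level regressor produces a noisy per-frame estimate, and a second, temporal stage exploits the slowly varying dynamics. In the hedged sketch below, SVR is the first stage and a simple exponential smoother stands in for the thesis's TDNN/LSTM/Kalman models; the features and labels are synthetic placeholders.

```python
# Two-stage regression sketch: frame-wise SVR, then a temporal second stage.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
T, n_feat = 500, 30
arousal = np.sin(np.linspace(0, 6, T)) + 0.05 * rng.normal(size=T)   # slowly varying target
features = rng.normal(size=(T, n_feat))                               # per-frame features
features[:, 0] = arousal + 0.3 * rng.normal(size=T)                   # one weakly informative feature

# Stage 1: frame-wise regression
split = 350
svr = SVR(C=1.0, epsilon=0.05).fit(features[:split], arousal[:split])
frame_pred = svr.predict(features[split:])

# Stage 2: temporal model over consecutive stage-1 estimates
def exponential_smooth(x, alpha=0.1):
    out = np.empty_like(x)
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = alpha * x[t] + (1 - alpha) * out[t - 1]
    return out

smoothed = exponential_smooth(frame_pred)
print("stage-1 RMSE:", np.sqrt(np.mean((frame_pred - arousal[split:]) ** 2)).round(3))
print("stage-2 RMSE:", np.sqrt(np.mean((smoothed - arousal[split:]) ** 2)).round(3))
```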
907

Improving radiotherapy using image analysis and machine learning

Montgomery, Dean January 2016 (has links)
With ever increasing advancements in imaging, there is a growing abundance of images being acquired in the clinical environment. This increase in information can be a burden as well as a blessing, as significant amounts of time may be required to interpret these images. Computer-assisted evaluation is one way in which better use could be made of them. This thesis presents the combination of texture analysis of images acquired during the treatment of cancer with machine learning in order to improve radiotherapy.

The first application is the prediction of radiation induced pneumonitis. In 13-37% of cases, lung cancer patients treated with radiotherapy develop radiation induced lung disease, such as radiation induced pneumonitis. Three-dimensional texture analysis, combined with patient-specific clinical parameters, was used to compute unique features. On radiotherapy planning CT data of 57 patients (14 symptomatic, 43 asymptomatic), a Support Vector Machine (SVM) obtained an area under the receiver operator curve (AUROC) of 0.873 with sensitivity, specificity and accuracy of 92%, 72% and 87% respectively. Furthermore, it was demonstrated that a Decision Tree classifier was capable of a similar level of performance using sub-regions of the lung volume.

The second application relates to prostate cancer identification. T2 MRI scans are used in the diagnosis of prostate cancer and in the identification of the primary cancer within the prostate gland. The manual identification of the cancer relies on the assessment of multiple scans and the integration of clinical information by a clinician, which requires considerable experience and time. As MRI becomes more integrated within the radiotherapy workflow and as adaptive radiotherapy (where the treatment plan is modified based on multi-modality image information acquired during or between RT fractions) develops, it is timely to develop automatic segmentation techniques for reliably identifying cancerous regions. In this work a number of texture features were coupled with a supervised learning model for the automatic segmentation of the main cancerous focus in the prostate, the focal lesion. A mean AUROC of 0.713 was demonstrated with a 10-fold stratified cross-validation strategy on an aggregate data set. On a leave-one-case-out basis a mean AUROC of 0.60 was achieved, which resulted in a mean DICE coefficient of 0.710. These results showed that it was possible to delineate the focal lesion in the majority (11) of the 14 cases used in the study.
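The pipeline for the pneumonitis application (texture features plus clinical variables, classified with an SVM and evaluated by AUROC) has the overall shape sketched below. This is a heavily simplified, hedged illustration on synthetic volumes: the real work used richer 3D texture features and real planning CT data, neither of which is reproduced here.

```python
# Simplified texture-features + SVM pipeline evaluated with cross-validated AUROC.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def texture_features(volume):
    """First-order statistics of voxel intensities in a region of interest."""
    v = volume.ravel()
    return [v.mean(), v.std(), skew(v), kurtosis(v), np.percentile(v, 90)]

# synthetic cohort: 57 "patients", each a small sub-volume plus 2 clinical variables
X, y = [], []
for i in range(57):
    symptomatic = i < 14
    vol = rng.normal(loc=0.3 if symptomatic else 0.0, scale=1.0, size=(16, 16, 16))
    clinical = [rng.normal(), rng.normal()]        # placeholder clinical parameters
    X.append(texture_features(vol) + clinical)
    y.append(int(symptomatic))
X, y = np.array(X), np.array(y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("mean AUROC:", auc.mean().round(3))
```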
908

Pulsar Search Using Supervised Machine Learning

Ford, John M. 01 January 2017 (has links)
Pulsars are rapidly rotating neutron stars which emit a strong beam of energy through mechanisms that are not entirely clear to physicists. These very dense stars are used by astrophysicists to study many basic physical phenomena, such as the behavior of plasmas in extremely dense environments, the behavior of pulsar-black hole pairs, and tests of general relativity. Many of these studies require a larger sample of known pulsars to answer the scientific questions posed by physicists. In order to provide more pulsars to study, several large-scale pulsar surveys are underway, which are generating a huge backlog of unprocessed data. Searching for pulsars is a very labor-intensive process, currently requiring skilled people to examine and interpret plots of data output by analysis programs. An automated system for screening the plots will speed up the search for pulsars by a very large factor. Research to date on using machine learning and pattern recognition has not yielded a completely satisfactory system, as systems with the desired near-100% recall have false positive rates that are higher than desired, causing more manual labor in the classification of pulsars. This work set out to research, identify and develop methods to overcome the barriers to building an improved classification system with a false positive rate of less than 1% and a recall of near 100% that will be useful for the current and next generation of large pulsar surveys. The results show that it is possible to generate classifiers that perform as needed from the available training data. While a false positive rate of 1% was not reached, recall of over 99% was achieved with a false positive rate of less than 2%. Methods of mitigating the imbalanced training and test data were explored and found to be highly effective in enhancing classification accuracy.
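One common way to handle the class imbalance and the recall/false-positive trade-off discussed above is to weight the rare class and then tune the decision threshold. The sketch below is an illustrative assumption, not the dissertation's actual feature set, classifier or data.

```python
# Imbalanced classification sketch: class weighting plus threshold tuning
# to keep recall near 100% while monitoring the false positive rate.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_cand, n_feat = 20000, 8
X = rng.normal(size=(n_cand, n_feat))
y = (rng.random(n_cand) < 0.01).astype(int)          # ~1% true pulsar candidates
X[y == 1] += 1.5                                     # give real pulsars a separable signature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
weights = np.where(y_tr == 1, (y_tr == 0).sum() / max((y_tr == 1).sum(), 1), 1.0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr, sample_weight=weights)

scores = clf.predict_proba(X_te)[:, 1]
for thr in (0.5, 0.2, 0.05):
    pred = scores >= thr
    recall = pred[y_te == 1].mean()
    fpr = pred[y_te == 0].mean()
    print(f"threshold={thr:.2f}  recall={recall:.3f}  false positive rate={fpr:.4f}")
```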
909

Avaliação automática da qualidade de escrita de resumos científicos em inglês / Automatic evaluation of the quality of English abstracts

Luiz Carlos Genoves Junior 01 June 2007 (has links)
Problemas com a escrita podem afetar o desempenho de profissionais de maneira marcante, principalmente no caso de cientistas e acadêmicos que precisam escrever com proficiência e desembaraço não somente na língua materna, mas principalmente em inglês. Durante os últimos anos, ferramentas de suporte à escrita, algumas com enfoque em textos científicos, como o AMADEUS e o SciPo, foram desenvolvidas e têm auxiliado pesquisadores na divulgação de suas pesquisas. Entretanto, a criação dessas ferramentas é baseada em córpus, sendo muito custosa, pois implica em selecionar textos bem escritos, além de segmentá-los de acordo com sua estrutura esquemática. Nesse mestrado estudamos, avaliamos e implementamos métodos de detecção automática da estrutura esquemática e de avaliação automática da qualidade de escrita de resumos científicos em inglês. Investigamos o uso de tais métodos para possibilitar o desenvolvimento de dois tipos de ferramentas: de detecção de bons resumos e de crítica. Nossa abordagem é baseada em córpus e em aprendizado de máquina supervisionado. Desenvolvemos um detector automático da estrutura esquemática, que chamamos de AZEA, com taxa de acerto de 80,4% e Kappa de 0,73, superiores ao estado da arte (acerto de 73%, Kappa de 0,65). Experimentamos várias combinações de algoritmos, atributos e diferentes seções de um artigo científico. Utilizamos o AZEA na implementação de duas dimensões de uma rubrica para o gênero científico, composta de 7 dimensões, e construímos e disponibilizamos uma ferramenta de crítica da estrutura de um resumo. Um detector de erros de uso de artigo também foi desenvolvido, com precisão de 83,7% (Kappa de 0,63) para a tarefa de decidir entre omitir ou não um artigo, com enfoque no feedback ao usuário e como parte da implementação da dimensão de erros gramaticais da rubrica. Na tarefa de detectar bons resumos, utilizamos métodos usados com sucesso na avaliação automática da qualidade de escrita de redações com as implementações da rubrica e realizamos experimentos iniciais, ainda com resultados fracos, próximos à baseline. Embora não tenhamos construído um bom avaliador automático da qualidade de escrita, acreditamos que este trabalho indica direções para atingir esta meta, e forneça algumas das ferramentas necessárias / Poor writing may have serious implications for a professional's career. This is even more serious in the case of scientists and academics whose job requires fluency and proficiency in their mother tongue as well as in English. This is why a number of writing tools have been developed in order to assist researchers to promote their work. Here, we are particularly interested in tools, such as AMADEUS and SciPo, which focus on scientific writing. AMADEUS and SciPo are corpus-based tools and hence they rely on corpus compilation, which is by no means an easy task. In addition to the difficult task of selecting well-written texts, it also requires segmenting these texts according to their schematic structure. The present dissertation aims to investigate, evaluate and implement some methods to automatically detect the schematic structure of English abstracts and to automatically evaluate their quality. These methods have been examined with a view to enabling the development of two types of tools, namely: detection of well-written abstracts and a critique tool. For automatically detecting schematic structures, we have developed a tool, named AZEA, which adopts a corpus-based, supervised machine learning approach.
AZEA reaches 80.4% accuracy and a Kappa of 0.73, which are above the highest rates reported in the literature so far (73% accuracy, Kappa of 0.65). We have tested a number of different combinations of algorithms, features and different paper sections. AZEA has been used to implement two out of seven dimensions of a rubric for analyzing scientific papers. A critique tool for evaluating the structure of abstracts has also been developed and made available. In addition, our work also includes the development of a classifier for identifying errors related to English article usage. This classifier reaches 83.7% accuracy (Kappa of 0.63) in the task of deciding whether or not a given English noun phrase requires an article. If implemented in the dimension of grammatical errors of the above-mentioned rubric, it can be used to give users feedback on their errors. As regards the task of detecting well-written abstracts, we have resorted to methods which have been successfully adopted to evaluate the quality of essays, and some preliminary tests have been carried out. However, our results are not yet satisfactory since they are not much above the baseline. Despite this drawback, we believe this study proves relevant since, in addition to offering some of the necessary tools, it provides some fundamental guidelines towards the automatic evaluation of the quality of texts.
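The schematic-structure detection task described above amounts to classifying each abstract sentence into a rhetorical category. The toy sketch below illustrates that kind of corpus-based, supervised setup with bag-of-words features; the category labels, example sentences and classifier are illustrative assumptions, not AZEA's actual feature set or categories.

```python
# Toy sentence-level schematic-structure ("zone") classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    ("Writing quality strongly affects how research is received.", "BACKGROUND"),
    ("However, existing tools do not assess abstract structure.",  "GAP"),
    ("This paper presents a classifier of abstract components.",   "PURPOSE"),
    ("We train a supervised model on annotated abstracts.",        "METHOD"),
    ("The system reaches 80% accuracy on held-out data.",          "RESULT"),
    ("These results suggest automatic feedback is feasible.",      "CONCLUSION"),
]
texts, labels = zip(*sentences)

zoner = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
zoner.fit(texts, labels)

print(zoner.predict(["We describe a machine learning method for structure detection."]))
```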
910

Discretização e geração de gráficos de dados em aprendizado de máquina / Attribute discretization and graphics generation in machine learning

Richardson Floriani Voltolini 17 November 2006 (has links)
A elevada quantidade e variedade de informações adquirida e armazenada em meio eletrônico e a incapacidade humana de analisá-las têm motivado o desenvolvimento da área de Mineração de Dados - MD - que busca, de maneira semi-automática, extrair conhecimento novo e útil de grandes bases de dados. Uma das fases do processo de MD é o pré-processamento dessas bases de dados. O pré-processamento de dados tem como alguns de seus principais objetivos possibilitar que o usuário do processo obtenha maior compreensão dos dados utilizados, bem como tornar os dados mais adequados para as próximas fases do processo de MD. Uma técnica que busca auxiliar no primeiro objetivo citado é a geração de gráficos de dados, a qual consiste na representação gráfica dos registros (exemplos) de uma base de dados. Existem diversos métodos de geração de gráficos, cada qual com suas características e objetivos. Ainda na fase de pré-processamento, de modo a tornar os dados brutos mais adequados para as demais fases do processo de MD, diversas técnicas podem ser aplicadas, promovendo transformações nesses dados. Uma delas é a discretização de dados, que transforma um atributo contínuo da base de dados em um atributo discreto. Neste trabalho são abordados alguns métodos de geração de gráficos e de discretização de dados freqüentemente utilizados pela comunidade. Com relação aos métodos de geração de gráficos, foi projetado e implementado o sistema DISCOVERGRAPHICS que provê interfaces para a geração de gráficos de dados. As diferentes interfaces criadas permitem a utilização do sistema por usuários avançados, leigos e por outros sistemas computacionais. Com relação ao segundo assunto abordado neste trabalho, discretização de dados, foram considerados diversos métodos de discretização supervisionados e não-supervisionados, freqüentemente utilizados pela comunidade, e foi proposto um novo método não-supervisionado denominado K-MeansR. Esses métodos foram comparados entre si por meio da realização de experimentos e análise estatística dos resultados, considerando-se diversas medidas para realizar a avaliação. Os resultados obtidos indicam que o método proposto supera vários dos métodos de discretização considerados / The great quantity and variety of information acquired and stored electronically, and the lack of human capacity to analyze it, have motivated the development of Data Mining - DM - a process that attempts to extract new and useful knowledge from databases. One of the steps of the DM process is data preprocessing. The main goals of the data preprocessing step are to enable the user to have a better understanding of the data being used and to transform the data so it is appropriate for the next step of the DM process related to pattern extraction. A technique concerning the first goal consists of the graphic representation of records (examples) of databases. There are various methods to generate these graphic representations, each one with its own characteristics and objectives. Furthermore, still in the preprocessing step, and in order to transform the raw data into a more suitable form for the next step of the DM process, various data discretization techniques, which transform continuous database attribute values into discrete ones, can be applied. This work presents some frequently used methods of graph generation and data discretization. Related to the graph generation methods, we have developed a system called DISCOVERGRAPHICS, which offers different interfaces for graph generation.
These interfaces allow both advanced and beginner users, as well as other systems, to access the DISCOVERGRAPHICS system facilities. Regarding the second subject of this work, data discretization, we considered various supervised and unsupervised methods and proposed a new unsupervised data discretization method called K-MeansR. Using different evaluation measures and databases, all these methods were experimentally compared to each other and statistical tests were run to analyze the experimental results. These results showed that the proposed method performed better than many of the other data discretization methods considered in this work.
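As a hedged illustration of k-means-based unsupervised discretization in the spirit of the proposed K-MeansR method, the sketch below compares uniform, quantile and k-means binning of one continuous attribute. This uses scikit-learn's generic k-means binning, not the thesis's exact algorithm or its evaluation measures.

```python
# Unsupervised discretization of a continuous attribute with three strategies.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
# a continuous attribute with three natural clusters
attr = np.concatenate([rng.normal(0, 0.5, 200),
                       rng.normal(5, 0.7, 200),
                       rng.normal(12, 1.0, 200)]).reshape(-1, 1)

for strategy in ("uniform", "quantile", "kmeans"):
    disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy=strategy)
    codes = disc.fit_transform(attr).ravel()
    edges = np.round(disc.bin_edges_[0], 2)
    print(f"{strategy:>8}: bin edges = {edges}, bin sizes = {np.bincount(codes.astype(int))}")
```

On data with natural groupings, the k-means strategy places cut points between the clusters, whereas uniform binning can split a cluster and quantile binning can merge parts of different clusters, which is the motivation for cluster-based discretization.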
