131

Increasing the Precision of Forest Area Estimates through Improved Sampling for Nearest Neighbor Satellite Image Classification

Blinn, Christine Elizabeth 25 August 2005 (has links)
The impacts of training data sample size and sampling method on the accuracy of forest/nonforest classifications of three mosaicked Landsat ETM+ images with the nearest neighbor decision rule were explored. Large training data pools of single pixels were used in simulations to create samples with three sampling methods (random, stratified random, and systematic) and eight sample sizes (25, 50, 75, 100, 200, 300, 400, and 500). Two forest area estimation techniques were used to estimate the proportion of forest in each image and to calculate precision estimates for forest area. Training data editing was explored to remove problem pixels from the training data pools. All possible band combinations of the six non-thermal ETM+ bands were evaluated for every sample draw, and classification accuracies were compared to determine whether all six bands were needed. The utility of separability indices, minimum and average Euclidean distances, and cross-validation accuracies for selecting band combinations, predicting classification accuracies, and assessing sample quality was determined. Larger training data sample sizes produced classifications with higher average accuracies and lower variability. All three sampling methods performed similarly. Training data editing improved the average classification accuracies by a minimum of 5.45%, 5.31%, and 3.47%, respectively, for the three images. Band combinations with fewer than all six bands almost always produced the maximum classification accuracy for a single sample draw. The number and combination of bands that maximized classification accuracy depended on the characteristics of the individual training data sample draw, the image, the sample size, and, to a lesser extent, the sampling method. None of the three band selection measures was able to select band combinations that produced higher accuracies on average than all six bands. Cross-validation accuracies at sample size 500 correlated highly with classification accuracies and provided an indication of sample quality. Collection of a high-quality training data sample is key to the performance of the nearest neighbor classifier. Larger samples are necessary to guarantee classifier performance and the utility of cross-validation accuracies. Further research is needed to identify the characteristics of "good" training data samples. / Ph. D.
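A minimal sketch of the band-combination search described above, assuming scikit-learn is available; the pixel values, labels, and pool size are synthetic stand-ins for the thesis's Landsat training data:

```python
from itertools import combinations

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))        # training pixels x six non-thermal ETM+ bands (synthetic)
y = rng.integers(0, 2, size=500)     # 0 = nonforest, 1 = forest (synthetic labels)

best_bands, best_score = None, -np.inf
for r in range(1, 7):                # every band combination of size 1..6
    for bands in combinations(range(6), r):
        clf = KNeighborsClassifier(n_neighbors=1)   # nearest neighbor decision rule
        score = cross_val_score(clf, X[:, list(bands)], y, cv=5).mean()
        if score > best_score:
            best_bands, best_score = bands, score

print(best_bands, round(best_score, 3))
```

With real data, the inner cross-validation score is the same quantity the study examines as a predictor of classification accuracy and sample quality.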
132

Precision Aggregated Local Models

Edwards, Adam Michael 28 January 2021 (has links)
Large-scale Gaussian process (GP) regression is infeasible for large data sets due to the cubic scaling of flops and quadratic storage involved in working with covariance matrices. Remedies in recent literature focus on divide-and-conquer, e.g., partitioning into sub-problems and inducing functional (and thus computational) independence. Such approximations can be speedy, accurate, and sometimes even more flexible than an ordinary GP. However, a big downside is loss of continuity at partition boundaries. Modern methods like local approximate GPs (LAGPs) imply effectively infinite partitioning and are thus pathologically good and bad in this regard. Model averaging, an alternative to divide-and-conquer, can maintain absolute continuity but often over-smooths, diminishing accuracy. Here I propose putting LAGP-like methods into a local experts-like framework, blending partition-based speed with model-averaging continuity, as a flagship example of what I call precision aggregated local models (PALM). Using N_C LAGPs, each selecting n from N data pairs, I illustrate a scheme that is at most cubic in n, quadratic in N_C, and linear in N, drastically reducing computational and storage demands. Extensive empirical illustration shows that PALM is at least as accurate as LAGP, can be much faster, and furnishes continuous predictive surfaces. Finally, I propose a sequential updating scheme that greedily refines a PALM predictor up to a computational budget, and several variations on the basic PALM that may provide predictive improvements. / Doctor of Philosophy / Occasionally, when describing the relationship between two variables, it may be helpful to use a so-called "non-parametric" regression that is agnostic to the function that connects them. Gaussian processes (GPs) are a popular method of non-parametric regression used for their relative flexibility and interpretability, but they have the unfortunate drawback of being computationally infeasible for large data sets. Past work on solving the scaling issues for GPs has focused on "divide and conquer" style schemes that spread the data out across multiple smaller GP models. While these models make GP methods much more accessible for large data sets, they do so at the expense of either local predictive accuracy or global surface continuity. Precision aggregated local models (PALM) is a novel divide-and-conquer method for GP models that is scalable for large data while maintaining local accuracy and a smooth global model. I demonstrate that PALM can be built quickly and performs well predictively compared to other state-of-the-art methods. This document also provides a sequential algorithm for selecting the location of each local model, and variations on the basic PALM methodology.
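For intuition only, a toy one-dimensional local-GP predictor in the spirit of LAGP-style methods (not the PALM implementation): each prediction site conditions on just the n nearest of N training points, so the cubic cost applies to n << N. The kernel, lengthscale, and noise level below are assumptions.

```python
import numpy as np

def rbf(a, b, lengthscale=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def local_gp_predict(X, y, xstar, n=30, noise=1e-4):
    preds = np.empty(len(xstar))
    for i, xs in enumerate(xstar):
        idx = np.argsort(np.abs(X - xs))[:n]      # n nearest neighbors of the site
        Xn, yn = X[idx], y[idx]
        K = rbf(Xn, Xn) + noise * np.eye(n)       # n x n system: cheap to solve
        k = rbf(Xn, np.array([xs]))[:, 0]
        preds[i] = k @ np.linalg.solve(K, yn)     # GP predictive mean
    return preds

X = np.linspace(0, 1, 2000)                       # N = 2000 training points
y = np.sin(8 * X) + 0.05 * np.random.default_rng(1).normal(size=X.size)
xstar = np.linspace(0, 1, 200)
print(local_gp_predict(X, y, xstar)[:5])
```

PALM's contribution, per the abstract, is aggregating many such local fits with precision weights so the stitched-together surface stays continuous; this toy version predicts independently per site and therefore does not.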
133

Machine Learning Models in Fullerene/Metallofullerene Chromatography Studies

Liu, Xiaoyang 08 August 2019 (has links)
Machine learning methods are now extensively applied across scientific research to build models. Unlike conventional models, machine-learning-based models take a data-driven approach: the algorithms can learn patterns from available data that are hard to recognize otherwise, shifting more of the work onto algorithms and computers and accelerating computation. In this thesis, we explore the possibility of applying machine learning models to the prediction of chromatographic retention behaviors. Chromatographic separation is a key technique for the discovery and analysis of fullerenes. In previous studies, differential equation models have achieved great success in predicting chromatographic retention. However, most differential equation models require experimental measurements or theoretical computations for many parameters, which are not easy to obtain. Fullerenes/metallofullerenes are rigid, spherical molecules containing only carbon atoms, which makes predicting their chromatographic retention behaviors, as well as other properties, much simpler than for flexible molecules with more conformational variation. In this thesis, I propose that the polarizability of a fullerene molecule can be estimated directly from its structure. Structural motifs are used to simplify the model, and the motif-based models provide satisfactory predictions. The data set contains 31,947 isomers and their polarizability data and is split into a training set with 90% of the data points and a complementary testing set. In addition, a second testing set of large fullerene isomers is prepared to test whether a model trained on small fullerenes gives good predictions for large fullerenes. / Machine learning models can be applied in a wide range of areas, including scientific research. In this thesis, machine learning models are applied to predict the chromatography behaviors of fullerenes based on their molecular structures. Chromatography is a common technique for separating mixtures; the separation arises from differences in the interactions between molecules and a stationary phase. In real experiments, a mixture usually contains a large family of different compounds, and isolating the target compound requires considerable work and resources. Models are therefore extremely important for studies of chromatography. Traditional models are built on physical principles and involve several parameters, which must be measured experimentally or computed theoretically; both routes are time-consuming and difficult. For fullerenes, my previous studies have shown that the chromatography model can be simplified so that only one parameter, polarizability, is required. A machine learning approach is introduced to enhance the model by predicting the molecular polarizabilities of fullerenes from their structures. The structure of a fullerene is represented by several local structures. Several types of machine learning models are built and tested on our data set, and the results show that the neural network gives the best predictions.
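A hedged sketch of the motif idea: represent each isomer by counts of local structural motifs and regress polarizability on those counts with a small neural network. The motif counts and polarizability values below are fabricated; only the 90/10 split mirrors the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_isomers, n_motifs = 2000, 10
X = rng.integers(0, 12, size=(n_isomers, n_motifs)).astype(float)           # motif counts (fabricated)
y = X @ rng.normal(size=n_motifs) + rng.normal(scale=0.1, size=n_isomers)   # "polarizability"

# 90% training / 10% testing, mirroring the split described in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```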
134

Machine Learning for Malware Detection in Network Traffic

Omopintemi, A.H., Ghafir, Ibrahim, Eltanani, S., Kabir, Sohag, Lefoane, Moemedi 19 December 2023 (has links)
Developing advanced and efficient malware detection systems is becoming significant in light of the growing threat landscape in cybersecurity. This work aims to tackle the enduring problem of identifying malware and protecting digital assets from cyber-attacks. Conventional methods frequently prove ineffective in adjusting to the ever-evolving field of harmful activity, so novel approaches are needed that improve precision while taking into account the ever-changing landscape of modern cybersecurity problems. To address this, the research focuses on the detection of malware in network traffic. This work proposes a machine-learning-based approach for malware detection, with particular attention to the Random Forest (RF), Support Vector Machine (SVM), and AdaBoost algorithms. Model performance was evaluated using an assessment matrix that included Accuracy (AC) for overall performance, Precision (PC) for positive predictive value, Recall Score (RS) for genuine positives, and F1 Score (SC) for a balanced viewpoint. A performance comparison reveals that the model built with AdaBoost performs best; the TPR for all three classifiers exceeds 97% and the FPR stays below 4%. The model created in this paper has the potential to help organizations and experts anticipate and handle malware, and can be used to make forecasts and provide management solutions in a network's everyday operational activities.
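A sketch of the evaluation pipeline named above, assuming scikit-learn; the feature matrix is a synthetic stand-in, whereas a real run would use features extracted from network traffic:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)  # stand-in flows
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC()),
                  ("AdaBoost", AdaBoostClassifier(random_state=0))]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "AC", round(accuracy_score(y_te, y_hat), 3),   # overall performance
          "PC", round(precision_score(y_te, y_hat), 3),  # positive predictive value
          "RS", round(recall_score(y_te, y_hat), 3),     # true positives found
          "SC", round(f1_score(y_te, y_hat), 3))         # balanced view
```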
135

Machine Learning Algorithms to Predict Cost Account Codes in an ERP System: An Exploratory Case Study

Wirdemo, Alexander January 2023 (has links)
This study aimed to investigate how machine learning (ML) algorithms can be used to predict the cost account code to be used when handling invoices in an Enterprise Resource Planning (ERP) system commonly found in the Swedish public sector. This meant testing which of the chosen algorithms performs best and what criteria must be met for it to do so. Previous studies on ML in invoice classification have focused on either the accounts payable or the accounts receivable side of the balance sheet. Those studies used a variety of methods, involving not only common ML algorithms such as Random forest, Naïve Bayes, Decision tree, Support Vector Machine, Logistic regression, Neural network, and k-Nearest Neighbor, but also other classifiers such as rule classifiers and naïve classifiers. The general conclusion from previous studies is that several algorithms can classify invoices with satisfactory accuracy, and that Random forest, Naïve Bayes, and Neural network have shown the most promising results. The study was performed as an exploratory case study. The case company was a small municipality where finance clerks handle received invoices through an ERP system. The accounting step of invoice handling involves selecting the proper cost account code before submitting the invoice for review and approval. The data used were invoice summaries holding the organization number, bankgiro, postgiro, and account code used. The algorithms selected for the task were the supervised learning algorithms Random forest and Naïve Bayes and the instance-based algorithm k-Nearest Neighbor (k-NN). The findings indicate that ML could be used to predict the cost account code by providing a pre-filled suggestion when the clerk opens the invoice. Among the algorithms tested, Random forest performed best with 78% accuracy (Naïve Bayes and k-NN performed at 69% and 70% accuracy, respectively). One reason is Random forest's ability to handle several input variables, generate an unbiased estimate of the generalization error, and provide information about the relationship between the variables and the classification. However, a high level of support is needed for the algorithm to perform at its best; 335 occurrences is a guiding number in this case.
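The case study's setup might be prototyped as below (a sketch, not the study's code): predict an account code from supplier identifiers with the three algorithms named, compared by cross-validated accuracy. The invoice rows are fabricated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import OrdinalEncoder

rng = np.random.default_rng(0)
# Fabricated invoice summaries: organization number, bankgiro, postgiro per row
suppliers = rng.integers(0, 50, size=(3000, 3)).astype(str)
codes = suppliers[:, 0].astype(int) % 12          # account code tied to the supplier

X = OrdinalEncoder().fit_transform(suppliers)     # categorical identifiers -> integers
for name, clf in [("Random forest", RandomForestClassifier(random_state=0)),
                  ("Naive Bayes", MultinomialNB()),
                  ("k-NN", KNeighborsClassifier())]:
    print(name, round(cross_val_score(clf, X, codes, cv=5).mean(), 3))
```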
136

Improving dual-tree algorithms

Curtin, Ryan Ross 07 January 2016 (has links)
This work is centered on dual-tree algorithms, a class of algorithms based on spatial indexing structures that often provide large amounts of acceleration for various problems. It focuses on understanding dual-tree algorithms through a new, tree-independent abstraction, and on using this abstraction to develop new algorithms. Stated more clearly, the thesis of this work is that we may improve and expand the class of dual-tree algorithms by focusing on, and providing improvements for, each of the three independent components of a dual-tree algorithm: the type of space tree, the type of pruning dual-tree traversal, and the problem-specific BaseCase() and Score() functions. This is demonstrated by expressing many existing dual-tree algorithms in the tree-independent framework and improving each of these three pieces. The result is a formidable set of generic components that can be used to assemble dual-tree algorithms, including faster traversals, improved tree theory, and new algorithms to solve the problems of max-kernel search and k-means clustering.
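A compact illustration of the tree-independent abstraction: a generic pruning dual-tree traversal parameterized by problem-specific BaseCase() and Score() functions. The Node layout is a minimal assumption, not one of the thesis's tree types.

```python
class Node:
    def __init__(self, points, children=()):
        self.points = points               # points held at this node
        self.children = children           # child nodes (empty tuple for a leaf)

def dual_tree(q, r, base_case, score):
    if score(q, r) == float("inf"):        # Score() says prune: skip this node pair
        return
    if not q.children and not r.children:  # leaf/leaf: run BaseCase() on point pairs
        for pq in q.points:
            for pr in r.points:
                base_case(pq, pr)
        return
    for qc in (q.children or (q,)):        # otherwise recurse on child combinations
        for rc in (r.children or (r,)):
            if (qc, rc) != (q, r):
                dual_tree(qc, rc, base_case, score)

# Example: visit all point pairs between two tiny trees (Score() never prunes here).
count = [0]
t1 = Node([], (Node([1.0, 2.0]), Node([8.0])))
t2 = Node([], (Node([1.5]), Node([9.0, 9.5])))
dual_tree(t1, t2, lambda a, b: count.__setitem__(0, count[0] + 1), lambda q, r: 0.0)
print(count[0])                            # 9 point pairs visited
```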
137

Techniques de conservation de l'énergie dans les réseaux de capteurs mobiles : découverte de voisinage et routage / Techniques of energy conservation in mobile sensor networks : neighbor discovery and routing

Sghaier, Nouha 22 November 2013 (has links)
The challenge of energy consumption in wireless sensor networks is a key issue that remains an open problem. This thesis addresses the problem of energy conservation in sensor networks and is divided into two parts. In the first part, we discuss the design of neighbor discovery protocols and propose two techniques for modulating them in order to optimize the energy consumption of sensor nodes. The first technique, PPM-BM, modulates the neighbor discovery protocol based on the battery level of the node. The second, ECoND, adjusts the frequency of neighbor discovery based on the connectivity estimated at each moment, taking advantage of the temporal cycles in nodes' movement patterns: connectivity is estimated from encounter history, and neighbor discovery is tuned to the estimated connectivity rate. The recorded results demonstrate the effectiveness of both techniques in optimizing node energy consumption without degrading message delivery or overhead rates. The second part of the thesis concerns optimizing sensor network performance in terms of lifetime. We revisit routing protocols for networks with intermittent connectivity and propose the EXLIOSE protocol, which relies on the residual energy of nodes to ensure energy balancing, share the load, and extend both node and network lifetime.
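For illustration only, a sketch of the two modulation ideas under stated assumptions: the constants and scaling rules below are invented, not those of PPM-BM or ECoND, but they show a discovery period scaled by battery level and by connectivity estimated from encounter history.

```python
BASE_PERIOD_S = 10.0                        # assumed baseline discovery period

def battery_scaled_period(battery_frac):
    """Probe less often as the battery drains (battery_frac in [0, 1])."""
    return BASE_PERIOD_S / max(battery_frac, 0.1)

def connectivity_scaled_period(encounter_history, window=6):
    """Probe more often in periods that historically had many encounters."""
    recent = encounter_history[-window:]
    rate = sum(recent) / max(len(recent), 1)
    return BASE_PERIOD_S / (1.0 + rate)     # higher expected connectivity -> shorter period

print(battery_scaled_period(0.5))               # 20.0 s at half battery
print(connectivity_scaled_period([0, 2, 5]))    # ~3.0 s when encounters are frequent
```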
138

Automatic Classification of Fish in Underwater Video; Pattern Matching - Affine Invariance and Beyond

Gundam, Madhuri 15 May 2015 (has links)
Underwater video is used by marine biologists to observe, identify, and quantify living marine resources. Video sequences are typically analyzed manually, which is a time-consuming and laborious process; automating it will save significant time and cost. This work proposes a technique for automatic fish classification in underwater video. The steps involved are background subtraction, fish region tracking, and classification using shape features. Background processing separates moving objects from their surrounding environment. Tracking associates multiple views of the same fish in consecutive frames; this step is especially important since recognizing and classifying one or a few of the views as a species of interest may allow labeling the whole sequence as that species. Shape features are extracted from each object using Fourier descriptors and presented to a nearest neighbor classifier for classification. Finally, the nearest neighbor classifier results are combined in a probabilistic-like framework to classify an entire sequence. The majority of existing pattern matching techniques focus on affine invariance, mainly because rotation, scale, translation, and shear are common image transformations. In some situations, however, other transformations may be modeled as a small deformation on top of an affine transformation, and the proposed algorithm complements existing Fourier transform-based pattern matching methods in such situations. First, the spatial-domain pattern is decomposed into non-overlapping concentric circular rings centered at the middle of the pattern. The Fourier transforms of the rings are computed and then mapped to the polar domain. The algorithm assumes that the individual rings are rotated with respect to each other; these variable angles of rotation provide information about the directional features of the pattern. The angle of rotation is determined starting from the Fourier transform of the outermost ring and moving inwards to the innermost ring. Two approaches, one using a dynamic programming algorithm and the other a greedy algorithm, are used to determine the directional features of the pattern.
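A minimal sketch of the shape-feature step, assuming NumPy: contour points become a complex signal whose FFT magnitudes, after normalization, are invariant to translation, scale, and rotation. The contours here are synthetic; real ones would come from the background-subtraction step.

```python
import numpy as np

def fourier_descriptors(contour_xy, k=5):
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # contour as a complex signal
    F = np.fft.fft(z)
    F[0] = 0.0                                     # drop DC term: translation invariance
    F = F / np.abs(F[1])                           # normalize: scale invariance
    return np.abs(np.r_[F[1:k + 1], F[-k:]])       # magnitudes: rotation invariance

t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
ellipse = np.c_[2 * np.cos(t), np.sin(t)]
circle = np.c_[np.cos(t), np.sin(t)]
print(np.round(fourier_descriptors(ellipse), 3))   # differs from the circle's signature
print(np.round(fourier_descriptors(circle), 3))
```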
139

Evaluating Spatial Queries over Declustered Spatial Data

Almorshdy, Eslam A. 02 August 2019
Due to the large volumes of spatial data, data is stored on clusters of machines that inter-communicate to achieve a task. In such a distributed environment, communicating intermediate results among computing nodes dominates execution time, and communication overhead is even more dominant if processing is in memory. Moreover, the way spatial data is partitioned affects overall processing cost, since different partitioning strategies influence the size of the intermediate results. Spatial data poses the following additional challenges: 1) storage load balancing, because of the skewed distribution of spatial data over the underlying space; 2) query load imbalance, due to skewed query workloads and query hotspots over both time and space; and 3) lack of effective utilization of computing resources. We introduce a new kNN query evaluation technique, termed BCDB, for evaluating nearest-neighbor queries (NN-queries, for short). In contrast to clustered partitioning of spatial data, BCDB explores the use of declustered partitioning to address data and query skew. BCDB uses summaries of the underlying data and a coarse-grained index to localize processing of the NN-query on each node as much as possible. The coarse-grained index is traversed locally using a new uncertain version of classical distance browsing, resulting in a minimal O(√k) elements communicated across all processing nodes.
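As a simplified stand-in (not BCDB itself), the communication-saving pattern can be illustrated with a distributed kNN in which each node ships only its local top-k candidates to a coordinator:

```python
import heapq

import numpy as np

rng = np.random.default_rng(0)
partitions = [rng.normal(size=(10_000, 2)) for _ in range(4)]   # declustered shards
q, k = np.zeros(2), 5

def local_candidates(points, query, k):
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argpartition(d, k)[:k]         # local top-k only: a small message
    return sorted(zip(d[idx], map(tuple, points[idx])))

messages = [local_candidates(p, q, k) for p in partitions]      # one message per node
global_knn = heapq.nsmallest(k, heapq.merge(*messages))         # coordinator's merge
print([round(d, 4) for d, _ in global_knn])
```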
140

Traduzindo o Brazil: o país mestiço de Jorge Amado / Translating Brazil: Jorge Amado's mestizo country

Tooge, Marly D'Amaro Blasques 05 June 2009 (has links)
The first book by Jorge Amado in English translation was published in the United States in 1945 by Alfred A. Knopf Publishers, under the auspices of the U.S. State Department, which sponsored a cultural interchange program as part of President Roosevelt's Good Neighbor Policy. Translated literature was seen, at the time, as a way of understanding the other, and a pattern of behavior was established that lasted for decades. Érico Veríssimo, Gilberto Freyre, Alfred and Blanche Knopf, Samuel Putnam, and Harriet de Onís were actors in this scenario. In spite of his continued support of the political left, after leaving the Communist Party in the late 1950s Jorge Amado became an American bestseller, a result of this diplomatic current as well as Alfred A. Knopf's renewed project of translation (and friendship). Nevertheless, other networks of influence also shaped the author's reception in the United States, where his work was assimilated in its own, metonymic way, quite different from its reception in Eastern Europe, for instance. This research investigates the relations among the aforementioned actors, these networks of influence, and Brazil's cultural representation in Jorge Amado's translated literature in the United States.
