41

IPv6 Gateway Redundancy: Fast Enough?

Hermansson, Christopher; Johansson, Sebastian, January 2011
Unlike earlier versions of the Internet Protocol, IPv6 has native support for gateway redundancy. With more than one gateway deployed, Neighbor Unreachability Detection allows automatic failover to a new gateway if the active one fails. Before IPv6, external solutions were needed to achieve this kind of redundancy.
The main question posed in the report is whether the built-in support for gateway redundancy in IPv6 is fast enough to be used on its own, without external solutions. To establish what counts as "fast enough", we reviewed previous research on how users experience delay and concluded that a delay must not exceed ten seconds. The report also examines whether there are external gateway-redundancy solutions that operate quickly enough, and whether there are other situations where an external solution might be preferred over Neighbor Unreachability Detection. After a number of experiments we conclude that the native support for gateway redundancy in IPv6 does not, by this user-based criterion, work fast enough to handle the task on its own. Experiments described in the report also show that an external First Hop Redundancy Protocol has good potential to restore communication fast enough for a user to find the delay acceptable. Furthermore, the work confirms that there are situations where a First Hop Redundancy Protocol may be preferred over Neighbor Unreachability Detection.
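A back-of-envelope calculation illustrates why plain Neighbor Unreachability Detection can miss a ten-second budget. The sketch below uses the default protocol constants from RFC 4861 (not measurements from the thesis) and simply sums the worst-case time an entry can spend in REACHABLE, DELAY, and PROBE before the gateway is declared dead:

```python
# Back-of-envelope estimate of how long Neighbor Unreachability Detection
# (RFC 4861) can take to declare a default gateway dead, using the
# protocol's default constants. Illustrative only, not a measurement.

REACHABLE_TIME_MS = 30_000        # BaseReachableTime default
MAX_RANDOM_FACTOR = 1.5           # ReachableTime is randomized in [0.5, 1.5]
DELAY_FIRST_PROBE_S = 5           # time spent in the DELAY state
RETRANS_TIMER_S = 1               # interval between unicast probes
MAX_UNICAST_SOLICIT = 3           # probes sent before giving up

def worst_case_nud_failover_s() -> float:
    """Upper bound: the entry may sit in REACHABLE for the full
    (randomized) ReachableTime after the gateway dies, then spend the
    DELAY period plus all probe retransmissions before being declared
    unreachable."""
    stale_detection = REACHABLE_TIME_MS / 1000 * MAX_RANDOM_FACTOR
    probing = DELAY_FIRST_PROBE_S + MAX_UNICAST_SOLICIT * RETRANS_TIMER_S
    return stale_detection + probing

print(worst_case_nud_failover_s())  # 53.0 seconds -- well over 10 s
```

Even the probing phase alone (8 s with defaults) leaves little headroom under a ten-second budget once the stale-detection delay is added.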
42

Improved tree species discrimination at leaf level with hyperspectral data combining binary classifiers

Dastile, Xolani Collen, January 2011
The purpose of the present thesis is to show that hyperspectral data can be used to discriminate between different tree species. The data set used in this study contains hyperspectral measurements of leaves of seven savannah tree species. The data is high-dimensional and shows large within-class variability combined with small between-class variability, which makes discrimination between the classes challenging. We employ two classification methods: k-nearest neighbour and feed-forward neural networks. For both methods, direct 7-class prediction results in high misclassification rates. However, binary classification works better. We constructed binary classifiers for all possible binary classification problems and combined them with Error Correcting Output Codes. We show in particular that the use of 1-nearest-neighbour binary classifiers yields no improvement over a direct 1-nearest-neighbour 7-class predictor. In contrast to this negative result, the use of neural-network binary classifiers improves accuracy by 10% compared to a direct neural-network 7-class predictor, and error rates become acceptable. This can be improved further by choosing only suitable binary classifiers for combination.
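The combination scheme described above can be sketched in a few lines: each column of an ECOC code matrix defines a binary relabelling of the classes, one binary classifier is trained per column, and predictions are decoded by minimum Hamming distance to a codeword. The 1-NN base learner, the three-class toy data, and the code matrix below are all invented for illustration, not the thesis's hyperspectral set:

```python
import numpy as np

# Minimal ECOC sketch: binary relabellings from a code matrix, a 1-NN
# base learner per column, and minimum-Hamming-distance decoding.

def nn1_predict(X_train, y_train, X_test):
    """1-nearest-neighbour prediction."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[np.argmin(d, axis=1)]

def ecoc_fit_predict(X_train, y_train, X_test, code):
    """code: (n_classes, n_bits) matrix of 0/1 codewords."""
    bits = np.stack([
        nn1_predict(X_train, code[y_train, b], X_test)  # one binary task per bit
        for b in range(code.shape[1])
    ], axis=1)
    hamming = np.abs(bits[:, None, :] - code[None, :, :]).sum(axis=2)
    return np.argmin(hamming, axis=1)                   # closest codeword wins

rng = np.random.default_rng(0)
centers = np.array([[0., 0.], [4., 0.], [0., 4.]])
X = np.concatenate([c + rng.normal(0, .5, (30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)
code = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])      # toy code matrix
pred = ecoc_fit_predict(X, y, X, code)
print((pred == y).mean())
```

With longer codewords (more columns than classes), the Hamming decoding can correct individual binary-classifier mistakes, which is the error-correcting property the thesis exploits.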
43

GENETIC ALGORITHMS FOR SAMPLE CLASSIFICATION OF MICROARRAY DATA

Liu, Dongqing, 23 September 2005
No description available.
44

Generalizing Contour Guided Dissemination in Mesh Topologies

Mamidisetty, Kranthi Kumar, 20 May 2008
No description available.
45

Single Chain Statistics of a Polymer in a Crystallizable Solvent

Nathan, Andrew Prashant, 26 August 2008
No description available.
46

Comparison of the Utility of Regression Analysis and K-Nearest Neighbor Technique to Estimate Above-Ground Biomass in Pine Forests Using Landsat ETM+ imagery

Prabhu, Chitra L, 13 May 2006
A precise and universally accepted approach to quantifying the carbon sequestered in above-ground woody biomass using remotely sensed data is lacking. The drafting of the Kyoto Protocol has made carbon sequestration more important, making the development of accurate and cost-effective remote sensing models a necessity. Much work has been done on estimating above-ground woody biomass from spectral data using the traditional multiple linear regression approach and the Finnish k-nearest neighbor approach, but the accuracy of these two methods has not been compared. The purpose of this study is to compare their ability to estimate above-ground biomass (AGB) using spectral data derived from Landsat ETM+ imagery.
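The two estimators being compared can be sketched side by side. The synthetic "spectral band vs biomass" data below is invented for the sketch (the real study uses plot measurements and ETM+ reflectance); the point is only the mechanics: least-squares regression fits global coefficients, while k-NN averages the response of the k most spectrally similar training pixels:

```python
import numpy as np

# Illustrative comparison of multiple linear regression and k-NN on
# synthetic "spectral band vs biomass" data (invented for this sketch).

rng = np.random.default_rng(1)
bands = rng.uniform(0, 1, (200, 3))                    # fake ETM+ bands
biomass = 50 + 80 * bands[:, 0] - 30 * bands[:, 1] + rng.normal(0, 5, 200)
train, test = np.arange(150), np.arange(150, 200)

# Multiple linear regression via least squares.
A = np.column_stack([np.ones(150), bands[train]])
coef, *_ = np.linalg.lstsq(A, biomass[train], rcond=None)
pred_lr = np.column_stack([np.ones(50), bands[test]]) @ coef

# k-NN: average the biomass of the k spectrally nearest training pixels.
k = 5
d = np.linalg.norm(bands[test][:, None] - bands[train][None, :], axis=2)
pred_knn = biomass[train][np.argsort(d, axis=1)[:, :k]].mean(axis=1)

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

print(rmse(pred_lr, biomass[test]), rmse(pred_knn, biomass[test]))
```

Because this toy response is exactly linear, regression wins here; on real forest data the comparison is an empirical question, which is precisely what the study tests.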
47

Salient Index for Similarity Search Over High Dimensional Vectors

Lu, Yangdi, January 2018
Approximate nearest neighbor (ANN) search over high-dimensional data has become an unavoidable service for online applications. Returning fast, high-quality results for unseen queries is the largest challenge most algorithms face. Locality Sensitive Hashing (LSH) is a well-known ANN search algorithm, but it suffers from an inefficient index structure and poor accuracy in distributed schemes. Traditional index structures also have a most-significant-bits (MSB) problem: their indexing strategies implicitly assume that bits from one direction in the hash value have higher priority. In this thesis, we propose a new content-based index called Random Draw Forest (RDF), which not only uses an adaptive tree structure with dynamic-length compound hash functions to match the varying cardinality of the data, but also applies shuffling permutations to solve the MSB problem of traditional LSH-based indexes. To raise accuracy in the distributed setting, we design a variable-steps lookup strategy that searches the sub-indexes most likely to hold mistakenly partitioned similar objects. By analyzing the index, we show that RDF has a higher probability of retrieving similar objects than the original index structure. In the experiments, we first study the performance of different hash functions, then show the effect of the parameters in RDF and compare its performance against other LSH-based ANN search methods. / Thesis / Master of Science (MSc)
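For readers unfamiliar with the LSH family the thesis builds on, here is a minimal bucket-table index using random-hyperplane (sign) hashes. This is a generic LSH sketch, not the Random Draw Forest itself; the class name and parameters are invented for illustration:

```python
import numpy as np

# Minimal locality-sensitive hash index: each of n_bits random
# hyperplanes contributes one sign bit; points sharing all bits land in
# the same bucket, so near neighbours tend to collide.

class HyperplaneLSH:
    def __init__(self, dim, n_bits, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))
        self.buckets = {}

    def _key(self, v):
        return tuple((self.planes @ v > 0).astype(int))

    def add(self, idx, v):
        self.buckets.setdefault(self._key(v), []).append(idx)

    def query(self, v):
        """Candidate ids sharing the query's bucket (may be empty)."""
        return self.buckets.get(self._key(v), [])

rng = np.random.default_rng(2)
data = rng.normal(size=(100, 8))
index = HyperplaneLSH(dim=8, n_bits=6)
for i, v in enumerate(data):
    index.add(i, v)

# A point always collides with itself.
print(0 in index.query(data[0]))
```

The fixed `n_bits` here is exactly the rigidity RDF's dynamic-length compound hashes address, and the left-to-right bucket key illustrates the implicit bit ordering behind the MSB problem.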
48

Control of a Chaotic Double Pendulum Model for a Ship Mounted Crane

Hsu, Tseng-Hsing, 28 February 2000
An extension of the original Ott-Grebogi-Yorke control scheme is used on a simple double pendulum. The base point of the double pendulum moves in both horizontal and vertical directions, which leads to rather complicated behavior. A delay coordinate is used to reconstruct the attractor; the required dimension is determined by False Nearest Neighbor analysis. A newly developed Fixed Point Transformation method is used to identify the unstable periodic orbit (UPO). Two different system parameters are used to control the motion. Minimum parameter constraints are studied, and the use of discrete values for parameter changes is also investigated. Based on these investigations, a new on-off control scheme is proposed to simplify the implementation of the controller and minimize the delay in applying the control. / Ph. D.
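The delay-coordinate reconstruction mentioned above unfolds a scalar time series into vectors [x(t), x(t - τ), ..., x(t - (m-1)τ)]; False Nearest Neighbor analysis then picks the smallest m at which neighbours stop being artifacts of projection. A small sketch of the embedding step, on a toy sine rather than pendulum data:

```python
import numpy as np

# Delay-coordinate embedding: unfold a scalar series into m-dimensional
# vectors of samples spaced tau steps apart.

def delay_embed(x, m, tau):
    """Return the (len(x) - (m - 1) * tau, m) matrix of delay vectors."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

t = np.linspace(0, 20, 500)
x = np.sin(t)                  # toy signal standing in for a pendulum angle
E = delay_embed(x, m=3, tau=10)
print(E.shape)                 # (480, 3)
```

Each row of `E` is one reconstructed state; the False Nearest Neighbor test would be run on such embeddings for increasing `m` until the fraction of false neighbours drops to near zero.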
49

Evaluating the accuracy of imputed forest biomass estimates at the project level

Gagliasso, Donald, 01 October 2012
Various methods have been used to estimate the amount of above-ground forest biomass across landscapes and to create biomass maps for specific stands or pixels across ownership or project areas. Without an accurate estimation method, land managers may end up with incorrect biomass maps, which could lead to poorer decisions in their future management plans. Previous research has shown that nearest-neighbor imputation methods can accurately estimate forest volume across a landscape by relating variables of interest to ground data, satellite imagery, and light detection and ranging (LiDAR) data. Alternatively, parametric models, such as linear and non-linear regression and geographically weighted regression (GWR), have been used to estimate net primary production and tree diameter. The goal of this study was to compare various imputation methods for predicting forest biomass at a project planning scale (<20,000 acres) on the Malheur National Forest in eastern Oregon, USA. In this study I compared the predictive performance of 1) linear regression, GWR, gradient nearest neighbor (GNN), most similar neighbor (MSN), random forest imputation, and k-nearest neighbor (k-nn) for estimating biomass (tons/acre) and basal area (sq. feet per acre) across 19,000 acres on the Malheur National Forest, and 2) MSN and k-nn when imputing forest biomass at spatial scales ranging from 5,000 to 50,000 acres. To test the imputation methods, a combination of ground inventory plots, LiDAR data, satellite imagery, and climate data was analyzed, and the root mean square error (RMSE) and bias of each method were calculated. Results indicate that for biomass prediction, k-nn (k=5) had the lowest RMSE and the least bias. The second most accurate method was k-nn (k=3), followed by the GWR model and random forest imputation. The GNN method was the least accurate. For basal area prediction, the GWR model had the lowest RMSE and the least bias.
The second most accurate method was k-nn (k=5), followed by k-nn (k=3) and the random forest method. The GNN method, again, was the least accurate. The accuracy of MSN, the imputation method currently used by the Malheur National Forest, and of k-nn (k=5), the most accurate imputation method from the second chapter, was then compared over six spatial scales: 5,000, 10,000, 20,000, 30,000, 40,000, and 50,000 acres. The root mean square difference (RMSD) and bias were calculated for each of the spatial-scale samples to determine which method was more accurate. MSN was found to be more accurate at the 5,000, 10,000, 20,000, 30,000, and 40,000 acre scales; k-nn (k=5) was more accurate at the 50,000 acre scale. / Graduation date: 2013
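The two accuracy measures used throughout the study, RMSE and bias (mean signed error), are easy to state in code. The sketch below applies them to k-nn imputation with k = 3 and k = 5 on synthetic auxiliary data; the variables standing in for LiDAR/climate metrics and tons-per-acre biomass are invented for illustration:

```python
import numpy as np

# k-nn imputation scored by RMSE and bias, on invented auxiliary data.

def knn_impute(aux_train, y_train, aux_target, k):
    """Average the response of the k nearest training units in aux space."""
    d = np.linalg.norm(aux_target[:, None] - aux_train[None, :], axis=2)
    return y_train[np.argsort(d, axis=1)[:, :k]].mean(axis=1)

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def bias(pred, obs):
    return float(np.mean(pred - obs))      # mean signed error

rng = np.random.default_rng(3)
aux = rng.uniform(size=(300, 4))           # stand-in for LiDAR/climate metrics
tons = 20 + 60 * aux[:, 0] + rng.normal(0, 4, 300)
tr, te = np.arange(250), np.arange(250, 300)
for k in (3, 5):
    p = knn_impute(aux[tr], tons[tr], aux[te], k)
    print(k, round(rmse(p, tons[te]), 2), round(bias(p, tons[te]), 2))
```

Reporting both matters: a method can have low RMSE yet a systematic bias (or vice versa), and the study's rankings differ depending on which response variable is being imputed.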
50

New paradigms for approximate nearest-neighbor search

Ram, Parikshit, 20 September 2013
Nearest-neighbor search is a very natural and universal problem in computer science. Often, the problem size necessitates approximation. In this thesis, I present new paradigms for nearest-neighbor search (along with new algorithms and theory in these paradigms) that make nearest-neighbor search more usable and accurate. First, I consider a new notion of search error, the rank error, for an approximate neighbor candidate. Rank error corresponds to the number of possible candidates which are better than the approximate neighbor candidate. I motivate this notion of error and present new efficient algorithms that return approximate neighbors with rank error no more than a user-specified amount. Then I focus on approximate search in a scenario where the user does not specify the tolerable search error (error constraint); instead the user specifies the amount of time available for search (time constraint). After differentiating between these two scenarios, I present some simple algorithms for time-constrained search with provable performance guarantees. I use this theory to motivate a new space-partitioning data structure, the max-margin tree, for improved search performance in the time-constrained setting. Finally, I consider the scenario where we do not require our objects to have an explicit fixed-length representation (vector data). This allows us to search with a large class of objects which include images, documents, graphs, strings, time series and natural language. For nearest-neighbor search in this general setting, I present a provably fast novel exact search algorithm. I also discuss the empirical performance of all the presented algorithms on real data.
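The rank-error notion defined above has a direct brute-force check: count how many database points are strictly closer to the query than the returned candidate. The data and query below are synthetic, and the function is a scoring utility, not one of the thesis's search algorithms:

```python
import numpy as np

# Rank error of an approximate-neighbour candidate: the number of
# database points strictly closer to the query than the candidate.

def rank_error(data, query, candidate_idx):
    d = np.linalg.norm(data - query, axis=1)
    return int(np.sum(d < d[candidate_idx]))

rng = np.random.default_rng(4)
data = rng.normal(size=(1000, 16))
query = rng.normal(size=16)

exact = int(np.argmin(np.linalg.norm(data - query, axis=1)))
print(rank_error(data, query, exact))   # 0 for the true nearest neighbour
```

Unlike a distance-ratio error, rank error is invariant to how distances are scaled, which is part of its appeal as a user-facing guarantee.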
