81

Gröbner Bases and Ideals of Points

Chang, Eun R 01 January 2006 (has links)
The main purpose of this thesis is to give an introduction to the theory of Gröbner bases. We review the concept of a Gröbner basis, its construction by Buchberger's algorithm (in which the notion of S-polynomials is introduced), and a few modified or improved versions of the Gröbner basis algorithm. In Chapter 1, we review ideals, the definitions and types of monomial orderings, and the multivariate polynomial division algorithm, with examples. Once a monomial ordering on multivariate polynomials is fixed, the leading term of a polynomial is well defined. In Chapter 2, after defining Gröbner bases, we study some useful properties of Gröbner bases, such as the existence of a Gröbner basis and the uniqueness of the reduced Gröbner basis. In Chapter 3, we explore the Buchberger-Möller algorithm, which constructs a Gröbner basis and returns a set of polynomials whose residue classes form a basis of a quotient of a polynomial ring. We also survey a generalized Buchberger-Möller algorithm that directly determines a Gröbner basis for the intersection of a finite number of ideals. In Chapter 4, we conclude with some applications of Gröbner bases.
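As a concrete illustration of the objects reviewed here (a minimal sketch assuming SymPy's `groebner` routine, which implements a Buchberger-style algorithm; this is not the author's code), the following computes a reduced Gröbner basis once a monomial ordering is fixed:

```python
# Illustrative only: SymPy's groebner() computes a reduced Groebner basis
# for the ideal generated by F, here under the lexicographic order x > y.
from sympy import groebner, symbols

x, y = symbols('x y')
F = [x**2 + y**2 - 1, x - y]          # generators of an ideal in Q[x, y]
G = groebner(F, x, y, order='lex')    # Buchberger-style computation
print(G)   # the reduced Groebner basis; leading terms depend on the order
```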
82

Analysis of Algorithms for Combinatorial Auctions and Related Problems

Ghebreamlak, Kidane Asrat January 2005 (has links)
The thesis consists of four papers on combinatorial auctions and a summary. The first part is more practical in nature and contains two papers. In the first paper, we study the performance of a caching technique in an optimal algorithm for a multi-unit combinatorial auction.

In the second paper, we compare the revenue from a second-price combinatorial auction against that of a second-price one-shot simultaneous auction. In particular, we show that when the synergy parameter is small, the combinatorial auction gives a higher expected revenue than the one-shot auction. This is in contrast to an earlier result by Krishna and Rosenthal. We also compare the two mechanisms under the assumption that bidders are risk-averse. Such bidders are so sensitive to financial loss (the winner's curse) that they tend to bid less aggressively, which leads to lower revenues. Since a direct analytical approach turns out to be difficult, we present numerical results that show which auction mechanism maximizes the seller's revenue, depending on the values of the synergy and aversion parameters.

The second part is more theoretical. Here, we analyze the asymptotic performance of a greedy algorithm for a problem inspired by combinatorial auctions. In particular, we consider a special case in which every bid contains exactly 3 items, and use a Poisson process to model an auction with a random (Poisson) number of bids. For this restricted case, the winner determination problem is equivalent to maximal 3-set packing on a weighted hypergraph, and hence NP-complete. However, the greedy algorithm approximates this special case within a factor of 3.

In the third paper, we compute the asymptotic expected size of the partial allocation and its corresponding expected total revenue from the greedy algorithm, for some distributions of bid prices.

In the final paper, we study the case of a deterministic number of bids, proportional to the number of distinguishable items in the auction, say M. We prove that the number of bids allocated, suitably normalized, converges to a normal random variable as M goes to infinity. As a prelude, we also prove that both the number of bids allocated and the number submitted, again suitably normalized, jointly converge in distribution to a continuous 2-dimensional Gaussian process as M goes to infinity.
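To make the greedy rule concrete, here is a minimal sketch (illustrative; not the thesis's implementation, and the bid data is invented) of greedy winner determination when every bid names exactly 3 items:

```python
# Hedged sketch: scanning bids in decreasing price order and accepting any
# bid whose items are all still unsold is the standard factor-3
# approximation for weighted 3-set packing mentioned in the abstract.
from typing import List, Set, Tuple

Bid = Tuple[float, Set[str]]   # (price, set of exactly 3 items)

def greedy_allocation(bids: List[Bid]) -> Tuple[float, List[Bid]]:
    allocated: List[Bid] = []
    sold: Set[str] = set()
    for price, items in sorted(bids, key=lambda bid: -bid[0]):
        if sold.isdisjoint(items):     # accept only if no item is taken
            allocated.append((price, items))
            sold |= items
    revenue = sum(p for p, _ in allocated)
    return revenue, allocated

bids = [(10.0, {'a', 'b', 'c'}), (8.0, {'c', 'd', 'e'}), (7.0, {'d', 'e', 'f'})]
print(greedy_allocation(bids))   # accepts the 10.0 and 7.0 bids
```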
84

Mobile Location Estimation Using Genetic Algorithm and Clustering Technique for NLOS Environments

Hung, Chung-Ching 10 September 2007 (has links)
Given the mass demand for personalized security services, such as tracking, supervision, and emergency rescue, mobile-communication location technologies have drawn much attention from governments, academia, and industry around the world. However, existing location methods cannot satisfy the requirements of low cost and high accuracy. We hypothesized that a new mobile location algorithm based on the current GSM system would effectively improve user satisfaction. In this study, a prototype system is developed, implemented, and evaluated experimentally by integrating useful information, such as the geometry of the cell layout, with related mobile positioning technologies. The intersection of the regions formed by the communication ranges of the base stations is explored. Furthermore, a density-based clustering algorithm (DCA) and a GA-based algorithm are designed to analyze the intersection region and estimate the most probable location of a mobile phone. Simulation results show that the location error of the GA-based algorithm is less than 0.075 km 67% of the time, and less than 0.15 km 95% of the time. These results satisfy the location accuracy requirement of E-911.
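The geometric core of the approach can be sketched as follows (a toy illustration with invented station coordinates and ranges; the thesis refines the estimate with the DCA and a GA rather than a plain centroid):

```python
# Illustrative geometry only: grid-sample the region inside every base
# station's communication disc and take the centroid of that intersection
# as a crude first location estimate.
import numpy as np

stations = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 1.0]])  # positions, km
ranges = np.array([0.9, 0.8, 0.9])        # NLOS-inflated radii, km

xs, ys = np.meshgrid(np.linspace(-1, 2, 300), np.linspace(-1, 2, 300))
pts = np.column_stack([xs.ravel(), ys.ravel()])

# A point is a candidate only if it is within range of *all* stations.
dists = np.linalg.norm(pts[:, None, :] - stations[None, :, :], axis=2)
inside = (dists <= ranges).all(axis=1)

estimate = pts[inside].mean(axis=0)   # centroid of the intersection region
print(estimate)                       # rough (x, y) estimate in km
```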
85

Spanning Tree Approach On The Snow Cleaning Problem

Hossain, Mohammad Forhad January 2010 (has links)
Snow cleaning is one of the important tasks during winter in Sweden. Every year the government spends a huge amount of money on snow cleaning. In this thesis we generate a shortest road network of the city and place depots in different parts of the city for snow cleaning. We generate the shortest road network using a minimum spanning tree algorithm and find the depot positions using a greedy heuristic. When snow falls, vehicles start working from the depots and clean the snow from the entire road network of the city. We generate two types of model: an economic model and an efficient model. The economic model provides a good economical solution to the problem and uses a smaller number of vehicles. The efficient model generates a good efficient solution and takes less time to clean the entire road network.
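A minimal sketch of the first step, reducing the road network to a minimum spanning tree (Kruskal's algorithm, with invented edge weights; the greedy depot-placement heuristic is not shown):

```python
# Illustrative only: Kruskal's algorithm with union-find keeps the cheapest
# edges that connect new components, yielding a minimum spanning tree.
def kruskal_mst(n_nodes, edges):
    """edges: list of (weight, u, v); returns the MST as an edge list."""
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:               # adding the edge creates no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

roads = [(4, 0, 1), (2, 0, 2), (5, 1, 2), (1, 1, 3), (3, 2, 3)]
print(kruskal_mst(4, roads))   # -> [(1, 1, 3), (2, 0, 2), (3, 2, 3)]
```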
86

TOP-K AND SKYLINE QUERY PROCESSING OVER RELATIONAL DATABASE

Samara, Rafat January 2012 (has links)
Top-k and skyline queries are a long-studied topic in the database and information retrieval communities, and they are two popular operations for preference retrieval. A top-k query returns a subset of the most relevant answers instead of all answers. Efficient top-k processing retrieves the k objects that have the highest overall score. In this paper, some algorithms that are used as techniques for efficient top-k processing in different scenarios are presented. A framework based on existing algorithms, with cost-based optimization, that works for these scenarios is presented. This framework is used when the user can determine the ranking function. A real-life scenario is applied to this framework step by step. A skyline query returns the set of points that are not dominated by other points in the given dataset (a record x dominates another record y if x is as good as y in all attributes and strictly better in at least one attribute). In this paper, some algorithms that are used for evaluating skyline queries are introduced. One of the problems of the skyline query, the so-called curse of dimensionality, is presented. A new strategy based on existing skyline algorithms, skyline frequency, and a binary tree strategy, which gives a good solution to this problem, is presented. This new strategy is used when the user cannot determine the ranking function. A real-life scenario is presented that applies this strategy step by step. Finally, the advantages of the top-k query are applied to the skyline query in order to retrieve results quickly and efficiently.
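The dominance relation defined above is easy to state in code. The following sketch (illustrative; not the paper's optimized strategy) implements the dominance test and a basic nested-loop skyline computation, assuming smaller values are better in every attribute:

```python
# Illustrative sketch: dominance test from the definition above, plus a
# straightforward O(n^2) skyline computation over small datasets.
def dominates(x, y):
    """x dominates y: as good in all attributes, strictly better in one."""
    return (all(a <= b for a, b in zip(x, y))
            and any(a < b for a, b in zip(x, y)))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# (price, distance-to-beach): cheap-and-close hotels survive.
hotels = [(100, 2.0), (80, 3.5), (120, 1.0), (80, 3.0)]
print(skyline(hotels))   # -> [(100, 2.0), (120, 1.0), (80, 3.0)]
```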
87

Adaptive Comparison-Based Algorithms for Evaluating Set Queries

Mirzazadeh, Mehdi January 2004 (has links)
In this thesis we study a problem that arises in answering boolean queries submitted to a search engine. Usually a search engine stores the set of IDs of documents containing each word in a pre-computed sorted order, and to evaluate a query like "computer AND science" the search engine has to compute the intersection of the sets of documents containing the words "computer" and "science". More complex queries result in more complex set expressions. In this thesis we consider the problem of evaluating a set expression with union and intersection as operators and ordered sets as operands. We explore properties of comparison-based algorithms for the problem. A proof of a set expression is the set of comparisons that a comparison-based algorithm performs before it can determine the result of the expression. We discuss the properties of proofs of set expressions and, based on how complex the smallest proofs of a set expression E are, we define a measure of how difficult E is to compute. We then design an algorithm that is adaptive to the difficulty of the input expression, and we show that its running time is roughly proportional to the difficulty of the input expression, where the factor is roughly logarithmic in the number of operands of the input expression.
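For intuition, a standard adaptive, comparison-based primitive of the kind studied here is the intersection of two sorted ID lists via galloping (doubling) search; the sketch below is a common textbook technique, not the thesis's algorithm. On easy inputs (long non-overlapping runs) it performs far fewer comparisons than a full merge:

```python
# Illustrative sketch: intersect two sorted document-ID lists by galloping,
# so runs of non-matching IDs are skipped with few comparisons.
import bisect

def gallop(arr, target, lo=0):
    """Smallest index i >= lo with arr[i] >= target (len(arr) if absent)."""
    bound = 1
    while lo + bound < len(arr) and arr[lo + bound] < target:
        bound *= 2                       # double the probe distance
    return bisect.bisect_left(arr, target, lo, min(lo + bound, len(arr)))

def intersect(a, b):
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            result.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i = gallop(a, b[j], i)       # leap over the gap in one gallop
        else:
            j = gallop(b, a[i], j)
    return result

# Documents containing "computer" and "science", as sorted ID lists:
print(intersect([1, 4, 5, 9, 11], [5, 9, 100]))   # -> [5, 9]
```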
88

Approximation Algorithms for (S,T)-Connectivity Problems

Laekhanukit, Bundit 27 July 2010 (has links)
We study a directed network design problem called the $k$-$(S,T)$-connectivity problem; we design and analyze approximation algorithms and give hardness results. For each positive integer $k$, the minimum cost $k$-vertex connected spanning subgraph problem is a special case of the $k$-$(S,T)$-connectivity problem. We defer precise statements of the problem and of our results to the introduction. For $k=1$, we call the problem the $(S,T)$-connectivity problem. We study three variants of the problem: the standard $(S,T)$-connectivity problem, the relaxed $(S,T)$-connectivity problem, and the unrestricted $(S,T)$-connectivity problem. We give hardness results for these three variants. We design a $2$-approximation algorithm for the standard $(S,T)$-connectivity problem. We design tight approximation algorithms for the relaxed $(S,T)$-connectivity problem and one of its special cases. For any $k$, we give an $O(\log k\log n)$-approximation algorithm, where $n$ denotes the number of vertices. The approximation guarantee almost matches the best approximation guarantee known for the minimum cost $k$-vertex connected spanning subgraph problem, which is $O(\log k\log\frac{n}{n-k})$, due to Nutov (2009).
89

Improved Interflow and Infiltration Algorithms for Distributed Hydrological Models

Liu, Guoxiang January 2010 (has links)
The shallow subsurface controls the partitioning of available energy between the sensible and latent heat of the land surface, and the partitioning of available water among evaporation, infiltration, and runoff. It is a key component of both the hydrometeorological system and the terrestrial water cycle. A critical part of any hydrological or hydrometeorological forecast model is therefore the set of algorithms used to represent shallow soil processes, which include infiltration, evaporation, runoff, and interflow. For climate models, coupled algorithms called “Land Surface Schemes” (LSSs) are developed to represent the lower boundary conditions that govern land-to-atmosphere energy and moisture fluxes. Similar algorithms are implemented in regional watershed models and day-to-day operational water resources forecasting models. The primary objective of this thesis is to provide improved methods for simulating coupled land surface processes, which can be used as components of LSSs or within existing operational hydrology models. These new methods address a number of specific issues inadequately handled by current models, including the presence of shallow boundary conditions, heterogeneity in infiltration, and infiltration-interflow coupling. The main objective of the proposed research is to provide a consistent, physically-based approach for simulating near-surface soil moisture processes, so as to complete the parameterization of the interflow/infiltration algorithm in the hydrology-land-surface scheme MESH. The work focuses mainly on the investigation and development of more physically-based infiltration and interflow algorithms. The aim is to determine appropriate relationships between internal state variables (specifically bulk soil moisture) and system boundary fluxes, while simultaneously reducing the number of nonphysical or unknown model parameters. Fewer parameters lead to reduced calibration requirements for distributed hydrological models and consequently accelerate the transfer of such models to engineering practice. Multiple approaches were taken to provide improved relationships between fluxes (infiltration and lateral drainage) and storage. These algorithms were tested against a specialized form of Richards' equation for sloping soils and with Monte Carlo simulations. The tests demonstrated reasonable accuracy and an improved representation of the hydrological processes.
90

A Hyper-Heuristic Clustering Algorithm

Song, Huei-jyun 07 September 2012 (has links)
So-called heuristics have been widely used in solving combinatorial optimization problems because they provide a simple but effective way to find an approximate solution. These techniques are very useful for users who do not need the exact solution but care very much about response time. Because every existing heuristic algorithm has its pros and cons, this paper presents a hyper-heuristic clustering algorithm that uses diversity-detection and improvement-detection operators to determine when to switch from one heuristic algorithm to another, in order to improve the clustering result. Several well-known datasets are employed to evaluate the performance of the proposed algorithm. Simulation results show that the proposed algorithm provides a better clustering result than the state-of-the-art heuristic algorithms compared in this paper, namely k-means, simulated annealing, tabu search, and the genetic k-means algorithm.
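A toy sketch of the switching idea follows (improvement detection only; the paper's diversity-detection operator and its full pool of heuristics are omitted, and all names and data here are invented). Heuristic A performs one k-means refinement step, heuristic B perturbs one centroid, and the controller switches whenever the current heuristic stalls:

```python
# Illustrative hyper-heuristic loop: greedily accept improving candidates
# and switch heuristics after `patience` non-improving iterations.
import random

def sse(pts, cents):
    """Sum of squared distances from each point to its nearest centroid."""
    return sum(min((x - cx) ** 2 + (y - cy) ** 2 for cx, cy in cents)
               for x, y in pts)

def kmeans_step(pts, cents):
    """Heuristic A: one Lloyd iteration (assign points, recompute means)."""
    groups = [[] for _ in cents]
    for p in pts:
        i = min(range(len(cents)),
                key=lambda j: (p[0] - cents[j][0]) ** 2 + (p[1] - cents[j][1]) ** 2)
        groups[i].append(p)
    new = []
    for i, g in enumerate(groups):
        if g:
            new.append((sum(x for x, _ in g) / len(g),
                        sum(y for _, y in g) / len(g)))
        else:
            new.append(cents[i])    # keep an empty cluster's old centroid
    return new

def restart_one(pts, cents):
    """Heuristic B: move one centroid onto a random point (perturbation)."""
    cents = list(cents)
    cents[random.randrange(len(cents))] = random.choice(pts)
    return cents

def hyper_heuristic(pts, k, iters=200, patience=5):
    cents = random.sample(pts, k)
    cost = sse(pts, cents)
    pool, h, stall = [kmeans_step, restart_one], 0, 0
    for _ in range(iters):
        cand = pool[h](pts, cents)
        cand_cost = sse(pts, cand)
        if cand_cost < cost:
            cents, cost, stall = cand, cand_cost, 0
        else:
            stall += 1
            if stall >= patience:   # improvement detection: switch heuristics
                h, stall = 1 - h, 0
    return cents, cost

random.seed(1)
pts = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
       + [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(50)])
print(hyper_heuristic(pts, k=2))   # two centroids near (0,0) and (5,5)
```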
