
Planification d'expériences numériques en phase exploratoire pour la simulation des phénomènes complexes [Design of computer experiments in the exploratory phase for simulating complex phenomena]

Franco, Jessica 10 September 2008
Numerical simulation models increasingly complex phenomena. Such problems, often high-dimensional, require sophisticated simulation codes that are computationally expensive, so resorting to the simulator systematically becomes impractical. The preferred approach is to define a reduced number of simulations, organized according to a design of numerical experiments, from which a response surface is fitted to approximate the simulator. We consider here designs generated for deterministic simulators in the exploratory phase, i.e., when no a priori knowledge is available. The designs therefore require certain properties, such as filling the space and good distribution of the points in projection. Two indicators quantifying the intrinsic quality of designs were developed. The core of this work concerns a design-construction procedure based on simulating samples from a probability distribution by a Markov chain Monte Carlo method.
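
The construction described here — drawing design points from a probability distribution by MCMC — can be sketched as follows. This is a hedged illustration, not the thesis's exact procedure: the repulsive Strauss-like density, its parameters (radius, gamma), and the maximin quality indicator are assumptions chosen to make the idea concrete.

```python
import numpy as np

def maximin(X):
    """Maximin quality indicator: smallest pairwise distance (larger is better)."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return d[np.triu_indices(len(X), k=1)].min()

def strauss_design(n=20, dim=2, radius=0.2, gamma=5.0, iters=20000, seed=0):
    """Sample a space-filling design from a repulsive (Strauss-like) density
    pi(X) ~ exp(-gamma * #{pairs closer than radius}) via Metropolis-Hastings."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, dim))

    def n_close(X):
        d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        return (d[np.triu_indices(n, k=1)] < radius).sum()

    energy = n_close(X)
    for _ in range(iters):
        i = rng.integers(n)
        old = X[i].copy()
        X[i] = rng.random(dim)            # propose moving one point uniformly
        new_energy = n_close(X)
        # accept with probability min(1, exp(-gamma * (new_energy - energy)))
        if rng.random() >= np.exp(-gamma * (new_energy - energy)):
            X[i] = old                    # reject: restore the point
        else:
            energy = new_energy
    return X

X = strauss_design()
print(f"maximin distance of the design: {maximin(X):.3f}")
```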

Effective Resource Allocation for Non-cooperative Spectrum Sharing

Jacob-David, Dany D. 13 October 2011
Spectrum access protocols have been proposed recently to provide flexible and efficient use of the available bandwidth. Game theory has been applied to the analysis of the problem to determine the most effective allocation of the users’ power over the bandwidth. However, prior analysis has focused on Shannon capacity as the utility function, even though it is known that real signals do not, in general, meet the Gaussian distribution assumptions of that metric. In a non-cooperative spectrum sharing environment, the Shannon capacity utility function results in a water-filling solution. In this thesis, the suitability of the water-filling solution is evaluated under non-Gaussian signalling, first in a frequency non-selective environment to focus on the resource allocation problem and its outcomes. The evaluation is then extended to a frequency-selective environment to examine the proposed algorithm in a more realistic wireless setting. It is shown in both scenarios that more effective resource allocation can be achieved when the utility function takes the actual signal characteristics into account. Further, it is demonstrated that higher rates can be achieved with lower transmitted power, resulting in a smaller spectral footprint, which allows more efficient use of the spectrum overall. Finally, future spectrum management is discussed, where waveform adaptation is examined as an additional option to the well-known spectrum agility, rate, and transmit power adaptation when performing spectrum sharing.
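
For reference, the water-filling allocation that the abstract identifies as the Shannon-capacity baseline has a standard closed form. A minimal sketch (the gains, noise levels, and power budget are made-up inputs):

```python
import numpy as np

def water_filling(gains, noise, total_power):
    """Allocate power across parallel channels to maximize
    sum_k log2(1 + g_k * p_k / n_k) subject to sum_k p_k = total_power."""
    inv = np.asarray(noise, float) / np.asarray(gains, float)  # noise-to-gain per channel
    inv_sorted = np.sort(inv)
    # Find the water level mu by trying k active channels at a time.
    for k in range(len(inv), 0, -1):
        mu = (total_power + inv_sorted[:k].sum()) / k
        if mu > inv_sorted[k - 1]:        # all k channels get positive power
            break
    return np.maximum(mu - inv, 0.0)

p = water_filling(gains=[1.0, 0.5, 0.1], noise=[1.0, 1.0, 1.0], total_power=4.0)
print(p, p.sum())                          # weakest channel gets zero power
```

Channels whose noise-to-gain ratio exceeds the water level receive zero power — exactly the behavior the thesis revisits once the Gaussian-signalling assumption is dropped.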

Mapping and Filling Metabolic Pathway Holes

Kaur, Dipendra 21 April 2008
The network-mapping tool integrated with protein database search can be used for filling pathway holes. A metabolic pathway under consideration (the pattern) is mapped onto a known metabolic pathway (the text) to find pathway holes. Enzymes that do not show up in the pattern may be holes in the pattern pathway or an indication of an alternative pattern pathway. We present a data-mining framework for filling holes in the pattern metabolic pathway based on protein function, PROSITE scan, and protein sequence homology. Using this framework, we suggest several fillings: enzymes found with the same EC notation, group neighbors (enzymes with the same EC number in the first three positions but differing in the fourth), and instances where the function of an enzyme has been taken up by the left or right neighboring enzyme in the pathway. The percentile scores are better when closely related organisms are mapped than when distantly related organisms are mapped.
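
A minimal sketch of the EC-number matching described above, assuming four-field EC strings (the example EC numbers are hypothetical):

```python
def ec_match(pattern_ec: str, candidate_ec: str) -> str:
    """Classify a candidate enzyme against a pathway hole: 'exact' if all
    four EC fields agree, 'group neighbor' if only the fourth differs."""
    p, c = pattern_ec.split("."), candidate_ec.split(".")
    if p == c:
        return "exact"
    if p[:3] == c[:3]:
        return "group neighbor"
    return "no match"

print(ec_match("2.7.1.1", "2.7.1.2"))   # group neighbor
print(ec_match("2.7.1.1", "2.7.2.1"))   # no match
```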

Design of a Depth-Image-Based Rendering (DIBR) 3D Stereo View Synthesis Engine

Chang, Wei-Chun 01 September 2011
Depth-Image-Based Rendering (DIBR) is a popular method to generate a 3D virtual image at a different view position from an image and a depth map. In general, DIBR consists of two major operations: image warping and hole filling. Image warping calculates the disparity from the depth map, given information about the viewer and the display screen. Hole filling calculates the colors of pixel locations that do not correspond to any pixel of the original image after warping. Although there are many hole filling methods that determine the colors of the blank pixels, some undesirable artifacts are still observed in the synthesized virtual image. In this thesis, we present an approach that examines the geometry information near the region of blank pixels in order to reduce the artifacts near the edges of objects. Experimental results show that the proposed design generates more natural shapes around the edges of objects, at the cost of more hardware and computation time.
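
The two DIBR operations can be made concrete with a toy sketch: horizontal warping by a depth-derived disparity, then row-wise nearest-valid-pixel hole filling. The pinhole disparity model, its parameters, and the naive fill rule are simplifying assumptions, not the thesis's geometry-aware design.

```python
import numpy as np

def dibr_warp(image, depth, baseline=0.004, focal=500.0):
    """Warp a grayscale image to a virtual view: shift each pixel
    horizontally by a depth-derived disparity, then fill holes by copying
    the nearest valid pixel to the left on the same row."""
    h, w = depth.shape
    warped = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    # Toy pinhole model; assumes depth > 0 everywhere.
    disparity = np.round(baseline * focal / depth).astype(int)
    for y in range(h):
        for x in range(w):          # later pixels overwrite earlier ones;
            nx = x + disparity[y, x]  # a real engine resolves occlusion by depth
            if 0 <= nx < w:
                warped[y, nx] = image[y, x]
                filled[y, nx] = True
        for x in range(1, w):       # naive hole filling (left edge stays empty)
            if not filled[y, x]:
                warped[y, x] = warped[y, x - 1]
    return warped

img = np.arange(16, dtype=float).reshape(4, 4)
dep = np.full((4, 4), 2.0)
print(dibr_warp(img, dep))
```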

Efficient Access Methods on the Hilbert Curve

Wu, Chen-Chang 18 June 2012
The design of multi-dimensional access methods is difficult compared to the one-dimensional case because there is no total ordering that preserves spatial locality. One way is to look for a total order that preserves spatial proximity at least to some extent. A space-filling curve is a continuous path which passes through every point in a space exactly once, giving a one-to-one correspondence between the coordinates of the points and the 1-D sequence numbers of the points on the curve. The Hilbert curve is a famous space-filling curve, since it has been shown to have strong locality-preserving properties; that is, it is the best space-filling curve in minimizing the number of clusters. Hence, it has been extensively used to maintain the spatial locality of multidimensional data in a wide variety of applications. A window query is an important query operation in spatial (image) databases. Given a Hilbert curve, a window query reports the corresponding orders without the need to decode all the points inside the window into the corresponding Hilbert orders. Chung et al. have proposed an algorithm for decomposing a window into the corresponding Hilbert orders. However, the Hilbert curve requires that the region be of size 2^k x 2^k, where k∈N. The intuitive method, as in Chung et al.'s algorithm, is to directly use Hilbert curves in the decomposed areas and then connect them; it must additionally generate a sequence of the scanned quadrants before encoding and decoding the Hilbert order of a pixel, and scan this sequence once for each pixel encoded or decoded. In this dissertation, for window queries on a Hilbert curve, we propose an efficient algorithm, named Quad-Splitting, for decomposing a window into the corresponding Hilbert orders without the individual sorting and merging steps that Chung et al.'s algorithm requires. From our experimental results, we show that the Quad-Splitting algorithm outperforms Chung et al.'s algorithm. For generating the Hilbert curve of an arbitrary-sized image, we propose an approximately even partition approach that generates a pseudo Hilbert curve. From our experimental results, we show that the proposed pseudo Hilbert curve preserves a locality property similar in strength to that of the Hilbert curve. For coding the Hilbert curve of an arbitrary-sized image, we propose encoding and decoding algorithms. From our experimental results, we show that our encoding and decoding algorithms outperform Chung et al.'s algorithms.
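
For context, the bijection between grid coordinates and Hilbert orders on a 2^k x 2^k grid — the mapping the window-query algorithms above decompose windows against — is the classic bitwise conversion below. This is the textbook mapping, not the dissertation's Quad-Splitting or pseudo-Hilbert construction.

```python
def xy2d(n, x, y):
    """Map (x, y) on an n x n grid (n a power of two) to its Hilbert order."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/reflect so every sub-square is traversed in one orientation
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def d2xy(n, d):
    """Inverse mapping: Hilbert order d back to grid coordinates (x, y)."""
    x = y = 0
    s = 1
    while s < n:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# Round-trip check on an 8 x 8 grid.
assert all(d2xy(8, xy2d(8, x, y)) == (x, y) for x in range(8) for y in range(8))
```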

An Efficient Hilbert Curve-based Clustering Strategy for Large Spatial Databases

Lu, Yun-Tai 25 July 2003
Recently, millions of databases have come into use, and we need techniques that can automatically transform the processed data into useful information and knowledge. Data mining is the technique of analyzing data to discover previously unknown information, and spatial data mining is the branch of data mining that deals with spatial data. In spatial data mining, clustering is one of the useful techniques for discovering interesting patterns in the underlying data objects. The clustering problem is: given n data points in a d-dimensional metric space, partition the data points into k clusters such that the data points within a cluster are more similar to each other than data points in different clusters. Cluster analysis has been widely applied to many areas such as medicine, social studies, bioinformatics, map regions, and GIS. In recent years, many researchers have focused on finding efficient methods for the clustering problem. In general, these clustering algorithms can be classified into four approaches: partitioning, hierarchical, density-based, and grid-based. The k-means algorithm, which is based on the partitioning approach, is probably the most widely applied clustering method. But a major drawback of the k-means algorithm is that it is difficult to determine the parameter k to represent 'natural' clusters, and it is only suitable for convex spherical clusters. The k-means algorithm also has high computational complexity and is unable to handle large databases. Therefore, in this thesis, we present an efficient clustering algorithm for large spatial databases that combines the hierarchical approach with a grid-based structure. We apply the grid-based approach because it is efficient for large spatial databases, and we apply the hierarchical approach to find the genuine clusters by repeatedly merging blocks. Basically, we make use of the Hilbert curve to provide a way to linearly order the points of a grid. Note that the Hilbert curve is a kind of space-filling curve, where a space-filling curve is a continuous path which passes through every point in a space exactly once, forming a one-to-one correspondence between the coordinates of the points and the one-dimensional sequence numbers of the points on the curve. The goal of using a space-filling curve is to preserve distance: points that are close in 2-D space and represent similar data should be stored close together in the linear order. This kind of mapping can also minimize disk access effort and provide high speed for clustering. The new algorithm requires only one input parameter and supports the user in determining an appropriate value for it. In our simulation, we show that our proposed clustering algorithm can have shorter execution time than other algorithms for large databases: as the number of data points increases, the execution time of our algorithm increases slowly. Moreover, our algorithm can deal with clusters of arbitrary shapes, which the k-means algorithm cannot discover.
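
A hedged sketch of the general idea, not the thesis's algorithm: bin points into a grid, order occupied cells by Hilbert index, and merge runs of dense cells that are adjacent along the curve. The grid size, density threshold, and gap are illustrative parameters.

```python
import numpy as np

def xy2d(n, x, y):
    """Canonical Hilbert mapping from cell (x, y) on an n x n grid to order."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_grid_clusters(points, grid=16, min_pts=3, gap=1):
    """Bin 2-D points in [0,1)^2 into a grid, order occupied cells by Hilbert
    index, and merge runs of dense cells whose indices are within `gap`."""
    cells = {}
    for i, (px, py) in enumerate(points):
        cells.setdefault((int(px * grid), int(py * grid)), []).append(i)
    dense = sorted((xy2d(grid, x, y), idxs)
                   for (x, y), idxs in cells.items() if len(idxs) >= min_pts)
    clusters, last_d = [], None
    for d, idxs in dense:
        if last_d is not None and d - last_d <= gap:
            clusters[-1].extend(idxs)   # adjacent along the curve: same cluster
        else:
            clusters.append(list(idxs))
        last_d = d
    return clusters

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal([0.3, 0.3], 0.05, (40, 2)),
                 rng.normal([0.7, 0.7], 0.05, (40, 2))])
print([len(c) for c in hilbert_grid_clusters(np.clip(pts, 0, 0.999))])
```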

A Local Expansion Approach for Continuous Nearest Neighbor Queries

Liu, Ta-Wei 16 June 2008
Queries on spatial data commonly concern a certain range or area, for example, queries related to intersection, containment, and nearest neighbors. The Continuous Nearest Neighbor (CNN) query is one kind of nearest neighbor query; for example, people may want to know where the gas stations are along the highway from a starting position to an ending position. Because there is no total ordering of spatial proximity among spatial objects, the space-filling curve (SFC) approach has been proposed to preserve spatial locality. Chen and Chang have proposed efficient SFC-based algorithms to answer nearest neighbor queries, so one may answer a CNN query in a centralized system by performing a sequence of individual nearest neighbor queries with one of their algorithms. However, the searched ranges of these nearest neighbor queries may overlap, and the queries may access several of the same pages on disk, resulting in many redundant disk accesses. On the other hand, Zheng et al. have proposed an algorithm based on the Hilbert curve for the CNN query in a wireless broadcast environment; it contains two phases. In the first phase, Zheng et al.'s algorithm designs a searched range to find candidate objects. In the second phase, it uses some heuristics to filter the candidate objects for the final answer. However, Zheng et al.'s algorithm may check some data blocks twice or check some useless data blocks, resulting in redundant disk accesses. Therefore, in this thesis, to avoid these disadvantages in the first phase of Zheng et al.'s algorithm, we propose a local expansion approach based on the Peano curve for the CNN query in a centralized system. In the first phase, we determine the searched range that yields all candidate objects: we first calculate the route between the starting point and the ending point, then move forward one block at a time from the starting point to the ending point, locally spreading the searched range to find candidate objects. In the second phase, we use the heuristics from Zheng et al.'s algorithm to filter the candidate objects for the final answer. Based on this approach, we propose two algorithms: the forward moving (FM) algorithm, which assumes that each object lies at the center of a block, and the forward moving* (FM*) algorithm, which allows each object to lie anywhere in a block. Our local expansion approach avoids the duplicated checks of Zheng et al.'s algorithm and determines a searched range with higher accuracy. From our simulation results, we show that the FM and FM* algorithms outperform Zheng et al.'s algorithm in terms of accuracy and processing time.
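
A toy rendering of the local-expansion idea under strong assumptions (a grid of blocks, a precomputed route, square-ring growth around each block); the thesis's FM/FM* algorithms on the Peano curve are more involved than this sketch.

```python
import numpy as np

def local_expansion_candidates(route_cells, objects, k=1, max_r=64):
    """Walk the route block by block; around each block, grow a square ring
    (Chebyshev radius r) until at least k objects fall inside, and collect
    the union of hits as the CNN candidate set."""
    objs = np.asarray(objects)
    candidates = set()
    for cx, cy in route_cells:
        for r in range(1, max_r + 1):
            hits = np.where(np.maximum(np.abs(objs[:, 0] - cx),
                                       np.abs(objs[:, 1] - cy)) <= r)[0]
            if len(hits) >= k:
                candidates.update(int(i) for i in hits)
                break
    return candidates

# Toy usage: a straight route and a handful of object positions on a grid.
route = [(x, 0) for x in range(5)]
objects = [(1, 2), (3, -1), (4, 5), (9, 9)]
print(sorted(local_expansion_candidates(route, objects)))
```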

Comparison of the sealing ability of two different types of root canal obturation: cold lateral compaction and the continuous wave compaction technique

Hughart, Donald Wayne. January 2004
Thesis (M.S.)--West Virginia University, 2004. Title from document title page. Document formatted into pages; contains xi, 56 p. : ill. Vita. Includes abstract. Includes bibliographical references (p. 41-44).

Detecting Wetland Change through Supervised Classification of Landsat Satellite Imagery within the Tunkwa Watershed of British Columbia, Canada

Lee, Steven January 2011
Wetlands are considered to be one of the most valuable naturally occurring forms of land cover in the world. Hydrologic regulation, carbon sequestration, and habitat provision for a wide assortment of flora and fauna are just a few of the benefits associated with wetlands. Satellite remote sensing has been demonstrated to be a reliable approach to monitoring wetlands over time. Unfortunately, a national wetland inventory does not exist for Canada at this time. This study employs a supervised classification method on Landsat satellite imagery acquired between 1976 and 2008 within the Tunkwa watershed, southwest of Kamloops, British Columbia, Canada. Images from 2005 and 2008 were repaired using a gap-filling technique owing to the failure of the scan-line corrector on the Landsat 7 satellite in 2003. Percentage pixel counts for wetlands were compared, and a diminishing trend was identified: approximately 4.8% of wetland coverage was lost. The expansion of the Highland Valley Copper mine and of the forestry industry in the area may be the leading causes of wetland desiccation. This study demonstrates the feasibility of wetland monitoring using remote sensing and emphasizes the need for future work to compile a Canadian wetland inventory.
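
As an illustration of the supervised-classification workflow, a minimal sketch follows. The abstract does not name the classifier, so a random forest stands in as an assumption, and synthetic arrays stand in for Landsat band values and training labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: rows are pixels, columns are band values,
# labels distinguish wetland (1) from other land cover (0).
rng = np.random.default_rng(1)
bands_train = rng.random((500, 6))        # stand-in for sampled band spectra
labels_train = rng.integers(0, 2, 500)    # stand-in for ground-truth labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(bands_train, labels_train)

scene = rng.random((100, 100, 6))         # stand-in for a 6-band Landsat scene
pred = clf.predict(scene.reshape(-1, 6)).reshape(100, 100)
wetland_pct = 100.0 * (pred == 1).mean()
print(f"wetland coverage: {wetland_pct:.1f}% of pixels")
```

Comparing this per-pixel percentage across scenes from different years is the kind of change analysis the study performs.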
