1 |
Relationships between chlorophyll concentration and marine environmental factors in the Kuroshio and its adjacent waters off eastern Taiwan. Liu, Hsin-yu, 15 January 2008
Data on various marine environmental factors collected and integrated by the National Center for Ocean Research (NCOR) were used to search for possible statistical relationships with chlorophyll concentration in the Kuroshio and its adjacent waters off eastern Taiwan. The natural logarithm of SeaWiFS chlorophyll concentration was used as the dependent variable in General Linear Model (GLM) analysis, followed by least-squares means (lsmeans) and cluster analysis. The study area ranged from 21.5°N, 121°E to 26.5°N, 125°E off eastern Taiwan. Data were first assembled, screened, transformed to natural logarithms, and reorganized into monthly averages for individual geographical grid points of 10′ × 10′.
The results of the GLM analysis show that all factors have significant relationships with chlorophyll concentration; more than 20 regression formulae were found with different combinations of variables. Results of standardized regression analysis show their order of importance as: latitude, depth, longitude, light, SST, east-west current in the upper 20 m (c20EW), north-south current in the upper 20 m (c20NS), and eddy kinetic energy (EKE). Results of lsmeans listed by latitude and by longitude showed that areas of higher chlorophyll concentration lie at high latitudes and low longitudes but not in between, and areas near eastern Taiwan tend to have high concentrations that decrease eastward. Results of cluster analysis indicated that chlorophyll concentrations at the western longitudes, and at the northern as well as southern latitudes, differ from the other areas.
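
As an illustration of the modeling step described above, here is a minimal sketch in Python of a GLM-style least-squares fit of log-chlorophyll against gridded predictors. The data are synthetic and all column names (lat, lon, depth, sst) and coefficients are illustrative, not taken from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "lat":   rng.uniform(21.5, 26.5, n),    # degrees N
    "lon":   rng.uniform(121.0, 125.0, n),  # degrees E
    "depth": rng.uniform(10, 5000, n),      # metres
    "sst":   rng.uniform(20, 30, n),        # sea-surface temperature
})
# Synthetic response: higher chlorophyll at high latitude / low longitude.
df["ln_chl"] = (0.3 * df.lat - 0.2 * df.lon - 0.0001 * df.depth
                + rng.normal(0, 0.5, n))

# Ordinary least squares on log-transformed chlorophyll, as in the study.
model = smf.ols("ln_chl ~ lat + lon + depth + sst", data=df).fit()
print(model.summary())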
|
2 |
The GDense Algorithm for Clustering Data Streams with High Quality. Lin, Shu-Yi, 25 June 2009
In recent years, mining data streams has been widely studied. A data stream is a sequence of dynamic, continuous, unbounded, real-time data items arriving at a very high rate that can be read only once. In data mining, clustering is one of the useful techniques for discovering interesting patterns in the underlying data objects. The clustering problem can be defined formally as follows: given n data points in a d-dimensional metric space, partition the data points into k clusters such that the data points within a cluster are more similar to each other than to data points in different clusters. In the data stream environment, the difficulties of clustering include storage overhead, low clustering quality, and low updating efficiency. Current clustering algorithms can be broadly classified into four categories: partitioning, hierarchical, density-based, and grid-based approaches. The advantage of the grid-based approach is that it can handle large databases. In the density-based approach, the insertion or deletion of a data point affects the current clustering only in the neighborhood of that point. Combining the advantages of the grid-based and density-based approaches, the CDS-Tree algorithm was proposed. Although it can handle large databases, its clustering quality is restricted by the grid partition and the dense-cell threshold. Therefore, in this thesis, we present a new high-quality clustering algorithm for data streams, GDense. The GDense algorithm achieves high quality through two kinds of partitions, cells and quadcells, and two kinds of thresholds, the dense-cell threshold and one quarter of it for quadcells. Moreover, in our GDense algorithm, the seven cases in the data insertion part take three factors about the cell and the quadcell into consideration, and the ten cases in the deletion part take five factors about the cell into consideration. In our simulation results, regardless of the conditions (including the number of data points, the number of cells, the size of the sliding window, and the dense-cell threshold), the clustering purity of our GDense algorithm is always higher than that of the CDS-Tree algorithm. Moreover, we compare the purity of our GDense algorithm and the CDS-Tree algorithm in the presence of outliers. Whether the number of outliers is large or small, the clustering purity of our GDense algorithm remains higher than that of CDS-Tree, improving clustering purity by about 20% compared with the CDS-Tree algorithm.
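
The abstract does not reproduce GDense itself; the sketch below only illustrates the grid/density idea it builds on: hash points into cells, keep cells that meet a density threshold, and merge edge-adjacent dense cells into clusters. Quadcells, the sliding window, and the incremental insertion/deletion cases are omitted.

from collections import defaultdict, deque

def grid_cluster(points, cell_size, dense_threshold):
    """Assign 2-D points to cells, keep dense cells, and merge
    edge-adjacent dense cells into clusters (connected components)."""
    cells = defaultdict(list)
    for p in points:
        key = (int(p[0] // cell_size), int(p[1] // cell_size))
        cells[key].append(p)

    dense = {k for k, pts in cells.items() if len(pts) >= dense_threshold}

    clusters, seen = [], set()
    for start in dense:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            cx, cy = queue.popleft()
            comp.append((cx, cy))
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in dense and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append([p for c in comp for p in cells[c]])
    return clusters

pts = [(0.1, 0.1), (0.2, 0.15), (0.12, 0.3), (5.0, 5.0)]
print(grid_cluster(pts, cell_size=1.0, dense_threshold=2))  # one cluster; (5, 5) is sparse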
|
3 |
Adaptive Grid-Based Data Collection Scheme for Multiple Mobile Sinks in Wireless Sensor Networks. Liu, Wei-chang, 28 June 2007
Wireless Sensor Networks (WSNs) have become a popular wireless technology in recent years. In a WSN, a large number of sensors collect data and forward it hop by hop to a sink. Due to unbalanced traffic load, some grid nodes may consume more energy, and their packet loss ratio may increase as well. To address these shortcomings, in this thesis we propose an Adaptive Grid-based Data Collection (AGDC) scheme. Because a mobile sink moves, the traffic load of the primary grid nodes can change. According to the distribution of traffic load, AGDC adjusts the transmission range to allocate one or more temporary grid nodes between two primary grid nodes. Through the added temporary grid nodes, traffic load is evenly dispersed among the grid nodes. Primary grid nodes can use lower transmission power to save energy, and temporary grid nodes buffer data to reduce the packet loss ratio. For evaluation, we perform simulations in NS-2. With the proposed AGDC scheme, the transmission range of a primary grid node can be set to an appropriate distance to reduce power consumption and packet loss ratio. Since the packet loss ratio is reduced, the throughput of the entire WSN is increased.
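
As a rough illustration of the temporary-grid-node idea (not code from the thesis), the sketch below places evenly spaced relay positions between two primary grid nodes once the transmission range has been reduced; all names are illustrative.

import math

def temporary_node_positions(primary_a, primary_b, tx_range):
    """Return intermediate relay positions so every hop <= tx_range."""
    dx, dy = primary_b[0] - primary_a[0], primary_b[1] - primary_a[1]
    dist = math.hypot(dx, dy)
    hops = max(1, math.ceil(dist / tx_range))  # hops needed at this range
    return [(primary_a[0] + dx * i / hops,
             primary_a[1] + dy * i / hops) for i in range(1, hops)]

# Two primary grid nodes 300 m apart, transmission range cut to 100 m:
print(temporary_node_positions((0, 0), (300, 0), 100))  # two relay positions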
|
4 |
Efficient Grid-Based Techniques for Density Functional Theory. Rodriguez-Hernandez, Juan I., 05 1900
Understanding the chemical and physical properties of molecules and materials at a fundamental level often requires quantum-mechanical models of these substances' electronic structure. This type of many-body quantum-mechanical calculation is computationally demanding, hindering its application to substances with more than a few hundred atoms. The overarching goal of much research in quantum chemistry, and the topic of this dissertation, is to develop more efficient computational algorithms for electronic structure calculations. In particular, this dissertation develops two new numerical integration techniques for computing molecular and atomic properties within conventional Kohn-Sham Density Functional Theory (KS-DFT) of molecular electronic structure.
The first of these grid-based techniques is based on the transformed sparse grid construction. In this construction, a sparse grid is generated in the unit cube and then mapped to real space according to the pro-molecular density using the conditional distribution transformation. The transformed sparse grid was implemented in the program deMon2k, where it is used as the numerical integrator for the exchange-correlation energy and potential in the KS-DFT procedure. We tested our grid by computing ground-state energies, equilibrium geometries, and atomization energies. The accuracy of these test calculations shows that our grid is more efficient than some previous integration methods: our grids use fewer points to obtain the same accuracy. The transformed sparse grids were also tested for integrating, interpolating, and differentiating in different dimensions (n = 1, 2, 3, 6). The second technique is a grid-based method for computing atomic properties within the quantum theory of atoms in molecules (QTAIM). It was also implemented in deMon2k. The performance of the method was tested by computing QTAIM atomic energies, charges, dipole moments, and quadrupole moments. For medium accuracy, our method is the fastest one we know of.
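
A hedged one-dimensional analogue of the conditional distribution transformation may help: quadrature points placed uniformly on (0, 1) are pushed through the inverse CDF of a model density, concentrating points where the integrand is large. The actual deMon2k grids are multi-dimensional sparse grids; this sketch shows only the transformation idea.

import numpy as np
from scipy import stats

def transformed_quadrature(f, n_points, model=stats.norm(0.0, 1.0)):
    # Midpoint rule on (0, 1), then inverse-CDF transform to the real
    # line: points concentrate where the model density is large.
    u = (np.arange(n_points) + 0.5) / n_points
    x = model.ppf(u)
    # Change of variables: integral of f(x) dx = mean of f(x_i) / p(x_i).
    return np.sum(f(x) / model.pdf(x)) / n_points

# Gaussian integrand; the exact value is sqrt(pi) ~ 1.77245.
print(transformed_quadrature(lambda x: np.exp(-x ** 2), 2000))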
|
5 |
An Efficient Hilbert Curve-based Clustering Strategy for Large Spatial Databases. Lu, Yun-Tai, 25 July 2003
Recently, millions of databases have come into use, and we need new techniques that can automatically transform the stored data into useful information and knowledge. Data mining is the technique of analyzing data to discover previously unknown information, and spatial data mining is the branch of data mining that deals with spatial data. In spatial data mining, clustering is one of the useful techniques for discovering interesting patterns in the underlying data objects. The clustering problem is: given n data points in a d-dimensional metric space, partition the data points into k clusters such that the data points within a cluster are more similar to each other than to data points in different clusters. Cluster analysis has been widely applied to many areas such as medicine, social studies, bioinformatics, map regions, and GIS. In recent years, many researchers have focused on finding efficient methods for the clustering problem. In general, these clustering algorithms can be classified into four approaches: partitioning, hierarchical, density-based, and grid-based. The k-means algorithm, based on the partitioning approach, is probably the most widely applied clustering method, but it has major drawbacks: it is difficult to determine the parameter k that represents the 'natural' clusters, it is only suitable for convex spherical clusters, and its high computational complexity makes it unable to handle large databases. Therefore, in this thesis, we present an efficient clustering algorithm for large spatial databases that combines the hierarchical approach with the grid-based structure. We apply the grid-based approach because it is efficient for large spatial databases, and the hierarchical approach to find the genuine clusters by repeatedly merging blocks. Basically, we use the Hilbert curve to linearly order the points of a grid. The Hilbert curve is a kind of space-filling curve: a continuous path that passes through every point in a space exactly once, forming a one-to-one correspondence between the coordinates of the points and their one-dimensional sequence numbers on the curve. The goal of using a space-filling curve is to preserve distance, so that points which are close in 2-D space and represent similar data are stored close together in the linear order. This mapping also minimizes disk access effort and provides high speed for clustering. The new algorithm requires only one input parameter and supports the user in determining an appropriate value for it. Our simulations show that the proposed clustering algorithm has shorter execution times than other algorithms on large databases; as the number of data points increases, its execution time increases slowly. Moreover, our algorithm can handle clusters with arbitrary shapes that the k-means algorithm cannot discover.
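
For concreteness, the sketch below shows the standard rotate-and-flip computation of a point's position along a 2-D Hilbert curve and uses it to sort grid cells into the linear order described above; it is the generic construction, not the thesis's implementation.

def hilbert_index(order, x, y):
    """Distance of cell (x, y) along a Hilbert curve filling a
    2**order x 2**order grid (standard rotate-and-flip construction)."""
    n = 2 ** order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so sub-curves keep one orientation.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Cells that are close in 2-D space tend to stay close in this order.
cells = [(0, 0), (7, 7), (1, 0), (0, 1), (6, 7)]
print(sorted(cells, key=lambda c: hilbert_index(3, *c)))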
|
6 |
Non-Uniform Grid-Based Coordinated Routing in Wireless Sensor Networks. Kadiyala, Priyanka, 08 1900
Wireless sensor networks are ad hoc networks of tiny battery-powered sensor nodes that organize themselves into networks and collect information such as temperature, light, and pressure in an area. Though the applications of sensor networks are very promising, sensor nodes are limited in their capabilities by many factors, the main one being energy. Sensor networks are expected to work for long periods once deployed, so conserving the nodes' battery life is important for extending network lifetime. This work examines a non-uniform grid-based routing protocol as an effort to minimize energy consumption in the network and extend network lifetime. The entire test area is divided into non-uniformly shaped grids. Fixed source and sink nodes with unlimited energy are placed in the network, and sensor nodes with full battery life are deployed uniformly at random in the field. The source node floods the network with only the coordinator node active in each grid and the other nodes sleeping; the sink node traces the same route back to the source through the same coordinators. This process continues until a coordinator node runs out of energy, at which point new coordinator nodes are elected to participate in routing. The network thus stays alive until the link between the source and sink nodes is lost, i.e., until the network is partitioned. This work explores the efficiency of the non-uniform grid-based routing protocol for different node densities, and the non-uniform grid structure that best extends network lifetime.
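
A minimal sketch of the coordinator-election step described above, assuming each node reports its grid and residual energy; the field names and the energy threshold are illustrative, not from the thesis.

def elect_coordinators(nodes, low_energy=0.1):
    """nodes: list of dicts with 'id', 'grid', and 'energy' (joules).
    Pick the highest-energy node per grid as coordinator."""
    by_grid = {}
    for node in nodes:
        g = node["grid"]
        if g not in by_grid or node["energy"] > by_grid[g]["energy"]:
            by_grid[g] = node
    # Grids whose best node is nearly drained can no longer route; once a
    # grid on the source-sink path is empty, the network is partitioned.
    return {g: (n if n["energy"] > low_energy else None)
            for g, n in by_grid.items()}

nodes = [{"id": 1, "grid": (0, 0), "energy": 0.9},
         {"id": 2, "grid": (0, 0), "energy": 0.4},
         {"id": 3, "grid": (1, 0), "energy": 0.05}]
print(elect_coordinators(nodes))  # grid (1, 0) has no viable coordinator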
|
7 |
A Comparison of Air Flow Simulation Techniques in Architectural Design. Yuanpei Zhao, 06 May 2021
Computer fluid simulation generates realistic animations of fluids by solving the Navier-Stokes equations. Simulation methods divide into two types: grid-based methods and particle-based methods. The former are widely used for scientific computation because of their precision, while the latter are used in visual effects, games, and other areas requiring real-time simulation because of their lower computation time.

Indoor airflow simulation with HVAC systems in building design is one specific application in scientific computation, and it uses grid-based simulation just as general-purpose simulation does. This study addresses the problem that such airflow simulations using grid-based methods are very time consuming and usually require designers to pre-process the building model, which takes time, money, and effort. On the other hand, particle-based methods offer less computation time with acceptable accuracy in indoor airflow simulations, because this kind of simulation does not require very high precision.

This study therefore conducts a detailed and practical comparison of different fluid simulation algorithms across both grid-based and particle-based methods. Its deliverable is a comparison between particle-based and grid-based methods in indoor airflow simulations with an HVAC system.

The methodology has two parts. The benchmark data are gathered from a CFD software simulation using the finite volume method (FVM) at a decent grid resolution. The particle-based data are generated by simulation algorithms over the same set of room and furniture models, implemented with OpenGL and CUDA. After the benchmark FVM simulation is run in the CFD software, the temperature field of the airflow is measured; after the particle simulations, the temperature field is obtained for each of the four particle-based methods. A comparison standard is set and the data are analyzed to reach a conclusion. The results show that, over a short simulation time period and after finding a proper number of particles, the particle-based methods achieve acceptable accuracy in the temperature and velocity fields while using much less time.
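
As an example of a particle-based building block (not the study's code), the sketch below estimates fluid density at each particle with the poly6 smoothing kernel used in smoothed particle hydrodynamics (SPH); a full indoor airflow solver would add pressure, viscosity, and boundary handling.

import numpy as np

def sph_density(positions, mass, h):
    """positions: (n, 3) particle coordinates; h: smoothing radius."""
    coeff = 315.0 / (64.0 * np.pi * h ** 9)        # poly6 normalization
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = np.sum(diff * diff, axis=-1)              # pairwise squared distances
    w = np.where(r2 < h * h, coeff * (h * h - r2) ** 3, 0.0)
    return mass * w.sum(axis=1)                    # kernel-weighted mass

pts = np.random.default_rng(1).uniform(0.0, 0.5, size=(100, 3))
print(sph_density(pts, mass=0.02, h=0.1)[:5])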
|
8 |
Spatial Operations in a GIS-Based Karst Feature Database. Gao, Yongli, 01 May 2008
This paper presents the spatial implementation of the karst feature database (KFD) of Minnesota in a GIS environment. ESRI's ArcInfo and ArcView GIS packages were used to analyze and manipulate the spatial operations of the KFD. Spatial operations were classified into three data-manipulation categories: single-layer operations, multiple-layer operations, and other spatial transformations. Most of the spatial operations discussed in this paper can be conducted using ArcInfo, ArcView, and ArcGIS. A set of strategies and rules was proposed and used to build the spatial operational module in the KFD, making the spatial operations more efficient and topologically correct.
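
As a rough illustration of the first two categories (not the paper's ArcInfo/ArcView workflows), the sketch below uses the open-source shapely library: a single-layer buffer around a karst feature and a multiple-layer intersection overlay. Coordinates and names are invented.

from shapely.geometry import Point, Polygon

# Single-layer operation: buffer a karst feature (e.g. a sinkhole point).
sinkhole = Point(500.0, 700.0)
protection_zone = sinkhole.buffer(100.0)      # 100 m buffer polygon

# Multiple-layer operation: overlay the buffer with a bedrock polygon.
bedrock = Polygon([(400, 600), (800, 600), (800, 900), (400, 900)])
overlap = protection_zone.intersection(bedrock)
print(round(overlap.area, 1))                 # area of the zone on bedrock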
|
9 |
A User Experience Evaluation of AAC Software. Frisch, Blade William Martin, 12 August 2020
No description available.
|
10 |
Development of a visualization and information management platform in translational biomedical informatics. Stokes, Todd Hamilton, 06 April 2009
Translational Biomedical Informatics (TBMI) is an emerging discipline expanding beyond traditional bioinformatics, with a focus on developing computational technologies for real-world biomedical practice. The goal of my Ph.D. research is to address a few key challenges in TBMI, including: (1) the high quality and reproducibility required by medical applications when processing high-throughput data; (2) the need for knowledge management solutions that allow molecular data to be handled and evaluated by researchers, regulators, and doctors collectively; (3) the need for near-real-time, efficient access to decision-oriented visualizations of integrated data and data-processing results; and (4) the need for an integrated solution that can evolve as medical consensus evolves, without requiring retraining, overhaul, or replacement. This dissertation resulted in the development and adoption of concrete web-based application deliverables in regular use by bioinformaticians, clinicians, biologists, and nanotechnologists. These include the Chip Artifact Correction (caCORRECT) web site and grid services, the ArrayWiki community microarray repository, and the SimpleVisGrid visualization grid services (including eGOMiner, nanoDRIVE, PathwayVis, and SphingoVisGrid).
|