1 |
PROBABILITY OF FALSE POLYNOMIAL DIVISION SYNCHRONIZATION USING SHORTENED CYCLIC CODES. Schauer, Anna Lynn; Ingels, Frank M. 11 1900 (has links)
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Shortened cyclic codes are not cyclic, but many cyclic shifts of various code words are still part of the shortened code set. This paper addresses the probability of false synchronization obtained through polynomial division of a serial shortened cyclic code stream in a “sliding” window correlator.
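As an illustration of the setting (not code from the paper), the sketch below slides a window over a serial bit stream and divides each window by an assumed generator polynomial over GF(2); a zero remainder marks a candidate codeword boundary, and a zero remainder at the wrong offset is exactly the false-synchronization event whose probability the paper analyzes. The generator polynomial and window length are illustrative choices.

```python
def gf2_remainder(bits, gen):
    """Remainder of the bit sequence `bits` divided by the generator polynomial
    `gen` (both MSB-first lists of 0/1) under modulo-2 polynomial arithmetic."""
    work = list(bits)
    for i in range(len(bits) - len(gen) + 1):
        if work[i]:
            for j, g in enumerate(gen):
                work[i + j] ^= g
    return work[len(bits) - (len(gen) - 1):]

def sync_candidates(stream, window_len, gen):
    """Window positions where the contents divide evenly by `gen`; any such
    position that is not a true codeword boundary is a false synchronization."""
    return [start for start in range(len(stream) - window_len + 1)
            if not any(gf2_remainder(stream[start:start + window_len], gen))]

# Example with an illustrative generator x^3 + x + 1 and a 7-bit window.
print(sync_candidates([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0], 7, [1, 0, 1, 1]))
```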
2 |
Reliable user datagram protocol (RUDP). Thammadi, Abhilash. January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / As network bandwidth and delay increase, TCP becomes inefficient. Data-intensive applications over high-speed networks need a new transport protocol to support them. This project describes a general-purpose, high-performance data transfer protocol as an application-level solution. The protocol, Reliable UDP-based data transfer, works on top of UDP and adds reliability. The Reliable Data Transfer protocol provides reliability to applications using the Sliding Window protocol (Selective Repeat).
UDP uses a simple transmission model without handshaking mechanisms for providing reliability or ordering of packets. Thus, UDP provides an unreliable service, and datagrams may arrive out of order, appear duplicated, or go missing without notice. Reliable UDP uses both positive and negative acknowledgements to guarantee data reliability. Both simulation and implementation results have shown that Reliable UDP provides reliable data transfer. This report describes the details of the Reliable UDP protocol together with simulation and implementation results and analysis.
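For illustration, a minimal sketch of a Selective Repeat sender window of the kind the report describes is given below. It is not the project's code; `send_packet` (the underlying unreliable UDP send) is assumed to be supplied by the caller.

```python
class SelectiveRepeatSender:
    """Minimal sketch of a Selective Repeat sliding-window sender that could
    sit on top of an unreliable UDP socket."""
    def __init__(self, window_size, send_packet):
        self.window_size = window_size
        self.send_packet = send_packet      # callable(seq, payload), assumed given
        self.base = 0                       # oldest unacknowledged sequence number
        self.next_seq = 0
        self.unacked = {}                   # seq -> payload awaiting a positive ACK

    def send(self, payload):
        if self.next_seq >= self.base + self.window_size:
            return False                    # window full; caller must wait
        self.unacked[self.next_seq] = payload
        self.send_packet(self.next_seq, payload)
        self.next_seq += 1
        return True

    def on_ack(self, seq):                  # positive acknowledgement
        self.unacked.pop(seq, None)
        while self.base not in self.unacked and self.base < self.next_seq:
            self.base += 1                  # slide the window past delivered packets

    def on_nack(self, seq):                 # negative acknowledgement -> retransmit
        if seq in self.unacked:
            self.send_packet(seq, self.unacked[seq])
```

A matching receiver would buffer out-of-order packets inside its own window and issue the positive and negative acknowledgements consumed above.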
3 |
An Obstruction-Check Approach to Mining Closed Sequential Patterns in Data Streams. Chin, Tsz-lin. 21 June 2010 (has links)
Online mining of sequential patterns over data streams is an important problem in data mining. There are many applications of sequential patterns in data streams, such as market analysis, network security, sensor networks, and web tracking. Previous studies have shown that mining closed patterns provides more benefits than mining the complete set of frequent patterns, since closed pattern mining leads to compact results. A sequential pattern is closed if it has no supersequence with the same support. Chang et al. proposed a time-based sliding window model. The time-based sliding window has two features: the new item is inserted at the front of a sequence, and the obsolete item is removed from the tail of a sequence. To solve the mining problem in the time-based sliding window, Chang et al. proposed an algorithm called SeqStream. It uses a data structure, the IST (Inverse Closed Sequence Tree), to keep the result, and the IST can be incrementally updated by the SeqStream algorithm. Although the SeqStream algorithm divides the time-based sliding window to speed up the updating of the IST, it still scans the sliding window many times when the IST needs to be updated. In this thesis, we propose an obstruction-check approach to maintain the set of closed sequential patterns. Our approach is designed based on the lattice structure, in which each parent is a supersequence of its children. By utilizing this feature, we add an obstruction link between a parent and a child if their supports are the same. If a node does not have any obstruction-link parent, the node is a closed sequential pattern. We can then utilize this property to traverse the lattice structure locally. Moreover, we can fully utilize the features of the time-based sliding window model to traverse the lattice structure locally. Based on the lattice structure, we propose the EULB (Exact Update based on Lattice structure with Bit stream)-Lattice algorithm. The EULB-Lattice algorithm is an exact method for mining data streams: we record additional information instead of scanning the entire sliding window. We conduct several experiments using different synthetic data sets. The simulation results show that the proposed algorithm outperforms the SeqStream algorithm.
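The closedness test described above can be pictured with a small sketch (illustrative code, not the EULB-Lattice implementation): each lattice node stores its sequence, its support, and links to its parents (supersequences), and a node is closed exactly when no parent shares its support.

```python
class LatticeNode:
    """Node in a sequence lattice where each parent is a supersequence of the node."""
    def __init__(self, sequence, support):
        self.sequence = sequence
        self.support = support
        self.parents = []        # supersequence nodes

def is_closed(node):
    # An obstruction link points to a parent whose support equals the node's.
    # A pattern is closed exactly when it has no obstruction-link parent.
    return all(parent.support != node.support for parent in node.parents)

# Toy example: <a, b> absorbs <a> (same support), while <b> remains closed.
ab = LatticeNode(("a", "b"), support=4)
a = LatticeNode(("a",), support=4); a.parents.append(ab)
b = LatticeNode(("b",), support=6); b.parents.append(ab)
print(is_closed(a), is_closed(b))   # -> False True
```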
4 |
A Subset-Lattice Algorithm for Mining Maximal Frequent Itemsets over a Data Stream Sliding Window. Wang, Syuan-Yun. 09 July 2012 (has links)
Online mining of association rules in data streams is an important field in data mining, and mining the maximal frequent itemsets is an important issue within it. A frequent itemset is called maximal if it is not a subset of any other frequent itemset; the collection of all such itemsets is referred to as the set of maximal frequent itemsets. Because data streams are continuous, high-speed, unbounded, and real-time, the data can be scanned only once. Therefore, the previous algorithms for mining maximal frequent itemsets in traditional databases are not suitable for data streams. Furthermore, many applications are interested in the recent portion of a data stream, and the sliding window is the model that deals with the most recent data; in the sliding window model, a window size is required. One of the algorithms for mining maximal frequent itemsets based on the sliding window model is the MFIoSSW algorithm. The MFIoSSW algorithm uses a compact, array-based structure A to store the maximal frequent itemsets and other helpful itemsets. However, it takes a long time to mine the maximal frequent itemsets: when a new transaction arrives, the number of comparisons between the new transaction and the old transactions is large. Therefore, in this project, we propose a sliding window approach, the Subset-Lattice algorithm. We use a lattice structure to store the information of the transactions; the lattice stores the relationship between each child node and its parent node, and in each node we record the itemset and its support. When a new transaction arrives, we consider five relations: (1) equivalent, (2) subset, (3) intersection, (4) empty set, and (5) superset. With these five relations, we can add the new transactions and update the supports efficiently.
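A sketch of the five-way case analysis applied to each incoming transaction is shown below; the function name and the follow-up update actions are illustrative, not taken from the project.

```python
def relation(new_items, node_items):
    """Classify the relation between an incoming transaction's itemset and a
    lattice node's itemset into the five cases used by the Subset-Lattice
    approach (the per-case lattice updates are not shown)."""
    new_items, node_items = set(new_items), set(node_items)
    if new_items == node_items:
        return "equivalent"
    if new_items < node_items:
        return "subset"          # transaction is a proper subset of the node
    if new_items > node_items:
        return "superset"        # transaction is a proper superset of the node
    if new_items & node_items:
        return "intersection"    # partial overlap
    return "empty set"           # disjoint itemsets

print(relation({"a", "b"}, {"a", "b", "c"}))   # -> subset
```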
5 |
An Improved PDA Multi-User Detector for DS-CDMA UWB Systems. Li, Tzung-Cheng. 28 August 2005 (has links)
Ultra-wideband (UWB) technology has attracted the interest of researchers and commercial groups due to its advantages of high data rate, low complexity, and low power consumption. The direct-sequence code division multiple access ultra-wideband (DS-CDMA UWB) system is one of the proposals for the IEEE 802.15.3a standard. By combining the strengths of both UWB and DS-CDMA techniques, the system can construct a multiple-access architecture using the direct-sequence method. In a multi-user environment, the major problem in receiver design for a conventional DS-CDMA system is multiple access interference (MAI). In a DS-CDMA UWB system, the transmitted signal is further corrupted by inter-symbol interference (ISI) and neighboring-symbol interference because of the multipath channel characteristics.
In this thesis, we use a training method to obtain the spreading waveform as influenced by the multipath channel. Based on this spreading waveform, we use a block method to reformulate the received signal, which lets us separate the interference into multiple access interference and neighboring-symbol interference. By combining interference cancellation, the probabilistic data association (PDA) filter, and sliding window techniques, we can eliminate the interference. In the computer simulation section, we compare the detection performance of the sliding window PDA detector with that of a conventional detector, and the simulation results show that the improved PDA detector achieves better performance than the others.
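As a simplified illustration of the windowed detection framing only, the sketch below slides a symbol-spaced window over the received stream and correlates it with a training-estimated spreading waveform; the interference cancellation and PDA filtering that the thesis adds on top of this windowing are omitted, so this is a plain matched-filter baseline rather than the proposed detector.

```python
import numpy as np

def sliding_window_correlator(received, waveform, sym_period, num_symbols):
    """Detect BPSK symbols by sliding a window over the received samples and
    correlating it with the multipath-spread waveform (illustrative only)."""
    decisions = np.empty(num_symbols)
    for k in range(num_symbols):
        start = k * sym_period
        window = received[start:start + len(waveform)]
        w = waveform[:len(window)]                 # truncate at the end of the stream
        decisions[k] = 1.0 if np.dot(window, w) >= 0 else -1.0
    return decisions
```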
6 |
Finding the Longest Increasing Subsequence of Every Substring. Tseng, Chiou-Ting. 27 August 2006 (has links)
Given a string S = a1, a2, a3, ..., an, the longest increasing subsequence (LIS) problem is to find a subsequence of the given string such that the subsequence is increasing and its length is maximal. In a previous result, finding the longest increasing subsequences of every sliding window with a fixed size w over a given string of length n can be solved in O(w log log n + OUTPUT) time, where O(w log log n + w^2) time is taken for preprocessing and OUTPUT is the sum of all output lengths. In this thesis, we solve the problem of finding the longest increasing subsequence of every substring of S. With a straightforward implementation of the previous result, the time required for the preprocessing would be O(n^3). We modify the data structure used in the algorithm, so the required preprocessing time is improved to O(n^2). The time required for the reporting stage is linear in the size of the output. In other words, our algorithm can find the LIS of every substring in O(n^2 + OUTPUT) time. Since there are O(n^2) substrings in a given string of length n, our algorithm is optimal when the LISs of all substrings are to be reported.
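For concreteness, a straightforward O(n^2 log n) baseline that reports only the LIS length of every substring is sketched below (patience sorting restarted at each starting index); the thesis's algorithm is different and also reports the subsequences themselves within the stated O(n^2 + OUTPUT) bound.

```python
from bisect import bisect_left

def lis_length_of_every_substring(s):
    """length[i][j] = length of the longest (strictly) increasing subsequence
    of s[i..j], computed by restarting patience sorting at every start i."""
    n = len(s)
    length = [[0] * n for _ in range(n)]
    for i in range(n):
        tails = []    # tails[k] = smallest tail of an increasing subsequence of length k+1
        for j in range(i, n):
            pos = bisect_left(tails, s[j])
            if pos == len(tails):
                tails.append(s[j])
            else:
                tails[pos] = s[j]
            length[i][j] = len(tails)
    return length

print(lis_length_of_every_substring([3, 1, 2, 5])[1][3])   # LIS of (1, 2, 5) -> 3
```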
7 |
Incorporating Sliding Window-Based Aggregation for Evaluating Topographic Variables in Geographic Information Systems. Gomes, Rahul. January 2019 (has links)
The resolution of spatial data has increased over the past decade, making it more accurate in depicting landform features. From 60 m resolution Landsat imagery to resolution close to a meter provided by data from Unmanned Aerial Systems, the number of pixels per area has increased drastically. Topographic features derived from high-resolution remote sensing are relevant to measuring agricultural yield. However, conventional algorithms in Geographic Information Systems (GIS) used for processing digital elevation models (DEMs) have severe limitations. Typically, 3-by-3 windows are used for evaluating slope, aspect, and curvature. Since this window size is very small compared to the resolution of the DEM, the DEMs are usually resampled to a lower resolution to match the size of typical topographic features and decrease processing overheads. This results in low accuracy and limits the predictive ability of any model using such DEM data. In this dissertation, the landform attributes were derived over multiple scales using the concept of sliding window-based aggregation. Reusing aggregates from the previous iteration improves the efficiency from linear to logarithmic, thereby addressing scalability issues. The usefulness of DEM-derived topographic features within Random Forest models that predict agricultural yield was examined. The model utilized these derived topographic features and achieved the highest accuracy of 95.31% in predicting the Normalized Difference Vegetation Index (NDVI), compared to 51.89% for the conventional method with a 3-by-3 window. The efficacy of partial dependence plots (PDPs) in terms of interpretability was also assessed. This aggregation methodology could serve as a suitable replacement for conventional landform evaluation techniques, which mostly rely on reducing the DEM data to a lower resolution prior to processing. / National Science Foundation (Award OIA-1355466)
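As an illustration of reusing aggregates rather than revisiting every cell of every window (the dissertation's iterative aggregation scheme differs in detail), the sketch below computes the window-mean elevation for every DEM cell from a summed-area table, so each w-by-w window costs O(1) after a single pass over the raster. Function and parameter names are illustrative.

```python
import numpy as np

def sliding_window_mean(dem, w):
    """Mean elevation of the centred w-by-w window around every cell of a 2-D
    DEM array, using a summed-area table; `w` is assumed odd, and the raster
    border is handled by edge replication."""
    pad = w // 2
    padded = np.pad(dem.astype(float), pad, mode="edge")
    sat = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    sat = np.pad(sat, ((1, 0), (1, 0)))            # leading zero row/column for differencing
    rows, cols = dem.shape
    total = (sat[w:w + rows, w:w + cols]
             - sat[:rows, w:w + cols]
             - sat[w:w + rows, :cols]
             + sat[:rows, :cols])
    return total / (w * w)

print(sliding_window_mean(np.arange(25).reshape(5, 5), 3)[2, 2])   # -> 12.0
```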
8 |
Vertical Data Structures and Computation of Sliding Window Averages in Two-Dimensional Data. Helsene, Adam Paul. January 2020 (links)
A vertical-style data structure and operations on data in that structure are explored and tested in the domain of sliding window average algorithms for geographical information systems (GIS) data. The approach allows working with data of arbitrary precision, which is centrally important for very large GIS data sets.
The novel data structure can be constructed from existing multi-channel image data, and data in the structure can be converted back to image data. While in the new structure, operations such as addition, division, and bit-level shifting can be performed in a parallelized manner. It is shown that the computation of averages for sliding windows on this data structure can be performed faster than using traditional computation techniques, and the approach scales to larger sliding window sizes.
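A rough sketch of the vertical (bit-sliced) idea follows: an integer raster band is split into bit planes, window counts of set bits are taken per plane, and the planes are recombined by their powers of two. The names and the fixed-width int64 arithmetic are illustrative; the thesis's structure supports arbitrary precision, which this sketch does not capture.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def bitsliced_window_average(band, w):
    """Average of every w-by-w window of a non-negative integer raster band,
    computed plane by plane on the band's vertical bit slices."""
    band = band.astype(np.int64)
    nbits = max(int(band.max()).bit_length(), 1)
    total = np.zeros((band.shape[0] - w + 1, band.shape[1] - w + 1), dtype=np.int64)
    for b in range(nbits):
        plane = (band >> b) & 1                                   # one vertical bit slice
        counts = sliding_window_view(plane, (w, w)).sum(axis=(2, 3))
        total += counts << b                                      # weight the plane by 2**b
    return total / (w * w)

print(bitsliced_window_average(np.arange(16).reshape(4, 4), 2)[0, 0])   # -> 2.5
```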
9 |
Windowing effects and adaptive change point detection of dynamic functional connectivity in the brain. Shakil, Sadia. 27 May 2016 (has links)
Evidence of networks in the resting brain reflecting spontaneous brain activity is perhaps the most significant discovery for understanding intrinsic brain functionality. Moreover, subsequent detection of dynamics in these networks can be a milestone in differentiating normal and disordered brain function. However, capturing the correct dynamics is a challenging task, since no ground truths are available for comparison of the results. The change points of these networks can differ across subjects even during normal brain function. Even for the same subject and session, the dynamics can be different at the start and end of the session depending on the fatigue level of the subject being scanned. Despite the absence of ground truths, studies have analyzed these dynamics using existing methods, and some have developed new algorithms as well. One of the most commonly used methods for this purpose is sliding window correlation. However, the result of sliding window correlation depends on many parameters, and without ground truth there is no way to validate the results. In addition, most of the new algorithms are complicated, computationally expensive, and/or focus on just one aspect of these dynamics. This study applies algorithms and concepts from signal processing, image processing, video processing, information theory, and machine learning to analyze the results of sliding window correlation, and develops a novel algorithm to detect change points of these networks adaptively. The findings of this study are divided into three parts: 1) analyzing the extent of variability in well-defined networks of rodents and humans with sliding window correlation, applying concepts from the information theory and machine learning domains; 2) analyzing the performance of sliding window correlation using simulated networks as ground truths for selecting the best parameters, and exploring its dependence on multiple frequency components of the correlating signals by processing the signals in the time and Fourier domains; and 3) developing a novel algorithm, based on image similarity measures from image and video processing, that may be employed to identify change points of these networks adaptively.
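The basic computation at issue, sketched below for two time series, makes the parameter dependence concrete: the window length and step are free analysis choices, and the dynamics one infers can change with them (illustrative code, not the study's pipeline).

```python
import numpy as np

def sliding_window_correlation(x, y, window, step=1):
    """Pearson correlation between two equal-length time series inside each
    sliding window; returns one correlation value per window position."""
    starts = range(0, len(x) - window + 1, step)
    return np.array([np.corrcoef(x[s:s + window], y[s:s + window])[0, 1]
                     for s in starts])

t = np.linspace(0, 20, 400)
print(sliding_window_correlation(np.sin(t), np.sin(t + 1.0), window=60, step=30))
```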
10 |
Weapon Detection In Surveillance Camera Images. Vajhala, Rohith; Maddineni, Rohith; Yeruva, Preethi Raj. January 2016 (has links)
Nowadays, closed-circuit television (CCTV) cameras are installed everywhere in public places to monitor illegal activities like armed robberies. Mostly, CCTV footage is used as evidence after the occurrence of a crime. In many cases a person might be monitoring the scene from CCTV, but attention can easily drift during prolonged observation. The efficiency of CCTV surveillance can be improved by incorporating image processing and object detection algorithms into the monitoring process. The object detection algorithms previously implemented in CCTV video analysis detect pedestrians, animals, and vehicles. These algorithms can be extended further to detect a person holding weapons like firearms or sharp objects like knives in public or restricted places. In this work the detection of a weapon in a CCTV frame is achieved by using the Histogram of Oriented Gradients (HOG) as the feature vector and artificial neural networks trained with the back-propagation algorithm for classification. As a weapon in the hands of a human is considered a greater threat than a weapon alone, in this work detecting a human in an image prior to weapon detection has been found advantageous. Weapon detection has been performed using three methods. In the first method, the weapon in the image is detected directly without human detection. The second and third methods use HOG and background subtraction, respectively, for detection of a human prior to detection of a weapon. A knife and a gun are considered as the weapons of interest in this work. The performance of the proposed detection methods was analysed on a test image dataset containing knives, guns, and images without weapons. An accuracy rate of 84.6% has been achieved by a single-class classifier for knife detection. A gun and a knife have been detected by the three-class classifier with an accuracy rate of 83.0%.
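A minimal sketch of the HOG-plus-neural-network pipeline described above, using scikit-image and scikit-learn; the window size, HOG parameters, and network shape are assumptions rather than the thesis's exact settings, and the training images (grayscale patches) and labels are supplied by the caller.

```python
from skimage.feature import hog
from skimage.transform import resize
from sklearn.neural_network import MLPClassifier

def hog_features(image, size=(128, 64)):
    """HOG descriptor for a grayscale image patch (illustrative parameters)."""
    patch = resize(image, size)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_weapon_classifier(train_images, train_labels):
    """Back-propagation-trained network over HOG features, as a stand-in for
    the thesis's classifier; train_images/train_labels are assumed given."""
    features = [hog_features(img) for img in train_images]
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    clf.fit(features, train_labels)
    return clf
```

Detection in a full frame would then slide this window over the image, optionally restricted to regions where a human was first found by HOG-based detection or background subtraction, and classify each position.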