1 |
Cell Tracking in Microscopy Images Using a Rao-Blackwellized Particle Filter. Lindmark, Sofia. January 2014.
Analysing migrating cells in microscopy time-lapse images has already helped the understanding of many biological processes and may be of importance in the development of new medical treatments. Today's biological experiments tend to produce huge amounts of dynamic image data, and tracking the individual cells by hand has become a bottleneck for further analysis. A number of cell tracking methods have therefore been developed over the past decades, but many of these techniques still have limited performance. The aim of this Master's project is to develop a particle filter algorithm that automatically detects and tracks a large number of individual cells in an image sequence. The solution is based on a Rao-Blackwellized particle filter for multiple object tracking. The report also reviews existing automatic cell tracking techniques and well-known filtering techniques for single-target tracking, and shows how these techniques have been extended to handle multiple-target tracking. The designed algorithm has been tested on real microscopy image data of neutrophils with 400 to 500 cells in each frame. It works well in image regions where no cells touch, and can in these situations also correct for some segmentation mistakes. In regions where cells touch, the algorithm works well if the segmentation is correct, but often makes mistakes when it is not. A target effectiveness of 77 percent and a track purity of 80 percent are achieved.
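The Rao-Blackwellized filter itself is not reproduced here, but the particle-filter machinery it builds on can be sketched in a few lines. The following is a minimal bootstrap particle filter tracking a single 2D target under a random-walk motion model; the noise levels and particle count are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(measurements, n_particles=500, q=0.5, r=1.0):
    """Minimal bootstrap particle filter for a 2D random-walk target.

    measurements: (T, 2) array of noisy position observations.
    q: process noise std, r: measurement noise std (illustrative values).
    """
    particles = rng.normal(measurements[0], r, size=(n_particles, 2))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in measurements:
        # Propagate: random-walk motion model
        particles = particles + rng.normal(0.0, q, size=particles.shape)
        # Weight by the Gaussian measurement likelihood
        d2 = np.sum((particles - z) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / r**2)
        weights /= weights.sum()
        estimates.append(weights @ particles)
        # Multinomial resampling when the effective sample size drops
        if 1.0 / np.sum(weights**2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# Toy trajectory: the target drifts in small steps, observed in noise
truth = np.cumsum(rng.normal(0, 0.5, size=(50, 2)), axis=0)
obs = truth + rng.normal(0, 1.0, size=truth.shape)
est = bootstrap_pf(obs)
print(np.mean(np.linalg.norm(est - truth, axis=1)))  # mean tracking error
```

A Rao-Blackwellized variant would additionally maintain a Kalman filter per particle for the conditionally linear part of the state, so that only the nonlinear part needs to be sampled.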
|
2 |
Localization algorithms for indoor UAVs. Barac, Daniel. January 2011.
The growing market for navigation, localization and mapping systems has encouraged research into these new and challenging areas. The remarkable development of computer software and hardware has also opened many new doors: things that were more or less impossible ten years ago are now reality. Using a mathematical approach to compensate for the lack of expensive sensors has been one of the main objectives of this thesis. Here you will find the basic principles of localization of indoor UAVs using a particle filter (PF) and Octomaps, as well as the procedures for implementing 2D scan-matching algorithms and quaternions. The performance of the algorithms is evaluated using a high-precision motion capture system. The UAV which forms the basis for this thesis is equipped with a 2D laser and an inertial measurement unit (IMU). The results show that it is possible to perform 2D localization with centimetre precision using only information from the laser and a predefined Octomap.
|
3 |
Robotic localization of hostile networked radio sources with a directional antenna. Hu, Qiang. 25 April 2007.
One of the distinguishing characteristics of hostile networked radio sources (e.g., enemy sensor network nodes) is that only physical layer information and limited medium access control (MAC) layer information of the network is observable. We propose a scheme to localize hostile networked radio sources based on radio signal strength and communication protocol pattern analysis, using a mobile robot with a directional antenna. We integrate a particle filter algorithm with a new sensing model built on a directional antenna model and a Carrier Sense Multiple Access (CSMA)-based MAC protocol model. In the CSMA protocol modeling, we model and analyze the channel idle probability and busy collision probability as functions of the number of radio sources. Based on the sensing model, we propose a particle-filter-based scheme to simultaneously estimate the number and the positions of the networked radio sources. For performance comparison, we also provide a localization scheme based on the method of steepest descent. Simulation results demonstrate that our proposed localization scheme has a better success rate than the steepest-descent scheme at different tolerance distances.
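For a rough sense of the channel probabilities mentioned above, consider a simplified slotted model in which each of n sources transmits independently with probability p per slot. This is an illustrative assumption, not the thesis's CSMA model, but it shows how idle, success and collision probabilities depend on the number of sources:

```python
def channel_probabilities(n, p):
    """Per-slot channel outcome probabilities for n independent radio
    sources, each transmitting with probability p (simplified slotted
    model; an illustrative stand-in for the thesis's CSMA analysis)."""
    p_idle = (1 - p) ** n                   # no source transmits
    p_success = n * p * (1 - p) ** (n - 1)  # exactly one transmits
    p_collision = 1.0 - p_idle - p_success  # two or more collide
    return p_idle, p_success, p_collision

# Idle probability falls and collision probability rises with n,
# which is what makes the source count observable from the channel.
for n in (2, 5, 10):
    print(n, channel_probabilities(n, 0.1))
```

Because the three probabilities shift monotonically with n, observing the channel for long enough gives information about how many sources are present, which is the intuition behind estimating the number of sources from MAC-layer observations.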
|
4 |
Indoor Location Tracking and Orientation Estimation Using a Particle Filter, INS, and RSSI. Nouri, Cameron Ramin. 01 January 2015.
With wireless sensor technologies becoming more and more commonplace in wearable devices and smartphones, indoor localization is becoming a heavily researched topic. One such application is in the medical field, where wireless sensor devices capable of monitoring patient vitals and giving accurate location estimates allow for a less intrusive environment for nursing home patients.
This project explores using received signal strength indication (RSSI) in conjunction with an inertial navigation system (INS) in a particle filter to provide location estimates without the use of GPS, on a small development microcontroller and base station. The report covers the topics used in this thesis and presents the results.
|
5 |
Techniques for Efficient Implementation of FIR and Particle Filtering. Alam, Syed Asad. January 2016.
FIR filters occupy a central place in many signal processing applications, where they alter the shape, frequency content or sampling rate of a signal. FIR filters are used because of their stability and the possibility of linear phase, but they require a high filter order to meet the same magnitude specification as IIR filters. Depending on the required transition bandwidth, the filter order can range from tens to hundreds or even thousands. Since the digital implementation of these filters requires multipliers and adders, high filter orders translate into a large number of such arithmetic units. Research on reducing the complexity of FIR filters has been going on for decades, and the techniques can be roughly divided into two categories: reducing the number of multipliers and simplifying the multiplier implementation. One technique to reduce the number of multipliers is to use cascaded sub-filters of lower complexity to achieve the desired specification, known as frequency-response masking (FRM). One of the sub-filters is an upsampled model filter whose band edges are an integer multiple, termed the period L, of the target filter's band edges. Other sub-filters may include complement and masking filters which filter different parts of the spectrum to achieve the desired response. From an implementation point of view, time-multiplexing is beneficial because the maximum clock frequency supported by current state-of-the-art semiconductor technology generally far exceeds the application-bound sample rate. A combination of these two techniques plays a significant role in the efficient implementation of FIR filters. Part of the work presented in this dissertation is a set of architectures for time-multiplexed FRM filters that benefit from the inherent sparsity of the periodic model filters. These time-multiplexed FRM filters not only reduce the number of multipliers but also lower the memory usage.
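The sparsity of the upsampled model filter can be sketched numerically. The toy example below implements only the narrow-band special case (a single model filter expanded by the period L and cleaned up by one masking filter, i.e. the interpolated-FIR flavour of the idea, without FRM's complementary branch); the tap counts and band edges are illustrative assumptions, not designs from the dissertation.

```python
import numpy as np

def lowpass(numtaps, cutoff):
    """Hamming-windowed sinc low-pass; cutoff normalized so Nyquist = 1.
    A hand-rolled stand-in for a proper filter design routine."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = cutoff * np.sinc(cutoff * n) * np.hamming(numtaps)
    return h / h.sum()  # normalize to unity gain at DC

L = 4
model = lowpass(31, 0.4)                  # prototype with a wide transition band
periodic = np.zeros(L * (len(model) - 1) + 1)
periodic[::L] = model                     # H(z^L): only every L-th tap is nonzero
masking = lowpass(21, 0.15)               # masks the periodic passband images
fir = np.convolve(periodic, masking)      # cascade approximates a narrow-band filter
print(len(fir), fir.sum())
```

Only 31 of the 121 taps of the periodic filter are nonzero, which is exactly the sparsity that the time-multiplexed architectures in the dissertation exploit to save multipliers and memory.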
Although the FRM technique requires a higher number of delay elements, it results in fewer memories and more energy-efficient memory schemes when time-multiplexed. Different memory arrangements and memory access schemes are also discussed and compared in terms of their efficiency when using both single- and dual-port memories. An efficient pipelining scheme is proposed which reduces the number of pipelining registers while achieving similar clock frequencies. The single optimum where the number of multiplications is minimal, known for non-time-multiplexed FRM filters, is shown to become a function of both the period L and the time-multiplexing factor M. This means that the minimum number of multipliers does not always correspond to the minimum number of multiplications, which also increases the flexibility of implementation. These filters are shown to achieve power reductions between 23% and 68% for the considered examples. To simplify the multiplier, alternative number systems such as the logarithmic number system (LNS) have been used to implement FIR filters, reducing multiplications to additions. FIR filters are realized by designing them directly in the LNS domain in the minimax sense under finite word length constraints, using integer linear programming (ILP). The branch and bound algorithm, a standard approach to ILP problems, is implemented over LNS integers, and several branching strategies are proposed and evaluated. The filter coefficients thus obtained are compared with traditional finite word length coefficients obtained in the linear domain. It is shown that LNS FIR filters provide a smaller approximation error than standard FIR filters for a given coefficient word length. FIR filters also offer an opportunity for complexity reduction by implementing the multipliers using Booth or standard high-radix multiplication. Both of these multiplication schemes generate pre-computed multiples of the multiplicand, which are then selected based on the encoded bits of the multiplier.
In transposed direct form (TDF) FIR filters, one input datum is multiplied by a number of coefficients, and complexity can be reduced by sharing the pre-computation of the multiples of the input data across all multiplications. Part of this work is a systematic and unified approach to the design of such computation-sharing multipliers and a comparison of the two forms of multiplication. It also gives closed-form expressions for the cost of different parts of the multiplication and an overview of various ways to implement the select unit with respect to the design of the multiplexers. Particle filters are used to solve problems that require estimation of the state of a system. Improved resampling schemes are proposed that reduce the latency of the resampling stage, using a pre-fetch technique to cut the latency by 50% to 95% depending on the number of pre-fetches. Generalized division-free architectures and compact memory structures are also proposed that map to different resampling algorithms, help reduce the complexity of the multinomial resampling algorithm, and reduce the number of memories required by up to 50%.
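As a concrete reference point for the resampling stage discussed above, here is plain systematic resampling, a standard low-variance alternative to multinomial resampling (the pre-fetch and division-free architectures of the thesis are not reproduced here):

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: one uniform draw, evenly spaced thresholds.

    Returns indices into the particle array. Compared with drawing n
    independent uniforms (multinomial resampling), this needs a single
    random number and gives lower-variance particle counts.
    """
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumsum, positions)

rng = np.random.default_rng(1)
w = np.array([0.1, 0.2, 0.3, 0.4])
idx = systematic_resample(w, rng)
print(idx)  # heavier particles are selected more often
```

In hardware terms, the single cumulative-sum traversal is what makes systematic resampling attractive for streaming implementations, which is the stage the thesis's pre-fetch technique accelerates.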
|
6 |
Bayesian inference methods for next generation DNA sequencing. Shen, Xiaohu. 30 September 2014.
Recently developed next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. To provide a blueprint of a target genome, next-generation sequencing systems typically employ the so-called shotgun sequencing strategy and oversample the genome with a library of relatively short overlapping reads. The order of nucleotides in the short reads is determined by processing the noisy signals generated by the sequencing platform, and the overlaps between the reads are exploited to assemble the long target genome. Next-generation sequencing utilizes massively parallel array-based technology to speed up sequencing and reduce its cost. However, the accuracy and lengths of the short reads have yet to surpass those provided by the conventional, slower and costlier Sanger sequencing method. In this thesis, we first focus on Illumina's sequencing-by-synthesis platform, which relies on reversible terminator chemistry, and describe the acquired signal by a hidden Markov model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on an experimental data set obtained by sequencing the phiX174 bacteriophage using Illumina's Genome Analyzer II. The results show that the ParticleCall scheme is significantly more computationally efficient than the best-performing unsupervised base calling method currently available, while achieving the same accuracy. Having addressed the problem of base calling of short reads, we turn our attention to genome assembly. Assembling a genome from acquired short reads is a computationally daunting task even in the scenario where a reference genome exists. Errors and gaps in the reference, and perfect repeat regions in the target, further render the assembly challenging and cause inaccuracies.
We formulate reference-guided assembly as an inference problem on a bipartite graph and solve it using a message-passing algorithm. The proposed algorithm can be interpreted as the classical belief propagation scheme under a certain prior. Unlike existing state-of-the-art methods, the proposed algorithm combines the information provided by the reads without needing to know the reliability of the short reads (the so-called quality scores). The relation of the message-passing algorithm to a provably convergent power iteration scheme is discussed. Results on both simulated and experimental data demonstrate that the proposed message-passing algorithm outperforms commonly used state-of-the-art tools, and it nearly achieves the performance of a genie-aided maximum a posteriori (MAP) scheme. We then consider the reference-free genome assembly problem, i.e., de novo assembly. Various methods for de novo assembly have been proposed in the literature, all of which are very sensitive to errors in the short reads. We develop a novel error-correction method that enables performance improvements in de novo assembly. The new method relies on a suffix array structure built on the short-read data, and incorporates a hypothesis testing procedure, using the sum of quality information as the test statistic, to improve the accuracy of overlap detection. Finally, we consider an inference problem in gene regulatory networks. Gene regulatory networks are highly complex dynamical systems comprising biomolecular components which interact with each other and through those interactions determine gene expression levels, i.e., the rate of gene transcription. In this thesis, a particle filter with a Markov chain Monte Carlo move step is employed for the estimation of reaction rate constants in gene regulatory networks modeled by chemical Langevin equations. Simulation studies demonstrate that the proposed technique outperforms previously considered methods while being computationally more efficient.
The dynamic behavior of gene regulatory networks averaged over a large number of cells can be modeled by ordinary differential equations. For this scenario, we compute an approximation to the Cramér-Rao lower bound on the mean-square error of estimating reaction rates and demonstrate that, when the number of unknown parameters is small, the proposed particle filter can be nearly optimal. In summary, this thesis presents a set of Bayesian inference methods for base calling and sequence assembly in next-generation DNA sequencing. Experimental studies show the advantage of the proposed algorithms over traditional methods.
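The particle filter with an MCMC move step can be illustrated on a toy problem. The sketch below applies the resample-move idea to a single static parameter observed in Gaussian noise, a deliberately simple stand-in for the chemical Langevin setting of the thesis; the model, prior, and step size are assumptions made for illustration, and the Metropolis-Hastings ratio uses a flat-prior approximation.

```python
import numpy as np

rng = np.random.default_rng(2)

def pf_mcmc_static(ys, n_particles=300, sigma=1.0, step=0.2):
    """Particle filter with a Metropolis-Hastings move step for a static
    parameter theta observed through y_k ~ N(theta, sigma^2).
    The move step rejuvenates the duplicated particles produced by
    resampling, which is what makes static parameters tractable."""
    theta = rng.normal(0.0, 5.0, n_particles)  # draws from a broad prior
    seen = []
    for y in ys:
        seen.append(y)
        # Weight by the new observation and resample
        w = np.exp(-0.5 * (y - theta) ** 2 / sigma**2)
        w /= w.sum()
        theta = theta[rng.choice(n_particles, n_particles, p=w)]
        # MCMC move targeting p(theta | y_1..k) (flat-prior approximation)
        def loglik(t):
            return -0.5 * np.sum((np.asarray(seen)[:, None] - t) ** 2,
                                 axis=0) / sigma**2
        prop = theta + rng.normal(0.0, step, n_particles)
        accept = np.log(rng.random(n_particles)) < loglik(prop) - loglik(theta)
        theta = np.where(accept, prop, theta)
    return theta

ys = rng.normal(1.5, 1.0, size=40)  # synthetic data, true theta = 1.5
post = pf_mcmc_static(ys)
print(post.mean())  # concentrates near the sample mean of ys
```

Without the move step, resampling would collapse the static-parameter particles onto a handful of values; the MH move restores diversity at each step, which is the role it plays in the rate-constant estimation described above.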
|
7 |
Real Time Human Tracking in Unconstrained Environments. Gao, Hongzhi. January 2011.
The tabu search particle filter is proposed in this research, based on the integration of a modified tabu search metaheuristic optimization with the genetic particle filter. Experiments with this algorithm in real-time human tracking in unconstrained environments show that it is more robust, accurate and faster than a number of other existing metaheuristic filters, including the evolution particle filter, particle swarm filter, simulated annealing filter, path relink filter and scatter search filter. Quantitative evaluation illustrates that even with only ten particles in the system, the proposed tabu search particle filter has a success rate of 93.85%, whereas the success rates of the other metaheuristic filters ranged from 17.69% to 68.46% under the same conditions. The accuracy of the proposed algorithm (with ten particles in the tracking system) is 2.69 pixels on average, over 3.85 times better than the second most accurate metaheuristic filter and 18.13 times better than the average accuracy of all the other filters. The proposed algorithm is also the fastest among all metaheuristic filters tested: it achieves approximately 50 frames per second, which is 1.5 times faster than the second fastest algorithm and nineteen times faster than the average speed of all the other metaheuristic filters.
Furthermore, a unique colour sequence model is developed in this research based on a degenerate form of the hidden Markov model. Quantitative evaluations based on rigid object matching experiments illustrate that its successful matching rate is 5.73 times better than that of the widely used colour histogram. In terms of speed, the proposed algorithm achieves twice the successful matching rate in about three quarters of the processing time consumed by the colour histogram model.
Overall, these results suggest that the two proposed algorithms would be useful in many applications due to their efficiency, accuracy and ability to robustly track people and coloured objects.
|
8 |
Indoor Navigation Using an iPhone / Inomhusnavigering med iPhone. Emilsson, André. January 2010.
Indoor navigation could be used in many applications to enhance performance in a specific area: anything from serious, life-critical tasks such as aiding firefighters or coordinating military operations, to simpler everyday uses such as finding a desired shop in a large supermarket. Today's smartphones form an interesting platform with capabilities matching existing, bulkier indoor navigation systems. The iPhone 3GS is a powerful smartphone that lets the programmer use its hardware in an efficient and easy way. It has a 3-axis accelerometer, a 3-axis magnetometer and hardware-accelerated image rendering, which are used in this thesis to track the user on an indoor map. A particle filter is used to track the position of the user, and the implementation shows how many particles the iPhone is able to handle and update in real time without lag in the application.
|
9 |
Estimation and Detection with Applications to Navigation. Törnqvist, David. January 2008.
The ability to navigate in an unknown environment is an enabler for truly autonomous systems. Such a system must be aware of its position relative to the surroundings using sensor measurements, and it is instrumental that these measurements are monitored for disturbances and faults. Given correct measurements, the challenging problem for a robot is to estimate its own position and simultaneously build a map of the environment. This problem is referred to as the Simultaneous Localization and Mapping (SLAM) problem. This thesis studies several topics related to SLAM: on-board sensor processing, exploration and disturbance detection. The particle filter (PF) solution to the SLAM problem, commonly referred to as FastSLAM, has been used extensively for ground robot applications. More complex vehicle models, for example for flying robots, extend the state dimension of the vehicle model and make the existing solution computationally infeasible. The factorization of the problem made in this thesis allows for a computationally tractable solution. Disturbance detection for magnetometers and detection of spurious features in image sensors must be performed before these sensor measurements can be used for estimation. Disturbance detection based on comparing a batch of data with a model of the system using the generalized likelihood ratio test is considered. There are two approaches to this problem: one based on the traditional parity space method, where the influence of the initial state is removed by projection, and one based on combining prior information with the data in the batch. An efficient parameterization of incipient faults is given, which is shown to improve the results considerably. Another common situation in robotics is to have different sampling rates among the sensors: more complex sensors such as cameras often have a slower update rate than accelerometers and gyroscopes.
An algorithm for this situation is derived for a class of models with a linear Gaussian dynamic model and sensors with different sampling rates: one slow sensor with a nonlinear and/or non-Gaussian measurement relation, and one fast sensor with a linear Gaussian measurement relation. For this case, the Kalman filter is used to process the information from the fast sensor, while the information from the slow sensor is processed using the PF. The problem formulation covers the important special case of fast dynamics and one slow sensor, which appears in many navigation and tracking problems. Vision-based target tracking is another important estimation problem in robotics. Distributed exploration with multi-aircraft flight experiments has demonstrated localization of a stationary target with an estimate covariance on the order of meters. Grid-based estimation as well as the PF have been examined.
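The batch-wise generalized likelihood ratio test mentioned above can be sketched for the simplest fault model, a constant bias in otherwise zero-mean Gaussian residuals. This is an illustrative special case, not the thesis's parity-space formulation:

```python
import numpy as np

def glr_mean_change(residuals, sigma=1.0):
    """Generalized likelihood ratio statistic for a constant-bias fault in
    a batch of N(0, sigma^2) residuals. Under the no-fault hypothesis the
    statistic is chi-square with 1 degree of freedom; the threshold 6.63
    corresponds to roughly a 1% false-alarm rate."""
    n = len(residuals)
    mu_hat = np.mean(residuals)      # ML estimate of the fault magnitude
    glr = n * mu_hat**2 / sigma**2   # 2 * log likelihood ratio
    return glr, glr > 6.63

rng = np.random.default_rng(3)
clean = rng.normal(0.0, 1.0, 100)
faulty = clean + 0.8                 # inject a bias fault into the batch
print(glr_mean_change(clean))        # small statistic
print(glr_mean_change(faulty))       # large statistic, alarm raised
```

The statistic grows linearly with the batch length for a fixed bias, which is why batch-wise tests of this kind can detect small incipient faults that a sample-by-sample threshold would miss.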
|
10 |
Extracting Atmospheric Profiles from Hyperspectral Data Using Particle Filters. Rawlings, Dustin. 01 May 2013.
Removing the effects of the atmosphere from remote sensing data requires accurate knowledge of the physical properties of the atmosphere at the time of measurement. There is a nonlinear relationship that maps atmospheric composition to emitted spectra, but it cannot easily be inverted. The time evolution of atmospheric composition is approximately Markovian and can be estimated from hyperspectral measurements of the atmosphere using particle filters. The difficulties associated with particle filtering of high-dimensional data can be mitigated by incorporating future measurement data into the proposal density.
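The last point, folding measurement data into the proposal density, can be illustrated with the locally optimal proposal for a single scalar linear-Gaussian step. This is a toy stand-in for the hyperspectral setting, with all parameters assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def optimal_proposal_step(particles, y, q=1.0, r=0.5):
    """One PF step using the locally optimal proposal p(x_k | x_{k-1}, y_k)
    for the model x_k = x_{k-1} + w,  y_k = x_k + v  (w ~ N(0,q^2),
    v ~ N(0,r^2)). Conditioning on the measurement keeps the importance
    weights nearly uniform, the same effect sought by folding upcoming
    data into the proposal."""
    var = 1.0 / (1.0 / q**2 + 1.0 / r**2)       # conditional variance
    mean = var * (particles / q**2 + y / r**2)  # conditional mean per particle
    new = rng.normal(mean, np.sqrt(var))
    # Weights use the predictive likelihood p(y | x_{k-1}) ~ N(x_{k-1}, q^2 + r^2)
    logw = -0.5 * (y - particles) ** 2 / (q**2 + r**2)
    w = np.exp(logw - logw.max())
    return new, w / w.sum()

particles = rng.normal(0.0, 1.0, 1000)
x_new, w = optimal_proposal_step(particles, y=0.3)
ess = 1.0 / np.sum(w**2)
print(ess)  # effective sample size stays high with the informed proposal
```

With the prior as proposal, the weights would instead depend on the full measurement likelihood of the sampled states, and the effective sample size would degrade much faster as the state dimension grows; that degradation is exactly what motivates the informed proposal in the abstract.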
|