831 |
Moving Object Detection based on Background Modeling / Luo, Yuanqing January 2014 (has links)
Aiming at moving object detection, after studying several categories of background modeling methods, we design an improved ViBe algorithm based on an image segmentation algorithm. The ViBe algorithm builds its background model by storing a sample set for each pixel. To detect moving objects, it uses several techniques such as fast initialization, random update, and classification based on the distance between a pixel value and its sample set. In our improved algorithm, we first use histograms of multiple layers to extract moving objects at the block level in a pre-processing stage. Secondly, we segment the blocks of moving objects via an image segmentation algorithm. The algorithm then constructs region-level information for the moving objects, and designs classification principles for regions and a modification mechanism among neighboring regions. In addition, to solve the problem that the original ViBe algorithm easily introduces ghost regions into the background model, the improved algorithm designs and implements a fast ghost-elimination algorithm. Compared with traditional pixel-level background modeling methods, the improved method is more robust and reliable against factors such as background disturbance, noise, and the presence of moving objects in the initial stage. Specifically, our algorithm improves the precision rate from 83.17% for the original ViBe algorithm to 95.35%, and the recall rate from 81.48% to 90.25%. Considering the effect of shadows on moving object detection, this paper also designs a shadow-elimination algorithm based on a Red, Green and Illumination (RGI) color feature, which can be converted from the RGB color space, and a dynamic match threshold. Experimental results demonstrate that the algorithm effectively reduces the influence of shadows on moving object detection. Finally, this paper concludes the work of the thesis and discusses future work.
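The pixel-level core of ViBe that the improved algorithm builds on can be sketched as follows. The radius, minimum match count, and subsampling factor are illustrative defaults from the ViBe literature, not this thesis's tuned values, and the grayscale toy data is invented:

```python
import random

def vibe_classify(pixel, samples, radius=20, min_matches=2):
    """Classify a pixel as background if at least `min_matches` of its
    stored background samples lie within `radius` of the current value."""
    matches = sum(1 for s in samples if abs(pixel - s) < radius)
    return matches >= min_matches          # True -> background

def vibe_update(pixel, samples, subsample=16, rng=random):
    """Conservative random update: with probability 1/subsample, replace
    a randomly chosen sample with the current background pixel value."""
    if rng.randrange(subsample) == 0:
        samples[rng.randrange(len(samples))] = pixel

# Toy sample set (grayscale values) for one pixel
model = [100, 102, 98, 101, 99, 103, 97, 100]
print(vibe_classify(101, model))   # True: consistent with background
print(vibe_classify(180, model))   # False: likely a moving object
```

The thesis's region-level classification and ghost elimination then operate on top of this per-pixel decision.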
|
832 |
Developing an optimization algorithm within an e-referral program for clinical specialist selection, based on an extensive e-referral program analysis / Carrick, Curtis 08 July 2013 (has links)
When referring physicians decide to refer their patients to specialist care, they rarely, if ever, make a referral decision with the benefit of having access to all of the desirable information. It is therefore highly unlikely that the referring physician will make the optimal choice of specialist for that particular referral. A specialist selection optimization algorithm was developed to guarantee that the “right specialist” for each patient’s referral was chosen. The specialist selection optimization algorithm was developed based on feedback from over 120 users of the e-referral program. The developed algorithm was simulated, tested, and validated in MATLAB. Results from the MATLAB simulation demonstrate that the algorithm functioned as it was designed to. The developed algorithm provides referring physicians with an unprecedented level of support for their decision of which specialist to refer their patient to.
|
833 |
Novel tree-based algorithms for computational electromagnetics / Aronsson, Jonatan January 2011 (has links)
Tree-based methods have wide applications for solving large-scale problems in electromagnetics, astrophysics, quantum chemistry, fluid mechanics, acoustics, and many more areas. This thesis focuses on their applicability for solving large-scale problems in electromagnetics. The Barnes-Hut (BH) algorithm and the Fast Multipole Method (FMM) are introduced along with a survey of important previous work. The required theory for applying those methods to problems in electromagnetics is presented with particular emphasis on the capacitance extraction problem and broadband full-wave scattering.
A novel single source approximation is introduced for approximating clusters of electrostatic sources in multi-layered media. The approximation is derived by matching the spectra of the field in the vicinity of the stationary phase point. Combined with the BH algorithm, a new algorithm is shown to be an efficient method for evaluating electrostatic fields in multilayered media. Specifically, the new BH algorithm is well suited for fast capacitance extraction.
The BH algorithm is also adapted to the scalar Helmholtz kernel by using the same methodology to derive an accurate single source approximation. The result is a fast algorithm that is suitable for accelerating the solution of the Electric Field Integral Equation (EFIE) for electrically small structures.
Finally, a new version of FMM is presented that is stable and efficient from the low frequency regime to mid-range frequencies. By applying analytical derivatives to the field expansions at the observation points, the proposed method can rapidly evaluate vectorial kernels that arise in the FMM-accelerated solution of EFIE, the Magnetic Field Integral Equation (MFIE), and the Combined Field Integral Equation (CFIE).
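The cluster-acceptance idea underlying the Barnes-Hut treecode can be illustrated with a much-simplified, flat (non-hierarchical) sketch: a cluster that is far enough away, judged by its size-over-distance ratio against an opening parameter theta, is replaced by a single monopole at its charge centroid. All names, the positive-charge assumption, and the flat cluster list are illustrative only; the thesis's single-source approximation for multilayered media is considerably more involved:

```python
import math

def potential_direct(targets, sources):
    """Direct O(N*M) electrostatic potential: phi(t) = sum_i q_i / |t - s_i|.
    Assumes targets do not coincide with sources."""
    out = []
    for tx, ty in targets:
        out.append(sum(q / math.hypot(tx - sx, ty - sy)
                       for sx, sy, q in sources))
    return out

def potential_bh(targets, clusters, theta=0.5):
    """Treecode-style evaluation over a flat list of (size, sources)
    clusters: if size / distance-to-centroid < theta, use a monopole
    (total charge at the charge centroid); otherwise sum directly.
    Assumes positive charges, so the centroid is well defined."""
    out = []
    for tx, ty in targets:
        phi = 0.0
        for size, srcs in clusters:
            Q = sum(q for _, _, q in srcs)
            cx = sum(x * q for x, _, q in srcs) / Q
            cy = sum(y * q for _, y, q in srcs) / Q
            d = math.hypot(tx - cx, ty - cy)
            if size / d < theta:
                phi += Q / d               # far cluster: monopole term
            else:                          # near cluster: direct sum
                phi += sum(q / math.hypot(tx - sx, ty - sy)
                           for sx, sy, q in srcs)
        out.append(phi)
    return out

sources = [(0.0, 0.0, 1.0), (0.5, 0.0, 1.0), (0.0, 0.5, 1.0), (0.5, 0.5, 1.0)]
far = [(100.0, 0.0)]
print(potential_direct(far, sources)[0], potential_bh(far, [(1.0, sources)])[0])
```

A real BH implementation replaces the flat cluster list with a recursively built tree so that the acceptance test prunes whole subtrees, giving the O(N log N) behaviour the thesis exploits.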
|
834 |
Reducing queue wait times at Los Angeles International Airport / Sedani, Harshit 01 January 2014 (has links)
Operations research and queueing theory have many different applications, providing tremendous value for different organizations. With the rise of fast computers and better data, stochastic processes can be modeled more accurately in simulations, producing results of higher quality. The application of operations research is an interesting intersection of mathematics, statistics, computer science, and management science. In this project, the benefits of using pointwise stationary approximations and stationary independent period-by-period approximations in conjunction to simulate staffing requirements at LAX (Los Angeles International Airport) are examined, with the motivation of reducing arrival processing times. This paper then examines the performance of different airport layouts using a discrete-event simulation.
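The pointwise stationary approximation (PSA) mentioned above treats each staffing period as an independent stationary M/M/c queue at that period's arrival rate, then picks the fewest servers meeting a delay target. A minimal sketch; the arrival rates, service rate, and delay target below are invented for illustration and are not LAX data:

```python
import math

def erlang_c(c, a):
    """Probability an arriving customer must wait in an M/M/c queue with
    offered load a = lambda/mu (requires a < c for stability)."""
    inv = sum(a ** k / math.factorial(k) for k in range(c))
    top = a ** c / math.factorial(c) * c / (c - a)
    return top / (inv + top)

def psa_staffing(rates, mu, target_wait_prob=0.2):
    """Pointwise stationary approximation: for each period, treat the
    queue as stationary at that period's arrival rate and choose the
    fewest servers whose delay probability meets the target."""
    staff = []
    for lam in rates:
        a = lam / mu
        c = max(1, math.ceil(a) + 1)       # smallest stable server count
        while erlang_c(c, a) > target_wait_prob:
            c += 1
        staff.append(c)
    return staff

# Hourly arrival rates (passengers/min) and a per-officer service rate
print(psa_staffing([2.0, 5.0, 9.0], mu=1.0))
```

The stationary independent period-by-period (SIPP) variant differs mainly in how the per-period arrival rate is chosen (e.g. averaged over the period) before the same stationary calculation is applied.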
|
835 |
Investigating some heuristic solutions for the two-dimensional cutting stock problem / S.M. Manyatsi / Manyatsi, Sanele Mduduzi Innocent January 2010 (has links)
In this study, the two-dimensional cutting stock problem (2DCSP) is considered. This is a problem that occurs in the cutting of a number of smaller rectangular pieces or items from a set of large stock rectangles. It is assumed that the set of large objects is sufficient to accommodate all the small items. A heuristic procedure is developed to solve the two-dimensional single stock-size cutting stock problem (2DSSSCSP). This is the special case where the large rectangles are all of the same size. The major objective is to minimize waste and the number of stock sheets utilized.
The heuristic procedures developed to solve the 2DSSSCSP are based on the generation of cutting patterns. The Wang algorithm and a specific commercial software package are used to generate these patterns. The commercial software was chosen from a set of commercial software packages available on the market. A combinatorial process is applied to generate sets of cutting patterns using the Wang algorithm and the commercial software. The generated cutting patterns are used to formulate an integer linear programming model, which is solved using an optimization solver.
Empirical experimentation is carried out to test the heuristic procedures using data obtained from both small and real-world application problem instances. The results obtained show that the heuristic procedures developed produce good-quality results for both small and real-life problem instances. It is quite clear that the heuristic procedure developed to solve the 2DSSSCSP produces cutting patterns which are acceptable in terms of waste generated and may offer useful alternatives to approaches currently available.
Broadly stated, this study involves investigating available (commercial) software in order to assess, formulate and investigate methods to attempt to benchmark software systems and algorithms, and to employ ways to enhance the solutions obtained by using these software systems. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
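The pattern-based formulation described above can be sketched as follows: once cutting patterns have been generated (by the Wang algorithm or the commercial package), choosing how many sheets to cut with each pattern is an integer program minimising the number of stock sheets. The brute-force search below stands in for the optimization solver and only works on toy instances; the patterns and demands are invented:

```python
from itertools import product

def min_sheets(patterns, demand, max_count=10):
    """Pattern-based cutting stock: choose nonnegative integer
    multiplicities x_p per cutting pattern so every item demand is met,
    minimising the total number of stock sheets sum(x_p).
    Exhaustive search up to `max_count` copies of each pattern."""
    n_items = len(demand)
    best = None
    for x in product(range(max_count + 1), repeat=len(patterns)):
        supplied = [sum(xi * p[i] for xi, p in zip(x, patterns))
                    for i in range(n_items)]
        if all(s >= d for s, d in zip(supplied, demand)):
            if best is None or sum(x) < sum(best):
                best = x
    return best

# Two item types; each pattern gives the item counts cut from one sheet
patterns = [(3, 0), (1, 2), (0, 4)]
print(min_sheets(patterns, demand=(7, 8)))   # a multiplicity vector using 5 sheets
```

In the actual procedure this enumeration is replaced by an ILP solver, which scales to the real-world pattern sets the study benchmarks.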
|
836 |
Algorithms, measures and upper bounds for satisfiability and related problems / Wahlström, Magnus January 2007 (has links)
The topic of exact, exponential-time algorithms for NP-hard problems has received a lot of attention, particularly with a focus on producing algorithms with stronger theoretical guarantees, e.g. upper bounds on the running time of the form O(c^n) for some c. Better methods of analysis may have an impact not only on these bounds, but on the nature of the algorithms as well. The most classic method of analysis of the running time of DPLL-style ("branching" or "backtracking") recursive algorithms consists of counting the number of variables that the algorithm removes at every step. Notable improvements include Kullmann's work on complexity measures, and Eppstein's work on solving multivariate recurrences through quasiconvex analysis. Still, one limitation that remains in Eppstein's framework is that it is difficult to introduce (non-trivial) restrictions on the applicability of a possible recursion. We introduce two new kinds of complexity measures, representing two ways to add such restrictions on applicability to the analysis. In the first measure, the execution of the algorithm is viewed as moving between a finite set of states (such as the presence or absence of certain structures or properties), where the current state decides which branchings are applicable, and each branch of a branching contains information about the resultant state. In the second measure, it is instead the relative sizes of the modelled attributes (such as the average degree or other concepts of density) that control the applicability of branchings. We adapt both measures to Eppstein's framework and use these tools to provide algorithms with stronger bounds for a number of problems. The problems we treat are satisfiability for sparse formulae, exact 3-satisfiability, 3-hitting set, and counting models for 2- and 3-satisfiability formulae; in every case the bound we prove is stronger than previously known bounds.
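The O(c^n) bounds in this style of analysis come from the characteristic equation of a branching vector: if the branches of a recursive step reduce the measure by t_1, ..., t_k, the bound's base c is the unique root greater than 1 of sum_i c^(-t_i) = 1. A small numerical sketch of that calculation (not code from the thesis):

```python
def branching_number(vector, tol=1e-9):
    """Root c > 1 of sum(c**-t for t in vector) = 1: the base of the
    O(c^n) running-time bound for a branching whose branches decrease
    the measure by t_1, ..., t_k. Solved by bisection."""
    f = lambda c: sum(c ** -t for t in vector) - 1
    lo, hi = 1.0, 2.0
    while f(hi) > 0:              # grow the bracket until the root is inside
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return hi

# Classic (1, 2) branching (remove one variable in one branch, two in
# the other) yields the golden ratio, i.e. an O(1.619^n) bound.
print(round(branching_number((1, 2)), 4))   # ~1.618
```

The measures introduced in the thesis refine exactly this computation: state- or density-dependent restrictions rule out the worst branching vectors, lowering the resulting c.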
|
837 |
Probabilistic Approaches to Consumer-generated Review Recommendation / Zhang, Richong 03 May 2011 (has links)
Consumer-generated reviews play an important role in online purchase decisions for many consumers. However, the quality and helpfulness of online reviews vary significantly. In addition, the helpfulness of different consumer-generated reviews is not disclosed to consumers unless they carefully analyze the overwhelming amount of available content. It is therefore of vital importance to develop predictive models that can evaluate online product reviews efficiently and then display the most useful reviews to consumers, in order to assist them in making purchase decisions.
This thesis examines the problem of building computational models for predicting whether a consumer-generated review is helpful based on consumers' online votes on other reviews (where a consumer's vote on a review is either HELPFUL or UNHELPFUL), with the aim of suggesting the most suitable products and vendors to consumers. In particular, we propose three different helpfulness prediction approaches for consumer-generated reviews. Our entropy-based approach is relatively simple and suitable for applications requiring a simple recommendation engine with fully-voted reviews. However, the entropy-based approach, like the existing approaches, lacks a general framework and is limited to utilizing fully-voted reviews. We therefore present a probabilistic helpfulness prediction framework to overcome these limitations. To demonstrate the versatility and flexibility of this framework, we propose an EM-based model and a logistic-regression-based model. We show that the EM-based model can utilize reviews voted by a very small number of voters as the training set, and that the logistic-regression-based model is suitable for real-time helpfulness prediction of consumer-generated reviews. To the best of our knowledge, this is the first framework for modeling review helpfulness and measuring the goodness of models. Although this thesis primarily considers the problem of review helpfulness prediction, the presented probabilistic methodologies are, in general, applicable for developing recommender systems that make recommendations based on other forms of user-generated content.
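As a sketch of the logistic-regression style of helpfulness model, the snippet below fits a plain stochastic-gradient-descent logistic regression on made-up review features; the features, labels, and hyperparameters are illustrative only and are not the thesis's model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Stochastic gradient descent for logistic regression: learn (w, b)
    so that sigmoid(w . x + b) models the probability that a review is
    voted HELPFUL."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                   # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Invented per-review features: (normalised length, rating extremity)
X = [(0.9, 0.2), (0.8, 0.1), (0.2, 0.9), (0.1, 0.8)]
y = [1, 1, 0, 0]                           # 1 = mostly HELPFUL votes
w, b = train_logistic(X, y)
predict = lambda x: sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

Because the model outputs a probability rather than a hard label, partially-voted reviews can be scored in real time, which is the property the thesis highlights for this approach.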
|
838 |
Search Space Analysis and Efficient Channel Assignment Solutions for Multi-interface Multi-channel Wireless Networks / González Barrameda, José Andrés 12 August 2011 (has links)
This thesis is concerned with the channel assignment (CA) problem in multi-channel multi-interface wireless mesh networks (M2WNs). First, for M2WNs with general topologies, we rigorously demonstrate using the combinatorial principle of inclusion/exclusion that the CA solution space can be quantified, indicating that its cardinality is greatly influenced by the number of radio interfaces installed on each router. Based on this analysis, a novel scheme is developed to construct a new reduced search space, represented by a lattice structure, that is searched more efficiently for a CA solution. The elements in the reduced lattice-based space, labeled Solution Structures (SS), represent groupings of feasible CA solutions satisfying the radio constraints at each node. Two algorithms are presented for searching the lattice structure. The first is a greedy algorithm that finds a good SS in polynomial time, while the second provides a user-controlled depth-first search for the optimal SS. The obtained SS is used to construct an unconstrained weighted graph coloring problem which is then solved to satisfy the soft interference constraints.
For the special class of full M2WNs (fM2WNs), we show that an optimal CA solution can only be achieved with a certain number of channels; we denote this number as the characteristic channel number and derive upper and lower bounds for that number as a function of the number of radios per router. Furthermore, exact values for the required channels for minimum interference are obtained when certain relations between the number of routers and the radio interfaces in a given fM2WN are satisfied. These bounds are then employed to develop closed-form expressions for the minimum channel interference that achieves the maximum throughput for uniform traffic on all communication links. Accordingly, a polynomial-time algorithm to find a near-optimal solution for the channel assignment problem in fM2WN is developed.
Experimental results confirm the obtained theoretical results and demonstrate the performance of the proposed schemes.
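The inclusion/exclusion counting can be illustrated at a single node: with a limited number of radios, a node's links can use at most that many distinct channels, and the number of admissible per-node assignments follows by summing over surjective channel choices. This toy count illustrates the combinatorial principle only, not the thesis's full M2WN derivation:

```python
from math import comb

def surjections(n, k):
    """Number of onto functions from n labelled links to k labelled
    channels, by inclusion/exclusion: sum_i (-1)^i C(k,i) (k-i)^n."""
    return sum((-1) ** i * comb(k, i) * (k - i) ** n for i in range(k + 1))

def node_assignments(links, channels, radios):
    """Channel assignments of a node's `links` drawn from `channels`
    channels that use at most `radios` distinct channels (one channel
    per installed radio interface)."""
    return sum(comb(channels, j) * surjections(links, j)
               for j in range(1, min(radios, links, channels) + 1))

# 3 links, 4 channels: an unconstrained node has 4^3 = 64 assignments,
# but with only 2 radios the admissible count drops.
print(node_assignments(3, 4, radios=3))   # 64
print(node_assignments(3, 4, radios=2))   # 40
```

Multiplying such per-node counts (and correcting for shared links) is what makes the full solution space so large, and why the thesis's reduced lattice of Solution Structures pays off.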
|
839 |
Stochastic Search Genetic Algorithm Approximation of Input Signals in Native Neuronal Networks / Anisenia, Andrei 09 October 2013 (has links)
The present work investigates the applicability of Genetic Algorithms (GA) to the problem of signal propagation in Native Neuronal Networks (NNNs). These networks are composed of neurons, some of which receive input signals. The signals propagate through the network by transmission between neurons. The research focuses on the regeneration of the output signal of the network without knowing the original input signal. The computational complexity of the problem makes exact computation prohibitive. We propose to use a heuristic approach, the Genetic Algorithm. Three algorithms are developed based on the GA technique. The developed algorithms are tested on two different networks with varying input signals. The results obtained from the testing indicate significantly better performance of the developed algorithms compared to the Uniform Random Search (URS) technique, which is used as a control group. The importance of the research lies in demonstrating the ability of GA-based algorithms to successfully solve the problem at hand.
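A minimal GA of the kind described can be sketched with a stand-in fitness function; in the thesis setting, fitness would instead compare a candidate input's simulated network output against the observed output. The bit-string encoding and all parameters here are illustrative:

```python
import random

def genetic_search(fitness, length, pop_size=40, gens=300, mut=0.03, seed=0):
    """Minimal genetic algorithm over bit-string candidates:
    tournament selection, one-point crossover, per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)    # tournament of 3
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, length)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Stand-in fitness: agreement with a hidden input signal. The real
# objective would run the candidate through a forward model of the NNN.
hidden = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
score = lambda cand: sum(c == h for c, h in zip(cand, hidden))
best = genetic_search(score, len(hidden))
```

A Uniform Random Search baseline, as used for the control group, would simply draw the same number of random candidates and keep the best, with no selection or crossover.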
|
840 |
Preparing for a Safety Evaluation of Rotavirus Vaccine Using Health Services Data in Ontario: The Development of a Diagnostic Algorithm for Intussusception, an Estimation of Baseline Incidence and an Evaluation of Methods / Ducharme, Robin Beverly 19 December 2013 (has links)
In view of the recent implementation of a publicly funded rotavirus vaccination program in Ontario, we undertook studies to help guide the design of a safety evaluation of the vaccine with respect to intussusception. We used administrative data to develop and validate an algorithm for intussusception, and quantified its incidence in Ontario. We also conducted a systematic review of study designs used to evaluate post-licensure vaccine safety, and discussed each design’s strengths and weaknesses.
The validated algorithm for intussusception was sensitive (89.3%) and highly specific (>99.9%). We observed the highest mean incidence (34 / 100,000) in males <1 year of age.
While other designs are more robust, the inability to ascertain individual vaccination status from Ontario’s administrative data dictated our selection of an ecological design for safety evaluation of rotavirus vaccine.
Data assimilated from this thesis represent a critical step toward the timely evaluation of rotavirus vaccine safety in Ontario.
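The validation statistics reported above reduce to confusion-matrix arithmetic against the chart-review reference standard. The counts below are invented for illustration and are chosen only so the sensitivity rounds to the reported 89.3%; they are not the thesis's data:

```python
def validation_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value of a
    case-finding algorithm against a reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

def incidence_per_100k(cases, person_years):
    """Crude incidence rate per 100,000 person-years."""
    return 100_000 * cases / person_years

# Invented counts: 25 true cases flagged, 3 missed, 2 false alarms
sens, spec, ppv = validation_metrics(tp=25, fp=2, fn=3, tn=9970)
print(round(sens, 3), round(spec, 4))
```

With an algorithm this specific, even small false-positive rates matter at population scale, which is why the thesis pairs the validation with a baseline incidence estimate.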
|