301 |
De novo peptide sequencing methods for tandem mass spectra (August 2015)
De novo peptide sequencing from MS/MS spectra has become of primary importance in proteomics. It provides essential information for studies of protein structure and function. With the availability of various types of MS/MS spectra, many computational methods have been developed to infer peptide sequences from them. However, current de novo peptide sequencing methods still have limitations, notably a lack of models that suitably reflect MS/MS spectra, limited information extracted from MS/MS spectra, and inefficient use of multiple spectra. This thesis addresses some of these limitations with a series of novel computational methods designed for various MS/MS spectra and their combinations.
The main content of the thesis starts with a comprehensive review of recent developments in de novo peptide sequencing methods, followed by two novel methods for the single-spectrum sequencing problem, and then two methods for sequencing paired spectra. The first chapter introduces relevant background, the objectives of the study, and the structure of the thesis. A comprehensive review of de novo peptide sequencing methods follows: it summarizes recent developments in computational methods for various experimental spectra, compares and analyzes their advantages and disadvantages, and points out future research directions. Guided by these directions, the thesis next presents two novel methods designed for higher-energy collisional dissociation (HCD) spectra and for electron capture dissociation (ECD) or electron transfer dissociation (ETD) spectra, respectively. These methods apply new spectrum graph models with multiple types of edges, integrate amino acid combination (AAC) information and peptide tags, and exploit spectrum-specific information to suit the different spectra. The multiple-spectra sequencing problem is then studied. A framework for de novo peptide sequencing from multiple spectra is given, with applications to two kinds of spectra pairs: pairs of spectrally complementary spectra, and pairs of similar spectra with differing properties. These methods include effective spectra-merging criteria and parent mass correction steps, and modify the previously proposed graph models to fit the merged spectra. Experiments on several experimental MS/MS datasets and dataset pairs show the advantages of the proposed methods in terms of peptide sequencing accuracy. Conclusions and directions for future work are given at the end of the thesis.
To summarize, the thesis proposes a series of novel computational methods for de novo peptide sequencing targeting different types of MS/MS spectra and their combinations. Experimental results show that the proposed methods either outperform existing competing methods or fill gaps in the suite of currently available methods.
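To illustrate the spectrum graph idea at the core of these methods: spectrum peaks are interpreted as prefix masses, and two nodes are connected when their mass difference matches an amino acid residue mass. The sketch below is a minimal single-edge-type version with made-up peaks; the thesis's models are richer, with multiple edge types, AAC information, and peptide tags.

```python
# Minimal spectrum-graph sketch (illustrative only; not the thesis's models).
# Monoisotopic residue masses in daltons for a few amino acids.
RESIDUE_MASSES = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
}
TOLERANCE = 0.02  # hypothetical mass tolerance in daltons

def build_spectrum_graph(prefix_masses):
    """Connect two nodes whenever their mass difference matches a residue."""
    nodes = sorted(prefix_masses)
    edges = []
    for i, m1 in enumerate(nodes):
        for m2 in nodes[i + 1:]:
            for aa, mass in RESIDUE_MASSES.items():
                if abs((m2 - m1) - mass) <= TOLERANCE:
                    edges.append((m1, m2, aa))
    return edges

# Toy peaks consistent with the peptide "GAS"; a de novo sequence is a path.
peaks = [0.0, 57.021, 128.058, 215.090]
for m1, m2, aa in build_spectrum_graph(peaks):
    print(f"{m1:.3f} -> {m2:.3f} : {aa}")
```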
|
302 |
Algorithmic techniques for the Micron Automata Processor (Roy, Indranil, 21 September 2015)
Our research is the first in-depth study of the Micron Automata Processor, a novel reconfigurable streaming co-processor purpose-built to execute thousands of non-deterministic finite automata (NFAs) in parallel. By design, this processor is well suited to accelerating applications that need to find all occurrences of thousands of complex string patterns in the input data. We have validated this by implementing two such applications, one from network security and the other from bioinformatics, both of which are significantly faster than their state-of-the-art counterparts. Our research has also widened the scope of applications that can be accelerated by this processor by finding ways to quickly program any generic graph into it and then search for hard-to-find features such as maximal cliques and Hamiltonian paths. These applications and algorithms have yielded valuable design inputs for the next generation of the chip, which is currently in the design phase. We hope that this work paves the way for early adoption of this upcoming architecture and for efficient solutions to some currently computationally challenging problems.
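As a software illustration of what the processor does in hardware, the sketch below advances a set of active NFA states over an input stream and reports every occurrence of every pattern. This is not the Automata Processor's programming interface, only the underlying automata idea in plain Python.

```python
# Software sketch of streaming multi-pattern matching with NFAs
# (illustrative only; the Automata Processor runs such automata in hardware).
def match_stream(patterns, stream):
    """Report (pattern, end_index) for every occurrence of any pattern."""
    active = []   # (pattern_index, chars_matched_so_far) pairs
    hits = []
    for i, ch in enumerate(stream):
        # Start states stay active: begin a new attempt for every pattern
        # at every input position, as always-active AP start states do.
        active.extend((p, 0) for p in range(len(patterns)))
        next_active = []
        for p, pos in active:
            if patterns[p][pos] == ch:
                if pos + 1 == len(patterns[p]):
                    hits.append((patterns[p], i))  # reporting state reached
                else:
                    next_active.append((p, pos + 1))
        active = next_active
    return hits

print(match_stream(["abc", "bca"], "xabcax"))  # [('abc', 3), ('bca', 4)]
```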
|
303 |
On End Vertices of Search Algorithms (Gorzny, Jan, 24 August 2015)
Given a graph G=(V,E), a vertex ordering of G is a total order v1,v2,...,vn of V. A graph search algorithm is a systematic method for visiting each vertex in a graph, naturally producing a vertex ordering of the graph. We explore the problem of determining whether a given vertex in a graph can be the end (last) vertex of a search ordering for various common graph search algorithms restricted to various graph classes, as well as the related problem of determining whether a vertex can be the end-vertex when a start vertex for the search is specified. The former is the end-vertex problem, and the latter is the beginning-end-vertex problem. For the beginning-end-vertex problem, we show that it is NP-complete on bipartite graphs, and even on degree-restricted bipartite graphs, for Lexicographic Breadth First Search, but solvable in polynomial time on split graphs for Breadth First Search. We show that the end-vertex problem is tractable for Lexicographic Breadth First Search on proper interval bigraphs and for Lexicographic Depth First Search on chordal graphs. Further, we show that the end-vertex problem is NP-complete for both Lexicographic Breadth First Search and Depth First Search on bipartite graphs.
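For intuition, the end-vertex problem can be decided by brute force on small graphs: enumerate every ordering the search can produce and check whether the target vertex can come last. A minimal sketch for Lexicographic Breadth First Search follows; the recursive enumeration of tie-breaks is exponential in general, which is exactly why polynomial-time algorithms on restricted graph classes are of interest.

```python
# Brute-force LBFS end-vertex check on small graphs (illustrative sketch).
def lbfs_can_end_at(adj, target):
    """Return True if some LBFS ordering of the graph ends at `target`."""
    n = len(adj)

    def extend(order, labels):
        if len(order) == n:
            return order[-1] == target
        unvisited = [v for v in adj if v not in order]
        best = max(labels[v] for v in unvisited)
        for v in unvisited:
            if labels[v] != best:
                continue  # LBFS must visit a vertex with the largest label
            new_labels = dict(labels)
            for u in adj[v]:
                if u not in order:
                    # Earlier visits append larger priorities, so tuple
                    # comparison is the lexicographic label comparison.
                    new_labels[u] = new_labels[u] + (n - len(order),)
            if extend(order + [v], new_labels):
                return True
        return False

    return extend([], {v: () for v in adj})

# Path 0-1-2: the middle vertex can never be an LBFS end-vertex.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(lbfs_can_end_at(adj, 1), lbfs_can_end_at(adj, 2))  # False True
```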
|
304 |
Improving the efficiency of dynamic traffic assignment through computational methods based on combinatorial algorithms (Nezamuddin, 12 October 2011)
Transportation planning and operation require determining the state of the transportation system under different network supply and demand conditions. The most fundamental determinant of the state of a transportation system is the time-varying traffic flow pattern on its roadway segments. It forms the basis for numerous engineering analyses used in operational- and planning-level decision-making. Dynamic traffic assignment (DTA) models are the leading modeling tools for determining time-varying traffic flow patterns under changing network conditions. DTA models have matured over the past three decades and are now being adopted by transportation planning agencies and traffic management centers. However, DTA models for large-scale regional networks require excessive computational resources. The problem is further compounded for applications such as congestion pricing, capacity calibration, and network design, for which DTA must be solved repeatedly as a sub-problem. This dissertation aims to improve the efficiency of DTA models and increase their viability for various planning and operational applications.
To this end, a suite of computational methods based on a combinatorial approach to dynamic traffic assignment was developed in this dissertation. First, a new polynomial-time combinatorial algorithm for DTA was developed. The combinatorial DTA (CDTA) model complements and aids simulation-based DTA models rather than replacing them, because various policy measures and active traffic control strategies are best modeled with simulation-based DTA models. The solution obtained from the CDTA model was provided as an initial feasible solution to a simulation-based DTA model to improve its efficiency; this process is called "warm starting" the simulation-based DTA model. The warm-start process itself was then made more efficient through parallel computing, applied both to the CDTA model and to the traffic simulator used for warm starting. Finally, another warm-start method, based on the static traffic assignment model, was tested on the simulation-based DTA model.
The computational methods developed in this dissertation were tested on the Anaheim, CA and Winnipeg, Canada networks. Models warm-started using the CDTA solution performed better than purely simulation-based DTA models in terms of equilibrium convergence metrics and run time. Warm-start methods using solutions from static traffic assignment models showed similar improvements. Applying parallel computing to the CDTA model reduced execution time by employing multiple processors. The parallel version of the traffic simulator can also be embedded into the simulation-assignment framework of simulation-based DTA models to improve their efficiency.
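The value of warm starting can be conveyed with a toy fixed-point model standing in for the simulation-assignment loop: a near-equilibrium initial solution needs far fewer iterations than an empty-network start. Everything below (the two-route network, the logit route choice, the damping factor) is made up for illustration and is not the dissertation's CDTA model or simulator.

```python
import math

# Toy two-route network: travel time grows linearly with assigned flow.
DEMAND = 100.0              # trips to assign (made up)
FREE_FLOW = (10.0, 12.0)    # free-flow travel times per route (made up)
SLOPE = (0.10, 0.05)        # congestion slope per unit flow (made up)

def travel_times(flow_a):
    flow_b = DEMAND - flow_a
    return (FREE_FLOW[0] + SLOPE[0] * flow_a,
            FREE_FLOW[1] + SLOPE[1] * flow_b)

def equilibrate(flow_a, damping=0.1, tol=0.01):
    """Damped fixed-point iteration with logit route choice.

    Stands in for a simulation-assignment loop; returns (flow, iterations).
    """
    k = 0
    while True:
        k += 1
        t_a, t_b = travel_times(flow_a)
        share_a = 1.0 / (1.0 + math.exp(0.5 * (t_a - t_b)))  # logit split
        new_flow = (1 - damping) * flow_a + damping * DEMAND * share_a
        if abs(new_flow - flow_a) < tol:
            return new_flow, k
        flow_a = new_flow

cold_flow, cold_iters = equilibrate(0.0)   # cold start: empty network
warm_flow, warm_iters = equilibrate(47.0)  # warm start: near-equilibrium seed
print(f"cold: {cold_iters} iterations, warm: {warm_iters} iterations")
```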
|
305 |
Applications of quadratic programming to optimal portfolio selection (Λύρη, Αναστασία, 29 August 2008)
|
306 |
Resource allocation of drones flown in a simulated environment (Wikström, Anders, January 2014)
In this report we compare three assignment algorithms for assigning a set of drones to a set of goal locations in as resource-efficient a way as possible. An experiment is set up to compare how these algorithms perform in a reasonably realistic simulated environment, built using the Robot Operating System (ROS). We found that by introducing a threshold for the Hungarian algorithm we could reduce the total time it takes to complete the task while only slightly increasing the total distance traversed by the drones.
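The abstract does not spell out the threshold variant. One plausible reading, sketched below with SciPy's Hungarian-algorithm implementation, is to penalize assignments longer than the threshold so that no single drone is sent on a disproportionately long flight, trading a small increase in total distance for a shorter completion time. The penalty scheme and coordinates are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_drones(drones, goals, threshold=None):
    """Minimum-total-distance drone-to-goal assignment.

    `threshold` is a hypothetical reading of the report's idea: flights
    longer than it are heavily penalized, so the solver avoids sending one
    drone on a very long leg whenever a feasible alternative exists.
    """
    cost = np.linalg.norm(drones[:, None, :] - goals[None, :, :], axis=2)
    if threshold is not None:
        cost = np.where(cost > threshold, cost + 1e6, cost)  # soft forbid
    rows, cols = linear_sum_assignment(cost)
    pairs = [(int(r), int(c)) for r, c in zip(rows, cols)]
    return pairs, float(cost[rows, cols].sum())

drones = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
goals = np.array([[1.0, 1.0], [6.0, 1.0], [1.0, 6.0]])
pairs, total = assign_drones(drones, goals, threshold=3.0)
print(pairs, round(total, 2))  # [(0, 0), (1, 1), (2, 2)] 4.24
```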
|
307 |
Evaluation methods for comparing energy savings due to variable speed pumping in wastewater applications (Spataru, Adrian, January 2014)
This Master's thesis work was carried out at Xylem HQ, Sundbyberg, Sweden, in collaboration with Linköping University, Department of Management and Engineering. The goal was to evaluate energy savings in wastewater pumping stations in several different ways and to quantify the discrepancies between them, with emphasis on comparing the theoretical model against measured data. Two pump stations were modeled by mathematical calculations based on theoretical pump and system curves. Based on the same inputs, a commercial tool was used to calculate energy savings. Moreover, theoretical curves and variable-speed drives were combined into a custom testing platform developed in LabVIEW as an alternative evaluation method. Finally, measured data were collected and used in a specific-energy algorithm designed to take water level and energy as inputs. Regarding method accuracy, the initial assumptions proved wrong: for a given frequency, all four evaluation methods produce similar values. The results also confirm variable speed as a control philosophy that uses less energy than direct-on-line operation.
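A minimal sketch of such a specific-energy calculation, assuming a prismatic wet well so pumped volume can be estimated from level drops (inflow during pumping is ignored here, and the wet-well area is a made-up figure; the thesis's actual algorithm is not given in this abstract):

```python
# Sketch of a specific-energy (kWh per cubic metre) calculation from
# level and energy readings; geometry and readings are illustrative only.
WET_WELL_AREA_M2 = 6.0  # hypothetical wet-well cross-sectional area

def specific_energy(levels_m, energy_meter_kwh):
    """kWh per m^3: pumped volume estimated from falling-level intervals."""
    volume = 0.0
    for prev, cur in zip(levels_m, levels_m[1:]):
        if cur < prev:  # pump running, well level falling
            volume += (prev - cur) * WET_WELL_AREA_M2
    energy = energy_meter_kwh[-1] - energy_meter_kwh[0]  # cumulative meter
    return energy / volume if volume > 0 else float("nan")

levels = [2.0, 1.6, 1.2, 1.5, 1.1, 0.8]                    # metres
meter = [100.0, 100.4, 100.8, 100.8, 101.2, 101.5]          # kWh readings
print(f"{specific_energy(levels, meter):.3f} kWh/m3")       # 0.167 kWh/m3
```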
|
308 |
Multi-view hockey tracking with trajectory smoothing and camera selection (Wu, Lan)
We address the problem of multi-view multi-target tracking using multiple stationary cameras in the application of hockey tracking, and test the approach with data from two cameras. The system is based on previous work by Okuma et al. [50]. We replace AdaBoost detection with blob detection in both image coordinate systems after background subtraction. The sets of blob-detection results are then mapped to the rink coordinate system using a homography transformation. These observations are further merged into a final detection result which is incorporated into the particle filter. In addition, we extend the particle filter to use multiple observation models, each corresponding to one view. An observation likelihood and a reference color model are also maintained for each player in each view, and are updated only when the player is not occluded in that view. As a result of the expanded coverage and multiple perspectives of multi-view tracking, even when the target is occluded in one view it can still be tracked as long as it is visible from another view. The multi-view tracking data are further processed by trajectory smoothing using a maximum a posteriori (MAP) smoother. Finally, automatic camera selection is performed using a hidden Markov model to create personalized video programs.
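The homography step can be sketched with OpenCV: four image-to-rink point correspondences define the transformation, which then maps blob centroids into rink coordinates. The correspondences and detections below are made up; in the real system they come from camera calibration.

```python
import numpy as np
import cv2

# Map blob detections from image coordinates to rink coordinates via a
# homography (correspondences are made up; real ones come from calibration).
image_pts = np.array([[100, 400], [1180, 400], [1000, 80], [280, 80]],
                     dtype=np.float32)
rink_pts = np.array([[0, 0], [60.0, 0], [60.0, 25.9], [0, 25.9]],
                    dtype=np.float32)  # an NHL rink is about 60 m x 25.9 m
H, _ = cv2.findHomography(image_pts, rink_pts)

blobs = np.array([[[640, 240]], [[300, 350]]], dtype=np.float32)  # detections
rink_coords = cv2.perspectiveTransform(blobs, H)
print(rink_coords.reshape(-1, 2))  # player positions in rink coordinates (m)
```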
|
309 |
Multiobjective Design and Optimization of Polymer Flood Performance (Ekkawong, Peerapong, 16 December 2013)
A multiobjective genetic algorithm can be used to optimize two conflicting objectives in polymer flood design: oil production and the polymer utility factor. This approach yields a set of optimal solutions forming a trade-off curve (Pareto front) that maximizes oil production while preserving polymer performance; an optimal polymer flood design can then be selected through post-optimization analysis. A 2D synthetic example and a 3D field-scale application accounting for geologic uncertainty showed that, beyond the optimal design, a relatively minor increase in oil production requires much more polymer injection, and the polymer utility factor increases substantially.
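The Pareto front is simply the set of non-dominated designs. A minimal sketch with made-up candidates, maximizing oil production while minimizing the polymer utility factor:

```python
# Extract the Pareto front from candidate polymer-flood designs
# (values are made up; the study obtains candidates with a genetic algorithm).
def dominates(a, b):
    """a dominates b: no worse in both objectives, strictly better in one."""
    return (a["oil"] >= b["oil"] and a["utility"] <= b["utility"]
            and (a["oil"] > b["oil"] or a["utility"] < b["utility"]))

def pareto_front(designs):
    return sorted((d for d in designs
                   if not any(dominates(e, d) for e in designs)),
                  key=lambda d: d["oil"])

candidates = [
    {"oil": 1.00, "utility": 2.0}, {"oil": 1.10, "utility": 2.6},
    {"oil": 1.15, "utility": 4.5}, {"oil": 1.05, "utility": 3.0},
]
for d in pareto_front(candidates):
    print(d)  # the trade-off curve: extra oil costs ever more polymer
```

On this toy front, moving from the second to the third design buys about five percent more oil at nearly twice the utility factor, which mirrors the qualitative behavior the abstract reports.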
|
310 |
Workflow Scheduling Algorithms in the Grid (Dong, Fangpeng, 25 April 2009)
The development of wide-area networks and the availability of powerful computers as low-cost commodity components are changing the face of computation. These advances make it possible to utilize geographically distributed resources in multiple ownership domains to solve large-scale problems in science, engineering, and commerce. Research on this topic has led to the emergence of Grid computing. To realize the promising potential of the Grid's tremendous distributed resources, effective and efficient scheduling algorithms are fundamentally important.
However, scheduling problems are well known for their intractability, and many instances are in fact NP-complete. The situation is even more challenging in the Grid setting due to some of its unique characteristics: scheduling algorithms for traditional parallel and distributed systems, which usually run on homogeneous and dedicated resources, do not work well in the new environment.
This work focuses on workflow scheduling algorithms in the Grid scenario. New challenges are discussed, previous research in this realm is surveyed, and novel heuristic algorithms addressing the challenges are proposed and tested.
The proposed algorithms contribute to the literature by taking the following factors into account when producing a schedule for a DAG-based workflow: predictable performance fluctuations and non-deterministic performance models of Grid resources, co-scheduling of computation and data staging, the clustered distribution of Grid resources, and the ability to reschedule in response to performance changes after the initial schedule is made. The performance of the proposed algorithms is tested and analyzed by simulation under different workflow and resource configurations. / Thesis (Ph.D., Computing), Queen's University, 2009.
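The thesis's heuristics are not reproduced in this abstract; for a flavor of DAG workflow scheduling, the sketch below implements the classic HEFT-style pattern of ranking tasks by upward rank and placing each on the processor that finishes it earliest. The DAG, costs, and processor speeds are made up, and communication and data-staging costs are omitted.

```python
from functools import lru_cache

# Minimal HEFT-style list scheduler for a DAG workflow (a classic baseline,
# not the thesis's algorithms; the tiny DAG and all numbers are made up).
dag = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}  # task -> children
cost = {"A": 3.0, "B": 4.0, "C": 2.0, "D": 3.0}           # mean execution times
speed = [1.0, 0.5]                                         # processor speeds

@lru_cache(maxsize=None)
def upward_rank(task):
    """Length of the longest remaining path starting at `task`."""
    return cost[task] + max((upward_rank(c) for c in dag[task]), default=0.0)

ready = {t: 0.0 for t in dag}   # earliest start: all predecessors finished
proc_free = [0.0] * len(speed)
# Descending upward rank visits every parent before its children.
for task in sorted(dag, key=upward_rank, reverse=True):
    # Place the task on the processor that finishes it earliest.
    finish = [max(proc_free[p], ready[task]) + cost[task] / speed[p]
              for p in range(len(speed))]
    p = min(range(len(speed)), key=finish.__getitem__)
    start = max(proc_free[p], ready[task])
    proc_free[p] = finish[p]
    print(f"{task}: processor {p}, start {start:.1f}, finish {finish[p]:.1f}")
    for child in dag[task]:
        ready[child] = max(ready[child], finish[p])
```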
|