11. A Distributed Control Algorithm for Small Swarms in Cordon and Patrol. Alder, C Kristopher, 01 June 2016.
Distributed teams of air and ground robots have the potential to be very useful in a variety of application domains, and much work is being done to design distributed algorithms that produce useful behaviors. This thesis presents a set of distributed algorithms that operate under minimal human input for patrol and cordon tasks. The algorithms allow the team to surround and travel between objects of interest. Empirical analyses indicate that the surrounding behaviors are robust to variations in the shape of the object of interest, communication loss, and robot failures.
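The following is a minimal, hypothetical sketch of a decentralized cordon behavior (not the algorithms developed in this thesis): each simulated robot steers toward a ring around an object of interest while repelling nearby neighbors, using only locally available positions. All parameter names and gains are illustrative assumptions.

```python
import numpy as np

def cordon_step(positions, center, radius=5.0, k_ring=0.5, k_sep=1.0, sep_dist=2.0):
    """One decentralized update: each robot uses only its own position,
    the object location, and nearby neighbors' positions."""
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        # Attraction toward a ring of the given radius around the object of interest.
        to_center = center - p
        dist = np.linalg.norm(to_center)
        ring_force = k_ring * (dist - radius) * to_center / (dist + 1e-9)
        # Separation from neighbors within sensing range, to spread along the ring.
        sep_force = np.zeros(2)
        for j, q in enumerate(positions):
            if i == j:
                continue
            d = p - q
            n = np.linalg.norm(d)
            if n < sep_dist:
                sep_force += k_sep * d / (n**2 + 1e-9)
        new_positions[i] = p + 0.1 * (ring_force + sep_force)
    return new_positions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    robots = rng.uniform(-10, 10, size=(8, 2))   # 8 robots, random start positions
    target = np.array([0.0, 0.0])                # object of interest
    for _ in range(300):
        robots = cordon_step(robots, target)
    # Distances settle near the cordon radius for all robots.
    print(np.round(np.linalg.norm(robots - target, axis=1), 2))
```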
12. A combined machine-learning and graph-based framework for the 3-D automated segmentation of retinal structures in SD-OCT images. Antony, Bhavna Josephine, 01 December 2013.
Spectral-domain optical coherence tomography (SD-OCT) is a non-invasive imaging modality that allows for the quantitative study of retinal structures. SD-OCT has begun to find widespread use in the diagnosis and management of various ocular diseases. While commercial scanners provide limited analysis of a small number of retinal layers, the automated segmentation of retinal layers and other structures within these volumetric images is quite a challenging problem, especially in the presence of disease-induced changes.
The incorporation of a priori information, ranging from qualitative assessments of the data to automatically learned features, can significantly improve the performance of automated methods. Here, a combined machine-learning and graph-theoretic approach is presented for the automated segmentation of retinal structures in SD-OCT images. Machine-learning methods are used to learn textural features from a training set, which are then incorporated into the graph-theoretic approach. The impact of the learned features on the final segmentation accuracy of the graph-theoretic approach is carefully evaluated so as to avoid incorporating learned components that do not improve the method. The adaptability of this versatile combination of machine-learning and graph-theoretic approaches is demonstrated through the segmentation of retinal surfaces in images obtained from humans, mice, and canines.
In addition to this framework, a novel formulation of the graph-theoretic approach is described whereby surfaces with a disruption can be segmented. By incorporating the boundary of the "hole" into the feasibility definition of the set of surfaces, the final result consists of not only the surfaces but also the boundary of the hole. Such a formulation can be used to model the neural canal opening (NCO) in SD-OCT images, which appears as a 3-D planar hole disrupting the surfaces in its vicinity. A machine-learning based approach was also used here to learn descriptive features of the NCO.
Thus, the major contributions of this work include 1) a method for the automated correction of axial artifacts in SD-OCT images, 2) a combined machine-learning and graph-theoretic framework for the segmentation of retinal surfaces in SD-OCT images (applied to humans, mice and canines), 3) a novel formulation of the graph-theoretic approach for the segmentation of multiple surfaces and their shared hole (applied to the segmentation of the neural canal opening), and 4) the investigation of textural markers that could precede structural and functional change in degenerative retinal diseases.
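As a simplified illustration of the graph-theoretic idea (not the thesis's multi-surface, 3-D graph search), the sketch below segments a single surface in a 2-D cost image by choosing one row per column so that the total cost is minimized subject to a smoothness constraint, solved with dynamic programming. The cost image and the smoothness parameter are hypothetical.

```python
import numpy as np

def segment_surface(cost, max_jump=1):
    """Minimal 2-D illustration of graph-based surface segmentation:
    pick one row per column minimizing total cost, with the surface
    allowed to move at most `max_jump` rows between adjacent columns.
    Solved by dynamic programming (a column-to-column shortest path)."""
    n_rows, n_cols = cost.shape
    dp = np.full((n_rows, n_cols), np.inf)
    back = np.zeros((n_rows, n_cols), dtype=int)
    dp[:, 0] = cost[:, 0]
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - max_jump), min(n_rows, r + max_jump + 1)
            prev = dp[lo:hi, c - 1]
            best = int(np.argmin(prev))
            dp[r, c] = cost[r, c] + prev[best]
            back[r, c] = lo + best
    # Trace back the minimum-cost surface.
    surface = np.zeros(n_cols, dtype=int)
    surface[-1] = int(np.argmin(dp[:, -1]))
    for c in range(n_cols - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic cost image with a low-cost band near row 12 standing in for a layer boundary.
    cost = rng.random((30, 40))
    cost[12, :] *= 0.05
    print(segment_surface(cost))   # mostly 12s, with small deviations where noise wins
```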
13. Constrained Coding and Signal Processing for Holography. Garani, Shayan Srinivasa, 05 July 2006.
The increasing demand for high-density storage devices has led to innovative data recording paradigms like optical holographic memories that record and read data in a two-dimensional page-oriented manner. In order to overcome the effects of inter-symbol interference and noise in holographic channels, sophisticated constrained modulation codes and error correction codes are needed in these systems. This dissertation deals with the information-theoretic and signal processing aspects of holographic storage. On the information-theoretic front, the capacity of two-dimensional runlength-limited channels is analyzed. The construction of two-dimensional runlength-limited codes achieving the capacity lower bounds is discussed. This is a theoretical study of one of the open problems in symbolic dynamics and mathematical physics. The analysis of achievable storage density in holographic channels is useful for building practical systems. In this work, fundamental limits for the achievable volumetric storage density in holographic channels dominated by optical scattering are analyzed for two different recording mechanisms, namely angle-multiplexed holography and localized recording. Pixel misregistration is an important signal processing problem in holographic systems. In this dissertation, algorithms for compensating two-dimensional translation and rotational misalignments are discussed and analyzed for Nyquist-size apertures with low fill factors. These techniques are also applicable to general optical imaging systems.
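As an illustration of how capacity estimates for two-dimensional runlength-limited constraints can be computed (a standard transfer-matrix exercise, not the dissertation's construction), the sketch below estimates the capacity of the 2-D (1, infinity) "hard-square" constraint, in which no two 1s may be adjacent horizontally or vertically. The strip widths used are arbitrary.

```python
import numpy as np

def rll_2d_capacity_estimate(width):
    """Estimate the capacity (bits per pixel) of the 2-D (1, inf)
    runlength-limited constraint -- no two adjacent 1s horizontally or
    vertically -- using a transfer matrix over strips of the given width.
    The per-row growth rate log2(lambda_max) divided by the width
    approaches the true capacity as the width grows."""
    # Enumerate all rows of length `width` with no two adjacent 1s.
    rows = [r for r in range(2 ** width) if (r & (r << 1)) == 0]
    # Two rows may be stacked if they share no 1 in the same column.
    T = np.array([[1.0 if a & b == 0 else 0.0 for b in rows] for a in rows])
    lam = max(abs(np.linalg.eigvals(T)))
    return np.log2(lam) / width

if __name__ == "__main__":
    for w in range(2, 11):
        print(w, round(rll_2d_capacity_estimate(w), 5))
    # The estimates decrease toward ~0.5879, the known capacity of the hard-square constraint.
```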
14. Information-theoretic security under computational, bandwidth, and randomization constraints. Chou, Remi, 21 September 2015.
The objective of the proposed research is to develop and analyze coding schemes for information-theoretic security that could bridge the gap between theory and practice. We focus on two fundamental models for information-theoretic security: secret-key generation for a source model and secure communication over the wire-tap channel. Many results for these models only establish the existence of codes, and few attempts have been made to design practical schemes. The schemes we propose should account for practical constraints. Specifically, we formulate the following constraints to avoid oversimplifying the problems. We assume: (1) computationally bounded legitimate users, rather than relying solely on proofs showing the existence of codes with complexity exponential in the block length; (2) a rate-limited public communication channel for the secret-key generation model, to account for bandwidth constraints; (3) a non-uniform and rate-limited source of randomness at the encoder for the wire-tap channel model, since a perfectly uniform and rate-unlimited source of randomness might be an expensive resource. Our work focuses on developing schemes for secret-key generation and the wire-tap channel that satisfy subsets of the aforementioned constraints.
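For intuition only, the sketch below evaluates a standard achievable secret-key rate for a simple source model, I(X;Y) - I(X;Z), when Bob and Eve observe a uniform bit through independent binary symmetric channels; the channel parameters are hypothetical and this is not one of the schemes developed in this work.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def one_way_key_rate(p_bob, p_eve):
    """Achievable secret-key rate I(X;Y) - I(X;Z) for a uniform bit X
    observed by Bob through a BSC(p_bob) and by Eve through a BSC(p_eve),
    a standard one-way lower bound for the source model."""
    i_xy = 1.0 - h2(p_bob)   # mutual information between Alice and Bob
    i_xz = 1.0 - h2(p_eve)   # mutual information leaked to Eve
    return max(0.0, i_xy - i_xz)

if __name__ == "__main__":
    # Bob's observation is less noisy than Eve's, so a positive key rate remains.
    print(round(one_way_key_rate(0.1, 0.3), 4))   # ~0.4123 bits per source symbol
```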
15. Predicting Protein-Protein Interactions Using Graph Invariants and a Neural Network. Knisley, D., Knisley, J., 01 April 2011.
The PDZ domain of proteins mediates a protein-protein interaction by recognizing the hydrophobic C-terminal tail of the target protein. One of the challenges put forth by the DREAM (Discussions on Reverse Engineering Assessment and Methods) 2009 Challenge consists of predicting a position weight matrix (PWM) that describes the specificity profile of five PDZ domains for their target peptides. We consider the primary structure of each of the five PDZ domains as a numerical sequence derived from graph-theoretic models of the individual amino acids in the protein sequence. Using available PDZ domain databases to obtain known targets, the graph-theoretic numerical sequences are then used to train a neural network to recognize their targets. Given the challenge sequences, the target probabilities are computed and a corresponding position weight matrix is derived. In this work we present our method, which placed second in the DREAM 2009 challenge.
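A toy sketch of this kind of pipeline follows, with hypothetical stand-ins for the graph-theoretic amino-acid descriptors, a small scikit-learn neural network, and invented peptide lists; it is meant only to show how predicted target probabilities can be turned into a position weight matrix, not to reproduce the authors' method.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical per-residue descriptor values standing in for graph-theoretic
# amino-acid descriptors (the real ones would come from graph invariants).
AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(0)
DESC = {a: rng.normal(size=3) for a in AA}   # 3 toy descriptors per residue

def encode(peptide):
    """Concatenate the descriptor vectors of a C-terminal peptide."""
    return np.concatenate([DESC[a] for a in peptide])

def train_and_build_pwm(binders, non_binders, candidates):
    """Train a small neural network on known (non-)binders, score candidate
    peptides, and build a probability-weighted position weight matrix."""
    X = np.array([encode(p) for p in binders + non_binders])
    y = np.array([1] * len(binders) + [0] * len(non_binders))
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    probs = clf.predict_proba(np.array([encode(p) for p in candidates]))[:, 1]
    length = len(candidates[0])
    pwm = np.zeros((len(AA), length))
    for pep, w in zip(candidates, probs):
        for pos, a in enumerate(pep):
            pwm[AA.index(a), pos] += w
    return pwm / pwm.sum(axis=0, keepdims=True)   # column-normalized frequencies

if __name__ == "__main__":
    binders = ["ETSV", "ESDV", "QTRV", "ETEV"]        # invented example C-termini
    non_binders = ["GGGG", "PPPP", "KKKK", "WWWW"]
    candidates = ["ETSV", "GTRV", "GGGG", "QSDV"]
    print(np.round(train_and_build_pwm(binders, non_binders, candidates), 2))
```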
16. Clustering Multiple Contextually Related Heterogeneous Datasets. Hossain, Mahmood, 09 December 2006.
Traditional clustering is typically based on a single feature set. In some domains, several feature sets may be available to represent the same objects, but it may not be easy to compute a useful and effective integrated feature set. We hypothesize that clustering the individual datasets and then combining the results using a suitable ensemble algorithm will yield better-quality clusters than either the individual clusterings or clustering based on an integrated feature set. We present two classes of algorithms to address the problem of combining the results of clustering obtained from multiple related datasets, where the datasets represent identical or overlapping sets of objects but use different feature sets. One class of algorithms was developed for combining hierarchical clusterings generated from multiple datasets, and another was developed for combining partitional clusterings. The first class of algorithms, called EPaCH, is based on graph-theoretic principles and uses the association strengths of objects in the individual cluster hierarchies. The second class of algorithms, called CEMENT, uses an EM (Expectation Maximization) approach to progressively refine the individual clusterings until the mutual entropy between them converges toward a maximum. We have applied our methods to the problem of clustering a document collection consisting of journal abstracts from ten different Library of Congress categories. After several natural language preprocessing steps, both syntactic and semantic feature sets were extracted. We present empirical results that include the comparison of our algorithms with several baseline clustering schemes using different cluster validation indices. We also present the results of one-tailed paired t-tests performed on cluster qualities. Our methods are shown to yield higher-quality clusters than the baseline schemes, which include clustering based on individual feature sets and clustering based on concatenated feature sets. When the sets of objects represented in two datasets are overlapping but not identical, our algorithms outperform all baseline methods on all indices.
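The sketch below illustrates the general cluster-ensemble idea with a generic co-association approach (it is not the EPaCH or CEMENT algorithms): each feature set is clustered separately, a co-association matrix records how often two objects are grouped together, and a consensus partition is extracted from it. The data, views, and cluster count are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def consensus_cluster(feature_sets, n_clusters=3, seed=0):
    """Cluster each feature set separately, accumulate a co-association
    matrix (fraction of base clusterings that put two objects together),
    and extract a consensus partition from it."""
    n = feature_sets[0].shape[0]
    coassoc = np.zeros((n, n))
    for X in feature_sets:
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
        coassoc += (labels[:, None] == labels[None, :]).astype(float)
    coassoc /= len(feature_sets)
    # Treat (1 - co-association) as a distance and cluster it hierarchically.
    # (scikit-learn >= 1.2 uses `metric`; older versions call it `affinity`.)
    model = AgglomerativeClustering(n_clusters=n_clusters, metric="precomputed", linkage="average")
    return model.fit_predict(1.0 - coassoc)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers = np.array([[0, 0], [5, 5], [0, 5]])
    base = np.repeat(centers, 30, axis=0) + rng.normal(scale=0.5, size=(90, 2))
    # Two "feature sets" for the same 90 objects: raw coordinates and a noisy linear view.
    view1 = base
    view2 = base @ np.array([[1.0, 0.3], [-0.2, 1.0]]) + rng.normal(scale=0.5, size=(90, 2))
    print(consensus_cluster([view1, view2]))
```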
17. Graph Theoretic Models in Chemistry and Molecular Biology. Knisley, Debra, Knisley, Jeff, 01 March 2007.
The field of chemical graph theory utilizes simple graphs as models of molecules. These models are called molecular graphs, and quantifiers of molecular graphs are known as molecular descriptors or topological indices. Today's chemists use molecular descriptors to develop algorithms for computer-aided drug design and for computer-based searching of chemical databases, and the field is now more commonly known as combinatorial or computational chemistry. With the completion of the human genome project, related fields such as chemical genomics and pharmacogenomics are emerging. Recent advances in molecular biology are driving new methodologies and reshaping existing techniques, which in turn produce novel approaches to nucleic acid modeling and protein structure prediction. The origins of chemical graph theory are revisited, and new directions in combinatorial chemistry, with a special emphasis on biochemistry, are explored. Of particular importance is the extension of the set of molecular descriptors to include graphical invariants. We also describe the use of artificial neural networks (ANNs) in predicting biological functional relationships based on molecular descriptor values. Specifically, we include a brief discussion of the fundamentals of ANNs, together with an example of a graph-theoretic model of RNA that illustrates the potential of ANNs coupled with graphical invariants to predict the function and structure of biomolecules.
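To make the notion of a topological index concrete, the sketch below computes two classic molecular descriptors, the Wiener index and the Randic connectivity index, for the hydrogen-suppressed graph of 2-methylbutane using networkx; this is a generic illustration rather than the specific descriptors proposed in the article.

```python
import networkx as nx

def randic_index(G):
    """Randic connectivity index: sum over edges of 1/sqrt(deg(u)*deg(v)),
    one of the classic topological indices used as a molecular descriptor."""
    return sum(1.0 / (G.degree(u) * G.degree(v)) ** 0.5 for u, v in G.edges())

if __name__ == "__main__":
    # Hydrogen-suppressed molecular graph of 2-methylbutane (carbon skeleton only).
    G = nx.Graph([("C1", "C2"), ("C2", "C3"), ("C3", "C4"), ("C2", "C5")])
    print("Wiener index:", nx.wiener_index(G))        # sum of shortest-path distances over all pairs (18)
    print("Randic index:", round(randic_index(G), 4)) # ~2.2701
```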
18. Contribution of modeling and simulation to risk analysis and accident prevention at an LPG storage site / Evaluating LPG storage and distribution safety operations plan with a simulation tool and providing recommendations based on STAMP. Oueidat, Dahlia, 13 December 2016.
To prevent or reduce the severity of the damage caused by major accidents, the functions of an industrial system and the relationships between its components must be modeled. To this end, this thesis uses the STAMP (Systems-Theoretic Accident Model and Processes) modeling approach to represent the hierarchical structure of the system as well as the control mechanisms needed to keep an industrial process safe. To address issues related to industrial safety controls, the AnyLogic simulation tool is used; it makes it possible to model and simulate the behavior of the system over time in both normal and degraded modes. The objective of this work is therefore to propose an approach based on both modeling and simulation to analyze risks and prevent accidents at a liquefied petroleum gas (LPG) storage and distribution site. / System-thinking concepts and simulation tools are used to model the risk prevention plan and operational modes designed to enforce safety constraints at a French liquefied petroleum gas (LPG) storage and distribution facility. In France, such facilities are classified and the subject of special legislation and safety regulations. Their supervision is the responsibility of a control and regulatory body. A technological risk and prevention plan is provided, and all the dangerous phenomena likely to occur, together with the safety control measures, are listed in the safety report. Safety is therefore addressed through rules, and control mechanisms ensure that the system complies with safety constraints. Taking this facility as a case study, we use the STAMP theoretical framework together with AnyLogic simulation software to model technical elements as well as human and organizational behavior. We simulate how the system evolves over time and the strategies that are deployed in a loss-of-control scenario. The aim is to assess whether the prescribed safety program covers all of the system's phases, namely operations and audits. The results enrich other research that focuses on the contribution of system dynamics to risk analysis and accident prevention.
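As a rough stand-in for the kind of behavior-over-time simulation described above (the thesis uses AnyLogic, whose models are not reproduced here), the sketch below simulates a toy safety control loop in a normal mode and in a degraded mode with a failed sensor. All numbers and failure assumptions are invented.

```python
import random

def simulate(mode="normal", steps=200, seed=1):
    """Toy safety control loop: a controller closes an inlet valve when the
    measured tank level exceeds a threshold. In the degraded mode the level
    sensor fails stuck-at-low after step 80, so the controller keeps the
    valve open and the safety constraint (level <= 100) can be violated."""
    random.seed(seed)
    level = 50.0
    overflow_steps = 0
    for t in range(steps):
        measured = 0.0 if (mode == "degraded" and t >= 80) else level
        valve_open = measured < 70.0              # control action based on the measurement
        inflow = 2.0 if valve_open else 0.0
        outflow = random.uniform(0.5, 2.0)        # varying downstream demand
        level = max(0.0, level + inflow - outflow)
        if level > 100.0:                         # loss-of-control event
            overflow_steps += 1
    return overflow_steps

if __name__ == "__main__":
    print("normal mode, steps above the safe limit:  ", simulate("normal"))
    print("degraded mode, steps above the safe limit:", simulate("degraded"))
```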
19. In the Face of Anticipation: Decision Making under Visible Uncertainty as Present in the Safest-with-Sight Problem. Knowles, Bryan A, 01 April 2016.
Pathfinding, as a process of selecting a fixed route, has long been studied in Computer Science and Mathematics. Decision making, as a similar but intrinsically different process of determining a control policy, is much less studied. Here, I propose a problem that appears to belong to the first class, which would suggest that it is easily solvable with a modern machine; it turns out that would be too easy. By allowing a pathfinder to anticipate and respond to information, without setting restrictions on the "structure" of this anticipation, selecting the "best step" appears to be an intractable problem.
After introducing the necessary foundations and stepping through the strangeness of "safest-with-sight," I attempt to develop a method of approximating the success rate associated with each potential decision; the results suggest something fundamental about decision making itself: information that is collected at a moment when it is not immediately "consumable," i.e. non-incident information, is less necessary to anticipate than its counterpart, i.e. incident information.
This is significant because (i) it speaks to when information should be anticipated, a moment in decision making long before the information is actually collected, and (ii) whenever the model is restricted to incident-only anticipation, the problem again becomes tractable. When we anticipate only what is most important, solutions become easy to compute; if we attempt to anticipate any more than that, solutions may become impossible to find on any realistic machine.
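For contrast with the anticipation problem, the sight-free baseline is easy to compute: the safest fixed route maximizes the product of per-edge success probabilities, which reduces to an ordinary shortest-path computation on -log probabilities. The sketch below, with hypothetical edge probabilities, illustrates this baseline rather than the thesis's approximation method.

```python
import math
import networkx as nx

def safest_fixed_path(G, source, target):
    """Safest route without anticipation: maximize the product of per-edge
    success probabilities by minimizing the sum of -log(p), then report the
    route and its overall success probability."""
    for u, v, data in G.edges(data=True):
        data["neglog"] = -math.log(data["p"])
    path = nx.shortest_path(G, source, target, weight="neglog")
    prob = math.prod(G[u][v]["p"] for u, v in zip(path, path[1:]))
    return path, prob

if __name__ == "__main__":
    G = nx.Graph()
    G.add_edge("A", "B", p=0.90); G.add_edge("B", "D", p=0.90)
    G.add_edge("A", "C", p=0.99); G.add_edge("C", "D", p=0.80)
    path, prob = safest_fixed_path(G, "A", "D")
    print(path, round(prob, 3))   # A-B-D with probability 0.81 beats A-C-D at 0.792
```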
20. Self-reconfigurable multi-robot systems. Pickem, Daniel, 27 May 2016.
Self-reconfigurable robotic systems are variable-morphology machines capable of changing their overall structure by rearranging the modules they are composed of. Individual modules are capable of connecting and disconnecting to and from one another, which allows the robot to adapt to changing environments. Optimally reconfiguring such systems is computationally prohibitive and thus in general self-reconfiguration approaches aim at approximating optimal solutions. Nonetheless, even for approximate solutions, centralized methods scale poorly in the number of modules. Therefore, the objective of this research is the development of decentralized self-reconfiguration methods for modular robotic systems. Building on completeness results of the centralized algorithms in this work, decentralized methods are developed that guarantee stochastic convergence to a given target shape. A game-theoretic approach lays the theoretical foundation of a novel potential game-based formulation of the self-reconfiguration problem. Furthermore, two extensions to the basic game-theoretic algorithm are proposed that enable agents to modify the algorithms' parameters during runtime and improve convergence times. The flexibility in the choice of utility functions together with runtime adaptability makes the presented approach and the underlying theory suitable for a range of problems that rely on decentralized local control to guarantee global, emerging properties. The experimental evaluation of the presented algorithms relies on a newly developed multi-robotic testbed called the "Robotarium" that is equipped with custom-designed miniature robots, the "GRITSBots". The Robotarium provides hardware validation of self-reconfiguration on robots but more importantly introduces a novel paradigm for remote accessibility of multi-agent testbeds with the goal of lowering the barrier to entrance into the field of multi-robot research and education.
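The sketch below gives a minimal, hypothetical illustration of the game-theoretic idea (not the thesis's algorithms or the Robotarium software): agents on a grid play log-linear learning with a utility aligned to a global potential, the number of covered target cells, so low-temperature play concentrates on the target shape.

```python
import math
import random

def log_linear_assembly(target, n_agents=4, grid=6, steps=4000, tau=0.1, seed=0):
    """Minimal log-linear learning sketch for a potential game: each agent's
    utility is 1 when it occupies a cell of the target shape, which is aligned
    with the global potential (number of covered target cells). Agents choose
    among staying put or moving to a free neighboring cell with Boltzmann
    probabilities, so low-temperature play concentrates on the target shape."""
    random.seed(seed)
    cells = [(x, y) for x in range(grid) for y in range(grid)]
    pos = dict(enumerate(random.sample(cells, n_agents)))   # distinct start cells

    def utility(cell):
        return 1.0 if cell in target else 0.0

    for _ in range(steps):
        i = random.randrange(n_agents)
        x, y = pos[i]
        occupied = set(pos.values()) - {(x, y)}
        options = [(x, y)] + [
            (x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < grid and 0 <= y + dy < grid and (x + dx, y + dy) not in occupied
        ]
        weights = [math.exp(utility(c) / tau) for c in options]
        pos[i] = random.choices(options, weights=weights)[0]
    return sorted(pos.values())

if __name__ == "__main__":
    target = {(2, 2), (2, 3), (3, 2), (3, 3)}   # a 2x2 target shape
    print(log_linear_assembly(target))          # usually exactly the four target cells
```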