About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world.
461

Multi-feature RGB-D generic object tracking using a simple filter hierarchy

Entin, Irina January 2014 (has links)
This research focuses on tracking generic non-rigid objects at close range to an infrared, triangulation-based RGB-D sensor. The work was motivated by direct industry demand for the foundation of a low-cost application operating in a surveillance setting. Several novel components of this research build on classical and state-of-the-art literature to extend tracking into this real-world environment with few constraints. Initialization is automatic, with no a priori knowledge of the object, and there are no restrictions on object appearance or transformation. No assumptions are made about object placement, and only a very general physical model is applied to the object's trajectory. Tracking is performed using a Kalman filter and a polynomial predictor to hypothesize the next location, and a particle filter with colour, edge, depth-edge, and absolute-depth features to pinpoint the object's location. This work deals with challenges that are not explored in other work, including highly variable object motion characteristics and generality with respect to the object tracked. It also explores the potential for multiple objects to occupy the same x-y location and share the same appearance. The result is a basic model for generic single-object tracking that can be extended to any scenario with tailored occlusion handling, and augmented with behavioural analysis to confront a real-world problem.
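The predict-then-refine loop described in the abstract, a Kalman-style prediction followed by particle weighting against appearance features, can be caricatured in one dimension. Everything below (the constant-velocity model, the single scalar "feature likelihood", all numbers) is an illustrative assumption, not the thesis's implementation:

```python
import math
import random

random.seed(0)

def kalman_predict(x, v, p, q=0.1):
    """Hypothesize the next position/uncertainty under a constant-velocity model."""
    return x + v, p + q

def particle_refine(predicted_x, spread, observe, n=500):
    """Scatter particles around the prediction and weight by feature match."""
    particles = [random.gauss(predicted_x, spread) for _ in range(n)]
    weights = [observe(px) for px in particles]
    total = sum(weights)
    # The weighted mean of the particles is the refined location estimate.
    return sum(w * px for w, px in zip(weights, particles)) / total

# Toy scene: the object is truly at 5.0, so the feature likelihood peaks there.
def likelihood(px, true_x=5.0, sigma=0.5):
    return math.exp(-((px - true_x) ** 2) / (2 * sigma ** 2))

x_pred, p_pred = kalman_predict(x=4.2, v=0.7, p=0.3)   # hypothesize 4.9
x_est = particle_refine(x_pred, spread=math.sqrt(p_pred), observe=likelihood)
print(round(x_pred, 2), round(x_est, 2))
```

The division of labour mirrors the abstract: the filter hierarchy's cheap predictor narrows the search region, and the particle stage does the feature-driven pinpointing inside it.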
462

On the bottleneck concept for options discovery: theoretical underpinnings and extension in continuous state spaces

Bacon, Pierre-Luc January 2014 (has links)
The bottleneck concept in reinforcement learning has played a prominent role in automatically finding temporal abstractions from experience. Lacking substantial theory, however, it has been regarded by some as merely a trick. This thesis attempts to gain better intuition about the approach using spectral graph theory. A connection to the theory of Nearly Completely Decomposable (NCD) Markov chains is also drawn and shows great promise. An options-discovery algorithm is proposed, apparently the first of its kind applicable in continuous state spaces. As opposed to other similar approaches, it can run in O(n^2 log n) time rather than O(n^3), making it suitable for much larger domains than the typical grid worlds.
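The spectral-graph view of bottlenecks can be illustrated on a toy discrete state graph: the sign pattern of the Fiedler vector (the Laplacian eigenvector with the second-smallest eigenvalue) separates the two weakly connected regions, and the edge crossing between them is the bottleneck. This is only the textbook discrete construction, not the thesis's continuous-state algorithm:

```python
import numpy as np

# Two densely connected 3-state "rooms" (triangles) joined by a single
# bridge edge between states 2 and 3 -- the bottleneck.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)    # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                 # eigenvector of the 2nd-smallest one

# States with the same sign lie on the same side of the bottleneck.
room_a = {i for i in range(n) if fiedler[i] < 0}
room_b = set(range(n)) - room_a
print(sorted(room_a), sorted(room_b))
```

On this graph the sign split recovers the two triangles exactly; the O(n^3) cost of the dense eigendecomposition here is precisely what the thesis's O(n^2 log n) approach avoids.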
463

Efficient inference algorithms for near-deterministic systems

Chatterjee, Shaunak 04 June 2014 (has links)
This thesis addresses the problem of performing probabilistic inference in stochastic systems where the probability mass is far from uniformly distributed among all possible outcomes. Such near-deterministic systems arise in several real-world applications. For example, in human physiology, the widely varying evolution rates of physiological variables make certain trajectories much more likely than others; in natural language, a very small fraction of all possible word sequences accounts for a disproportionately high amount of probability under a language model. In such settings, it is often possible to obtain significant computational savings by focusing on the outcomes where the probability mass is concentrated. This contrasts with existing algorithms in probabilistic inference, such as the junction tree, sum-product, and belief propagation algorithms, which are well tuned to exploit conditional independence relations.
The first topic addressed in this thesis is the structure of discrete-time temporal graphical models of near-deterministic stochastic processes. We show how the structure depends on the ratios between the size of the time step and the effective rates of change of the variables. We also prove that accurate approximations can often be obtained by sparse structures even for very large time steps. Besides providing an intuitive reason for causal sparsity in discrete temporal models, the sparsity also speeds up inference.
The next contribution is an eigenvalue algorithm for a linear factored system (e.g., a dynamic Bayesian network), where existing algorithms do not scale since the size of the system is exponential in the number of variables. Using a combination of graphical-model inference algorithms and numerical methods for spectral analysis, we propose an approximate spectral algorithm which operates in the factored representation and is exponentially faster than previous algorithms.
The third contribution is a temporally abstracted Viterbi (TAV) algorithm. Starting with a spatio-temporally abstracted coarse representation of the original problem, the TAV algorithm iteratively refines the search space for the Viterbi path via spatial and temporal refinements. The algorithm is guaranteed to converge to the optimal solution when admissible heuristic costs are used in the abstract levels, and it is much faster than the Viterbi algorithm for near-deterministic systems.
The fourth contribution is a hierarchical image/video segmentation algorithm that shares some of the ideas used in the TAV algorithm. A supervoxel tree provides the abstraction hierarchy for this application. The algorithm starts with the coarsest-level supervoxels and refines the portions of the tree that are likely to have multiple labels. Several existing segmentation algorithms can be used to solve the energy-minimization problem in each iteration, and admissible heuristic costs once again guarantee optimality. Since large contiguous patches exist in images and videos, this approach is more computationally efficient than solving the problem at the finest level of supervoxels.
The final contribution is a family of Markov chain Monte Carlo (MCMC) algorithms for near-deterministic systems for which an efficient algorithm exists to sample solutions of the corresponding deterministic problem. In such cases, a generic MCMC algorithm's performance worsens as the problem becomes more deterministic, despite the existence of the efficient algorithm in the deterministic limit. MCMC algorithms designed using our methodology can bridge this gap.
The computational speedups obtained through the various new algorithms presented in this thesis show that it is indeed possible to exploit near-determinism in probabilistic systems. Near-determinism, much like conditional independence, is a potential (and promising) source of computational savings for both exact and approximate inference. It is a direction that warrants more understanding and better generalized algorithms.
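The baseline that the TAV algorithm accelerates is the classical Viterbi dynamic program. A minimal sketch on an invented near-deterministic two-state HMM (transitions strongly favour staying put, so one path dominates); TAV's coarse-to-fine refinement and admissible heuristics are omitted:

```python
import math

# Near-deterministic 2-state HMM: all numbers are made up for illustration.
states = ("A", "B")
trans = {"A": {"A": 0.95, "B": 0.05}, "B": {"A": 0.05, "B": 0.95}}
emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
prior = {"A": 0.5, "B": 0.5}

def viterbi(obs):
    """Return the most probable state sequence via the standard DP."""
    # delta[s] = best log-probability of any path ending in state s
    delta = {s: math.log(prior[s] * emit[s][obs[0]]) for s in states}
    back = []
    for o in obs[1:]:
        prev, delta, step = delta, {}, {}
        for s in states:
            best = max(states, key=lambda p: prev[p] + math.log(trans[p][s]))
            delta[s] = prev[best] + math.log(trans[best][s] * emit[s][o])
            step[s] = best
        back.append(step)
    # Trace back from the best final state.
    last = max(states, key=delta.get)
    path = [last]
    for step in reversed(back):
        path.append(step[path[-1]])
    return path[::-1]

print(viterbi(["x", "x", "y", "y", "y"]))  # -> ['A', 'A', 'B', 'B', 'B']
```

Because the transition matrix is nearly deterministic, most of the DP table contributes essentially nothing to the answer, which is exactly the slack that a temporally abstracted search can exploit.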
464

AI Planning-Based Service Modeling for the Internet of Things

Bahers, Quentin January 2015 (has links)
It is estimated that by 2020 more than 50 billion devices will be interconnected, forming what is called the Internet of Things. These devices range from consumer electronics to utility meters, including vehicles. Equipped with sensory capabilities, these objects will be able to transmit valuable information about their environment, not only to humans but, even more importantly, to other machines, which should ultimately be able to interpret and take decisions based on the information received. This "smartness" implies endowing the devices with a certain degree of automation. This Master's thesis investigates how recent advances in artificial-intelligence planning can help in building such systems. In particular, an AI planner able to generate workflows for most IoT-related use cases has been connected to an IoT platform. A performance study of a state-of-the-art planner, Fast Downward, on one of the most challenging IoT applications, smart garbage collection (a problem similar to the Travelling Salesman Problem), has also been carried out. Finally, different pre-processing and clustering techniques are suggested to tackle current AI planners' inefficiency at quickly finding plans for the most difficult tasks.
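The TSP-like character of the garbage-collection use case can be sketched with the simplest routing heuristic of all: a nearest-neighbour tour over a handful of bins. The coordinates and names are invented, and nothing here reflects the thesis's planner or pre-processing; it only shows why a cheap feasible tour is easy while an optimal one is hard:

```python
import math

# Hypothetical bin coordinates for one collection round (made up).
bins = {"depot": (0, 0), "b1": (2, 1), "b2": (2, 4), "b3": (5, 4), "b4": (1, 6)}

def dist(a, b):
    (ax, ay), (bx, by) = bins[a], bins[b]
    return math.hypot(ax - bx, ay - by)

def nearest_neighbour_tour(start="depot"):
    """Greedily visit the closest unvisited bin until all are collected."""
    tour, todo = [start], set(bins) - {start}
    while todo:
        nxt = min(todo, key=lambda b: dist(tour[-1], b))
        tour.append(nxt)
        todo.remove(nxt)
    return tour + [start]  # return to the depot

tour = nearest_neighbour_tour()
print(tour)  # -> ['depot', 'b1', 'b2', 'b4', 'b3', 'depot']
```

Clustering bins into neighbourhoods before routing, as the pre-processing techniques in the thesis suggest, shrinks the search space each planner invocation has to face.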
465

Self organising maps for data fusion and novelty detection

Taylor, Odin January 2000 (has links)
No description available.
466

Image to interpretation : towards an intelligent system to aid historians in the reading of the Vindolanda texts

Terras, Melissa M. January 2002 (has links)
The ink and stylus tablets discovered at the Roman Fort of Vindolanda have provided a unique resource for scholars of ancient history. However, the stylus tablets in particular have proved extremely difficult to read. The aim of this thesis is to explore the extent to which techniques from artificial intelligence can be used to develop a system that could aid historians in reading the stylus texts. This system would utilise image-processing techniques developed in engineering science to analyse the stylus tablets, whilst incorporating knowledge elicited from experts working on the texts, to propagate possible suggestions of the text contained within the tablets. This thesis reports on what appears to be the first system developed to aid experts in the process of reading an ancient document. Little previous research has been carried out on how papyrologists actually carry out their task. This thesis studies closely how experts working with primary sources, such as the Vindolanda texts, operate. Using knowledge elicitation techniques, a model is proposed for how they read a text. Information regarding the letter forms and language used at Vindolanda is collated, and a corpus of annotated images is built up to provide a data set on the letter forms used in the ink and stylus texts. In order to relate this information to the work done on image processing, a stochastic Minimum Description Length (MDL) architecture is adopted, and adapted, to form the basis of a system that can propagate interpretations of the Vindolanda texts. In doing so, a system is constructed that can read in image data and output textual interpretations of the writing that appears on the documents. It is demonstrated that knowledge elicitation techniques can be used to capture and mobilise expert information. The process of reading ancient and ambiguous texts is made explicit. It is also shown that MDL can be used as a basis to build large systems that reason effectively about complex information. This research presents the first stages towards developing a cognitive visual system that can propagate realistic interpretations from image data, and so aid papyrologists in their task.
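The MDL principle underlying the system can be sketched with a toy two-part code: among candidate readings of an ambiguous character, prefer the one minimising L(hypothesis) + L(data | hypothesis), measured in bits. The letter frequencies and stroke likelihoods below are invented for illustration and bear no relation to the Vindolanda corpus:

```python
import math

# Prior probability of each candidate letter (invented frequencies).
prior = {"c": 0.30, "e": 0.50, "o": 0.20}
# Likelihood of the observed stroke pattern given each letter (invented).
likelihood = {"c": 0.40, "e": 0.10, "o": 0.35}

def description_length(letter):
    """Two-part MDL cost in bits: model cost + data-given-model cost."""
    return -math.log2(prior[letter]) - math.log2(likelihood[letter])

best = min(prior, key=description_length)
costs = {ch: round(description_length(ch), 2) for ch in prior}
print(best, costs)
```

Note that "e", despite being the most frequent letter a priori, loses here because it explains the observed strokes poorly; MDL trades the two code lengths off automatically, which is what lets the architecture balance expert knowledge against image evidence.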
467

A framework for information architecture for business networks

Bobeva, Milena January 2005 (has links)
The concept of Information Architecture (IA) has been independently explored by researchers and practitioners in information engineering, Information Systems (IS) management, information visualisation and Web site design. However, little has been achieved towards its standardisation within and across these subject domains. To bridge the existing subject divide, this study conducts a systematic analysis of publications on frameworks for Information Architecture developed in the fields of IS planning and information engineering, and elicits both common and desirable IA dimensions. It concludes that, regardless of their originating subject field, existing IA frameworks are internally focused and have limited effectiveness for dynamic e-business alliances. To address this deficiency, related subject domains such as systems theory and systems modelling, Web design and virtual team working are explored, and ideas are generated for further architectural components, such as events, standards, aggregation level and trust, that are not supported by existing IAs but are of high importance for e-business. These are synthesized with the most prevalent IA dimensions identified earlier into a conceptual framework for IA for electronically mediated business networks, called FEBus (a framework for Information Architecture for Electronically mediated Business networks). The structural viability and usability of the proposed analytical vehicle were evaluated over the period 2001-2003 using a triangulation of a Delphi study, an electronic survey, and evaluation interviews. The participants, representing three self-selecting samples of experienced UK academics and practitioners interested in IA, confirmed the need for an IA framework for e-business alliances and probed the scope, merits and limitations of the tool. Their views formed the basis for some amendments to the framework and for recommendations for future research.
This thesis presents an original contribution to IA knowledge through its comprehensive critical analysis of frameworks on IA and its development of a set of fundamental requirements for IA for e-business environments. Its importance is also seen in the synthesis of the research on IA conducted in different subject areas. The architectural tool built as an extension of the reviewed IA works constitutes another original aspect of this research. Finally, the novel multi-method evaluation approach employed in the study, and the critical examination of its operability, present an advancement of existing knowledge on methodological diversity in IS research.
468

A mathematical model to simulate small boat behaviour

Browning, Andrew Wilford January 1990 (has links)
The use of mathematical models and associated computer simulation is a well-established technique for predicting the behaviour of large marine vessels. For a variety of reasons, mainly related to effects of scale, existing models are unable to adequately predict the manoeuvring characteristics of smaller vessels, and the accuracy with which the performance of a boat under autopilot control can be predicted leaves much to be desired. This thesis provides a mathematical model to simulate small boat behaviour and so can assist with the design and testing of marine autopilots. The boat model is presented in six degrees of freedom, which, with suitable wave-disturbance terms, allows motions such as broaching to be analysed. Instabilities in the performance of an autopilot arising from such sea-induced yaw motions can be assessed with a view to improving the control algorithms and methodology. The traditional "regressional"-style models used for large ships are not suitable for a small boat model, since there exist numerous small boat types and diverse hull shapes. Instead, a modular approach has been adopted in which individual forces and moments are categorised in separate sections of the model. This approach is still in its infancy in the field of marine simulation. The modular concept demands a clearer understanding of the physical hydrodynamic processes involved in the boat system, and the formulation of equations that do not rely solely upon approximations to, or multiple regression of, data from sea trials. Although many hydrodynamic coefficients have been introduced into the model, a multi-variable Taylor-series expansion of the states about some equilibrium condition has been avoided, since this would imply that an approximation had been made, and the higher-order terms rapidly become abstract in nature and difficult to relate to the real world.
The research addresses the notable absence of a small boat mathematical model, the framework of which could be expanded to encompass other marine vehicles. Additional forces and moments can be appended to the model in new modules, or existing modules modified to suit new applications. Much more work, covering a greater range and fidelity, is required in order to provide equations that accurately describe the true physical situation.
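The modular force-summation idea, each hydrodynamic effect contributing its own term rather than one regression model, can be caricatured in a single degree of freedom (surge). Every coefficient below is an illustrative placeholder, not an identified hydrodynamic value, and the real model has six coupled degrees of freedom:

```python
# One-DOF (surge) caricature of the modular approach: each physical effect
# lives in its own module (function), and the equation of motion simply
# sums their contributions. All coefficients are invented placeholders.

def thrust(throttle):            # propulsion module
    return 4000.0 * throttle     # N

def hull_drag(u):                # hull-resistance module
    return -150.0 * u * abs(u)   # quadratic drag, N

def wave_force(t):               # wave-disturbance module (calm sea here)
    return 0.0

MASS = 2500.0                    # kg, assumed small-boat displacement

def step(u, t, dt=0.1, throttle=0.5):
    """Advance surge speed u by one Euler step, summing the module forces."""
    total = thrust(throttle) + hull_drag(u) + wave_force(t)
    return u + dt * total / MASS

u, t = 0.0, 0.0
for _ in range(600):             # simulate 60 s from rest
    u, t = step(u, t), t + 0.1
print(round(u, 2))               # settles near sqrt(2000/150) ~ 3.65 m/s
```

Swapping in a different hull, propulsion, or sea-state module changes one function without touching the rest, which is the maintainability argument the thesis makes for the modular structure.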
469

The viable system model (VSM) and organisation theory : a complementary approach to the case of a pharmaceutical manufacturer

Ja'bari, Nasser Wahid January 1995 (has links)
The primary purpose of this research is to explore the relationships between Beer's viable system model (VSM) and mainstream functionalist organisation theory. The latter is taken to include the classical, behavioural and systems models of organisation. For completeness, we also consider organisation theory situated in the interpretive, radical humanist and radical structuralist paradigms of Burrell and Morgan's (1979) sociological grid. Models of mainstream organisation theory have been used extensively by organisation theorists in the structuring of organisations and the design of information systems. Little interest, however, has been paid by organisation theorists to Beer's VSM, which is also used by cyberneticians to structure organisations and design information systems. The problem is that both camps have developed in isolation from one another: theorists in each camp advocate their own stance regardless of what the other might have to offer to their thinking. This situation results from a gap between the two camps owing to a lack of dialogue between them. The aim of this thesis is to attempt to bridge that gap. It is the author's firm belief that this is best done by adopting a complementary approach to pinpoint the domains of support each camp may offer the other. The outcome of this approach is an enhanced model of organisation. Part One of the research begins by introducing the science of cybernetics; its history, tools, techniques and concepts are then put in place. Building on cybernetic tools and techniques, Beer developed a model of any viable system, and Beer's VSM is presented in Chapter 2. Part Two of the thesis is devoted entirely to organisation theory. First, we take up the models of functionalist mainstream organisation theory. The approach adopted is first to elaborate on each model, then to contrast each with the VSM.
Attention is then directed to organisation theory located in the alternative paradigms, that is, the interpretive, radical humanist and radical structuralist paradigms, respectively. Again, organisation theory within these paradigms is contrasted with the VSM. Part Two closes with an enhanced model of organisation, the outcome of the comparison between functionalist organisation theory and the VSM. The argument is that the likelihood of the classical model providing support to the VSM is slim; in fact, the former stands to gain much from the VSM, particularly from the notion of recursive structures, which explains how control and communication systems must be designed and organised. The behavioural model, which takes the informal aspects of organisation as its core, appears to be a useful adjunct to the VSM, which concentrates primarily on the formal organisation. Again, the behavioural model stands to gain much from the insights offered by the VSM; at the least, the view of openness to the environment would surely give the behavioural model a boost in the right direction. However, we focus our interest on the systems model of organisation, specifically the notion of semi-autonomous work groups encapsulated in the sociotechnical systems approach. By incorporating this notion into the VSM we can, it is hoped, enhance the VSM. Once again, the insights of the VSM, especially the recursivity of its structure, are of immense significance. In Part Three, the enhanced model is put to the test by applying it to an existing pharmaceutical manufacturer. The model proves to be not only practical but also powerful in highlighting domains requiring attention if the effectiveness and efficiency of the organisation concerned are to improve, something the VSM on its own cannot provide.
470

Learning classifier systems in robotic environments

Hurst, Jacob Machar January 2003 (has links)
No description available.
