551. Realising relative autonomy and adaptation in smart objects systems. Pérez Hernández, Marco Eric. January 2018.
The common approach to engineering applications for the Internet of Things (IoT) relies heavily on remote resources, particularly in the cloud. As a result, data collection and functionality are centralised in cloud platforms, leaving devices with only raw data gathering and actuation functions. The IoT envisions an environment where devices can act as smart objects that are able to make decisions and operate autonomously for the benefit of human users. Autonomous functions are often conflated with automatic functions that consider only the human user’s point of view. In this work, we propose an IoT application development framework based on goal-directed and role-based smart objects. This framework is composed of a conceptual basis, a software architecture, a middleware architecture and an adaptation method. First, we define the concepts of the smart object, its autonomy and the collective of smart objects from a thorough examination of the smart object, its properties and key processes. Then, we develop a set of abstractions and the software architecture for smart objects. To ease the development effort and make this approach practical, we define a middleware architecture intended to serve as a blueprint for concrete middleware solutions, and we implement a prototype based on this architecture. Functional components of the architecture enable smart object systems to adapt to volatile situations; we propose an adaptation method based on the selection of smart objects, services and roles. Finally, we develop an agent-based model for simulating IoT environments characterised by heterogeneity, volatility and large numbers of smart objects. We use this model, together with a case study and a qualitative comparison with existing solutions, to evaluate our framework. Our results show that the proposed approach is a feasible and scalable alternative for smart-object-based IoT application development, one that incorporates relative autonomy and adaptation at both the individual and collective levels.
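As a rough illustration of the goal-directed, role-based smart object idea described in this abstract, the following Python sketch shows a smart object that selects one of its roles to satisfy a goal. The class, role and service names are invented for illustration and do not correspond to the thesis’s actual abstractions or middleware API.

```python
# Minimal sketch of a goal-directed, role-based smart object, assuming a
# highly simplified model of the framework. Names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Role:
    name: str
    services: dict  # service name -> callable implementing it


@dataclass
class SmartObject:
    name: str
    goal: str
    roles: list = field(default_factory=list)
    active_role: Role = None

    def adapt(self, context: dict) -> None:
        """Select a role whose services can satisfy the current goal.

        The context is unused in this toy version; the thesis's adaptation
        method instead scores smart objects, services and roles.
        """
        for role in self.roles:
            if self.goal in role.services:
                self.active_role = role
                return
        self.active_role = None  # no suitable role: defer to the collective

    def act(self, *args):
        if self.active_role is None:
            raise RuntimeError(f"{self.name}: no role satisfies goal '{self.goal}'")
        return self.active_role.services[self.goal](*args)


# Usage: a lamp that can act as a presence indicator or a night light.
lamp = SmartObject(
    name="lamp-1",
    goal="illuminate",
    roles=[
        Role("presence-indicator", {"signal": lambda: "blink"}),
        Role("night-light", {"illuminate": lambda: "dim warm light"}),
    ],
)
lamp.adapt(context={"time": "night"})
print(lamp.act())  # -> "dim warm light"
```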
552. Software-supported participatory design : design and evaluation of the tool PDot. Heintz, Matthias Martin. January 2017.
Participatory Design (PD) is a common software development approach that actively includes end-users in the design process. This ensures tailored results, can lead to a strong feeling of ownership, and empowers end-users. However, commonly applied paper-based PD approaches have several shortcomings. A prototype presented on paper is not interactive, so end-users cannot experience it directly, and preparing PD ideas captured as physical artefacts (e.g. sketches on acetates) for further data analysis can be unduly time-consuming. Using software tools to conduct PD activities instead of relying on paper-based methods can address these shortcomings. This motivated the author to design, develop, and evaluate two such tools: PDotCapturer and PDotAnalyser. PDotCapturer is used by end-users participating in PD activities to create new designs from scratch or to express (re-)design ideas. PDotAnalyser is used by designers to work with and further analyse the captured ideas. PDotCapturer is compared with similar paper-based approaches to evaluate the relative effectiveness of tool-based and paper-based PD activities in terms of the quantity and quality of design ideas elicited. To perform this comparison, the coding scheme CAt+ (Categories plus Attributes) for rating the quality of PD ideas is developed. CAt+ can also be used to filter and aggregate PD ideas, supporting designers in making sense of them and addressing them during re-design. Results of the comparisons of paper-based and tool-based approaches show that paper is advantageous in some regards (e.g. number of ideas gathered), while the tool is comparable or, in other regards, outperforms paper (e.g. user preference). Given the additional advantages that tool usage can bring (e.g. automated analysis support), the contexts in which paper-based or tool-based PD approaches are better suited are discussed. Future work will explore the use of PDotCapturer and PDotAnalyser in diverse and distributed settings.
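To make the filtering and aggregation role of a categories-plus-attributes coding scheme concrete, here is a minimal Python sketch of coded PD ideas being counted per category and filtered by an attribute. The category and attribute names are invented examples; they are not the actual CAt+ codes.

```python
# Toy sketch of working with coded PD ideas; the coding values are made up
# for illustration and do not reflect the real CAt+ scheme.
from collections import Counter

ideas = [
    {"id": 1, "category": "navigation", "attributes": {"severity": "high"}},
    {"id": 2, "category": "layout",     "attributes": {"severity": "low"}},
    {"id": 3, "category": "navigation", "attributes": {"severity": "low"}},
]

# Aggregate: how many ideas fall into each category?
per_category = Counter(idea["category"] for idea in ideas)
print(per_category)  # Counter({'navigation': 2, 'layout': 1})

# Filter: which ideas should be prioritised for re-design?
priority = [i["id"] for i in ideas if i["attributes"]["severity"] == "high"]
print(priority)  # [1]
```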
553. n-Dimensional prediction of RT-SOA QoS. McKee, David Wesley. January 2017.
Service-Orientation has long provided an effective mechanism for integrating heterogeneous systems in a loosely coupled fashion as services. However, with the emergence of the Internet of Things (IoT) there is a growing need to facilitate the integration of real-time services executing in non-controlled, non-real-time environments such as the Cloud. As such, there has been a drive in recent years to develop mechanisms for deriving reliable Quality of Service (QoS) definitions based on the observed performance of services, specifically in order to facilitate a Real-Time Quality of Service (RT-QoS) definition. Because the overriding challenge in achieving this is the lack of control over the hosting Cloud system, many approaches either look at alternative methods that ignore the underlying infrastructure or assume some level of control over interference, such as the provision of a Real-Time Operating System (RTOS). There is therefore a major research challenge in finding methods that facilitate RT-QoS in environments that do not provide the level of control over interference traditionally required for real-time systems. This thesis presents a comprehensive review and analysis of existing QoS and RT-QoS techniques. The techniques are classified into seven categories, and the most significant approaches are tested for their ability to provide QoS definitions that are not susceptible to dynamically changing levels of interference. This work then proposes a new n-dimensional framework that models the relationship between resource utilisation, resource availability on host servers, and the response times of services. The framework is combined with real-time schedulability tests to dynamically provide guarantees on response times for ranges of resource availability and to identify when those conditions are no longer suitable. The proposed framework is compared against the existing techniques using simulation and then evaluated in the domain of Cloud computing, where the approach demonstrates an average over-allocation of 12% and raises alerts for 94% of QoS violations within the first 14% of execution progress.
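As a rough, one-dimensional illustration of predicting response times from resource availability and then applying a real-time schedulability test, consider the following Python sketch. The linear fit, the use of the Liu-Layland rate-monotonic bound and all numbers are simplifying assumptions made for illustration; they are not the thesis’s n-dimensional framework.

```python
# Toy sketch: (i) fit an empirical model of response time vs. CPU
# availability, (ii) feed the prediction into a classical schedulability test.
import numpy as np

# Observed (cpu_availability, response_time_ms) pairs for one service.
availability = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
response_ms = np.array([12.0, 14.5, 17.0, 22.0, 30.0])

# (i) Fit response time as a linear function of 1 / availability.
coeffs = np.polyfit(1.0 / availability, response_ms, deg=1)

def predict_rt(avail):
    return np.polyval(coeffs, 1.0 / avail)

# (ii) Liu-Layland bound: n periodic tasks are schedulable under
# rate-monotonic scheduling if total utilisation <= n * (2**(1/n) - 1).
def rm_schedulable(exec_times_ms, periods_ms):
    n = len(exec_times_ms)
    utilisation = sum(c / t for c, t in zip(exec_times_ms, periods_ms))
    return utilisation <= n * (2 ** (1.0 / n) - 1)

# Guarantee check at a given availability level, using the predicted
# response time as the execution-time estimate of the service task.
avail = 0.55
worst_case = predict_rt(avail)
print(f"predicted response time at {avail:.0%} availability: {worst_case:.1f} ms")
print("schedulable:", rm_schedulable([worst_case, 5.0], [100.0, 50.0]))
```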
554. Unsupervised human activity analysis for intelligent mobile robots. Duckworth, Paul. January 2017.
The success of intelligent mobile robots in daily living environments depends on their ability to understand human movements and behaviours. One goal of recent research is to understand human activities performed in real human environments from long-term observation. We consider a human activity to be a temporally dynamic configuration of a person interacting with key objects within the environment that provide some functionality. This can be a motion trajectory made of a sequence of 2-dimensional points representing a person’s position, as well as more detailed sequences of high-dimensional body poses, i.e. collections of 3-dimensional points representing body joint positions, as estimated from the point of view of the robot. The limited field of view of the robot, restricted by its sensory modalities, poses the challenge of understanding human activities from obscured, incomplete and noisy observations. As an embedded system it also has perceptual limitations which restrict the resolution of the human activity representations it can hope to achieve. In this thesis an approach for unsupervised learning of activities, implemented on an autonomous mobile robot, is presented. This research makes the following novel contributions: 1) a qualitative spatial-temporal vector space encoding of human activities as observed by an autonomous mobile robot; 2) methods for learning a low-dimensional representation of common and repeated patterns from multiple encoded visual observations. In order to handle the perceptual challenges, multiple abstractions are applied to the robot’s perception data. The human observations are first encoded using a leg detector, an upper-body image classifier, and a convolutional neural network for pose estimation, while objects within the environment are automatically segmented from a 3-dimensional point cloud representation. Central to the success of the presented framework is mapping these encodings into an abstract qualitative space in order to generalise patterns invariant to exact quantitative positions within the real world. This is performed using a number of qualitative spatial-temporal representations which capture different aspects of the relations between the human subject and the objects in the environment. The framework auto-generates a vocabulary of discrete spatial-temporal descriptors extracted from the video sequences, and each observation is represented as a vector over this vocabulary. Analogously to information retrieval on text corpora, we use generative probabilistic techniques to recover latent, semantically meaningful concepts in the encoded observations in an unsupervised manner. The relatively small number of concepts discovered are defined as multinomial distributions over the vocabulary and considered as human activity classes, granting the robot a high-level understanding of visually observed complex scenes. We validate the framework using: 1) a dataset collected by a physical robot autonomously patrolling and performing tasks in an office environment during a six-week deployment; and 2) a high-dimensional “full body pose” dataset captured over multiple days by a mobile robot observing a kitchen area of an office environment from multiple viewpoints. We show that the emergent categories from our framework align well with how humans interpret behaviours and simple activities.
The presented framework models each extended observation as a probabilistic mixture over the learned activities, meaning it can learn human activity models even when embedded in continuous video sequences, without the need for manual temporal segmentation, which can be time-consuming and costly. Finally, we present methods for learning such human activity models in an incremental and continuous setting, using variational inference to update the activity distributions online. This allows the mobile robot to efficiently learn and update its models of human activity over time while discarding the raw data, allowing for life-long learning.
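The topic-modelling step, in which each encoded observation becomes a bag of qualitative spatial-temporal descriptors and latent activity classes are recovered as distributions over that vocabulary, can be illustrated with a short Python sketch using standard LDA. The descriptor strings, counts and the choice of scikit-learn’s LDA implementation are illustrative assumptions, not the thesis’s actual encoding or models.

```python
# Toy sketch: observations as bags of qualitative descriptors, latent
# "activities" recovered by Latent Dirichlet Allocation.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each observation is a "document" of invented qualitative descriptors.
observations = [
    "near_kettle near_kettle hand_above_cup approaches_counter",
    "near_kettle hand_above_cup hand_above_cup leaves_counter",
    "approaches_desk sits_at_desk near_keyboard near_keyboard",
    "sits_at_desk near_keyboard approaches_desk near_monitor",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(observations)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # each row: mixture over activities

vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in np.argsort(topic)[::-1][:3]]
    print(f"activity {k}: {top}")
print(doc_topics.round(2))  # per-observation activity mixtures
```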
555. Joint perceptual learning and natural language acquisition for autonomous robots. Al-Omari, Muhannad A. R. I. January 2017.
Understanding how children learn the components of their mother tongue and the meanings of each word has long fascinated linguists and cognitive scientists. Equally, robots face a similar challenge in understanding language and perception to allow for natural and effortless human-robot interaction. Acquiring such knowledge is a challenging task unless the knowledge is preprogrammed, which is no easy task either, nor does it solve the problem of language differences between individuals or of learning the meanings of new words. In this thesis, the problem of bootstrapping knowledge in language and vision for autonomous robots is addressed through novel techniques in grammar induction and word grounding in the perceptual world. The learning is achieved in a cognitively plausible, loosely supervised manner from raw linguistic and visual data. The visual data are collected using different robotic platforms deployed in real-world and simulated environments and equipped with different sensing modalities, while the linguistic data are collected using online crowdsourcing tools and volunteers. The presented framework does not rely on any particular robot or any specific sensors; rather, it is flexible with respect to the modalities the robot can support. The learning framework is divided into three processes. First, the perceptual raw data are clustered into a number of Gaussian components to learn the ‘visual concepts’. Second, frequent co-occurrences of words and visual concepts are used to learn the language grounding. Finally, the learned language grounding and visual concepts are used to induce probabilistic grammar rules that model the language structure. In this thesis, the visual concepts refer to: (i) people’s faces and the appearance of their garments; (ii) objects and their perceptual properties; (iii) pairwise spatial relations; (iv) robot actions; and (v) human activities. The visual concepts are learned by first processing the raw visual data to find people and objects in the scene, using state-of-the-art techniques in human pose estimation, object segmentation and tracking, and activity analysis. Once found, the concepts are learned incrementally using a combination of techniques: Incremental Gaussian Mixture Models and the Bayesian Information Criterion to learn simple visual concepts such as object colours and shapes, and spatio-temporal graphs and topic models to learn more complex visual concepts such as human activities and robot actions. Language grounding is enabled by seeking frequent co-occurrence between words and learned visual concepts; finding the correct language grounding is formulated as an integer programming problem that seeks the best many-to-many matches between words and concepts. Grammar induction refers to the process of learning a formal grammar (usually as a collection of re-write rules or productions) from a set of observations. In this thesis, Probabilistic Context Free Grammar rules are generated to model the language by mapping natural language sentences to learned visual concepts, as opposed to traditional supervised grammar induction techniques where learning is only made possible by using manually annotated training examples on large datasets. The learning framework attains its cognitive plausibility from a number of sources. First, the learning is achieved by providing the robot with pairs of raw linguistic and visual inputs in a “show-and-tell” procedure akin to how human children learn about their environment.
Second, no prior knowledge is assumed about the meaning of words or the structure of the language, except that there are different classes of words (corresponding to observable actions, spatial relations, and objects and their observable properties). Third, the knowledge in both language and vision is obtained in an incremental manner, where the gained knowledge can evolve to adapt to new observations without the need to revisit previously seen ones. Fourth, the robot learns about the visual world first and then learns how it maps to language, which aligns with findings from cognitive studies of language acquisition in human infants suggesting that children develop considerable cognitive understanding of their environment in the pre-linguistic period of their lives. It should be noted that this work does not claim to model how humans learn about objects in their environments, but rather is inspired by it. For validation, four different datasets are used, containing temporally aligned video clips of people or robots performing activities together with sentences describing these video clips. The video clips are collected using four robotic platforms: three robot arms in simple block-world scenarios and a mobile robot deployed in a challenging real-world office environment observing different people performing complex activities. The linguistic descriptions for these datasets are obtained using Amazon Mechanical Turk and volunteers. The analysis performed on these datasets suggests that the learning framework is suitable for learning from complex real-world scenarios. The experimental results show that the learning framework enables (i) acquiring correct visual concepts from visual data; (ii) learning the word grounding for each of the extracted visual concepts; (iii) inducing correct grammar rules to model the language structure; (iv) using the gained knowledge to understand previously unseen linguistic commands; and (v) using the gained knowledge to generate well-formed natural language descriptions of novel scenes.
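To give a concrete flavour of grounding words in visual concepts from co-occurrence statistics, the following Python sketch finds a best matching between a few words and concepts. The thesis formulates grounding as an integer programming problem allowing many-to-many matches; this simplified sketch uses a one-to-one assignment (the Hungarian algorithm) on invented co-occurrence counts, purely for illustration.

```python
# Toy sketch: ground each word in the visual concept it co-occurs with most,
# via a one-to-one assignment (a simplification of many-to-many matching).
import numpy as np
from scipy.optimize import linear_sum_assignment

words = ["red", "cube", "pick"]
concepts = ["colour:red", "shape:box", "action:grasp"]

# Invented co-occurrence counts between words in the sentences and concepts
# detected in the temporally aligned video clips.
cooccurrence = np.array([
    [9, 1, 0],   # "red"
    [2, 8, 1],   # "cube"
    [0, 1, 7],   # "pick"
])

# Maximise total co-occurrence = minimise its negation.
rows, cols = linear_sum_assignment(-cooccurrence)
for w, c in zip(rows, cols):
    print(f"{words[w]!r} grounded in {concepts[c]!r}")
```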
556. Intelligent support for exploration of data graphs. Al-Tawil, Marwan Ahmad Talal. January 2017.
This research investigates how to support a user’s exploration through data graphs generated from semantic databases in a way that leads to expanding the user’s domain knowledge. To be effective, approaches to facilitating the exploration of data graphs should take into account the utility from a user’s point of view. Our work focuses on knowledge utility – how useful exploration paths through a data graph are for expanding the user’s knowledge. The main goal of this research is to design an intelligent support mechanism that directs the user to ‘good’ exploration paths through big data graphs for knowledge expansion. We propose a new exploration support mechanism underpinned by the subsumption theory of meaningful learning, which postulates that new knowledge is grasped by starting from familiar concepts in the graph, which serve as knowledge anchors from which links to new knowledge are made. A core algorithmic component of adapting the subsumption theory for generating exploration paths is the automatic identification of Knowledge Anchors in a Data Graph (KADG). Several metrics for identifying KADG, and the corresponding algorithms for implementing them, have been developed and evaluated against human cognitive structures. A subsumption algorithm which utilises KADG for generating exploration paths for knowledge expansion is presented and evaluated in the context of a semantic data browser in a musical instrument domain. The resultant exploration paths are evaluated in a controlled user study to examine whether they increase users’ knowledge compared with free exploration. The findings show that exploration paths using knowledge anchors and subsumption lead to a significantly higher increase in users’ conceptual knowledge. The approach can be adopted in applications providing data graph exploration to facilitate learning and sensemaking for lay users who are not fully familiar with the domain presented in the data graph.
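A toy Python sketch of the anchor-then-expand idea follows: given a small data graph, a set of concepts assumed familiar to the user, and an unfamiliar target concept, it builds an exploration path that starts at the nearest anchor. The graph, the anchor set and the use of plain shortest paths are illustrative simplifications; the thesis identifies anchors automatically with dedicated metrics and generates paths with a subsumption-based algorithm.

```python
# Toy sketch: generate an exploration path from a familiar "knowledge anchor"
# to an unfamiliar target entity in a small data graph.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("instrument", "string instrument"),
    ("string instrument", "guitar"),
    ("string instrument", "oud"),
    ("guitar", "flamenco guitar"),
])

anchors = {"instrument", "guitar"}   # concepts assumed familiar to the user
target = "oud"                       # new knowledge to be reached

# Pick the anchor closest to the target and walk from familiar to new.
best_anchor = min(anchors, key=lambda a: nx.shortest_path_length(g, a, target))
path = nx.shortest_path(g, best_anchor, target)
print(" -> ".join(path))   # e.g. guitar -> string instrument -> oud
```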
557. An approach to pathfinding for real-world situations. Cook, Sarah. January 2018.
People plan routes through new environments every day, but what factors influence these wayfinding decisions? In a world increasingly dependent on electronic navigation assistance, finding a way of automatically selecting routes suitable for pedestrian travel is an important challenge. Because pedestrians have greater freedom of movement than vehicular transport, and different requirements, pedestrian journeys call for a different approach from that taken for car journeys. Although previous research has produced a number of pedestrian route recommendation systems, the majority of these are restricted to a single route type or user group. The aim of this research was to develop an approach to route suggestion which could recommend routes according to the type of journey (everyday, leisure or tourist) a person is making. To achieve this aim, four areas of research were undertaken. Firstly, six experiments involving 450 participants were used to investigate the preference for seven different environment and route attributes (length, turns, decision points, vegetation, land use, dwellings and points of interest) for two attribute categories (simplicity and attractiveness) and three journey types (everyday, leisure and tourist). These empirically determined preferences were then used to find rank orders of the attributes, comparing more of them simultaneously than earlier studies, and produced either new rankings (for attractiveness, leisure journeys and tourist journeys) or extended those already known (everyday journeys). Using these ranks and previously accepted relationships, an environment model based on an annotated graph was defined and built. This model can be built automatically from OpenStreetMap data; it is simple enough to be applicable to many geographical areas, yet detailed enough to allow route selection. Algorithms based on an extended version of Dijkstra’s shortest path algorithm were constructed. These used weighted minimum cost functions linked to the attribute ranks to select routes for different journey types. By avoiding the computational complexity of previous approaches, these algorithms could potentially be used widely on a variety of platforms and extended to different groups of users. Finally, the routes suggested by the algorithms were compared to participant recommendations for ‘simple’ routes with five start/end points, and for each of the three journey types (everyday, leisure and tourist). These comparisons determined that only length is required to select simple and everyday routes, but that the multi-attribute cost functions developed for leisure and tourist journeys select routes similar to those chosen by the participants. This indicates that the algorithms’ routes are appropriate for people to use on leisure and tourist journeys.
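A minimal Python sketch of multi-attribute route selection follows: each edge of a small graph carries several attributes, and the journey type determines the weights used to combine them into a single edge cost before running Dijkstra’s algorithm (here via networkx). The attributes, weights and toy network are invented for illustration and are not the empirically derived rankings or cost functions from the thesis.

```python
# Toy sketch: journey-type-dependent multi-attribute edge costs for Dijkstra.
import networkx as nx

g = nx.Graph()
g.add_edge("A", "C", length=150, turns=0, vegetation=0.1, poi=0)   # short, dull
g.add_edge("A", "B", length=100, turns=1, vegetation=0.9, poi=2)   # scenic
g.add_edge("B", "C", length=120, turns=1, vegetation=0.8, poi=3)   # scenic

weights = {
    "everyday": {"length": 1.0, "turns": 0.0, "greenery": 0.0, "poi": 0.0},
    "leisure":  {"length": 0.3, "turns": 2.0, "greenery": 40.0, "poi": 10.0},
}

def cost(journey_type):
    w = weights[journey_type]
    # Attractive attributes reduce cost by being expressed as penalties for
    # their absence, which keeps every edge cost non-negative for Dijkstra.
    return lambda u, v, d: (
        w["length"] * d["length"]
        + w["turns"] * d["turns"]
        + w["greenery"] * (1.0 - d["vegetation"])
        + w["poi"] * max(0, 3 - d["poi"])
    )

for journey in ("everyday", "leisure"):
    route = nx.dijkstra_path(g, "A", "C", weight=cost(journey))
    print(journey, route)   # everyday: direct route; leisure: greener detour
```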
558. Passenger train unit scheduling optimisation. Lin, Zhiyuan. January 2014.
This thesis deals with optimisation approaches for the train unit scheduling problem (TUSP). Given a train operator’s fixed timetables and a fleet of train units of different types, the TUSP aims at determining an assignment plan such that each train trip in the timetable is appropriately covered by a single unit or by coupled units, with certain objectives achieved and certain constraints respected. From the perspective of a train unit, scheduling assigns it a sequence of trains as its daily workload. The TUSP also includes auxiliary activities such as empty-running generation, coupling/decoupling control, platform assignment, platform/siding/depot capacity control, re-platforming, reversing, shunting movements from/to sidings or depots and unit blockage resolution. It is also closely related to activities such as unit overnight balancing, maintenance provision and unit rostering. In general, it is a very complex planning process involving many aspects. The literature on optimisation methods for the TUSP is scarce, and existing approaches are generally unsuitable for the UK railway industry, either because of different problem settings and operational regulations or because they simplify critical practical factors. Moreover, as far as the author is aware, there is no successful commercial software for automatically optimising train unit scheduling, in contrast with bus vehicle scheduling, crew scheduling and flight scheduling. This research takes an initial step towards filling these gaps. A two-level framework for solving the TUSP has been proposed, based on a connection-arc graph representation. The network level, formulated as an integer multicommodity flow model, captures the essence of the rail network and allocates the optimum amount of train unit resources to each train globally to ensure overall optimality, while the station-level process (post-processing) resolves the remaining local issues such as unit blockage. Several ILP formulations are presented to solve the network-level model. A local convex hull method is used in particular to realise difficult requirements and tighten the LP relaxation, and this method is discussed further. Dantzig-Wolfe decomposition is used to convert an arc formulation to a path formulation, and a customised branch-and-price solver is designed to solve the path formulation. Extensive computational experiments have been conducted on real-world problem instances from ScotRail. The results satisfied rail practitioners at ScotRail and are generally competitive with, or better than, the manual schedules. Experiments on fine-tuning the branch-and-price solver, solution quality analysis, demand estimation and post-processing have also been carried out and the results are reported. This research has laid a promising foundation, leading to a continuation EPSRC-funded project (EP/M007243/1) in collaboration with FirstGroup and Tracsis plc.
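To illustrate the flavour of the assignment core of the TUSP, here is a tiny integer program (using the PuLP modelling library) that couples units of different types on trips to cover passenger demand while respecting fleet sizes. It deliberately ignores the connection-arc flow structure, coupling upper bounds, station-level activities and everything else the thesis’s two-level framework handles; the trips, capacities and demands are invented.

```python
# Toy sketch: cover each trip's demand with units of two types, minimising
# the number of units used, subject to (simplified) fleet limits.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

trips = {"T1": 220, "T2": 140, "T3": 300}             # trip -> passenger demand
unit_types = {"classA": (160, 3), "classB": (90, 4)}  # type -> (capacity, fleet)

x = LpVariable.dicts("x", [(t, u) for t in trips for u in unit_types],
                     lowBound=0, cat="Integer")

prob = LpProblem("train_unit_assignment", LpMinimize)
prob += lpSum(x[t, u] for t in trips for u in unit_types)          # units used
for t, demand in trips.items():                                     # cover demand
    prob += lpSum(unit_types[u][0] * x[t, u] for u in unit_types) >= demand
for u, (_, fleet) in unit_types.items():                            # fleet limits
    prob += lpSum(x[t, u] for t in trips) <= fleet                  # (ignores reuse)

prob.solve(PULP_CBC_CMD(msg=0))
for (t, u), var in x.items():
    if var.value() and var.value() > 0:
        print(f"{t}: {int(var.value())} x {u}")
```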
559. Ontology learning from the Arabic text of the Qur’an : concepts identification and hierarchical relationships extraction. Alrehaili, Sameer Mabrouk A. January 2017.
Recent developments in ontology learning have highlighted the growing role ontologies play in linguistic and computational research areas such as language teaching and natural language processing. The ever-growing availability of annotations for the Qur’an text has made the acquisition of ontological knowledge promising. However, the availability of resources and tools for Arabic ontology learning is not comparable with that for other languages. Manual ontology development is labour-intensive and time-consuming, and it requires the knowledge and skills of domain experts. This thesis aims to develop new methods for ontology learning from the Arabic text of the Qur’an, including concept identification and hierarchical relationship extraction. The thesis presents a methodology for reducing human intervention in building an ontology from the Classical Arabic language of the Qur’an text. The set of concepts, whose identification is a crucial step in ontology learning, was generated based on a set of patterns made of lexical and inflectional information. The concepts were identified using an adapted weighting scheme that exploits a combination of knowledge sources to learn the relevance degree of a term: statistical knowledge, domain-specific knowledge and the internal information of Multi-Word Terms (MWTs) were combined to learn the relevance of the generated terms. This methodology, which represents the major contribution of the thesis, was experimentally investigated using different term generation methods. As a result, we provide the Arabic Qur’anic Terms (AQT) resource as training data for machine-learning-based term extraction. This thesis also introduces a new approach for extracting hierarchical relations from the Arabic text of the Qur’an. A set of hierarchical relations occurring between the identified concepts is extracted using hybrid methods, including head-modifier analysis, a set of markers for the copula construct in Arabic text, and referents. We also compare a number of ontology alignment methods for matching bilingual Qur’anic ontological resources. In addition, a multi-dimensional resource about the Qur’an, named the Arabic Qur’anic Database (AQD), is built for Arabic computational researchers, allowing regular-expression query search over the included annotations. The search tool was successfully applied to find instances for a given complex rule made of different combined resources.
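As a rough illustration of weighting candidate terms by combining statistical frequency with domain specificity, the following Python sketch scores a few candidate multi-word terms. The score (relative domain frequency divided by relative general-corpus frequency, scaled by term length), the English-glossed terms and all counts are invented for illustration; they are not the thesis’s actual weighting scheme or data.

```python
# Toy sketch: rank candidate terms by a simple domain-specificity weight.
def term_relevance(term, domain_freq, general_freq, domain_size, general_size):
    # Relative frequency in the domain corpus vs. a general corpus, with a
    # mild bonus for multi-word terms.
    specificity = (domain_freq / domain_size) / ((general_freq + 1) / general_size)
    length_bonus = len(term.split())
    return specificity * length_bonus

candidates = {
    # term: (frequency in domain corpus, frequency in general corpus)
    "day of judgement": (55, 40),
    "people": (300, 90000),
    "straight path": (33, 25),
}

scores = {
    t: term_relevance(t, d, g, domain_size=80_000, general_size=10_000_000)
    for t, (d, g) in candidates.items()
}
for term, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:8.1f}  {term}")
```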
560. Efficient iterative solution algorithms for numerical models of multiphase flow. Alrehaili, Ahlam Hamdan S. January 2018.
This thesis is concerned with the development and application of optimally efficient numerical methods for the simulation of vascular tumour growth, based upon the multiphase fluid model introduced by Hubbard and Byrne [57]. This multiphase model involves the flow and interaction of four different, but coupled, phases, each treated as an incompressible fluid. Following a short review of models for tumour growth, we describe in detail the model of Hubbard and Byrne [57] and introduce the discretisation schemes used. These comprise a finite volume scheme to approximate mass conservation and conforming finite element schemes to approximate momentum conservation and a reaction-diffusion equation for the background nutrient concentration. The momentum conservation system is represented as a Stokes-like flow of each phase, with source terms that reflect the phase interactions. It is demonstrated that the solution of these coupled momentum equations, approximated using a Taylor-Hood finite element method in two dimensions, is the most computationally intensive component of the solution algorithm, with the nonlinear system arising from the nutrient equation the second most expensive. The solvers presented in this work for the discretised systems are based on preconditioned Krylov methods: an algebraic multigrid (AMG) preconditioner is used for the linear systems arising from the nutrient equation at each Newton step, and a novel block preconditioner for those arising from the momentum equation. In each case these are shown to be very efficient algorithms: when the preconditioning strategies are applied to practical problems, the CPU time and memory are demonstrated to scale almost linearly with the problem size. Finally, the basic multiphase tumour model is extended to consider drug delivery and the inclusion of additional phases. To solve this extended model, our preconditioning strategy is extended to cases with more than four phases, and is again demonstrated to perform optimally.
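A minimal Python sketch of the AMG-preconditioned Krylov strategy for the nutrient system is shown below, using a standard 2D Poisson matrix from pyamg’s gallery as a stand-in for the actual finite element system, and conjugate gradients because that model matrix is symmetric positive definite. These substitutions, and all parameters, are illustrative assumptions rather than the thesis’s solver configuration.

```python
# Toy sketch: Krylov iteration (CG) preconditioned with algebraic multigrid.
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

A = pyamg.gallery.poisson((200, 200), format="csr")   # model discretisation
b = np.random.default_rng(0).standard_normal(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)              # build the AMG hierarchy
M = ml.aspreconditioner(cycle="V")                     # one V-cycle per application

residuals = []
x, info = cg(A, b, M=M,
             callback=lambda xk: residuals.append(np.linalg.norm(b - A @ xk)))
print("converged" if info == 0 else "not converged",
      "after", len(residuals), "iterations")
```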