51 
An artificial intelligence framework for investigative reasoning
Ramezani, Ramin, January 2014
Problem solving is one of the major paradigms in Artificial Intelligence research, in which an intelligent task to automate is interpreted as a series of problems to be solved. Various problem-solving techniques have emerged in the field of AI, mostly by concentrating on a particular reasoning approach to tackle a particular class of problems. For instance, theorem proving, constraint solving and machine learning provide powerful techniques for solving AI problems. In all these approaches, background knowledge must be provided, from which the system infers new knowledge. Often, however, in real-world scenarios there is not enough background information for any single solver to solve the problem. In these situations, some researchers have demonstrated the benefits of combined reasoning, i.e., a reasoning process which employs various, often disparate, problem-solving techniques in concert in order to solve a given task. Systems that engage in such reasoning processes are called combined reasoning systems; their power draws upon the disparate techniques they employ, and as such they are expected to be more capable than their constituents. In this thesis we focus mainly on using a combined reasoning approach to solve a class of problems that cannot be solved by any of the aforementioned standalone systems. We refer to these as investigation problems; they model, to some extent, a generic situation which might arise in, say, medical diagnosis or the solving of a crime. That is, there are a number of possible diagnoses/suspects (candidates), and the problem is to use the facts of the case to rank them in terms of their likelihood of being the cause of the illness/guilty of the crime. Such a ranking often leads to further medical tests/police enquiries focusing on the most likely candidates, which bring to light further information about the current case. We use the term dynamic investigation problems to describe a series of such problems to be solved. Solving each problem entails using the facts of the case, coupled with prior knowledge about the domain, to narrow down the candidates to just one. When there is no straightforward solution due to a lack of essential information, however, additional relevant information can often be found in related past cases, from which irregularities can be observed and exploited. Hence, dynamic investigation problems are hybrid machine-learning/constraint-solving problems, and as such are more realistic and of interest to the wider AI community. In this thesis we focus on the formal definition, exploration, generation and solution of 'Dynamic Investigation Problems', and we develop a framework which performs 'Investigative Reasoning', that is, a framework in which a combination of reasoning techniques is brought to bear on dynamic investigation problems.
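To make the candidate-ranking idea concrete, here is a minimal sketch of ours (not the thesis's framework; the names and the scoring rule are hypothetical) that orders candidates by how many facts of the case they account for:

    def rank_candidates(candidates, case_facts, domain_knowledge):
        """Order candidates from most to least plausible, given the facts of a case.

        domain_knowledge maps each candidate to the set of facts it can explain;
        a hypothetical stand-in for the prior domain knowledge described above.
        """
        def score(candidate):
            explained = domain_knowledge.get(candidate, set())
            return sum(1 for fact in case_facts if fact in explained)
        return sorted(candidates, key=score, reverse=True)

    # Two suspects, three facts of the case: suspect_a explains more facts,
    # so further enquiries would focus there first.
    knowledge = {"suspect_a": {"was_on_site", "had_motive"},
                 "suspect_b": {"had_motive"}}
    print(rank_candidates(["suspect_a", "suspect_b"],
                          ["was_on_site", "had_motive", "left_early"],
                          knowledge))  # -> ['suspect_a', 'suspect_b']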

52 
Gaze contingent robotic control in minimally invasive surgery
Fujii, Kenko, January 2014
Recent advances in minimally invasive surgery (MIS) have allowed patients to benefit from reduced trauma, faster recovery times and shorter hospitalisation. These benefits come at a cost to the surgeon, however, who must operate in a less-than-comfortable posture that is both physically and mentally challenging, and this has led to a more demanding training scheme for acquiring the relevant surgical skills. Navigating and operating with flexible instruments such as endoscopes can also induce spatial disorientation in the surgeon; such instances are associated with increased pain for the patient and, more critically, the risk of perforating delicate tissue. Furthermore, the increased physical separation between the surgeon and the operative site, together with newly introduced surgical instruments, has significantly changed the ergonomics and surgical workflow, which in turn increases the cognitive burden on the surgeon. This thesis investigates the perceptual and ergonomic challenges of flexible-endoscope-based MIS using the wealth of perceptual information that gaze can provide. In particular, the visualisation, navigation and ergonomic issues arising during MIS procedures are studied. A gaze-parameter-based framework is introduced to assess the use of a new field-of-view expansion technique for improved visualisation and camera trajectory comprehension when disorientated. Flexible instruments such as endoscopes suffer from disorientation-inducing perceptual-motor misalignment; a misalignment quantification approach and a gaze-based classification method for inferring varying perceptual-motor misalignment are developed to enable ergonomic assessment during endoscopic procedures. A novel robot-assisted, gaze-controlled camera system is developed to improve camera navigation, in which the user's control intentions are conveyed to the robotic laparoscope via real-time gaze gestures. To further improve the ergonomics of the gaze-contingent system, an online calibration algorithm is integrated into it. Throughout the thesis, detailed validation and discussion of the results demonstrate the potential clinical value of the work.

53 
Dense semantic SLAM
Salas-Moreno, Renato F., January 2014
Simultaneous Localisation and Mapping (SLAM) began as a technique for enabling real-time robotic navigation in previously unexplored environments. The maps created, however, were designed for the sole purpose of localising the robot (i.e. determining the position and orientation of the robot in relation to the map). Successive systems have demonstrated the increasing descriptive power of map representations, which in vision-only SLAM solutions progressed from simple sparse corner-like features to edges, planes and, most recently, fully dense surfaces that abandon the notion of sparse structures altogether. Early sparse representations enjoyed the benefit of being simple to maintain, as features could be added, optimised and removed independently, while being memory- and compute-efficient, making them suitable for robust real-time camera tracking that relies on a consistent map. However, sparse representations are limiting when it comes to interaction: a robot aiming to navigate safely in an environment, for example, would need to sense complete surfaces in addition to empty space. Furthermore, sparse features can only be detected in highly textured areas and during slow motion. Recent dense methods overcome these limitations, as they can work in situations where corner features would fail to be detected in the blurry images created by rapid camera motion, and they also make it possible to reason correctly about occlusions and complete 3D surfaces, raising interaction capabilities to new levels. This is only possible thanks to the advent of commodity parallel processing power and the large amounts of memory on Graphics Processing Units (GPUs), which demand careful consideration during algorithm design. However, increasing the map density makes creating consistent structures more challenging, owing to the vast number of parameters to optimise and the interdependencies amongst them. More importantly, our interest is in making interaction even more sophisticated by abandoning the idea that an environment is a dense monolithic structure in favour of one composed of discrete detachable objects and bounded regions with physical properties and metadata. This work explores the development of a new type of visual SLAM system that represents the map with semantically meaningful objects and planar regions, which we call Dense Semantic SLAM, enabling new types of interaction in which applications can go beyond asking 'where am I?' towards 'what is around me and what can I do with it?'. In a way it can be seen as a return to lightweight sparse-based representations, while keeping the predictive power of dense methods with added scene understanding at the object and region levels.

54 
Scalable verification techniques for data-parallel programs
Chong, Nathan, January 2014
This thesis is about scalable formal verification techniques for software. A verification technique is scalable if it is able to scale to reasoning about real (rather than synthetic or toy) programs. Scalable verification techniques are essential for practical program verifiers. In this work, we consider three key characteristics of scalability: precision, performance and automation. We explore trade-offs between these factors by developing verification techniques in the context of data-parallel programs, as exemplified by graphics processing unit (GPU) programs (called kernels). This thesis makes three original contributions to the field of program verification:
1. An empirical study of candidate-based invariant generation that explores the trade-offs between precision and performance. An invariant is a property that captures program behaviours by expressing a fact that always holds at a particular program point. The generation of invariants is critical for automatic and precise verification. Over a benchmark suite comprising 356 GPU kernels, we find that candidate-based invariant generation allows precise reasoning for 256 (72%) of the kernels.
2. Barrier invariants: a new abstraction for precise and scalable reasoning about data-dependent GPU kernels, an important class of kernels beyond the scope of existing techniques. Our evaluation shows that barrier invariants enable us to capture a functional specification for three distinct prefix sum implementations at problem sizes using hundreds of threads, and race-freedom for a real-world stream compaction example.
3. The interval of summations: a new abstraction for precise and scalable reasoning about parallel prefix sums, an important data-parallel primitive. We give theoretical results showing that the interval of summations is, surprisingly, both sound and complete; that is, all correct prefix sums can be precisely captured by this abstraction. Our evaluation shows that the interval of summations allows us to automatically prove full functional correctness of four distinct prefix sum implementations for all power-of-two problem sizes up to 2^{20}.
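For readers unfamiliar with the primitive at the heart of contributions 2 and 3, the sketch below (ours, not the thesis's benchmark code) shows a data-parallel-style inclusive prefix sum and the functional-correctness property being verified, namely agreement with the sequential specification:

    from itertools import accumulate

    def hillis_steele_scan(xs):
        """Inclusive prefix sum in the data-parallel style used by GPU kernels.
        In each round, element i adds the value at distance d to its left."""
        out = list(xs)
        d = 1
        while d < len(out):
            # In a real kernel every i runs in parallel with a barrier between
            # rounds; reading from the previous round's copy models that barrier.
            prev = list(out)
            for i in range(len(out)):
                if i >= d:
                    out[i] = prev[i - d] + prev[i]
            d *= 2
        return out

    if __name__ == "__main__":
        for n in [1, 2, 4, 8, 16]:
            xs = list(range(n))
            assert hillis_steele_scan(xs) == list(accumulate(xs))
        print("matches the sequential specification")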

55 
Spatial stochastic population models for the analysis of city-scale systems
Günther, Marcel Christoph, January 2014
Recent advances in technology have led to a surge in innovations in the area of spatially aware applications, such as locally operating social networks, retail, advertising, and local weather and traffic services. Such applications are often supported by large data-collection and dissemination processes, designed to work on large-scale, inexpensive, infrastructure-light wireless ad-hoc networks. As a consequence, novel modelling techniques are required for capacity planning and for building online prediction models based on large quantities of location-aware data. In this thesis we study the spatio-temporal evolution of population systems related to such city-scale challenges. In particular we focus on large-scale spatial population processes that are not amenable to fluid-flow or mean-field approximation techniques because of locally or temporarily varying population sizes. Our main contributions are:
- providing novel ways of incorporating space and mobility in large-scale spatial population models;
- illustrating how, for a certain class of spatial population processes, the time-evolution of higher-order population moments can be obtained efficiently using hybrid-simulation analysis;
- presenting case studies of realistic spatial systems from different application areas, showing that our modelling techniques are well-suited to analysing the network and protocol performance of static and mobile ad-hoc communication networks, as well as to building fast online prediction models.

56 
Infinite hidden conditional random fields for the recognition of human behaviour
Bousmalis, Konstantinos, January 2014
While detecting and interpreting temporal patterns of nonverbal behavioural cues in a given context is a natural and often unconscious process for humans, it remains a rather difficult task for computer systems. In this thesis we are primarily motivated by the problem of recognising expressions of high-level behaviour, specifically agreement and disagreement. We dissect the problem thoroughly: we survey the nonverbal behavioural cues that could be present during displays of agreement and disagreement; we discuss a number of methods that could be used or adapted to detect these cues; we list some publicly available databases on which such tools could be trained for the analysis of spontaneous, audiovisual instances of agreement and disagreement; we examine the few existing attempts at agreement and disagreement classification; and we discuss the challenges in automatically detecting agreement and disagreement. We present experiments showing that an existing discriminative graphical model, the Hidden Conditional Random Field (HCRF), performs best on this task. The HCRF is a discriminative latent-variable model which has previously been shown to learn the hidden structure of a given classification problem successfully (provided an appropriate validation of the number of hidden states). We show here that HCRFs are also able to capture what makes each of these social attitudes unique. We present an efficient technique for analysing the concepts learned by the HCRF model and show that these coincide with findings from social psychology regarding which cues are most prevalent in agreement and disagreement. Our experiments are performed on a dataset of spontaneous expressions curated from real televised debates, on which the HCRF model outperforms conventional approaches such as Hidden Markov Models and Support Vector Machines. Subsequently, we examine existing graphical models that use Bayesian nonparametrics to maintain a countably infinite number of hidden states and adapt their complexity to the data at hand. We identify a gap in the literature, namely the lack of a discriminative graphical model of this kind, and present the first such model: an HCRF with an infinite number of hidden states, the Infinite Hidden Conditional Random Field (IHCRF). In summary, the IHCRF is an undirected discriminative graphical model for sequence classification that uses a countably infinite number of hidden states. We present two variants of this model. The first is a fully nonparametric model that relies on Hierarchical Dirichlet Processes and a Markov Chain Monte Carlo inference approach. The second is a semi-parametric model that uses Dirichlet Process Mixtures and relies on a mean-field variational inference approach. We show that both models are able to converge to a correct number of represented hidden states, and perform as well as the best finite HCRFs chosen via cross-validation for the difficult tasks of recognising instances of agreement, disagreement and pain in audiovisual sequences.
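For reference, the standard HCRF formulation (our notation, not quoted from the thesis) defines the probability of a class label y given an observation sequence x by marginalising over assignments h of the hidden states:

\[
P(y \mid \mathbf{x};\, \theta) \;=\; \frac{\sum_{\mathbf{h}} \exp \Psi(y, \mathbf{h}, \mathbf{x}; \theta)}{\sum_{y'} \sum_{\mathbf{h}} \exp \Psi(y', \mathbf{h}, \mathbf{x}; \theta)}
\]

where \(\Psi\) is a potential function scoring the compatibility of label, hidden states and observations under parameters \(\theta\). The IHCRF lets \(\mathbf{h}\) range over a countably infinite state space, so the number of hidden states no longer needs to be validated by hand.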

57 
Dense vision in image-guided surgery
Chang, Ping-Lin, January 2014
Image-guided surgery needs an efficient and effective camera tracking system in order to perform augmented reality, overlaying preoperative models or labelling cancerous tissues on the 2D video images of the surgical scene. Tracking in endoscopic/laparoscopic scenes, however, is an extremely difficult task, primarily due to tissue deformation, instrument intrusion into the surgical scene and the presence of specular highlights. State-of-the-art feature-based SLAM systems such as PTAM fail to track such scenes, since the number of good features to track is very limited, and smoke or instrument motion causes feature-based tracking to fail immediately. The work of this thesis provides a systematic approach to this problem using dense vision. We initially attempted to register a 3D preoperative model with multiple 2D endoscopic/laparoscopic images using a dense method, but this approach did not perform well. We subsequently proposed stereo reconstruction to obtain the 3D structure of the scene directly. Using the dense reconstructed model together with robust estimation, we demonstrate that dense stereo tracking can be remarkably robust even in extremely challenging endoscopic/laparoscopic scenes. Several validation experiments have been conducted in this thesis. The proposed stereo reconstruction algorithm proves to be the state-of-the-art method on several publicly available ground-truth datasets. Furthermore, the proposed robust dense stereo tracking algorithm is shown to be highly accurate in a synthetic environment (< 0.1 mm RMSE) and qualitatively extremely robust when applied to real scenes from robot-assisted laparoscopic prostatectomy (RALP) surgery. This is an important step toward achieving accurate image-guided laparoscopic surgery.

58 
Completeness-via-canonicity in coalgebraic logics
Dahlqvist, Fredrik Paul Herbert, January 2015
This thesis aims to provide a suite of techniques for generating completeness results for coalgebraic logics with axioms of arbitrary rank. We have chosen to investigate the possibility of generalising what is arguably one of the most successful methods for proving completeness results in 'classical' modal logic, namely completeness-via-canonicity. This technique is particularly well-suited to a coalgebraic generalisation because of its clean and abstract algebraic formalism. In the case of classical modal logic, it can be summarised in two steps: first, it isolates the purely algebraic problem of canonicity, i.e. of determining when a variety of Boolean Algebras with Operators (BAOs) is closed under canonical extension (i.e. is canonical); secondly, it connects the notion of canonical varieties to that of canonical models to build models explicitly, thereby proving completeness. The classical algebraic theory of canonicity is geared towards normal logics or, in algebraic terms, BAOs (or generalisations thereof). Most coalgebraic logics are not normal, and we therefore develop the algebraic theory of canonicity for Boolean Algebras with Expansions (BAEs), or more generally for Distributive Lattice Expansions (DLEs). We present new results about a class of expansions defined by weaker preservation properties than meet or join preservation, namely (anti-)k-additive and (anti-)k-multiplicative expansions, and we show how canonical and Sahlqvist equations can be built from such operations. In order to connect the theory of canonicity in DLEs and BAEs to coalgebraic logic, we choose to work in the abstract formulation of coalgebraic logic. An abstract coalgebraic logic is defined by a functor L : BA → BA, and we can heuristically separate these logics into two classes. In the first class the functor L is relatively simple and, in particular, can be interpreted as defining a BAE; this class includes the predicate-lifting style of coalgebraic logics. In the second class the functor L can be very complicated and the whole theory requires a different approach; this class includes the nabla style of coalgebraic logics. For simple functors, we develop results on strong completeness and then prove strong completeness-via-canonicity in the presence of canonical frame conditions for strongly complete abstract coalgebraic logics. In particular we show coalgebraic completeness-via-canonicity for Graded Modal Logic, Intuitionistic Logic, the distributive full Lambek calculus, and the logic of trees of arbitrary branching degrees defined by the List functor. These results are, to the best of our knowledge, new. For a complex functor L we use an indirect approach via the notion of functor presentation, which allows us to represent L as the quotient of a much simpler polynomial functor. Polynomial functors define BAEs and can thus be treated as objects in the first class of functors; in particular, we can apply all the above-mentioned techniques to the logics defined by such functors. We develop techniques that ensure that results obtained for the simple presenting logic can be transferred back to the complicated presented logic. We can then prove strong completeness-via-canonicity in the presence of canonical frame conditions for coalgebraic logics which do not define a BAE, such as the nabla coalgebraic logics.
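For reference, the canonicity property at the centre of this method can be stated as follows (a standard formulation from the modal logic literature, not quoted from the thesis): a variety \(\mathcal{V}\) of BAOs is canonical when it is closed under canonical extensions,

\[
\mathbb{A} \in \mathcal{V} \;\Longrightarrow\; \mathbb{A}^{\sigma} \in \mathcal{V},
\]

where \(\mathbb{A}^{\sigma}\) denotes the canonical extension of the algebra \(\mathbb{A}\). Completeness-via-canonicity then builds the canonical model inside \(\mathbb{A}^{\sigma}\) itself, which is why closure under \((\cdot)^{\sigma}\) yields completeness.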

59 
HomeShaper: regulating the use of bandwidth resources in home networks
Pediaditakis, Dimosthenis, January 2015
It is estimated that the number of worldwide broadband Internet subscribers increases at a staggering rate of 8% per year. This fact, along with ever-increasing data consumption demands, has pushed the envelope in the design of faster and better broadband Internet and wireless LAN communications. Nonetheless, home users still experience periods during which the available network resources do not suffice to meet everyone's requirements, confirming Parkinson's law of bandwidth absorption: 'network traffic expands to fit the available bandwidth'. Unsurprisingly, numerous sociological studies indicate that a highly desired management functionality is 'the ability of users to effectively regulate the use of a home network's bandwidth resources'. Past research on this topic usually proposes over-complicated solutions that are specific to certain technologies and not tailored to the unique characteristics of home networks. First, the average home user does not possess the skills to manage his/her own network efficiently. Second, home networking equipment offers limited management functionality via heterogeneous user interfaces. Finally, home networks exhibit highly dynamic performance characteristics, affecting the amount of available bandwidth over time and space. This thesis presents HomeShaper, a programmable bandwidth management framework which accepts as input a set of user-defined requirements in the form of high-level contracts (e.g. guaranteed rate, capping, prioritisation) and transparently reconfigures the underlying home network infrastructure in order to fulfil them. HomeShaper provides strong guarantees about the correctness of the resulting network configurations, preventing inconsistencies by means of verification. Furthermore, it allows the specification of adaptive bandwidth control behaviours, used to dynamically enable or disable individual contracts in response to changing network performance conditions. Developers of home network management applications can easily specify custom adaptive behaviours encoded as 'teleo-reactive' programs, relying on the tools and abstractions provided by the HomeShaper runtime.
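As a rough illustration of the contract idea (a sketch of ours; the field names and the check are hypothetical, not HomeShaper's actual contract language), a set of high-level contracts can be verified against the link capacity before being installed:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Contract:
        device: str                       # host or flow the contract applies to
        kind: str                         # "guaranteed_rate", "cap" or "priority"
        rate_kbps: Optional[int] = None   # for guaranteed-rate and capping contracts
        priority: Optional[int] = None    # for prioritisation contracts

    def consistent(contracts, link_capacity_kbps):
        """Verification-style check: guaranteed rates must not oversubscribe the link."""
        guaranteed = sum(c.rate_kbps or 0 for c in contracts
                         if c.kind == "guaranteed_rate")
        return guaranteed <= link_capacity_kbps

    rules = [Contract("laptop", "guaranteed_rate", rate_kbps=4000),
             Contract("tv", "cap", rate_kbps=8000),
             Contract("phone", "priority", priority=1)]
    assert consistent(rules, link_capacity_kbps=10000)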

60 
Strategies for optimising DRAM repair
Milbourn, Joseph John, January 2010
Dynamic Random Access Memories (DRAMs) are large, complex devices, prone to defects during manufacture. Yield is improved by the provision of redundant structures used to repair these defects. This redundancy is often implemented as excess memory capacity together with programmable address logic, allowing the replacement of faulty cells within the memory array. As the capacity of DRAM devices has increased, so has the complexity of their redundant structures, introducing increasingly complex restrictions and interdependencies upon the use of this redundant capacity. Currently, the redundancy analysis algorithms that solve the problem of optimally allocating this redundant capacity must be manually customised for each new device; compromises made to reduce this complexity, together with human error, reduce the efficacy of these algorithms. This thesis develops a methodology for automating the customisation of these redundancy analysis algorithms. Included are: a modelling language describing the redundant structures (including the restrictions and interdependencies placed upon their use); algorithms manipulating this model to generate redundancy analysis algorithms; and methods for translating those algorithms into executable code. Finally, these concepts are used to develop a prototype software tool capable of generating redundancy analysis algorithms customised for a specified device.
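To give a flavour of the decisions a redundancy analysis algorithm makes, here is our sketch of the classic 'must-repair' rule from the redundancy analysis literature (not code from the thesis): a row holding more faults than there are spare columns can only be fixed by a spare row, and symmetrically for columns:

    from collections import Counter

    def must_repair(faults, spare_rows, spare_cols):
        """faults: iterable of (row, col) coordinates of faulty cells.
        Returns the rows and columns that can only be fixed by a spare line."""
        row_counts = Counter(r for r, _ in faults)
        col_counts = Counter(c for _, c in faults)
        rows = {r for r, n in row_counts.items() if n > spare_cols}
        cols = {c for c, n in col_counts.items() if n > spare_rows}
        return rows, cols

    # Example: row 3 holds two faults but only one spare column exists,
    # so a spare row must be allocated to it.
    print(must_repair([(3, 0), (3, 5), (7, 5)], spare_rows=2, spare_cols=1))
    # -> ({3}, set())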
