
An artificial intelligence framework for investigative reasoning

Ramezani, Ramin January 2014
Problem solving is one of the major paradigms in Artificial Intelligence research, in which an intelligent task to be automated is interpreted as a series of problems to be solved. Various problem solving techniques have been developed in the field of AI, mostly by concentrating on a certain reasoning approach to tackle a particular class of problems. For instance, theorem proving, constraint solving and machine learning provide powerful techniques for solving AI problems. In all these approaches, background knowledge needs to be provided, from which the system will infer new knowledge. Often, however, in real-world scenarios, there may not be enough background information for any single solver to solve the problem. In these situations, some researchers have demonstrated the benefits of combined reasoning, i.e., a reasoning process which employs various, often disparate, problem solving techniques in concert in order to solve a given task. Systems that engage in such reasoning processes are called combined reasoning systems. Their power draws upon the disparate techniques they employ; as such, combined reasoning systems are expected to be more capable than their constituents. In this thesis we focus mainly on using a combined reasoning approach to solve a class of problems that cannot be solved by any of the aforementioned standalone systems. We refer to these as investigation problems, which model, to some extent, a generic situation that might arise in, say, medical diagnosis or the solving of a crime. That is, there are a number of possible diagnoses/suspects (candidates), and the problem is to use the facts of the case to rank them in terms of their likelihood of being the cause of the illness/guilty of the crime. Such a ranking often leads to further medical tests/police enquiries focusing on the most likely candidates, which bring to light further information about the current case.
We use the term dynamic investigation problems to describe a series of such problems to be solved. Solving each problem entails using the facts of the case, coupled with prior knowledge about the domain, to narrow down the candidates to just one. However, when there is no straightforward solution due to a lack of essential information, additional relevant information can often be found in related past cases, in which irregularities can be observed and exploited. Hence, dynamic investigation problems are hybrid machine-learning/constraint-solving problems, and as such are more realistic and of interest to the wider AI community. In this thesis we focus on the formal definition, exploration, generation and solution of 'Dynamic Investigation Problems', and we develop a framework which performs 'Investigative Reasoning'; that is, a framework in which a combination of reasoning techniques is employed in order to tackle dynamic investigation problems.
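The candidate-ranking idea can be sketched in a few lines. The sketch below is purely illustrative and not the thesis's framework: candidates and case facts are hypothetical attribute/value dictionaries, and the optional per-attribute weights stand in for evidence mined from related past cases.

```python
# Illustrative sketch (not the thesis system): rank candidates for an
# investigation problem by how many case facts each candidate satisfies,
# optionally boosting facts that past cases showed to be discriminating.

def rank_candidates(candidates, facts, past_case_weights=None):
    """candidates: {name: {attribute: value}}
    facts: observed {attribute: value} pairs for the current case.
    past_case_weights: optional {attribute: weight} learned from
    related past cases (a hypothetical stand-in for the ML component)."""
    weights = past_case_weights or {}
    scored = []
    for name, attrs in candidates.items():
        # Each satisfied fact contributes 1, plus any past-case weight.
        score = sum(1.0 + weights.get(a, 0.0)
                    for a, v in facts.items() if attrs.get(a) == v)
        scored.append((score, name))
    # Highest score first: the most likely diagnosis/suspect.
    return [name for score, name in sorted(scored, reverse=True)]

suspects = {
    "alice": {"height": "tall", "car": "red", "alibi": "none"},
    "bob":   {"height": "short", "car": "red", "alibi": "confirmed"},
}
facts = {"height": "tall", "car": "red"}
ranking = rank_candidates(suspects, facts)
```

Ranking the candidates rather than committing to one mirrors the investigative loop described above: the top candidates direct the next round of tests or enquiries, which yield new facts.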

Gaze contingent robotic control in minimally invasive surgery

Fujii, Kenko January 2014
Recent advances in minimally invasive surgery (MIS) have allowed patients to benefit from reduced trauma, faster recovery times, and shorter hospitalisation. These benefits come at a cost to the surgeon, who must operate in a less-than-comfortable posture that is both physically and mentally challenging; this has led to a more demanding training scheme for acquiring the relevant surgical skills. Navigating and operating with flexible instruments such as endoscopes can also induce spatial disorientation in the surgeon; such instances are associated with increased pain for the patient and, more critically, the risk of perforating delicate tissue. Furthermore, the increased physical separation between the surgeon and the operative site, together with newly introduced surgical instruments, has significantly changed the ergonomics and surgical workflow, which in turn increases the cognitive burden on the surgeon. The perceptual and ergonomic challenges during flexible-endoscope-based MIS are investigated using the wealth of perceptual information that gaze can provide. In particular, the visualisation, navigation and ergonomic issues arising during MIS procedures are studied. A gaze-parameter-based framework is introduced to assess the use of a new field-of-view expansion technique for improved visualisation and camera trajectory comprehension when disorientated. Flexible instruments such as endoscopes suffer from disorientation-inducing perceptual-motor misalignment; a misalignment quantification approach and a gaze-based classification method for inferring varying perceptual-motor misalignment are developed to enable ergonomic assessment during endoscopic procedures. A novel robot-assisted, gaze-controlled camera system is developed to improve camera navigation, in which the user's control intentions are conveyed to the robotic laparoscope via real-time gaze gestures. To further improve the ergonomics of the gaze contingent system, an online calibration algorithm is integrated into the system.
Throughout the thesis, detailed validation and discussion of the results are conducted to demonstrate the potential clinical value of the work.
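One common way gaze gestures can be turned into camera commands is dwell-time detection, sketched below. This is a generic illustration, not the thesis's actual system: the edge regions, the 0.5 s dwell threshold and the assumed 30 Hz tracker rate are all hypothetical parameters.

```python
# Hypothetical dwell-based gaze-gesture detector: if the gaze stays in
# a screen-edge region long enough, emit a pan command for that edge.

DWELL_SAMPLES = 15          # ~0.5 s at an assumed 30 Hz gaze tracker


def region(x, y, width=1.0, height=1.0, margin=0.1):
    """Map a normalised gaze point to a screen-edge region."""
    if x < margin:
        return "pan_left"
    if x > width - margin:
        return "pan_right"
    if y < margin:
        return "pan_up"
    if y > height - margin:
        return "pan_down"
    return "centre"


def detect_gesture(gaze_points):
    """Return a pan command once the gaze dwells in one edge region."""
    streak_region, streak = None, 0
    for x, y in gaze_points:
        r = region(x, y)
        streak = streak + 1 if r == streak_region else 1
        streak_region = r
        if streak >= DWELL_SAMPLES and r != "centre":
            return r
    return None

# A fixation on the right screen edge for 20 samples yields a right pan.
samples = [(0.95, 0.5)] * 20
```

The appeal of such gesture schemes is that they avoid the "Midas touch" problem: ordinary looking around the centre of the image never triggers a command.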

Dense semantic SLAM

Salas-Moreno, Renato F. January 2014
Simultaneous Localisation and Mapping (SLAM) began as a technique to enable real-time robotic navigation in previously unexplored environments. The created maps, however, were designed for the sole purpose of localising the robot (i.e. determining the position and orientation of the robot in relation to the map). Successive systems demonstrated the increasing descriptive power of map representations, which in vision-only SLAM solutions progressed from simple sparse corner-like features to edges, planes and, most recently, fully dense surfaces that abandon the notion of sparse structures altogether. Early sparse representations enjoyed the benefit of being simple to maintain, as features could be added, optimised and removed independently, while being memory- and compute-efficient, making them suitable for robust real-time camera tracking that relies on a consistent map. However, sparse representations are limiting when it comes to interaction; for example, a robot aiming to navigate safely in an environment needs to sense complete surfaces in addition to empty space. Furthermore, sparse features can only be detected in highly textured areas and during slow motion. Recent dense methods overcome these limitations: they can work in situations where corner features would fail to be detected in the blurry images created during rapid camera motion, and they enable correct reasoning about occlusions and complete 3D surfaces, thus raising interaction capabilities to new levels. This is only possible thanks to the advent of commodity parallel processing power and large amounts of memory on Graphics Processing Units (GPUs), which demand careful consideration during algorithm design. However, increasing the map density makes creating consistent structures more challenging, due to the vast number of parameters to optimise and the interdependencies amongst them.
More importantly, our interest is in making interaction even more sophisticated by abandoning the idea that an environment is a dense monolithic structure in favour of one composed of discrete detachable objects and bounded regions having physical properties and metadata. This work explores the development of a new type of visual SLAM system, which we call Dense Semantic SLAM, representing the map with semantically meaningful objects and planar regions and enabling new types of interaction, where applications can go beyond asking "where am I?" towards "what is around me and what can I do with it?". In a way it can be seen as a return to lightweight sparse-based representations, while keeping the predictive power of dense methods with added scene understanding at the object and region levels.
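The object-level map idea can be caricatured in a few lines: repeated object instances store only a pose and a reference to a shared dense model, so the map stays as lightweight as a sparse one while retaining dense predictive power. A toy sketch under assumed names, not the thesis implementation:

```python
# Toy illustration of an object-level semantic map: many instances,
# one shared dense model per object class.
from dataclasses import dataclass


@dataclass
class ObjectInstance:
    model_id: str            # reference into a shared model database
    pose: tuple              # (x, y, z, yaw) world pose; full SE(3) in practice

# One dense model per object *class*, loaded once and shared ...
model_database = {"chair": "dense chair mesh, stored once"}

# ... so a room with many chairs costs one mesh plus one pose per chair.
semantic_map = [
    ObjectInstance("chair", (0.0, 0.0, 0.0, 0.0)),
    ObjectInstance("chair", (1.5, 0.0, 0.0, 3.14)),
]
```

Because each map entry names an object, queries such as "what is around me" reduce to iterating over instances rather than segmenting a monolithic dense surface.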

Scalable verification techniques for data-parallel programs

Chong, Nathan January 2014
This thesis is about scalable formal verification techniques for software. A verification technique is scalable if it is able to scale to reasoning about real (rather than synthetic or toy) programs. Scalable verification techniques are essential for practical program verifiers. In this work, we consider three key characteristics of scalability: precision, performance and automation. We explore trade-offs between these factors by developing verification techniques in the context of data-parallel programs, as exemplified by graphics processing unit (GPU) programs (called kernels). This thesis makes three original contributions to the field of program verification:
1. An empirical study of candidate-based invariant generation that explores the trade-offs between precision and performance. An invariant is a property that captures program behaviours by expressing a fact that always holds at a particular program point. The generation of invariants is critical for automatic and precise verification. Over a benchmark suite comprising 356 GPU kernels, we find that candidate-based invariant generation allows precise reasoning for 256 (72%) of the kernels.
2. Barrier invariants: a new abstraction for precise and scalable reasoning about data-dependent GPU kernels, an important class of kernels beyond the scope of existing techniques. Our evaluation shows that barrier invariants enable us to capture a functional specification for three distinct prefix sum implementations at problem sizes using hundreds of threads, and race-freedom for a real-world stream compaction example.
3. The interval of summations: a new abstraction for precise and scalable reasoning about parallel prefix sums, an important data-parallel primitive. We give theoretical results showing that the interval of summations is, surprisingly, both sound and complete; that is, all correct prefix sums can be precisely captured by this abstraction.
Our evaluation shows that the interval of summations allows us to automatically prove full functional correctness of four distinct prefix sum implementations for all power-of-two problem sizes up to 2^{20}.
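To make the verification target concrete, here is a sequential rendition of the classic Blelloch work-efficient exclusive scan, one well-known family of prefix sum implementations, checked against a simple sequential specification for power-of-two sizes. This is an illustrative sketch, not one of the thesis's verified GPU kernels:

```python
# Sequential model of a work-efficient (Blelloch-style) exclusive
# prefix sum, plus the specification it should agree with.
from itertools import accumulate


def blelloch_exclusive_scan(xs):
    """Exclusive prefix sum via an up-sweep and a down-sweep."""
    n = len(xs)
    assert n and n & (n - 1) == 0, "power-of-two size assumed"
    a = list(xs)
    d = 1
    while d < n:                      # up-sweep: build a tree of partial sums
        for i in range(0, n, 2 * d):
            a[i + 2 * d - 1] += a[i + d - 1]
        d *= 2
    a[n - 1] = 0                      # clear the root
    while d > 1:                      # down-sweep: distribute prefixes
        d //= 2
        for i in range(0, n, 2 * d):
            t = a[i + d - 1]
            a[i + d - 1] = a[i + 2 * d - 1]
            a[i + 2 * d - 1] += t
    return a


def spec(xs):
    """Specification: exclusive scan as a sequential running sum."""
    return [0] + list(accumulate(xs))[:-1]
```

Testing agreement with `spec` on a handful of sizes is of course no proof; the point of the abstractions above is precisely to replace such bounded testing with verification over all power-of-two sizes.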

Spatial stochastic population models for the analysis of city-scale systems

Günther, Marcel Christoph January 2014
Recent advances in technology have led to a surge of innovations in the area of spatially aware applications, such as locally operating social networks, retail, advertising, and local weather and traffic services. Such applications are often supported by large data-collection and dissemination processes, designed to work on large-scale, inexpensive, infrastructure-light wireless ad hoc networks. As a consequence, novel modelling techniques are required for the purpose of capacity planning and for building on-line prediction models based on large quantities of location-aware data. In this thesis we study the spatio-temporal evolution of population systems related to such city-scale challenges. In particular we focus on large-scale, spatial population processes that are not amenable to fluid-flow or mean-field approximation techniques because of locally or temporarily varying population sizes. Our main contributions are:
- Providing novel ways of incorporating space and mobility in large-scale spatial population models.
- Illustrating how, for a certain class of spatial population processes, the time-evolution of higher-order population moments can be obtained efficiently using hybrid-simulation analysis.
- Presenting case studies of realistic spatial systems from different application areas, showing that our modelling techniques are well suited to the analysis of network and protocol performance of static and mobile ad hoc communication networks, as well as to building fast on-line prediction models.

Infinite hidden conditional random fields for the recognition of human behaviour

Bousmalis, Konstantinos January 2014
While detecting and interpreting temporal patterns of nonverbal behavioural cues in a given context is a natural and often unconscious process for humans, it remains a rather difficult task for computer systems. In this thesis we are primarily motivated by the problem of recognising expressions of high-level behaviour, specifically agreement and disagreement. We thoroughly dissect the problem by surveying the nonverbal behavioural cues that could be present during displays of agreement and disagreement; we discuss a number of methods that could be used or adapted to detect these cues; we list some publicly available databases on which these tools could be trained for the analysis of spontaneous, audiovisual instances of agreement and disagreement; we examine the few existing attempts at agreement and disagreement classification; and we discuss the challenges in automatically detecting agreement and disagreement. We present experiments showing that an existing discriminative graphical model, the Hidden Conditional Random Field (HCRF), performs best on this task. The HCRF is a discriminative latent variable model which has previously been shown to successfully learn the hidden structure of a given classification problem (provided an appropriate validation of the number of hidden states). We show here that HCRFs are also able to capture what makes each of these social attitudes unique. We present an efficient technique to analyse the concepts learned by the HCRF model and show that these coincide with findings from social psychology regarding which cues are most prevalent in agreement and disagreement. Our experiments are performed on a dataset of spontaneous expressions curated from real televised debates. The HCRF model outperforms conventional approaches such as Hidden Markov Models and Support Vector Machines.
Subsequently, we examine existing graphical models that use Bayesian nonparametrics to maintain a countably infinite number of hidden states and adapt their complexity to the data at hand. We identify a gap in the literature, namely the lack of such a discriminative graphical model, and present our suggestion for the first such model: an HCRF with an infinite number of hidden states, the Infinite Hidden Conditional Random Field (IHCRF). In summary, the IHCRF is an undirected discriminative graphical model for sequence classification that uses a countably infinite number of hidden states. We present two variants of this model. The first is a fully nonparametric model that relies on Hierarchical Dirichlet Processes and a Markov Chain Monte Carlo inference approach. The second is a semi-parametric model that uses Dirichlet Process Mixtures and relies on a mean-field variational inference approach. We show that both models are able to converge to a correct number of represented hidden states, and perform as well as the best finite HCRFs (chosen via cross-validation) on the difficult tasks of recognising instances of agreement, disagreement, and pain in audiovisual sequences.
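For contrast with the latent-variable models above, the basic finite-hidden-state machinery can be illustrated by the forward algorithm for a Hidden Markov Model, one of the baselines the HCRF is compared against. This is a generic textbook sketch with made-up parameters, not the thesis's models or data:

```python
# Forward algorithm for an HMM: the probability of an observation
# sequence, marginalised over all hidden-state paths.
import math


def hmm_log_likelihood(obs, start, trans, emit):
    """obs: list of observed symbol indices.
    start[i]: initial probability of hidden state i.
    trans[i][j]: probability of moving from state i to state j.
    emit[i][o]: probability that state i emits symbol o."""
    n = len(start)
    # alpha[j] = P(obs so far, current hidden state = j)
    alpha = [start[j] * emit[j][obs[0]] for j in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return math.log(sum(alpha))
```

A finite model like this fixes the number of hidden states in advance; the nonparametric variants above instead let the data determine how many states are actually represented.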

Dense vision in image-guided surgery

Chang, Ping-Lin January 2014
Image-guided surgery needs an efficient and effective camera tracking system in order to perform augmented reality, overlaying preoperative models or labelling cancerous tissues on the 2D video images of the surgical scene. Tracking in endoscopic/laparoscopic scenes, however, is an extremely difficult task, primarily due to tissue deformation, instrument intrusion into the surgical scene and the presence of specular highlights. State-of-the-art feature-based SLAM systems such as PTAM fail to track such scenes, since the number of good features to track is very limited; smoke and instrument motion cause feature-based tracking to fail immediately. The work of this thesis provides a systematic approach to this problem using dense vision. We initially attempted to register a 3D preoperative model with multiple 2D endoscopic/laparoscopic images using a dense method, but this approach did not perform well. We subsequently proposed stereo reconstruction to directly obtain the 3D structure of the scene. Using the densely reconstructed model together with robust estimation, we demonstrate that dense stereo tracking can be remarkably robust even in extremely challenging endoscopic/laparoscopic scenes. Several validation experiments have been conducted in this thesis. The proposed stereo reconstruction algorithm achieves state-of-the-art results on several publicly available ground-truth datasets. Furthermore, the proposed robust dense stereo tracking algorithm has proved highly accurate in a synthetic environment (< 0.1 mm RMSE) and, qualitatively, extremely robust when applied to real scenes from robot-assisted laparoscopic prostatectomy (RALP) surgery. This is an important step toward achieving accurate image-guided laparoscopic surgery.
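The role robust estimation plays here can be seen in miniature with a RANSAC-style estimator: recover a 2D translation between two point sets despite gross outliers, such as points that land on an intruding instrument or a specular highlight. This is a deliberately tiny toy, far simpler than the dense SE(3) tracking of the thesis:

```python
# RANSAC for a 2D translation between corresponding point sets,
# robust to a fraction of gross outlier correspondences.
import random


def ransac_translation(src, dst, threshold=0.1, iters=100, rng=None):
    """Estimate (tx, ty) mapping src -> dst while ignoring outliers."""
    rng = rng or random.Random(0)
    best_t, best_inliers = (0.0, 0.0), -1
    for _ in range(iters):
        # A translation is determined by a single correspondence.
        i = rng.randrange(len(src))
        tx, ty = dst[i][0] - src[i][0], dst[i][1] - src[i][1]
        inliers = sum(
            1 for (sx, sy), (dx, dy) in zip(src, dst)
            if abs(sx + tx - dx) < threshold and abs(sy + ty - dy) < threshold)
        if inliers > best_inliers:          # keep the best-supported model
            best_t, best_inliers = (tx, ty), inliers
    return best_t

src = [(float(i), float(i % 5)) for i in range(30)]
dst = [(x + 3.0, y + 4.0) for x, y in src]     # true translation (3, 4)
dst[0] = (99.0, 99.0)                          # gross outliers, e.g. an
dst[1] = (-50.0, 12.0)                         # instrument or a highlight
```

A least-squares fit over all correspondences would be dragged far from (3, 4) by the two corrupted points; the consensus-based estimate is unaffected, which is the essence of why robust estimation keeps dense tracking alive in cluttered surgical scenes.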

Completeness-via-canonicity in coalgebraic logics

Dahlqvist, Fredrik Paul Herbert January 2015
This thesis aims to provide a suite of techniques for generating completeness results for coalgebraic logics with axioms of arbitrary rank. We have chosen to investigate the possibility of generalizing what is arguably one of the most successful methods for proving completeness results in 'classical' modal logic, namely completeness-via-canonicity. This technique is particularly well suited to a coalgebraic generalization because of its clean and abstract algebraic formalism. In the case of classical modal logic, it can be summarized in two steps. First, it isolates the purely algebraic problem of canonicity, i.e. of determining when a variety of Boolean Algebras with Operators (BAOs) is closed under canonical extension (i.e. canonical). Second, it connects the notion of canonical varieties to that of canonical models to explicitly build models, thereby proving completeness. The classical algebraic theory of canonicity is geared towards normal logics or, in algebraic terms, BAOs (or generalizations thereof). Most coalgebraic logics are not normal, and we thus develop the algebraic theory of canonicity for Boolean Algebras with Expansions (BAEs), or more generally for Distributive Lattice Expansions (DLEs). We present new results about a class of expansions defined by weaker preservation properties than meet or join preservation, namely (anti-)k-additive and (anti-)k-multiplicative expansions. We show how canonical and Sahlqvist equations can be built from such operations. In order to connect the theory of canonicity in DLEs and BAEs to coalgebraic logic, we choose to work in the abstract formulation of coalgebraic logic. An abstract coalgebraic logic is defined by a functor L : BA → BA, and we can heuristically separate these logics into two classes. In the first class the functor L is relatively simple, and in particular can be interpreted as defining a BAE. This class includes the predicate-lifting style of coalgebraic logics.
In the second class the functor L can be very complicated and the whole theory requires a different approach; this class includes the nabla style of coalgebraic logics. For simple functors, we develop results on strong completeness and then prove strong completeness-via-canonicity in the presence of canonical frame conditions for strongly complete abstract coalgebraic logics. In particular, we show coalgebraic completeness-via-canonicity for Graded Modal Logic, Intuitionistic Logic, the distributive full Lambek calculus, and the logic of trees of arbitrary branching degrees defined by the List functor. These results are, to the best of our knowledge, new. For a complex functor L we use an indirect approach via the notion of functor presentation. This allows us to represent L as the quotient of a much simpler polynomial functor. Polynomial functors define BAEs and can thus be treated as objects in the first class of functors; in particular, we can apply all the above-mentioned techniques to the logics defined by such functors. We develop techniques to ensure that results obtained for the simple presenting logic can be transferred back to the complicated presented logic. We can then prove strong completeness-via-canonicity in the presence of canonical frame conditions for coalgebraic logics which do not define a BAE, such as the nabla coalgebraic logics.

HomeShaper: regulating the use of bandwidth resources in home networks

Pediaditakis, Dimosthenis January 2015
It is estimated that the number of broadband Internet subscribers worldwide increases at a staggering rate of 8% per year. This fact, along with ever-increasing data consumption demands, has pushed the envelope in the design of faster and better broadband Internet and wireless LAN communications. Nonetheless, home users still experience periods during which the available network resources do not suffice to meet everyone's requirements, confirming Parkinson's law of bandwidth absorption: 'network traffic expands to fit the available bandwidth'. Unsurprisingly, numerous sociological studies indicate that a highly desired management functionality is 'the ability of users to effectively regulate the use of a home network's bandwidth resources'. Past research on this topic usually proposes over-complicated solutions that are specific to certain technologies and are not tailored to the unique characteristics of home networks. First, the average home user does not possess the skills to efficiently manage his or her own network. Second, home networking equipment offers limited management functionality via heterogeneous user interfaces. Finally, home networks exhibit highly dynamic performance characteristics, affecting the amount of available bandwidth resources over time and space. This thesis presents HomeShaper, a programmable bandwidth management framework which accepts as input a set of user-defined requirements in the form of high-level contracts (e.g. guaranteed rate, capping, prioritisation), and transparently reconfigures the underlying home network infrastructure in order to fulfil them. HomeShaper provides strong guarantees about the correctness of the resulting network configurations, preventing inconsistencies by means of verification. Furthermore, it allows the specification of adaptive bandwidth control behaviours, used to dynamically enable or disable individual contracts in response to changing network performance conditions.
Developers of home network management applications can easily specify custom adaptive behaviours, encoded in the form of 'teleo-reactive' programs, relying on the tools and abstractions provided by the HomeShaper runtime.
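A minimal sketch of the kind of contract checking described above: before a configuration is applied, verify that no contract's guaranteed rate exceeds its own cap and that the guarantees jointly fit within the link capacity. The contract fields and the flat-capacity model are illustrative assumptions, not HomeShaper's actual interface:

```python
# Illustrative consistency check for bandwidth contracts against a
# single shared link (hypothetical schema, not HomeShaper's API).

def verify_contracts(contracts, link_capacity_mbps):
    """Return a list of inconsistencies; an empty list means the
    configuration is safe to apply."""
    problems = []
    total_guaranteed = 0.0
    for c in contracts:
        guaranteed = c.get("guaranteed_mbps", 0.0)
        cap = c.get("cap_mbps")
        if cap is not None and guaranteed > cap:
            problems.append(f"{c['device']}: guarantee exceeds cap")
        total_guaranteed += guaranteed
    if total_guaranteed > link_capacity_mbps:
        problems.append("sum of guarantees exceeds link capacity")
    return problems

contracts = [
    {"device": "tv",     "guaranteed_mbps": 8.0, "cap_mbps": 15.0},
    {"device": "laptop", "guaranteed_mbps": 5.0, "cap_mbps": 20.0},
]
```

Rejecting inconsistent contract sets up front, rather than discovering them at enforcement time, is the essence of the verification-based correctness guarantee described above.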

Strategies for optimising DRAM repair

Milbourn, Joseph John January 2010
Dynamic Random Access Memories (DRAMs) are large, complex devices, prone to defects during manufacture. Yield is improved by the provision of redundant structures used to repair these defects. This redundancy is often implemented through excess memory capacity and programmable address logic, allowing the replacement of faulty cells within the memory array. As the memory capacity of DRAM devices has increased, so has the complexity of their redundant structures, introducing increasingly complex restrictions and interdependencies upon the use of this redundant capacity. Currently, the redundancy analysis algorithms that solve the problem of optimally allocating this redundant capacity must be manually customised for each new device; compromises made to reduce this complexity, together with human error, reduce the efficacy of these algorithms. This thesis develops a methodology for automating the customisation of these redundancy analysis algorithms. Included are: a modelling language describing the redundant structures (including the restrictions and interdependencies placed upon their use); algorithms manipulating this model to generate redundancy analysis algorithms; and methods for translating those algorithms into executable code. Finally, these concepts are used to develop a prototype software tool capable of generating redundancy analysis algorithms customised for a specified device.
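The core allocation problem can be sketched for the simplest redundancy scheme, spare rows and spare columns with no interdependencies: given the coordinates of faulty cells, decide whether every fault can be covered by the available spares. Real devices add the restrictions and interdependencies the thesis models, but this toy exhaustive search is a useful reference point:

```python
# Toy redundancy analysis: can a set of faulty cells be covered by
# replacing at most spare_rows whole rows and spare_cols whole columns?
from itertools import combinations


def repairable(faults, spare_rows, spare_cols):
    """faults: iterable of (row, col) coordinates of faulty cells.
    Exhaustively tries every choice of faulty rows to fix with row
    spares, then checks the leftover faults fit in the column spares."""
    fault_rows = sorted({r for r, _ in faults})
    for k in range(min(spare_rows, len(fault_rows)) + 1):
        for rows_fixed in combinations(fault_rows, k):
            cols_needed = {c for r, c in faults if r not in rows_fixed}
            if len(cols_needed) <= spare_cols:
                return True
    return False

# A row of faults crossing a column of faults: one spare of each kind
# suffices, but column spares alone need one per distinct faulty column.
faults = [(0, 0), (0, 1), (0, 2), (1, 0), (2, 0)]
```

Even in this stripped-down form the problem is combinatorial, which hints at why hand-customising an optimal analysis algorithm for each device's particular restrictions is both laborious and error-prone.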
