231

Exploring the influence of haptic force feedback on 3D selection

Pawar, V. M. January 2013 (has links)
This thesis studies the effects of haptic force feedback on 3D interaction performance. To date, Human-Computer Interaction (HCI) in three dimensions is not well understood. Within platforms such as Immersive Virtual Environments (IVEs), implementing 'good' methods of interaction is difficult. As reflected by the lack of 3D IVE applications in common use, typical performance constraints include inaccurate tracking, a lack of additional sensory inputs, and general design issues related to the implemented interaction technique and connected input devices. In total, this represents a broad set of multi-disciplinary challenges. By implementing techniques that address these problems, we intend to use IVE platforms to study human 3D interaction and the effects of different types of feedback. A promising area of work is the development of haptic force feedback devices. Also called haptic interfaces, these devices can exert a desired force onto the user, simulating a physical interaction. When described as a sensory cue, this information is thought to be important for the selection and manipulation of 3D objects. To date, many studies have investigated how best to integrate haptic devices within IVEs. Whilst there are still fundamental integration and device-level problems to solve, previous work demonstrates that haptic force feedback can improve 3D interaction performance. By investigating this claim further, this thesis explores the role of haptic force feedback in 3D interaction performance in more detail. In particular, we found additional complexities whereby different types of haptic force feedback conditions can either help or hinder user performance. By discussing these new results, we begin to examine the utility of haptic force feedback. By focusing our user studies on 3D selection, we explored the influence of haptic force feedback on the strategies taken to target virtual objects when using either 'distal' or 'natural' interaction technique designs. We first outlined novel methods for integrating and calibrating large-scale haptic devices within a CAVE-like IVE. Secondly, we described our implementation of distal and natural selection techniques tailored to the available hardware, including the collision detection mechanisms used to render different haptic responses. Thirdly, we discussed the evaluation framework used to assess different interaction techniques and haptic force feedback responses within a common IVE setup. Finally, we provided a detailed assessment of user performance highlighting the effects of haptic force feedback on 3D selection, which is the main contribution of this work. We expect the presented findings will add to the existing literature that evaluates novel 3D interaction technique designs for IVEs. We also hope that this thesis will provide a basis for developing future interaction models that include the effects of haptic force feedback.
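As a rough illustration of how a haptic response can be rendered from a collision test (a standard penalty-force sketch with made-up stiffness and geometry, not the rendering pipeline used in the thesis), the snippet below pushes the haptic probe out of a penetrated sphere with a spring-like force proportional to penetration depth.

```python
import numpy as np

def penalty_force(probe_pos, sphere_centre, sphere_radius, stiffness=400.0):
    """Spring-like force pushing the haptic probe out of a penetrated sphere.

    Returns a zero vector when the probe is outside the object.
    """
    offset = probe_pos - sphere_centre
    dist = np.linalg.norm(offset)
    penetration = sphere_radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)
    normal = offset / dist                      # surface normal at the contact point
    return stiffness * penetration * normal     # F = k * depth, directed outwards

# Example: probe 1 cm inside a 5 cm sphere centred at the origin.
print(penalty_force(np.array([0.04, 0.0, 0.0]), np.zeros(3), 0.05))
```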
232

Probabilistic prediction of Alzheimer's disease from multimodal image data with Gaussian processes

Young, J. M. January 2015 (has links)
Alzheimer’s disease, the most common form of dementia, is an extremely serious health problem, and one that will become even more so in the coming decades as the global population ages. This has led to a massive effort to develop both new treatments for the condition and new methods of diagnosis; in fact, the two are intimately linked, as future treatments will depend on earlier diagnosis, which in turn requires the development of biomarkers that can be used to identify and track the disease. This is made possible by studies such as the Alzheimer’s Disease Neuroimaging Initiative, which provides previously unimaginable quantities of imaging and other data freely to researchers. It is the task of early diagnosis that this thesis focuses on. We do so by borrowing modern machine learning techniques and applying them to image data. In particular, we use Gaussian processes (GPs), a previously neglected tool, and show that they can be used in place of the more widely used support vector machine (SVM). As combinations of complementary biomarkers have been shown to be more useful than individual biomarkers alone, we go on to show that GPs can also be applied to integrate different types of image and non-image data, and that, thanks to their properties, this integration improves results more than it does with SVMs. In the final two chapters, we also look at different ways to formulate both the prediction of conversion to Alzheimer’s disease as a machine learning problem and the way image data can be used to generate features for input to a machine learning algorithm. Both of these show how unconventional approaches may improve results. The result is an advance in the state of the art for a clinically very important problem, which may prove useful in practice and suggests a direction for future research to further increase the usefulness of such methods.
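The kind of kernel-level data fusion the abstract alludes to can be sketched in a few lines. The toy below uses entirely synthetic "MRI" and "PET" feature blocks and a simple least-squares GP surrogate on ±1 labels rather than the full GP classification model used in the thesis; the point is only that summing one kernel per modality lets both data types contribute jointly to the prediction.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
n = 60
mri = rng.normal(size=(n, 5))    # e.g. regional grey-matter volumes (synthetic)
pet = rng.normal(size=(n, 4))    # e.g. FDG-PET uptake features (synthetic)
y = np.sign(mri[:, 0] + pet[:, 0] + 0.3 * rng.normal(size=n))  # +1 = patient, -1 = control

# One kernel per modality; summing them fuses the modalities at kernel level.
K = rbf_kernel(mri, mri, 2.0) + rbf_kernel(pet, pet, 2.0)

# Least-squares GP surrogate: regress on the +/-1 labels with noise variance sigma2.
sigma2 = 0.1
alpha = np.linalg.solve(K + sigma2 * np.eye(n), y)

# Predict for the training subjects (in practice one would use held-out subjects).
pred = np.sign(K @ alpha)
print("training accuracy:", (pred == y).mean())
```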
233

Learning from experience in the engineering of non-orthogonal architectural surfaces : a computational design system

Jonas, K. January 2013 (has links)
This research paints a comprehensive picture of the current state of the conception and engineering of non-orthogonal architectural surfaces. In the present paradigm for the design and engineering of these elaborate building structures, the overall form is decided first and is then broken down into building components (façade cladding, or structural or shell elements) retrospectively. As a result, there is a division between the creation of the design and its subsequent reverse engineering. In most of these projects, the discretisation of elaborate architectural surfaces into building components has little to do with how the form was created, and the logic of the global form and that of its local subdivision are not of the same order. Experience gained through project work in the sponsoring company Buro Happold has been harnessed to inform the implementation of a design tool prototype, an open, extendable system. The development of the tool aims at stepping outside the current paradigm in practice by providing an integrated process of bottom-up generation of form and top-down search and optimisation, using an evolutionary method. The assertion of this thesis is that non-orthogonal design, which mimics a natural form in appearance, can be derived using mechanisms found in nature. These mechanisms, e.g. growth and evolution, can be transferred in such a way that they integrate aspects of aesthetics, manufacturing, construction or performance. Designs are then created with an inherent logic. Growing form by adding discrete local geometries to produce larger componential surfaces ensures that the local parts and the global geometry are coherent and of the same kind. The aspiration is to make use of computational methods to contribute to the design and buildability of non-orthogonal architectural surfaces, and to further the discussion, development and application of digital design tools in practice.
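To make the top-down evolutionary search concrete, here is a generic (mu + lambda) evolution strategy over invented component parameters, not the Buro Happold tool itself: it evolves a set of panel widths so that they cover a hypothetical span while staying close to a fabrication-friendly size.

```python
import numpy as np

rng = np.random.default_rng(1)

TARGET_SPAN = 30.0      # hypothetical overall span to be covered (m)
PREFERRED_WIDTH = 1.2   # hypothetical fabrication-friendly panel width (m)
N_PANELS = 24

def fitness(widths):
    """Lower is better: match the span and keep panels near the preferred width."""
    span_error = (widths.sum() - TARGET_SPAN) ** 2
    fabrication_error = ((widths - PREFERRED_WIDTH) ** 2).sum()
    return span_error + fabrication_error

# (mu + lambda) evolution strategy over the vector of panel widths.
mu, lam, sigma = 8, 32, 0.05
population = rng.uniform(0.8, 1.6, size=(mu, N_PANELS))
for generation in range(200):
    parents = population[rng.integers(0, mu, size=lam)]
    offspring = parents + sigma * rng.normal(size=parents.shape)
    pool = np.vstack([population, offspring])
    pool = pool[np.argsort([fitness(w) for w in pool])]
    population = pool[:mu]                      # keep the fittest mu candidates

best = population[0]
print(f"span = {best.sum():.2f} m, widths between {best.min():.2f} and {best.max():.2f} m")
```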
234

Toward least-privilege isolation for software

Bittau, A. January 2009 (has links)
Hackers leverage software vulnerabilities to disclose, tamper with, or destroy sensitive data. To protect sensitive data, programmers can adhere to the principle of least privilege, which entails giving software the minimal privilege it needs to operate, ensuring that sensitive data is available to software components only on a strictly need-to-know basis. Unfortunately, applying this principle in practice is difficult, as current operating systems tend to provide coarse-grained mechanisms for limiting privilege. Thus, most applications today run with greater-than-necessary privileges. We propose sthreads, a set of operating system primitives that allows fine-grained isolation of software to approximate the least-privilege ideal. sthreads enforce a default-deny model, where software components have no privileges by default, so all privileges must be explicitly granted by the programmer. Experience introducing sthreads into previously monolithic applications, thus partitioning them, reveals that enumerating privileges for sthreads is difficult in practice. To ease the introduction of sthreads into existing code, we include Crowbar, a tool that can be used to learn the privileges required by a compartment. We show that only a few changes are necessary to existing code in order to partition applications with sthreads, and that Crowbar can guide the programmer through these changes. We show that applying sthreads to applications successfully narrows the attack surface by reducing the amount of code that can access sensitive data. Finally, we show that applications using sthreads pay only a small performance overhead. We applied sthreads to a range of applications, most notably an SSL web server, where we show that sthreads are powerful enough to protect sensitive data even against a strong adversary that can act as a man-in-the-middle in the network and can also exploit most of the code in the web server; a threat model not addressed to date.
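sthreads themselves are C-level operating system primitives; purely as a conceptual illustration of the default-deny model described above (all names and the API below are invented for illustration), the sketch wraps a component in a compartment that rejects every operation unless the programmer has explicitly granted it.

```python
class Compartment:
    """Toy default-deny compartment: every privilege must be granted explicitly."""

    def __init__(self, name):
        self.name = name
        self._granted = set()          # empty by default: no privileges at all

    def grant(self, privilege):
        self._granted.add(privilege)   # the programmer enumerates each privilege

    def require(self, privilege):
        if privilege not in self._granted:
            raise PermissionError(f"{self.name}: '{privilege}' was never granted")

# A TLS front-end may accept connections but must never touch the private key.
frontend = Compartment("tls-frontend")
frontend.grant("net:accept")

frontend.require("net:accept")         # allowed: explicitly granted
try:
    frontend.require("key:sign")       # denied: default-deny catches the attempt
except PermissionError as e:
    print(e)
```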
235

Rapid interactive modelling and tracking for mixed and augmented reality

Freeman, R. M. January 2011 (has links)
We present a novel approach to mixed reality setup and configuration that is rapid, interactive, live, and video-based, in which the operator is directly involved in a responsive modelling process and can specify, define and semantically label the reconstruction. By using commonly available hardware and making minimal demands on the operator’s skill, our approach makes mixed reality more accessible for wider application. Some tasks vital to a mixed reality system are either too time-consuming or too complex to be carried out whilst the system is active. 3-dimensional scene modelling, specification and registration are such tasks, commonly performed by skilled operators in an off-line initialisation phase prior to system activation. In this thesis we propose a new on-line interactive method, where a creative video-based modelling process is performed during the run-time phase of operation. Using primitive shape-based modelling techniques, traditionally applied to still photographic image reconstruction, we demonstrate how extrinsic camera calibration, scene reconstruction, specification and registration can be effectively achieved whilst a mixed reality system is active. The two steps required to realise manual on-line video-based modelling are described in this thesis. The first step shows how such modelling techniques can be applied to live video. The second step shows how freely moving cameras can be used to support the modelling processes by combining tracking techniques into a single application. To estimate the potential reconstruction accuracy for both steps, a series of tests is performed. Underlying our video-based modelling approach, we present two new algorithms for translating 2-dimensional user interactions into well-specified 3-dimensional geometric models, as well as a new approach to combined modelling and tracking that utilises both marker-based and appearance-based tracking techniques in a single solution. Finally, we present a new algorithm for estimating tracking error in real time, which we use to aid our modelling processes and support our accuracy testing.
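The following is not one of the two algorithms contributed by the thesis, only the standard unprojection step that any such interactive modelling relies on: turning a 2D image-space click into a 3D point by intersecting the camera ray with a known plane, using a simple pinhole model with made-up intrinsics and pose.

```python
import numpy as np

def click_to_plane_point(u, v, K, cam_pos, cam_rot, plane_point, plane_normal):
    """Intersect the viewing ray through pixel (u, v) with a plane in world space."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera coordinates
    ray_world = cam_rot @ ray_cam                         # rotate into world coordinates
    denom = plane_normal @ ray_world
    if abs(denom) < 1e-9:
        raise ValueError("viewing ray is parallel to the plane")
    t = plane_normal @ (plane_point - cam_pos) / denom
    return cam_pos + t * ray_world

# Made-up intrinsics; camera 2 m above the ground plane, looking straight down.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
cam_pos = np.array([0.0, 0.0, 2.0])
cam_rot = np.diag([1.0, -1.0, -1.0])   # camera z-axis points towards the world -z axis
print(click_to_plane_point(400, 300, K, cam_pos, cam_rot,
                           plane_point=np.zeros(3), plane_normal=np.array([0.0, 0.0, 1.0])))
```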
236

Algorithmic trading : model of execution probability and order placement strategy

Yingsaeree, C. January 2012 (has links)
Most equity and derivative exchanges around the world are nowadays organised as order-driven markets, where market participants trade against each other without the help of market makers or other intermediaries, as in quote-driven markets. In these markets, traders have a choice either to execute their trade immediately at the best available price by submitting market orders or to trade patiently by submitting limit orders to execute a trade at a more favourable price. Consequently, determining an appropriate order type and price for a particular trade is a fundamental problem faced every day by all traders in such markets. On one hand, traders would prefer to place their orders far away from the current best price to increase their pay-offs. On the other hand, the farther away from the current best price, the lower the chance that their orders will be executed. As a result, traders need to find the right trade-off between these two opposite choices to execute their trades effectively. Undoubtedly, one of the most important factors in valuing such a trade-off is a model of execution probability, as the expected profit of traders who decide to trade via limit orders is an increasing function of the execution probability. Although a model of execution probability is a crucial component of this decision, research into how to model this probability is still limited and requires further investigation. The objective of this research is, hence, to extend this literature by investigating various ways in which the execution probability can be modelled, with the aim of finding a suitable model for this probability as well as a way to utilise such models to make order placement decisions in algorithmic trading systems. To achieve this, the thesis is separated into four main experiments:
1. The first experiment analyses the behaviour of previously proposed execution probability models in a controlled environment, using data generated from simulation models of order-driven markets, with the aim of identifying the advantages, disadvantages and limitations of each method.
2. The second experiment analyses the relationship between execution probabilities and price fluctuations, as well as a method for predicting execution probabilities from previous price fluctuations and other related variables.
3. The third experiment investigates a way to estimate the execution probability in the simulation model utilised in the first experiment without resorting to computer simulation, by deriving a model for describing the dynamics of the asset price in this simulation model and utilising the derived model to estimate the execution probability.
4. The final experiment assesses the performance of the developed execution probability models when they are applied to make order placement decisions for liquidity traders who must fill their orders before a specific deadline.
The experiments with previous models indicate that survival analysis is the most appropriate method for modelling the execution probability because of its ability to handle censored observations caused by unexecuted and cancelled orders. However, standard survival analysis models (i.e. the proportional hazards model and the accelerated failure time model) are not flexible enough to model the effect of explanatory variables such as limit order price and bid-ask spread. Moreover, the amount of data required to fit these models at several price levels simultaneously grows linearly with the number of price levels. This might cause a problem when we want to model the execution probability at all possible price levels. To address this problem, the second experiment proposes to model the execution probability over a specified time horizon from the maximum price fluctuation during that period. This model not only reduces the amount of data required to fit the model in this situation, but also provides a natural way to apply traditional time series analysis techniques to model the execution probability. Additionally, it enables us to illustrate empirically that future execution probabilities are strongly correlated with past execution probabilities. In the third experiment, we propose a framework to model the dynamics of the asset price from the stochastic properties of the order arrival and cancellation processes. This establishes the relationship between the microscopic dynamics of the limit order book and the long-term dynamics of the asset price process. Unlike traditional methods that model asset price dynamics using a one-dimensional stochastic process, the proposed framework models these dynamics using a two-dimensional stochastic process, where the additional dimension represents information about the last price change. Finally, the results from the last experiment indicate that the proposed framework for making order placement decisions based on the developed execution probability models outperforms a naive order placement strategy and the best static strategy in most situations.
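To make the censoring point concrete, here is a minimal numpy sketch (synthetic order lifetimes and a hand-rolled Kaplan-Meier estimator rather than the thesis's models or a full survival-analysis library) that estimates the probability a limit order fills within a given horizon, treating cancelled or still-open orders as censored observations.

```python
import numpy as np

# Synthetic limit-order lifetimes in seconds: time until fill, or until
# cancellation / end of observation (censored) when filled is False.
durations = np.array([5.0, 12.0, 3.0, 40.0, 7.0, 22.0, 60.0, 9.0, 15.0, 30.0])
filled    = np.array([True, True, True, False, True, False, False, True, True, False])

def fill_probability(durations, filled, horizon):
    """Kaplan-Meier estimate of P(order fills within `horizon` seconds)."""
    order = np.argsort(durations)
    durations, filled = durations[order], filled[order]
    at_risk = len(durations)
    survival = 1.0
    for t, event in zip(durations, filled):
        if t > horizon:
            break
        if event:                      # a fill observed at time t
            survival *= 1.0 - 1.0 / at_risk
        at_risk -= 1                   # fills and censored orders both leave the risk set
    return 1.0 - survival

for horizon in (10, 30, 60):
    print(f"P(fill within {horizon:>2}s) = {fill_probability(durations, filled, horizon):.2f}")
```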
237

Utilizing output in Web application server-side testing

Alshahwan, N. January 2012 (has links)
This thesis investigates the utilization of web application output in enhancing automated server-side code testing. The server-side code is the main driving force of a web application, generating client-side code, maintaining state and communicating with back-end resources. The output observed in those elements provides a valuable resource that can potentially enhance the efficiency and effectiveness of automated testing. The thesis aims to explore the use of this output in test data generation, test sequence regeneration, augmentation and test case selection. This thesis also addresses the web-specific challenges faced when applying search-based test data generation algorithms to web applications, and dataflow analysis of state variables to test sequence regeneration. The thesis presents three tools and four empirical studies to implement and evaluate the proposed approaches. SWAT (Search based Web Application Tester) is a first application of search-based test data generation algorithms to web applications; it uses values dynamically mined from the intermediate and client-side output to enhance the search-based algorithm. SART (State Aware Regeneration Tool) uses dataflow analysis of state variables, session state and database tables, and their values, to regenerate new sequences from existing sequences. SWAT-U (SWAT-Uniqueness) augments test suites with test cases that produce outputs not observed in the original test suite’s output. Finally, the thesis presents an empirical study of the correlation between new output-based test selection criteria and fault detection and structural coverage. The results confirm that using the output does indeed enhance the effectiveness and efficiency of search-based test data generation, and enhances test suites’ effectiveness for test sequence regeneration and augmentation. The results also show that output uniqueness criteria are strongly correlated with both fault detection and structural coverage, and are complementary to structural coverage.
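The output-uniqueness idea can be illustrated with a small sketch (an invented page-generating function and a naive whitespace normalisation, not SWAT-U's actual criteria): a candidate test case is kept only if the output it produces has not already been seen from previously selected tests.

```python
import re

def normalise(html):
    """Crude normalisation so that trivially different outputs compare equal."""
    return re.sub(r"\s+", " ", html).strip().lower()

def select_by_output_uniqueness(test_cases, run):
    """Keep only the test cases whose (normalised) output is new."""
    seen, selected = set(), []
    for case in test_cases:
        output = normalise(run(case))
        if output not in seen:
            seen.add(output)
            selected.append(case)
    return selected

# Hypothetical server-side function under test.
def run(query):
    if not query:
        return "<p>Error: empty query</p>"
    return f"<p>Results for {query.lower()}</p>"

tests = ["", "  ", "shoes", "SHOES", "hats"]
print(select_by_output_uniqueness(tests, run))   # "SHOES" is dropped: same output as "shoes"
```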
238

Regional variation models of white matter microstructure

Morgan, G. L. January 2012 (has links)
Diffusion-weighted MRI (DW-MRI) is a powerful in vivo imaging technique that is particularly sensitive to the underlying microstructure of white matter tissue in the brain. Many models of the DW-MRI signal exist that allow us to relate the signals we measure to various aspects of the tissue structure, including measures of diffusivity, cellularity and even axon size. From histology, we know that many of these microstructure measures display distinct patterns of variation on length scales greater than the average voxel size. However, very few methods exist that use this spatial coherence to inform and guide parameter estimation. Instead, most techniques treat each voxel of data independently. This is particularly problematic when estimating parameters such as axon radius, which only weakly influence the signal, as the resulting estimates are noisy. Several methods have been proposed that spatially smooth parameter estimates after fitting the model in each voxel. However, if the parameter estimates are very noisy, the underlying trend is likely to be obscured. These methods are also unable to account for spatial coupling that may exist between the various parameters. This thesis introduces a novel framework, the Regional Variation Model (RVM), which exploits the underlying spatial coherence within white matter tracts to estimate trends of microstructure variation across large regions of interest. We fit curves describing parameter variation directly to the diffusion-weighted signals, which should capture spatial changes in a more natural way as well as reduce the effects of noise. This allows for more precise estimates of a range of microstructure indices, including axon radius. The resulting curves, which show how microstructure parameters vary spatially through white matter regions, can also be used to detect groupwise differences with potentially greater power than traditional methods.
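A simplified stand-in for this idea: the RVM fits spatial trends directly to the diffusion-weighted signals, whereas the sketch below (entirely synthetic data, arbitrary units) fits a quadratic trend to noisy per-voxel parameter estimates along a tract, which already shows how pooling across a region tames voxelwise noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tract: an axon-radius-like parameter varying smoothly over 80 voxel positions.
position = np.linspace(0.0, 1.0, 80)
true_radius = 1.0 + 0.8 * position - 0.6 * position ** 2                # smooth trend (a.u.)
voxelwise = true_radius + rng.normal(scale=0.4, size=position.size)     # noisy independent fits

# Regional model: one low-order polynomial fitted across the whole region of interest.
coeffs = np.polyfit(position, voxelwise, deg=2)
regional = np.polyval(coeffs, position)

print("voxelwise RMS error:", np.sqrt(np.mean((voxelwise - true_radius) ** 2)).round(3))
print("regional  RMS error:", np.sqrt(np.mean((regional - true_radius) ** 2)).round(3))
```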
239

Planning plausible human motions for navigation and collision avoidance

Chen, J.-R. January 2014 (has links)
This thesis investigates the plausibility of computer-generated human motions for navigation and collision avoidance. To navigate a human character through obstacles in a virtual environment, the problem is often tackled by finding the shortest possible path to the destination with the smoothest motions available. This is because such a solution is regarded as cost-effective and free-flowing, in that it implicitly minimises biomechanical effort and potentially precludes anomalies such as frequent and sudden changes of behaviour, and is hence more plausible to human eyes. Previous research addresses this problem in two stages: finding the shortest collision-free path (motion planning) and then fitting motions onto this path accordingly (motion synthesis). This conventional approach is not optimal, because the decoupling of these two stages introduces two problems. First, it forces the motion-planning stage to deliberately simplify the collision model to avoid obstacles. Secondly, it over-constrains the motion-synthesis stage to approximate motions to a sub-optimal trajectory. This often results in implausible animations that travel along erratic, long paths while making frequent and sudden behaviour changes. In this research, I argue that close-proximity interaction with obstacles is crucial to providing more plausible navigation and collision avoidance animation. To address this, I propose to combine motion planning and motion synthesis to search for shorter and smoother solutions. The intuition is that by incorporating precise collision detection and avoidance with motion capture database queries, we will be able to plan fine-scale interactions between obstacles and moving crowds. The results demonstrate that my approach can discover shorter paths with steadier behaviour transitions in scene navigation and crowd avoidance. In addition, this thesis attempts to propose a set of metrics that can be used to evaluate the plausibility of computer-generated navigation animations.
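As a toy illustration of coupling clip selection with collision checking (a greedy planner over invented displacement "clips" and a single circular obstacle, far simpler than the method developed in the thesis), each step picks the motion clip that ends nearest the goal without entering the obstacle.

```python
import numpy as np

# Invented motion "clips": per-step displacement vectors (walk, side-steps, slow walk).
CLIPS = {
    "walk":       np.array([1.0, 0.0]),
    "step_left":  np.array([0.7, 0.7]),
    "step_right": np.array([0.7, -0.7]),
    "slow_walk":  np.array([0.5, 0.0]),
}
OBSTACLES = [(np.array([4.0, -0.4]), 1.0)]     # (centre, radius) discs to avoid

def collides(p):
    return any(np.linalg.norm(p - c) < r for c, r in OBSTACLES)

def plan(start, goal, max_steps=30):
    """Greedily choose, at each step, the collision-free clip that ends nearest the goal."""
    pos, path = np.array(start, float), []
    for _ in range(max_steps):
        if np.linalg.norm(pos - goal) < 1.0:
            break
        candidates = [(np.linalg.norm(pos + d - goal), name)
                      for name, d in CLIPS.items() if not collides(pos + d)]
        if not candidates:
            break                              # no collision-free clip: give up (toy planner)
        _, best = min(candidates)
        path.append(best)
        pos = pos + CLIPS[best]
    return path, pos

path, end = plan(start=(0.0, 0.0), goal=np.array([8.0, 0.0]))
print(" -> ".join(path), "| final position:", end.round(2))
```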
240

Re-feedback : freedom with accountability for causing congestion in a connectionless internetwork

Briscoe, R. January 2009 (has links)
This dissertation concerns adding resource accountability to a simplex internetwork such as the Internet, with only necessary but sufficient constraints on freedom. That is, both freedom for applications to evolve new, innovative behaviours while still responding responsibly to congestion, and freedom for network providers to structure their pricing in any way, including flat pricing. The big idea on which the research is built is a novel feedback arrangement termed ‘re-feedback’. A general form is defined, as well as a specific proposal (re-ECN) to alter the Internet protocol so that self-contained datagrams carry a metric of expected downstream congestion. Congestion is chosen because of its central economic role as the marginal cost of network usage. The aim is to ensure that Internet resource allocation can be controlled either by local policies or by market selection (or indeed by a local lack of any control). The current Internet architecture is designed to reveal path congestion only to end-points, not to networks. The collective actions of self-interested consumers and providers should drive Internet resource allocations towards the maximisation of total social welfare, but without visibility of a cost metric, network operators are violating the architecture to improve their customers’ experience. The resulting fight against the architecture is destroying the Internet’s simplicity and ability to evolve. Although accountability with freedom is the goal, the focus is the congestion metric, and whether an incentive system is possible that assures its integrity as it is passed between parties around the system, despite proposed attacks motivated by self-interest and malice. This dissertation defines the protocol and canonical examples of accountability mechanisms. Designs are all derived from carefully motivated principles. The resulting system is evaluated by analysis and simulation against the constraints and principles originally set. The mechanisms are proven to be agnostic to specific transport behaviours, but they could not be made flow-ID-oblivious.
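A toy illustration of the bookkeeping behind the downstream-congestion metric (not the re-ECN wire encoding; the path and marking probabilities below are made up): the sender declares the whole-path congestion it expects, each hop adds its own marking, and at any point the declared value minus the marks accumulated so far approximates the congestion still to come downstream. A negative balance at the egress exposes an under-declaring sender.

```python
# Per-hop congestion-marking probabilities on a hypothetical 4-hop path.
hop_marking = [0.01, 0.03, 0.00, 0.02]

def downstream_view(declared_expectation, hop_marking):
    """At each hop, declared whole-path congestion minus marks seen so far
    approximates the congestion remaining downstream (the re-feedback idea)."""
    seen = 0.0
    views = []
    for p in hop_marking:
        views.append(declared_expectation - seen)   # what this hop can read from the packet
        seen += p                                    # this hop's own contribution
    egress_balance = declared_expectation - seen     # negative => sender under-declared
    return views, egress_balance

honest, balance = downstream_view(declared_expectation=0.06, hop_marking=hop_marking)
print("honest sender :", [round(v, 3) for v in honest], "egress balance:", round(balance, 3))

cheat, balance = downstream_view(declared_expectation=0.01, hop_marking=hop_marking)
print("under-declared:", [round(v, 3) for v in cheat], "egress balance:", round(balance, 3))
```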
