411. Activity recognition in naturalistic environments using body-worn sensors. Hammerla, Nils Yannick (January 2015)
The research presented in this thesis investigates how deep learning and feature learning can address challenges that arise for activity recognition systems in naturalistic, ecologically valid surroundings such as the private home. One of the main aims of ubiquitous computing is the development of automated recognition systems for human activities and behaviour that are sufficiently robust to be deployed in realistic, in-the-wild environments. In most cases, the targeted application scenarios are people’s daily lives, where systems have to abide by practical usability and privacy constraints. We discuss how these constraints impact data collection and analysis and demonstrate how common approaches to the analysis of movement data effectively limit the practical use of activity recognition systems in every-day surroundings. In light of these issues we develop a novel approach to the representation and modelling of movement data based on a data-driven methodology that has applications in activity recognition, behaviour imaging, and skill assessment in ubiquitous computing. A number of case studies illustrate the suitability of the proposed methods and outline how study design can be adapted to maximise the benefit of these techniques, which show promising performance for clinical applications in particular.
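As a point of reference for the "common approaches" to movement data criticised above, the sketch below shows the conventional sliding-window feature pipeline for body-worn accelerometer data. It is not code from the thesis, whose contribution is precisely to learn such representations from data; the window length, overlap and feature set are assumed values chosen only for illustration.

    # Illustrative sketch of the conventional sliding-window pipeline for
    # tri-axial accelerometer data (assumed parameters, not the thesis's models).
    import numpy as np

    def sliding_windows(signal, window_len=128, overlap=0.5):
        """Split a (num_samples, 3) accelerometer recording into overlapping windows."""
        step = int(window_len * (1 - overlap))
        return [signal[s:s + window_len]
                for s in range(0, len(signal) - window_len + 1, step)]

    def handcrafted_features(window):
        """Per-axis mean and standard deviation plus signal magnitude area."""
        means = window.mean(axis=0)
        stds = window.std(axis=0)
        sma = np.abs(window).sum() / len(window)
        return np.concatenate([means, stds, [sma]])

    # Example: 10 s of synthetic 3-axis data sampled at 50 Hz.
    recording = np.random.randn(500, 3)
    features = np.array([handcrafted_features(w) for w in sliding_windows(recording)])
    print(features.shape)   # one feature vector per window, fed to a classifier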
412. Polynomial approximations for infinite-dimensional optimization problems. Bampou, Dimitra (January 2013)
Many real-life decision problems in management science and engineering involve decisions that are functions of time and/or uncertainty. The resulting optimization models are therefore naturally formulated on infinite-dimensional function spaces. However, such infinite-dimensional optimization problems are notoriously difficult, and to solve them one usually has to resort to approximation methods. The objective of this thesis is to devise polynomial approximations for solving continuous linear programs and multi-stage stochastic programs, both of which constitute important classes of infinite-dimensional optimization problems with manifold practical applications. Approximating the functional decision variables by polynomials allows us to apply sum-of-squares techniques from algebraic geometry to reformulate the resulting problems as tractable semidefinite programs, which can be solved efficiently with interior point algorithms. Continuous linear programs represent deterministic optimization problems whose decision variables are functions of time subject to pointwise and dynamic linear constraints. They have attracted considerable interest due to their potential for modelling manufacturing, scheduling and routing problems. While efficient simplex-type algorithms have been developed for separated continuous linear programs, crude time discretization remains the method of choice for solving general (non-separated) problem instances. In this thesis we propose a more generic approximation scheme for non-separated continuous linear programs, which are believed to be NP-hard. We approximate the functional decision variables (policies) by polynomial and piecewise polynomial decision rules. To estimate the approximation error, we also compute a lower bound by solving a dual continuous linear program in (piecewise) polynomial decision rules. Multi-stage stochastic programming provides a versatile framework for optimal decision making under uncertainty, but it gives rise to hard functional optimization problems since the adaptive recourse decisions must be modelled as functions of some or all uncertain parameters. We propose to approximate these recourse decisions by polynomial decision rules and show that the best polynomial decision rule of a fixed degree can be computed efficiently. Again, we show that the suboptimality of the best polynomial decision rule can be estimated efficiently by solving a dual version of the stochastic program in polynomial decision rules. Recent progress in the theory of dynamic risk measures has found a strong echo in stochastic programming, where the time-consistency of dynamic decision making under uncertainty is currently under scrutiny. We extend the concepts of coherence and time consistency to stochastic programming models subject to distributional ambiguity, which motivates us to introduce robust dynamic risk measures. We discuss conditions under which these robust risk measures inherit coherence and time-consistency from their nominal counterparts. We also propose an approximation scheme based on polynomial decision rules for solving linear multi-stage stochastic programs involving robust dynamic risk measures.
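As a generic illustration of the polynomial restriction described above (the notation here is assumed, not taken from the thesis), a functional decision x(t) on a planning horizon [0, T] is replaced by a polynomial of fixed degree d,

    x(t) \;\approx\; x_d(t) \;=\; \sum_{k=0}^{d} c_k\, t^k,

and a pointwise constraint such as x_d(t) >= 0 for all t in [0, T] is certified by a sum-of-squares decomposition

    x_d(t) \;=\; \sigma_0(t) + t\,(T - t)\,\sigma_1(t), \qquad \sigma_0,\ \sigma_1 \ \text{sums of squares},

which is exact for even d and translates into linear matrix inequalities in the coefficients, hence a semidefinite program over the finitely many unknowns c_0, ..., c_d.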
413. The decision rule approach to optimisation under uncertainty: theory and applications. Georghiou, Angelos (January 2013)
Optimisation under uncertainty has a long and distinguished history in operations research. Decision-makers realised early on that the failure to account for uncertainty in optimisation problems can lead to substantial unexpected losses or even infeasible solutions. Therefore, approximating the uncertain parameters by their average or nominal values may result in decisions that perform poorly in scenarios that deviate from the average. For the last sixty years, scenario tree-based stochastic programming has been the method of choice for solving optimisation problems affected by parameter uncertainty. This method approximates the random problem parameters by finite scenarios that can be arranged as a tree. Unfortunately, this approximation suffers from a curse of dimensionality: the tree needs to branch whenever new uncertainties are revealed, and thus its size grows exponentially with the number of decision stages. It has recently been argued that stochastic programs can quite generally be made tractable by restricting the space of recourse decisions to those that exhibit a linear data dependence. An attractive feature of this linear decision rule approximation is that it typically leads to polynomial-time solution schemes. Unfortunately, the simple structure of linear decision rules sacrifices optimality in return for scalability. The worst-case performance of linear decision rules is in fact rather disappointing: when applied to two-stage robust optimisation problems with m linear constraints, the underlying worst-case approximation ratio has been shown to be of the order O(√m). Therefore, in this thesis we endeavour to construct efficiently computable instance-wise bounds on the loss of optimality incurred by the linear decision rule approximation. The contributions of this thesis are as follows. (i) We develop an efficient algorithm for assessing the loss of optimality incurred by the linear decision rule approximation. The key idea is to apply the linear decision rule restriction not only to the primal but also to a dual version of the stochastic program. Since the two problems share a similar structure, both can be solved in polynomial time. The gap between their optimal values estimates the loss of optimality incurred by the linear decision rule approximation. (ii) We design an improved approximation based on non-linear decision rules, which can be useful if the optimality gap of the linear decision rules is deemed unacceptably high. The idea takes advantage of the fact that one can always map a linearly parameterised non-linear function into a higher-dimensional space, where it can be represented as a linear function. This allows us to utilise the machinery developed for linear decision rules to produce superior-quality approximations that can be obtained in polynomial time. (iii) We assess the performance of the approximations developed in two operations management problems: a production planning problem and a supply chain design problem. We show that near-optimal solutions can be found in problem instances with many stages and random parameters. We additionally compare the quality of the decision rule approximation with classical approximation techniques. (iv) We develop a systematic approach to reformulate multi-stage stochastic programs with a large (possibly infinite) number of stages as a static robust optimisation problem that can be solved with a constraint sampling technique. The method is motivated via an investment planning problem in the electricity industry.
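The following toy sketch (made-up problem data, not the thesis's code) shows the linear decision rule restriction for a two-stage robust linear program with box uncertainty. The recourse decision is restricted to y(xi) = y0 + Y xi, and each robust constraint is reformulated exactly via the dual norm of the box, yielding an ordinary convex program.

    # Minimal linear decision rule sketch for a toy two-stage robust LP with
    # xi in [-1, 1]^k. Problem data are random placeholders for illustration.
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    n1, n2, k, m = 3, 2, 4, 5            # first-stage, recourse, uncertainty, rows
    c, q = rng.random(n1), rng.random(n2)
    A, B = rng.random((m, n1)), rng.random((m, n2))
    d, D = rng.random(m), 0.1 * rng.random((m, k))

    x = cp.Variable(n1, nonneg=True)      # here-and-now decision
    y0 = cp.Variable(n2)                  # constant part of the decision rule
    Y = cp.Variable((n2, k))              # linear part of the decision rule

    constraints = []
    for i in range(m):
        # A x + B y(xi) >= d + D xi for all xi in the box  <=>
        # (A x + B y0 - d)_i >= ||(D - B Y)_i||_1
        constraints.append(A[i] @ x + B[i] @ y0 - d[i]
                           >= cp.norm((D - B @ Y)[i, :], 1))
    for j in range(n2):
        # y_j(xi) >= 0 for all xi in the box  <=>  y0_j >= ||Y_j||_1
        constraints.append(y0[j] >= cp.norm(Y[j, :], 1))

    # With E[xi] = 0, the expected recourse cost reduces to q^T y0.
    prob = cp.Problem(cp.Minimize(c @ x + q @ y0), constraints)
    prob.solve()
    print("LDR objective (upper bound on the true optimum):", prob.value)

The optimal value is an upper bound on the true optimum; applying the same restriction to a dual version of the problem, as in contribution (i), supplies the matching lower bound and hence an instance-wise estimate of the optimality gap.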
414. Lock inference for Java. Gudka, Khilan (January 2013)
Atomicity is an important property for concurrent software, as it provides a stronger guarantee against errors caused by unanticipated thread interactions than race-freedom does. However, concurrency control in general is tricky to get right because current techniques are too low-level and error-prone. With the introduction of multicore processors, the problems are compounded. Consequently, a new software abstraction called atomic sections is gaining popularity to take care of concurrency control and the enforcement of atomicity properties. One possible implementation of their semantics is to acquire a global lock upon entry to each atomic section, ensuring that they execute in mutual exclusion. However, this cripples concurrency, as non-interfering atomic sections cannot run in parallel. Transactional memory is another automated technique for providing atomicity, but relies on the ability to roll back conflicting atomic sections and thus places restrictions on the use of irreversible operations, such as I/O and system calls, or serialises all sections that use such features. Therefore, from a language designer's point of view, the challenge is to implement atomic sections without compromising performance or expressivity. This thesis explores the technique of lock inference, which infers a set of locks for each atomic section, while attempting to balance the requirements of maximal concurrency, minimal locking overhead and freedom from deadlock. We focus on lock-inference techniques for tackling large Java programs that make use of mature libraries. This improves upon existing work, which either (i) ignores libraries, (ii) requires library implementors to annotate which locks to take, or (iii) only considers accesses performed up to one level deep in library call chains. As a result, each of these prior approaches may result in atomicity violations. This is a problem because even simple uses of I/O in Java programs can involve large amounts of library code. Our approach is the first to analyse library methods in full and is thus able to soundly handle atomic sections involving complicated real-world side effects, while still permitting atomic sections to run concurrently in cases where their lock sets are disjoint. To validate our claims, we have implemented our techniques in Lockguard, a fully automatic tool that translates Java bytecode containing atomic sections to an equivalent program that uses locks instead. We show that our techniques scale well and that, despite protecting all library accesses, we obtain performance comparable to the original locking policy of our benchmarks.
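As a rough illustration only (this is not Lockguard, and the static analysis that computes the lock set is elided), the sketch below shows the run-time discipline that inferred lock sets typically follow: the locks guarding the objects an atomic section may touch are acquired in one canonical global order, so sections with disjoint lock sets proceed in parallel and cyclic waiting cannot arise.

    # Hypothetical emulation of executing an atomic section with a statically
    # inferred lock set (illustrative only; the real analysis works on bytecode).
    import threading
    from contextlib import contextmanager

    lock_table = {}                      # one lock per shared object
    table_guard = threading.Lock()

    def lock_for(obj):
        with table_guard:
            return lock_table.setdefault(id(obj), threading.Lock())

    @contextmanager
    def atomic(inferred_objects):
        """Runs a block as an atomic section over the inferred objects."""
        # Deduplicate, then sort the locks by a stable global key so that every
        # section acquires them in the same total order -> no deadlock.
        locks = sorted({id(o): lock_for(o) for o in inferred_objects}.values(),
                       key=id)
        for l in locks:
            l.acquire()
        try:
            yield
        finally:
            for l in reversed(locks):
                l.release()

    # Usage: sections whose inferred lock sets are disjoint can run concurrently.
    account_a, account_b = {"balance": 10}, {"balance": 20}
    with atomic([account_a, account_b]):
        account_a["balance"] -= 5
        account_b["balance"] += 5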
415. Reconfigurable architectures for cryptographic systems. Le Masle, Adrien (January 2013)
Field Programmable Gate Arrays (FPGAs) are suitable platforms for implementing cryptographic algorithms in hardware due to their flexibility, good performance and low power consumption. Computer security is becoming increasingly important and security requirements such as key sizes are quickly evolving. This creates the need for customisable hardware designs for cryptographic operations capable of covering a large design space. In this thesis we explore the four design dimensions relevant to cryptography - speed, area, power consumption and security of the crypto-system - by developing parametric designs for public-key generation and encryption as well as side-channel attack countermeasures. There are four contributions. First, we present new architectures for Montgomery multiplication and exponentiation based on variable pipelining and variable serial replication. Our implementations of these architectures are compared to the best implementations in the literature and the design space is explored in terms of speed and area trade-offs. Second, we generalise our Montgomery multiplier design ideas by developing a parametric model to allow rapid optimisation of a general class of algorithms containing loops with dependencies carried from one iteration to the next. By predicting the throughput and the area of the design, our model facilitates and speeds up design space exploration. Third, we develop new architectures for primality testing including the first hardware architecture for the NIST approved Lucas primality test. We explore the area, speed and power consumption trade-offs by comparing our Lucas architectures on CPU, FPGA and ASIC. Finally, we tackle the security issue by presenting two novel power attack countermeasures based on on-chip power monitoring. Our constant power framework uses a closed-loop control system to keep the power consumption of any FPGA implementation constant. Our attack detection framework uses a network of ring-oscillators to detect the insertion of a shunt resistor-based power measurement circuit on a device's power rail. This countermeasure is lightweight and has a relatively low power overhead compared to existing masking and hiding countermeasures.
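For reference, the following is a plain software version of the Montgomery multiplication that the pipelined and replicated hardware designs above implement; it is the standard REDC algorithm, not the thesis's architecture, and the small example modulus is chosen only for illustration.

    # Standard Montgomery reduction/multiplication in plain Python (N must be odd).
    def montgomery_setup(N, k):
        R = 1 << k                                  # R = 2^k > N, gcd(R, N) = 1
        N_inv = pow(-N, -1, R)                      # N' = -N^{-1} mod R
        return R, N_inv

    def redc(T, N, R, N_inv, k):
        """Montgomery reduction: returns T * R^{-1} mod N for 0 <= T < R*N."""
        m = (T * N_inv) & (R - 1)                   # m = T * N' mod R
        t = (T + m * N) >> k                        # exact division by R
        return t - N if t >= N else t

    def mont_mul(a_bar, b_bar, N, R, N_inv, k):
        """Product of two residues already in Montgomery form (x_bar = x*R mod N)."""
        return redc(a_bar * b_bar, N, R, N_inv, k)

    # Example: 7 * 11 mod 97 computed via the Montgomery domain.
    N, k = 97, 8
    R, N_inv = montgomery_setup(N, k)
    a_bar, b_bar = (7 * R) % N, (11 * R) % N
    result = redc(mont_mul(a_bar, b_bar, N, R, N_inv, k), N, R, N_inv, k)
    assert result == (7 * 11) % N

The hardware designs in the thesis explore how the inner multiply-accumulate steps of exactly this computation can be pipelined and replicated to trade area against speed; in software the structure is simply sequential.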
416. Medium-term planning in deregulated energy markets with decision rules. Martins da Silva Rocha, Paula Cristina (January 2013)
The ongoing deregulation of energy markets has greatly impacted the power industry. In this new environment, firms shift their focus from cost-efficient energy supply to more profit-oriented goals, trading energy at the price set by the market. Consequently, traditional management approaches based on cost minimisation disregarding market uncertainties and financial risk are no longer applicable. In this thesis, we investigate medium-term planning problems in deregulated energy markets. These problems typically involve taking decisions over many periods and are affected by significant uncertainty, most notably energy price uncertainty. Multistage stochastic programming provides a flexible framework for modelling this type of dynamic decision-making process: it allows for future decisions to be represented as decision rules, that is, as measurable functions of the observable data. Multistage stochastic programs are generally intractable. Instead of using classical scenario tree-based techniques, we reduce their computational complexity by restricting the set of decision rules to those that exhibit an affine or quadratic data dependence. Decision rule approaches typically lead to polynomial-time solution schemes and are therefore ideal to tackle industry-size energy problems. However, the favourable scalability properties of the decision rule approach come at the cost of a loss of optimality. Fortunately, the degree of suboptimality can be measured efficiently by solving the dual of the stochastic program under consideration in linear or quadratic decision rules. The approximation error is then estimated by the gap between the optimal values of the primal and the dual decision rule problems. We develop this dual decision rule technique for general quadratic stochastic programs. Using these techniques, we solve a mean-variance portfolio optimisation problem faced by an electricity retailer. We observe that incorporating adaptivity into the model is beneficial in a risk minimisation framework, especially in the presence of high spot price variability or large market prices of risk. For a problem instance involving six electricity derivatives and a monthly planning horizon with daily trading periods, the solution time amounts to a few seconds. In contrast, scenario tree methods result in excessive run times since they require a prohibitively large number of scenarios to preclude arbitrage. Moreover, we address the medium-term scheduling of a cascaded hydropower system. To reduce computational complexity, we partition the planning horizon into hydrological macroperiods, each of which accommodates many trading microperiods, and we account for intra-stage variability through the use of price duration curves. Using linear decision rules, a solution to a real-sized hydro storage problem with a yearly planning horizon comprising 52 weekly macroperiods can be located in a few minutes, with an approximation error of less than 10%.
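In generic decision rule notation (written here for scalar decisions and scalar per-period prices, and not necessarily the exact parameterisation used in the thesis), the affine and quadratic restrictions mentioned above take the form

    x_t(\xi^t) \;=\; x_{t,0} + \sum_{s \le t} X_{t,s}\,\xi_s \qquad \text{(linear)},

    x_t(\xi^t) \;=\; x_{t,0} + \sum_{s \le t} X_{t,s}\,\xi_s + \sum_{s \le t}\sum_{s' \le t} Q_{t,s,s'}\,\xi_s\,\xi_{s'} \qquad \text{(quadratic)},

so only the coefficients x_{t,0}, X_{t,s} and Q_{t,s,s'} remain as finite-dimensional decision variables, and non-anticipativity is built in because the stage-t decision depends only on the prices \xi_s observed up to that stage.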
417. Decision rule approximations for dynamic optimization under uncertainty. Vayanos, Phebe Theofano (January 2013)
Dynamic decision problems affected by uncertain data are notoriously hard to solve due to the presence of adaptive decision variables which must be modeled as functions or decision rules of some (or all) of the uncertain parameters. All exact solution techniques suffer from the curse of dimensionality while most solution schemes assume that the decision-maker cannot influence the sequence in which the uncertain parameters are revealed. The main objective of this thesis is to devise tractable approximation schemes for dynamic decision-making under uncertainty. For this purpose, we develop new decision rule approximations whereby the adaptive decisions are approximated by finite linear combinations of prescribed basis functions. In the first part of this thesis, we develop a tractable unifying framework for solving convex multi-stage robust optimization problems with general nonlinear dependence on the uncertain parameters. This is achieved by combining decision rule and constraint sampling approximations. The synthesis of these two methodologies provides us with a versatile data-driven framework, which circumvents the need for estimating the distribution of the uncertain parameters and offers almost complete freedom in the choice of basis functions. We obtain a-priori probabilistic guarantees on the feasibility properties of the optimal decision rule and demonstrate asymptotic consistency of the approximation. We then investigate the problem of hedging and pricing path-dependent electricity derivatives such as swing options, which play a crucial risk management role in today’s deregulated energy markets. Most of the literature on the topic assumes that a swing option can be assigned a unique fair price. This assumption nevertheless fails to hold in real-world energy markets, where the option admits a whole interval of prices consistent with those of traded instruments. We formulate two large-scale robust optimization problems whose optimal values yield the endpoints of this interval. We analyze and exploit the structure of the optimal decision rule to formulate approximate problems that can be solved efficiently with the decision rule approach discussed in the first part of the thesis. Most of the literature on stochastic and robust optimization assumes that the sequence in which the uncertain parameters unfold is independent of the decision-maker’s actions. Nevertheless, in numerous real-world decision problems, the time of information discovery can be influenced by the decision-maker. In the last part of this thesis, we propose a decision rule-based approximation scheme for multi-stage problems with decision-dependent information discovery. We assess our approach on a problem of infrastructure and production planning in offshore oil fields.
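A minimal sketch of the combination of decision rules and constraint sampling described above, with made-up problem data and an assumed basis (constant, linear and pairwise-product terms). The recourse decision is parameterised as y(xi) = Theta phi(xi), and the constraints are imposed only on sampled realisations of xi, which turns the infinite-dimensional problem into a finite convex program; this is not the thesis's code.

    # Decision rule + constraint sampling on a toy two-stage problem.
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(1)
    k, n1, n2, m, N = 3, 2, 2, 4, 200      # dims and number of samples

    def phi(xi):
        """Prescribed basis functions: constant, linear and pairwise products."""
        return np.concatenate(([1.0], xi, np.outer(xi, xi)[np.triu_indices(len(xi))]))

    p = len(phi(np.zeros(k)))
    c, q = rng.random(n1), rng.random(n2)
    A, B = rng.random((m, n1)), rng.random((m, n2))
    d, D = rng.random(m), 0.1 * rng.random((m, k))

    x = cp.Variable(n1, nonneg=True)
    Theta = cp.Variable((n2, p))           # decision rule coefficients

    samples = rng.uniform(-1.0, 1.0, size=(N, k))
    constraints, expected_recourse = [], 0
    for xi in samples:
        y_xi = Theta @ phi(xi)
        constraints += [A @ x + B @ y_xi >= d + D @ xi, y_xi >= 0]
        expected_recourse += q @ y_xi / N  # sample-average recourse cost

    prob = cp.Problem(cp.Minimize(c @ x + expected_recourse), constraints)
    prob.solve()
    print("sampled decision-rule objective:", prob.value)

Because the constraints are only enforced at the samples, the resulting policy carries the probabilistic feasibility guarantees mentioned in the abstract rather than hard robust feasibility.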
418. Data management strategies for relative quality of service in virtualised storage systems. Franciosi, Felipe Mainieri (January 2013)
The amount of data managed by organisations continues to grow relentlessly. Driven by the high costs of maintaining multiple local storage systems, there is a well established trend towards storage consolidation using multi-tier Virtualised Storage Systems (VSSs). At the same time, storage infrastructures are increasingly subject to stringent Quality of Service (QoS) demands. Within a VSS, it is challenging to match desired QoS with delivered QoS, considering the latter can vary dramatically both across and within tiers. Manual efforts to achieve this match require extensive and ongoing human intervention. Automated efforts are based on workload analysis, which ignores the business importance of infrequently accessed data. This thesis presents our design, implementation and evaluation of data maintenance strategies in an enhanced version of the popular Linux Extended 3 Filesystem which features support for the elegant specification of QoS metadata while maintaining compatibility with stock kernels. Users and applications specify QoS requirements using a chmod-like interface. System administrators are provided with a character device kernel interface that allows for profiling of the QoS delivered by the underlying storage. We propose a novel score-based metric, together with associated visualisation resources, to evaluate the degree of QoS matching achieved by any given data layout. We also design and implement new inode and datablock allocation and migration strategies which exploit this metric in seeking to match the QoS attributes set by users and/or applications on files and directories with the QoS actually delivered by each of the filesystem’s block groups. To create realistic test filesystems we have included QoS metadata support in the Impressions benchmarking framework. The effectiveness of the resulting data layout in terms of QoS matching is evaluated using a special kernel module that is capable of inspecting detailed filesystem data on-the-fly. We show that our implementations of the proposed inode and datablock allocation strategies are capable of dramatically improving data placement with respect to QoS requirements when compared to the default allocators.
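Purely for flavour, and emphatically not the score-based metric defined in the thesis, the following hypothetical function illustrates the general idea of scoring how well per-file QoS requirements match the QoS delivered by the block groups the files are placed in; all attribute names and weights are invented.

    # Hypothetical QoS-matching score (illustrative only; the thesis's metric differs).
    def placement_score(files, block_groups):
        """Average penalty over files; 0 means every requirement is met or exceeded."""
        total = 0.0
        for f in files:
            delivered = block_groups[f["block_group"]]
            # Penalise shortfalls in delivered attributes, weighted by how
            # important the file marks each attribute (chmod-style levels 0-3).
            for attr in ("read_perf", "write_perf", "reliability"):
                shortfall = max(0.0, f["required"][attr] - delivered[attr])
                total += f["weight"][attr] * shortfall
        return total / len(files)

    block_groups = {0: {"read_perf": 3, "write_perf": 1, "reliability": 2},
                    1: {"read_perf": 1, "write_perf": 3, "reliability": 3}}
    files = [{"block_group": 0,
              "weight":   {"read_perf": 1, "write_perf": 0, "reliability": 1},
              "required": {"read_perf": 2, "write_perf": 0, "reliability": 3}}]
    print(placement_score(files, block_groups))   # 1.0: reliability falls short by 1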
419. Internet auction processes and mechanisms. Leung, Timothy (January 2013)
The nature of E-commerce over the Internet has seen significant changes over the years. Instead of companies selling items to consumers, consumers are increasingly selling items to fellow consumers on a global scale, and Internet auctions have been the mechanism of choice in achieving this. In fact, auctioning allows a departure from the fixed-price model, which some regard as too rigid to respond swiftly to fluctuations and changes in supply and demand, and the Internet plays a pivotal role in catalysing the widespread acceptance of such a variable pricing model on a global scale. Internet auctions exhibit characteristics which are often not shared by conventional auctions, e.g. auctions of fixed duration which encourage sniping (bidders submit their bids moments before the close of an auction, thereby preventing other bidders from submitting counter-bids), the acceptance of multiple bids in a single auction, and a maximum price threshold at which the auction terminates. Internet auctions therefore have significantly greater scope than conventional auction procedures, incorporating algorithms of increased complexity. In this thesis, the characteristics and properties of different Internet auction algorithms are modelled mathematically based on a series of operational assumptions which characterise the arrival rate of bids, as well as the distribution from which the private values of buyers are sampled. From this, closed-form expressions for several key performance metrics are determined, including the average selling price in a given auction, as well as the average auction duration. In cases where a seller sells a commodity and auctions with the same items for sale are repeated multiple times, the income per unit time may also be quantified. Simulation experiments have been performed and analysed in the context of the mathematical models, and reasonable agreement is observed.
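A small Monte Carlo sketch in the spirit of the simulation experiments mentioned above, under simple assumed dynamics (Poisson bid arrivals, uniform private values, bidders who bid their value whenever it beats the standing price, and a maximum threshold that closes the auction early); the arrival rate, duration and threshold are placeholder values, and the thesis's own models are not reproduced here.

    # Monte Carlo estimate of average selling price and auction duration.
    import numpy as np

    def simulate_auction(rate=5.0, duration=1.0, buy_price=0.9, rng=None):
        rng = rng or np.random.default_rng()
        t, price = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / rate)          # next bid arrival (Poisson process)
            if t > duration:
                return price, duration                # auction runs to its full length
            value = rng.uniform()                      # bidder's private value ~ U(0, 1)
            if value > price:
                price = value                          # bid accepted
                if price >= buy_price:
                    return price, t                    # threshold reached: early close

    rng = np.random.default_rng(42)
    results = np.array([simulate_auction(rng=rng) for _ in range(20_000)])
    prices, durations = results[:, 0], results[:, 1]
    print("average selling price:", prices.mean())
    print("average auction duration:", durations.mean())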
420. Semantic types for class-based objects. Rowe, Reuben (January 2013)
We investigate semantics-based type assignment for class-based object-oriented programming. Our motivation is to develop a theoretical basis for practical, expressive, type-based analysis of the functional behaviour of object-oriented programs. We focus our research using Featherweight Java, studying two notions of type assignment: one using intersection types, the other a ‘logical’ restriction of recursive types. We extend to the object-oriented setting some existing results for intersection type systems. In doing so, we contribute to the study of denotational semantics for object-oriented languages. We define a model for Featherweight Java based on approximation, which we relate to our intersection type system via an Approximation Result, proved using a notion of reduction on typing derivations that we show to be strongly normalising. We consider restrictions of our system for which type assignment is decidable, observing that the implicit recursion present in the class mechanism is a limiting factor in making practical use of the expressive power of intersection types. To overcome this, we consider type assignment based on recursive types. Such types traditionally suffer from the inability to characterise convergence, a key element of our approach. To obtain a semantic system of recursive types for Featherweight Java we study Nakano's systems, whose key feature is an approximation modality which leads to a ‘logical’ system expressing both functional behaviour and convergence. For Nakano's system, we consider the open problem of type inference. We introduce insertion variables (similar to the expansion variables of Kfoury and Wells), which allow us to infer when the approximation modality is required. We define a type inference procedure, and conjecture its soundness based on a technique of Cardone and Coppo. Finally, we consider how Nakano's approach may be applied to Featherweight Java and discuss how intersection and logical recursive types may be brought together into a single system.
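For background, these are the standard introduction and elimination rules of intersection type assignment (the generic rules such systems share, not the thesis's Featherweight Java-specific rules):

    \frac{\Gamma \vdash M : \sigma \qquad \Gamma \vdash M : \tau}{\Gamma \vdash M : \sigma \cap \tau}\;(\cap I)
    \qquad
    \frac{\Gamma \vdash M : \sigma \cap \tau}{\Gamma \vdash M : \sigma}\;(\cap E_{1})
    \qquad
    \frac{\Gamma \vdash M : \sigma \cap \tau}{\Gamma \vdash M : \tau}\;(\cap E_{2})

A single term can thus be assigned several types at once, which is what gives intersection type systems their expressive power for characterising functional behaviour and, as the abstract notes, what makes full type inference so challenging.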