1. Benchmarking smart homes using a humanoid robot approach

Veerapuneni, Satish Kumar. January 2004.
Thesis (M.S.)--University of Florida, 2004. Title from title page of source document; contains 63 pages. Includes vita and bibliographical references.
2. A Memory Allocation Framework for Optimizing Power Consumption and Controlling Fragmentation

Panwar, Ashish. January 2015.
Large physical memory modules are necessary to meet the performance demands of today's applications, but can be a major bottleneck in terms of power consumption during idle periods or when systems run workloads that do not stress all the plugged memory resources. The contribution of physical memory to overall system power consumption becomes even more significant when CPU cores run in low-power modes during idle periods with hardware support like Dynamic Voltage Frequency Scaling. Our experiments show that even 10% of memory allocations can reference all the banks of physical memory on a long-running system, primarily due to the randomness in page allocation. We also show that memory hot-remove or memory migration for large blocks is often restricted in a long-running system due to the allocation policies of the current Linux VM, which mix movable and unmovable pages. Hence it is crucial to improve page migration for large contiguous blocks for a practical realization of the power-management support provided by the hardware. Operating systems can play a decisive role in effectively utilizing the power-management support of modern DIMMs, such as PASR (Partial Array Self Refresh), in these situations, but have not been using it so far. We propose three different approaches for optimizing memory power consumption by inducing bank-boundary awareness in the standard buddy allocator of the Linux kernel, while also distinguishing user and kernel memory allocations to improve the movability of memory sections (and hence memory hotplug) by page migration techniques. Through a set of minimal changes in the standard buddy system of the Linux VM, we have been able to reduce the number of active memory banks significantly (up to 80%) as well as to improve the performance of the memory-hotplug framework (up to 85%).
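To make the bank-awareness idea concrete, here is a minimal simulation sketch, not the thesis code: a toy allocator that prefers pages from banks that are already active, so idle banks can stay in a low-power state such as PASR. The bank geometry, the allocator interface, and the allocation count are illustrative assumptions.

```python
# A toy bank-aware page allocator (illustrative sketch, not kernel code).
# It prefers pages from banks that are already powered on, so that idle
# banks can remain in a low-power state such as PASR.

PAGES_PER_BANK = 1024  # assumed bank geometry

class BankAwareAllocator:
    def __init__(self, num_banks):
        # Free pages tracked per bank, so bank boundaries are visible
        # to the allocator (the key idea behind the buddy-system changes).
        self.free = {b: set(range(PAGES_PER_BANK)) for b in range(num_banks)}
        self.active_banks = set()

    def alloc_page(self):
        # First try banks that are already active and still have free pages.
        for bank in sorted(self.active_banks):
            if self.free[bank]:
                return self._take(bank)
        # Otherwise power on the lowest-numbered bank that has free pages.
        for bank in sorted(self.free):
            if self.free[bank]:
                self.active_banks.add(bank)
                return self._take(bank)
        raise MemoryError("out of physical pages")

    def _take(self, bank):
        page = self.free[bank].pop()
        return (bank, page)

# With random placement, 2000 allocations would likely touch all 8 banks;
# bank-aware placement keeps 6 of them idle.
alloc = BankAwareAllocator(num_banks=8)
for _ in range(2000):
    alloc.alloc_page()
print("active banks:", sorted(alloc.active_banks))  # -> [0, 1]
```

Filling already-active banks first is the simplest placement policy that bounds the active-bank count; a real kernel allocator would additionally have to respect zones, migratetypes, and fragmentation, which is where the thesis's buddy-allocator changes come in.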
3. Variants of Hegselmann-Krause Model

Shiragur, Kirankumar Shivanand. January 2016.
The Hegselmann-Krause system (HK system for short) is one of the most popular models for the dynamics of opinion formation in multi-agent systems. Agents are modeled as points in opinion space, and at every time step, each agent moves to the mass center of all the agents within unit distance. The rate of convergence of HK systems has been the subject of several recent works, and the current best bounds are O(n³) in one dimension and O(n⁴) in higher dimensions, where n is the number of agents. In this work, we investigate the convergence behavior of a few natural variants of the HK system and their effect on the dynamics. In the first variant, we only allow pairs of agents who are friends in an underlying social network to communicate with each other, and we can construct configurations. In the second variant, only one of the agents updates its position at each time step, and the selection of such an agent may be at random or based on some predefined order; as before, these updates also take social information into consideration. In the third variant, agents may not move exactly to the mass center but somewhere close to it. In the fourth variant, we allow all agents to interact with one another, but instead of assigning equal weights to all neighbors as in the HK model, we assign Gaussian weights which are inversely proportional to the distance between agents. In the fifth variant, we consider the Synchronized Bounded Influence (SBI) model, where the agents have influence bounds instead of confidence bounds, which changes the way agents interact with each other. In our final variant, we consider the dynamics of HK systems with strategic agents, where we have an additional set of agents, called strategic agents, whose opinions are chosen freely at each time step. One of the goals of using these strategic agents is to lower the convergence time. The dynamics of all the variants are qualitatively very different from that of the classical HK system. Nevertheless, we prove convergence or show some other interesting results for all of these models. To be more specific, for the first and third variants we show that these systems make only a polynomial number of non-trivial steps, regardless of the social network in the first variant and the noise patterns in the third variant. For the second variant, we again show a polynomial number of non-trivial steps, but in expectation, regardless of the social network, and interestingly different dynamics. For the fourth variant, we prove an upper bound for the convergence time of the Gaussian-weighted HK model. For the fifth variant, we consider a special case of the SBI model and prove convergence for this case. For the final variant, we improve the existing results for the optimal convergence time for dumbbell and equidistant configurations.
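As a point of reference for the variants above, here is a minimal Python sketch of one synchronous step of the classical one-dimensional HK system: every agent moves to the mass center (average) of all agents within unit distance of its own opinion. The initial opinion distribution and the convergence test are illustrative assumptions.

```python
import numpy as np

def hk_step(opinions, eps=1.0):
    """One synchronous HK update: each agent moves to the mass center
    (mean) of all agents whose opinions lie within eps of its own."""
    new = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        neighbors = opinions[np.abs(opinions - x) <= eps]
        new[i] = neighbors.mean()
    return new

rng = np.random.default_rng(0)       # illustrative initial opinions
x = rng.uniform(0.0, 10.0, size=50)
for t in range(10_000):
    x_next = hk_step(x)
    if np.allclose(x_next, x):       # fixed point: agents frozen in clusters
        break
    x = x_next
print(f"stabilized after {t} steps into {len(np.unique(x.round(6)))} clusters")
```

Each variant described above changes exactly one ingredient of this loop: which agents count as neighbors (first and fifth variants), which agents update at each step (second), where an agent lands relative to the mass center (third), or how neighbors are weighted (fourth), while the final variant adds extra strategic agents to the population.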
4. Approximate Dynamic Programming and Reinforcement Learning - Algorithms, Analysis and an Application

Lakshminarayanan, Chandrashekar. January 2015.
Problems involving optimal sequential decision making in uncertain dynamic systems arise in domains such as engineering, science, and economics. Such problems can often be cast in the framework of a Markov Decision Process (MDP). Solving an MDP requires computing the optimal value function and the optimal policy. The ideas of dynamic programming (DP) and the Bellman equation (BE) are at the heart of solution methods. The three important exact DP methods are value iteration, policy iteration, and linear programming. The exact DP methods compute the optimal value function and the optimal policy. However, they are inadequate in practice because the state space is often large, and one might have to resort to approximate methods that compute sub-optimal policies. Further, in certain cases, the system observations are known only in the form of noisy samples, and we need to design algorithms that learn from these samples. In this thesis we study interesting theoretical questions pertaining to approximate and learning algorithms, and also present an interesting application of MDPs in the domain of crowdsourcing. Approximate Dynamic Programming (ADP) methods handle the issue of large state spaces by computing an approximate value function and/or a sub-optimal policy. In this thesis, we are concerned with conditions that result in provably good policies. Motivated by the limitations of the projected Bellman equation (PBE) in conventional linear algebra, we study the PBE in the (min, +) linear algebra. It is a well-known fact that deterministic optimal control problems with a cost/reward criterion are (min, +)/(max, +) linear, and ADP methods have been developed for such systems in the literature. However, it is straightforward to show that infinite-horizon discounted reward/cost MDPs are neither (min, +) nor (max, +) linear. We develop novel ADP schemes, namely Approximate Q Iteration (AQI) and Variational Approximate Q Iteration (VAQI), where the approximate solution is a (min, +) linear combination of a set of basis functions whose span constitutes a subsemimodule. We show that the new ADP methods are convergent, and we present a bound on the performance of the sub-optimal policy. The Approximate Linear Program (ALP) makes use of linear function approximation (LFA) and offers theoretical performance guarantees. Nevertheless, the ALP is difficult to solve due to the presence of a large number of constraints, and in practice a reduced linear program (RLP) is solved instead. The RLP has a tractable number of constraints sampled from the original constraints of the ALP. Though the RLP is known to perform well in experiments, theoretical guarantees are available only for a specific RLP obtained under idealized assumptions. In this thesis, we generalize the RLP to define a generalized reduced linear program (GRLP), which has a tractable number of constraints that are obtained as positive linear combinations of the original constraints of the ALP. The main contribution here is the novel theoretical framework developed to obtain error bounds for any given GRLP. Reinforcement Learning (RL) algorithms can be viewed as sample-trajectory-based solution methods for solving MDPs. Typically, RL algorithms that make use of stochastic approximation (SA) are iterative schemes taking small steps towards the desired value at each iteration. Actor-critic algorithms form an important sub-class of RL algorithms, wherein the critic is responsible for policy evaluation and the actor is responsible for policy improvement.
The actor and critic iterations have different step-size schedules; in particular, the step sizes used by the actor updates have to be generally much smaller than those used by the critic updates. Such SA schemes that use different step-size schedules for different sets of iterates are known as multi-timescale stochastic approximation schemes. One of the most important conditions required to ensure the convergence of the iterates of a multi-timescale SA scheme is that the iterates need to be stable, i.e., they should be uniformly bounded almost surely. However, the conditions that imply the stability of the iterates in a multi-timescale SA scheme have not been well established. In this thesis, we provide verifiable conditions that imply the stability of two-timescale stochastic approximation schemes. As an example, we also demonstrate that the stability of a widely used actor-critic RL algorithm follows from our analysis. Crowdsourcing is a new mode of organizing work by splitting it into smaller chunks of tasks and outsourcing them to a large and distributed group of people (the crowd) in the form of an open call. Recently, crowdsourcing has become a major pool for human intelligence tasks (HITs) such as image labeling, form digitization, natural language processing, machine translation evaluation, and user surveys. Large organizations/requesters are increasingly interested in crowdsourcing the HITs generated out of their internal requirements. Task starvation leads to huge variation in the completion times of the tasks posted to the crowd. This is an issue for frequent requesters desiring predictability in completion times, specified in terms of the percentage of tasks completed within a stipulated amount of time. An important task attribute that affects the completion time of a task is its price. However, a pricing policy that does not take the dynamics of the crowd into account might fail to achieve the desired predictability in completion times. Here, we make use of the MDP framework to compute a pricing policy that achieves predictable completion times in simulations as well as real-world experiments.
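As a small, self-contained illustration of the exact DP methods mentioned at the start of this abstract, the following sketch runs value iteration on a tiny, made-up MDP; the transition and reward tables are purely illustrative assumptions, not from the thesis.

```python
import numpy as np

# Tiny, made-up MDP purely for illustration: 3 states, 2 actions.
gamma = 0.9
# P[a] is the transition matrix under action a; R[s, a] is the reward.
P = np.array([[[0.9, 0.1, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.1, 0.9]],
              [[0.2, 0.8, 0.0],
               [0.0, 0.2, 0.8],
               [0.0, 0.0, 1.0]]])
R = np.array([[0.0, 1.0],
              [0.0, 2.0],
              [5.0, 0.0]])

# Value iteration: repeatedly apply the Bellman optimality operator
# until the value function stops changing.
V = np.zeros(3)
for _ in range(1000):
    Q = R + gamma * np.einsum('ast,t->sa', P, V)  # Q[s,a] = R + gamma*E[V]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged values
print("V* =", V.round(3), "policy =", policy)
```

The Bellman optimality operator applied in the loop is a gamma-contraction, which is what guarantees convergence here; it is exactly the blow-up of the state space in realistic problems that makes this exhaustive sweep impractical and motivates the ADP and RL methods studied in the thesis.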
