41

Automatic Instance-based Tailoring of Parameter Settings for Metaheuristics

Dobslaw, Felix January 2011 (has links)
Many industrial problems in various fields, such as logistics, process management, or product design, can be formalized and expressed as optimization problems in order to make them solvable by optimization algorithms. However, solvers that guarantee the finding of optimal solutions (complete solvers) can in practice be unacceptably slow. This is one of the reasons why approximative (incomplete) algorithms, which produce near-optimal solutions under restrictions (most prominently time), are of vital importance. These approximative algorithms go under the umbrella term metaheuristics, each of which is more or less suitable for particular optimization problems. They are flexible solvers that only require a representation of solutions and an evaluation function when searching the solution space for optimality. What all metaheuristics have in common is that their search is guided by certain control parameters. These parameters have to be set manually by the user and are generally problem-dependent and interdependent: a setting producing near-optimal results for one problem is likely to perform worse for another. Automating the parameter-setting process in a sophisticated, computationally cheap, and statistically reliable way is challenging and has received a significant amount of attention in the artificial intelligence and operational research communities. This activity has not yet produced any major breakthroughs concerning the utilization of problem-instance knowledge or the employment of dynamic algorithm configuration. The thesis promotes automated parameter optimization with reference to the inverse impact of problem-instance diversity on the quality of parameter settings with respect to instance-algorithm pairs. It further emphasizes the similarities between static and dynamic algorithm configuration and related problems in order to show how they relate to each other. It then proposes two frameworks for instance-based algorithm configuration and evaluates them experimentally. The first is a recommender system for static configurations, combining experimental design and machine learning. The second framework can be used for static or dynamic configuration, taking advantage of the iterative nature of population-based algorithms, a very important sub-class of metaheuristics. A straightforward implementation of the first framework did not result in the expected improvements, presumably because of pre-stabilization issues. The second approach shows competitive results in the evaluated scenario when compared to a state-of-the-art model-free configurator, reducing the training time by more than two orders of magnitude.
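
The recommender-system idea in the first framework can be illustrated with a minimal sketch: features of a new problem instance are matched against previously evaluated instances, and the parameter setting that worked best on the most similar one is recommended. The feature names, settings, and distance metric below are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

# Hypothetical training data: feature vectors of previously solved instances and
# the parameter setting that performed best on each of them (illustrative only).
instance_features = np.array([
    [100.0, 0.3],    # e.g. problem size, constraint density
    [500.0, 0.7],
    [1000.0, 0.1],
])
best_settings = [
    {"population_size": 40,  "mutation_rate": 0.10},
    {"population_size": 120, "mutation_rate": 0.05},
    {"population_size": 200, "mutation_rate": 0.02},
]

def recommend(features):
    """Recommend a parameter setting for a new instance by nearest-neighbour
    lookup in feature space (a stand-in for the learned recommender)."""
    x = np.asarray(features, dtype=float)
    # Normalize each feature dimension so no single feature dominates the distance.
    scale = instance_features.max(axis=0)
    dists = np.linalg.norm(instance_features / scale - x / scale, axis=1)
    return best_settings[int(np.argmin(dists))]

print(recommend([800, 0.15]))   # -> the setting tuned on the most similar instance
```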
42

PStorM: Profile Storage and Matching for Feedback-Based Tuning of MapReduce Jobs

Ead, Mostafa January 2012 (has links)
The MapReduce programming model has become widely adopted for large scale analytics on big data. MapReduce systems such as Hadoop have many tuning parameters, many of which have a significant impact on performance. The map and reduce functions that make up a MapReduce job are developed using arbitrary programming constructs, which makes them black-box in nature and prevents users from making good parameter tuning decisions for a submitted MapReduce job. Some research projects, such as the Starfish system, aim to provide automatic tuning decisions for input MapReduce jobs. Starfish and similar systems rely on an execution profile of a MapReduce job being tuned, and this profile is assumed to come from a previous execution of the same job. Managing these execution profiles has not been previously studied. This thesis presents PStorM, a profile store that organizes the collected profiling information in a scalable and extensible data model, and a profile matcher that accurately picks the relevant profiling information even for previously unseen MapReduce jobs. PStorM is currently integrated with the Starfish system, providing the necessary profiles that Starfish needs to tune a job. The thesis presents results that demonstrate the accuracy and efficiency of profile matching. The results also show that the profiles returned by PStorM lead to Starfish tuning decisions that are as good as the decisions made by profiles collected from a previous run of the job.
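
The profile-matching idea can be sketched roughly as follows: a job's features are turned into a vector and compared against stored profiles by similarity, so that even an unseen job is paired with the most relevant profiling information. The feature names and the similarity measure below are assumptions for illustration; they are not PStorM's actual data model or matcher.

```python
import math

# Hypothetical profile store: job feature vectors plus the tuning-relevant
# profile collected from earlier executions (illustrative only).
profile_store = [
    {"features": {"map_output_ratio": 1.8, "record_size": 120, "cpu_per_record": 0.4},
     "profile": "wordcount-like"},
    {"features": {"map_output_ratio": 0.1, "record_size": 900, "cpu_per_record": 2.5},
     "profile": "aggregation-like"},
]

def cosine(a, b):
    """Cosine similarity between two sparse feature dictionaries."""
    keys = sorted(set(a) | set(b))
    va = [a.get(k, 0.0) for k in keys]
    vb = [b.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0

def match(job_features):
    """Return the stored profile most similar to an unseen job's features."""
    return max(profile_store, key=lambda entry: cosine(entry["features"], job_features))

print(match({"map_output_ratio": 1.5, "record_size": 150, "cpu_per_record": 0.5})["profile"])
```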
43

Automatic dynamic tuning of parallel/distributed applications on computational grids

Fernandes de Carvalho Costa, Genaro 20 July 2009 (has links)
When moving to Grid computing, parallel applications face several performance problems. The system characteristics can differ in each execution and sometimes change within the same execution. Remote resources share network links and, in some cases, processes share machines through per-core allocation. In such scenarios we propose automatic dynamic performance tuning to help an application adapt itself as the system characteristics change, in order to overcome performance bottlenecks. This thesis analyzes the problems of parallel application execution on computational Grids, the available tools for performance analysis, and the models suited to automatic dynamic tuning in such environments. From this analysis we propose a system architecture for automatic dynamic tuning of parallel applications on computational Grids, named GMATE. The architecture includes several contributions. For the case where a Grid meta-scheduler decides the application mapping, we propose two process-tracking approaches that enable GMATE to locate where the Grid middleware has mapped the application processes. One approach integrates GMATE components into the Grid middleware; the other embeds the required GMATE components inside the application binaries. The first requires site-administration privileges, while the second enlarges the application binary, which slows down application startup. To comply with organizational policies, all communications use the application's own security certificates for authentication and are performed through the Grid middleware API. This enables the monitoring and tuning processes to adapt dynamically to organizational firewall restrictions and network-usage policies. To lower GMATE's communication needs, part of the logic required to collect measurements and change application parameters is encapsulated in components that run inside the application's process space. For measurement collection, we create sensor components that reduce communication by processing events inside the process space.
Unlike traditional instrumentation, sensors can postpone event transmission and perform basic operations such as summation, timing, averaging, or threshold-based event generation. This reduces the communication requirements in situations where network bandwidth is scarce. The modifications used to tune the application are likewise encapsulated in components called actuators. Actuators can be installed at points in the program's execution flow and provide synchronized, low-overhead control of application variables and function executions. Because sensors and actuators can communicate with each other, simple tuning can be performed within a process without external communication. Since dynamic tuning is performance-model-centric, we need a performance model that can be used with the heterogeneous processors and networks of Grid systems. We propose a heuristic performance model to find the maximum number of workers and the best grain size of a Master-Worker execution on such systems. We assume that some classes of application can be built to change their grain size at runtime, and that this change modifies the application's compute-communication ratio. When users request a set of resources for a parallel execution, they may receive a multi-cluster configuration. The heuristic model allows the set of resources to be shrunk without increasing the application execution time; the idea is to reach the maximum number of workers the master can use, giving high priority to the faster ones. We present results of dynamically tuning the grain size and the number of workers in Master-Worker applications on Grid systems, lowering total application execution time while raising resource-usage efficiency. We used implementations of matrix multiplication, N-Body, and synthetic workloads to evaluate different compute-communication ratio changes under different grain-size selections.
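
The worker-selection heuristic can be sketched roughly as follows: workers are ranked by speed and added while the master can still keep them fed at the chosen grain size. This is a simplified reading of the model; the speed estimates, dispatch-time parameter, and saturation test below are illustrative assumptions rather than GMATE's actual formulation.

```python
def select_workers(worker_speeds, grain_size, master_dispatch_time):
    """Pick workers for a Master-Worker run: prefer faster workers and stop
    adding workers once the master would be saturated dispatching work units.

    worker_speeds        -- tasks/second each candidate worker can process (assumed known)
    grain_size           -- number of tasks grouped into one work unit
    master_dispatch_time -- seconds the master needs to send one work unit
    """
    selected = []
    total_unit_rate = 0.0
    for speed in sorted(worker_speeds, reverse=True):       # fastest first
        unit_rate = speed / grain_size                       # work units/s this worker consumes
        # The master can serve at most 1/master_dispatch_time work units per second;
        # adding a worker beyond that point only adds idle time, not throughput.
        if (total_unit_rate + unit_rate) * master_dispatch_time > 1.0:
            break
        selected.append(speed)
        total_unit_rate += unit_rate
    return selected

print(select_workers([10.0, 8.0, 5.0, 2.0, 1.0], grain_size=4, master_dispatch_time=0.2))
```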
44

Optimal Online Tuning of an Adaptive Controller

Huebsch, Jesse January 2004 (has links)
A novel adaptive controller, suitable for linear and non-linear systems, was developed. The controller is a discrete algorithm suitable for computer implementation and is based on gradient-descent adaptation rules. Traditional recursive least squares based algorithms suffer from performance deterioration due to the continuous reduction of the covariance matrix used for adaptation; when this covariance matrix becomes too small, recursive least squares algorithms respond slowly to changes in model parameters. Gradient-descent adaptation was used to avoid the performance deterioration over time associated with regression-based adaptation such as recursive least squares methods. Stability was proven with Lyapunov stability theory, using an error filter designed to fulfill the stability requirements. Similarities between the proposed controller and PI control were found. A framework for on-line tuning was developed using the concept of estimation tracks. Estimation tracks allow the estimation gains to be selected from a finite set of possible values while meeting the Lyapunov stability requirements. The trade-off between sufficient excitation for learning and controller performance, typical of dual adaptive control techniques, is met by properly tuning the adaptation and filter gains to drive the rate of adaptation in response to a fixed excitation signal. Two methods for selecting the estimation track were developed. The first uses simulations to predict the value of a bicriteria cost function, a combination of prediction and feedback errors, to generate a performance score for each estimation track. The second uses a linear matrix inequality formulation to find an upper bound on the feedback error within the range of uncertainty of the plant parameters and acceptable reference signals; this linear matrix inequality approach was derived from a robust control approach. Numerical simulations were performed to systematically evaluate the performance and computational burden of configuration parameters, such as the number of estimation tracks used for tuning. Both tuning methods were compared with an adaptive controller using arbitrarily selected tuning parameters, as well as with a common adaptive control algorithm.
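
As a rough illustration of the gradient-descent adaptation the abstract refers to, the sketch below updates a parameter estimate in the direction that reduces the one-step prediction error. The plant, regressor, and gain values are illustrative assumptions; the thesis's actual update law, error filter, and stability constraints are not reproduced here.

```python
import numpy as np

# Hypothetical discrete-time plant: y[k] = a*y[k-1] + b*u[k-1], parameters unknown.
true_theta = np.array([0.8, 0.5])   # [a, b] of the simulated plant
theta_hat  = np.zeros(2)            # parameter estimates to be adapted
gamma      = 0.05                   # adaptation gain (assumed fixed)

y_prev, u_prev = 0.0, 0.0
rng = np.random.default_rng(0)

for k in range(200):
    u = rng.uniform(-1.0, 1.0)      # excitation signal
    phi = np.array([y_prev, u_prev])  # regressor built from past signals
    y = true_theta @ phi            # plant output
    y_hat = theta_hat @ phi         # model prediction
    e = y - y_hat                   # prediction error
    # Gradient-descent update: move the estimates along the regressor, scaled by the error.
    theta_hat += gamma * e * phi
    y_prev, u_prev = y, u

print(theta_hat)   # should approach [0.8, 0.5] under sufficient excitation
```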
45

Automatically Tuning Database Server Multiprogramming Level

Abouzour, Mohammed January 2007 (has links)
Optimizing database systems to achieve the maximum attainable throughput of the underlying hardware is one of the many difficult tasks that database administrators face. With the increased use of database systems in many environments, this task has become even more difficult. One of the parameters that needs to be configured is the number of worker tasks that the database server uses (the multiprogramming level). This thesis focuses on how to automatically adjust the number of database server worker tasks to achieve maximum throughput under varying workload characteristics. The underlying intuition is that every workload has an optimal multiprogramming level at which it achieves its best throughput.
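
A minimal sketch of one way such an adjustment can work, assuming a hill-climbing loop over the multiprogramming level driven by measured throughput; this illustrates the general idea, not the controller developed in the thesis, and the throughput curve is synthetic.

```python
import random

def measure_throughput(mpl):
    """Stand-in for running the workload at a given multiprogramming level and
    measuring transactions/second (a synthetic curve peaking near mpl = 24)."""
    return max(0.0, 1000 * mpl / (1 + (mpl / 24) ** 2) + random.gauss(0, 20))

def tune_mpl(start=8, step=4, rounds=20):
    """Simple hill climbing: move the MPL in the direction that improves throughput."""
    mpl = start
    best = measure_throughput(mpl)
    direction = +1
    for _ in range(rounds):
        candidate = max(1, mpl + direction * step)
        tput = measure_throughput(candidate)
        if tput > best:
            mpl, best = candidate, tput      # improvement: keep moving the same way
        else:
            direction = -direction           # overshot the peak: reverse direction
            step = max(1, step // 2)         # and narrow the search step
    return mpl

print(tune_mpl())   # settles near the multiprogramming level with the best throughput
```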
48

GPTT: A Cross-Platform Graphics Performance Tuning Tool for Embedded System

Lin, Keng-Yu 22 August 2006 (has links)
This thesis presents a new cross-platform graphics performance tool, GPTT (Graphics Performance Tuning Tool), designed to help developers find the performance bottlenecks of their games or applications on embedded systems. The tool's functions are embedded into the standard graphics library, OpenGL ES, to achieve cross-platform support. In order to verify the proposed tool, we also implement the OpenGL ES specification in addition to the tool itself. The performance tool is separated into a visualization part and a measurement part, which reduces the load on the embedded system while the application is running. Using the tool, many bottlenecks that can be improved were identified.
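
The measurement idea, wrapping graphics-library entry points so that lightweight counters are collected inside the running application while visualization happens elsewhere, can be sketched generically as below. The wrapped function name and the counters are illustrative assumptions, not GPTT's actual instrumentation of OpenGL ES.

```python
import time
from collections import defaultdict
from functools import wraps

call_stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def instrument(name):
    """Wrap a graphics-library entry point so each call is counted and timed.
    The measurement part keeps only these lightweight counters on the device;
    a separate visualization part can read them out later."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                stats = call_stats[name]
                stats["calls"] += 1
                stats["seconds"] += time.perf_counter() - start
        return wrapper
    return decorator

@instrument("draw_triangles")          # hypothetical stand-in for an OpenGL ES draw call
def draw_triangles(vertex_count):
    time.sleep(vertex_count * 1e-6)    # simulate rendering work

for frame in range(100):
    draw_triangles(3000)

print(dict(call_stats))                # per-call counts and accumulated time
```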
49

A tuning circuit for MOSFET C filter

Lin, Chang-Chih 16 January 2007 (has links)
MOSFET-C filters are popular analog filters, mainly because of their simplicity. They are easily implemented with opamps and have architectures similar to active RC filters [1], which saves much of the design time. The frequency response of analog continuous-time filters is determined by resistors, capacitors, inductors, or transconductors. However, process variation, temperature drift, and aging make integrated RC time constants vary by about 30 percent [2]–[3]. We propose a switched-capacitor tuning circuit that can be used in a MOSFET-C filter; the novel tuning circuit does not need an off-chip capacitor. The novel circuit has the following advantages: (1) small chip size, (2) simplicity, and (3) low reference clock frequency.
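
The reason a switched-capacitor reference can tune an RC (or MOSFET-C) time constant is the standard equivalence below: it ties the filter's corner frequency to a capacitor ratio and a clock frequency, both of which are well controlled on chip. This is textbook background for such tuning loops, not the specific circuit proposed in the thesis.

```latex
% A capacitor C_s switched at clock frequency f_{clk} transfers Q = C_s V per period,
% so on average it behaves like a resistor:
R_{eq} = \frac{1}{f_{clk}\, C_s}
% Locking the filter's tunable resistance (the MOSFET operated in triode) to R_{eq}
% makes the RC corner frequency depend only on a capacitor ratio and the clock:
\omega_c = \frac{1}{R_{eq} C} = f_{clk}\,\frac{C_s}{C}
% Capacitor ratios and f_{clk} are accurate, so the roughly 30% absolute spread of
% integrated RC products is removed by the tuning loop.
```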
50

Microstrip post production tuning bar error and compact resonators using negative refractive index metamaterials

Scher, Aaron David 29 August 2005 (has links)
In this thesis, two separate research topics are undertaken, both in the general area of compact RF/microwave circuit design. The first topic involves characterizing the parasitic effects and error due to unused post-production tuning bars. Such tuning bars are used in microwave circuit designs to allow the impedance or length of a microstrip line to be adjusted after fabrication. In general, the tuning bars are simply patterns of small, isolated sections of conductor adjacent to the thru line. Changing the impedance or length of the thru line involves bonding the appropriate tuning bars to the line. Unneeded tuning bars are not removed and are simply left isolated. Ideally, there should be no coupling between these unused tuning bars and the thru line, so the unused tuning bars should have a negligible effect on the circuit's overall performance. To nullify the parasitic effects of the tuning bars, conventional wisdom suggests placing the bars 1.0 to 1.5 substrate heights away from the main line. While successful in the past, this practice may not result in the most efficient and cost-effective placement of tuning bars in today's compact microwave circuits. This thesis facilitates the design of compact tuning bar configurations with minimal parasitic effects by analyzing the error attributable to various common tuning bar configurations over a range of parameters and offset distances. The error is primarily determined through electromagnetic simulations, and the accuracy of these simulations is verified by experimental results. The second topic in this thesis involves the design of compact microwave resonators using the transmission line approach to create negative refractive index metamaterials. A survey of the major developments and fundamental concepts related to negative refractive index technology (with focus on the transmission line approach) is given, followed by the design and measurement of the compact resonators. The resonators are also compared to their conventional counterparts to demonstrate both compactness and harmonic suppression.
