21

ASALBP: the Alternative Subgraphs Assembly Line Balancing Problem. Formalization and Resolution Procedures

Capacho Betancourt, Liliana 29 February 2008
Nowadays, assembly line balancing problems are commonly found in most industrial and manufacturing systems. Basically, these problems seek to assign a set of assembly tasks to an ordered sequence of workstations in such a way that precedence constraints are maintained and a given efficiency measure (e.g., the number of workstations or the cycle time) is optimized. Because of the computational complexity of balancing problems, research works traditionally adopted numerous simplifying assumptions in which, for example, a single serial line processed a single model of a unique product; moreover, problems were mainly restricted by precedence and cycle-time constraints. Nevertheless, the current availability of computing resources and the need of enterprises to adapt to rapid changes in production and manufacturing processes have encouraged researchers and decision-makers to address more realistic problems. Some examples include problems that involve mixed models, parallel workstations and parallel lines, multiple objectives, and further restrictions such as workstation processing capacity and resource allocation constraints.

This doctoral thesis addresses a novel assembly line balancing problem, entitled here ASALBP: the Alternative Subgraphs Assembly Line Balancing Problem, which considers alternative variants for different parts of an assembly or manufacturing process. Each variant can be represented by a precedence subgraph that establishes the tasks required to process a particular product, their precedence requirements and their processing times. Therefore, to solve the ASALBP efficiently, two subproblems need to be solved simultaneously: (1) the decision problem of selecting one assembly variant for each part that admits alternatives, and (2) the balancing problem of assigning the tasks to the workstations. The state-of-the-art analysis carried out revealed that this problem had not been addressed before in the literature, which led to the characterization and definition of a new problem. Moreover, since assembly variants cannot be represented in a standard precedence graph, the S-graph is proposed here as a diagramming tool to represent all available assembly alternatives in a single graph.

Habitually, problems involving assembly alternatives are solved with a two-stage approach. In the initial stage, the system designer selects one of the possible variants according to a criterion such as total processing time. Once the assembly alternatives have been selected and a precedence graph is available (i.e., the assembly planning problem has already been solved), the line is balanced in the second stage. However, this two-stage procedure cannot guarantee that an optimal solution of the global problem is obtained, because the decisions taken by the system designer restrict the problem and cause information loss; i.e., selecting an alternative a priori leaves the effects of the remaining possibilities unexplored. For instance, if the system designer uses total processing time as the decision criterion, the alternative with the largest total processing time will be discarded even though it may provide the best solution of the problem (i.e., the one requiring the minimum number of workstations or the minimum cycle time). Therefore, it seems reasonable to consider that, to solve efficiently an ALBP that involves processing alternatives, all assembly alternatives must be taken into account within the balancing process. For this purpose, in this thesis the variant selection problem and the line balancing problem are considered jointly rather than independently.

Different approaches are used here to address the ASALBP. The problem is formalized and optimally solved by means of two mathematical programming models. An approximate approach is used to address industrial-scale problems. Furthermore, local optimization procedures are proposed to improve the quality of the solutions provided by the heuristic methods developed in this work.
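As a concrete illustration of the balancing subproblem described above (assigning tasks to stations once a variant is fixed), here is a minimal greedy sketch of the classical SALBP-1 setting: minimize the number of stations for a given cycle time while respecting precedence. The toy task data and the simple station-filling rule are assumptions made for the example, not the thesis's models or procedures.

```python
# Greedy sketch of SALBP-1: open stations one at a time and fill each
# with tasks whose predecessors are already assigned and which still fit
# within the cycle time. Illustrative only; not the thesis's method.
def balance_line(tasks, times, preds, cycle_time):
    assert all(times[t] <= cycle_time for t in tasks)  # every task must fit
    assigned, stations = set(), []
    while len(assigned) < len(tasks):
        load, station = 0, []
        progress = True
        while progress:
            progress = False
            for t in tasks:
                if t in assigned:
                    continue
                # eligible: all predecessors done and task fits in remaining time
                if preds[t] <= assigned and load + times[t] <= cycle_time:
                    station.append(t)
                    assigned.add(t)
                    load += times[t]
                    progress = True
        stations.append(station)
    return stations

# Toy instance: precedence a->b, a->c, b->d, c->d, d->e
tasks = ["a", "b", "c", "d", "e"]
times = {"a": 3, "b": 4, "c": 2, "d": 5, "e": 1}
preds = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}, "e": {"d"}}
print(balance_line(tasks, times, preds, cycle_time=6))
# -> [['a', 'c'], ['b'], ['d', 'e']]: three stations for cycle time 6
```

In the ASALBP, such a balancing step would additionally have to choose among the alternative subgraphs, which is exactly the coupling the thesis exploits.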
22

Energy-oriented optimizations towards sustainable internet

Ricciardi, Sergio 22 November 2012
The Internet infrastructure, comprising both network and cloud facilities, has reached huge capacities, but its growth has not been matched by comparable improvements in its energy consumption. The energy consumption and the concomitant greenhouse gas (GHG) emissions of the Internet are becoming major issues in the information and communication society. In this context, the Internet infrastructure lacks a comprehensive energy-oriented paradigm that takes into account the absorbed energy, the emitted GHGs and the availability of renewable energy sources. This thesis focuses on these very issues and addresses the lack of such a paradigm by proposing energy models for energy-efficient architectures, together with energy-aware algorithms and protocols conceived to optimize the use of energy and minimize GHG emissions, while preserving traditional criteria such as network and datacenter load balancing, so as to serve as many demands as possible and maximize system availability.

To achieve this energy-oriented paradigm for the Internet infrastructure, specific problems were addressed step-wise and then tied together in a comprehensive energy-oriented framework. Towards this goal, the power consumption of current and future energy-aware architectures was modeled through energy models that characterize the energy consumption of network equipment under different traffic loads, and power management strategies were assessed that allow network infrastructures to achieve advanced functionalities within a limited energy budget. Integrated routing and wavelength assignment (RWA) schemes were proposed (ILP formulations, heuristics and meta-heuristics, game theory, minimum affinity, minimum cut) to take advantage of different scenarios (complete or partial knowledge of the network status, global control or individual selfishness of network elements, different requirements on computational and space complexity). Energy-aware RWA algorithms require an underlying routing protocol that distributes up-to-date information about the energy consumption and GHG emissions of the network elements; link state advertisement (LSA) messages of the OSPF-TE protocol were therefore extended to carry energy-related information, with new TLVs added directly to the TE extensions of OSPF and flooded over the network. The connection re-optimization problem was formulated as an iterative refinement process of multiple local search steps structured as a GRASP meta-heuristic, which re-routes connections to keep the network traffic load balanced and to free resources for incoming connections. To support the research tasks, SimulNet, a simulator of WDM-routed networks, was developed for the design and evaluation of RWA and optimization algorithms. EnergyFarm, an energy manager for modern and future grid/cloud data center infrastructures, was developed to reduce the ecological footprint of datacenters: through its service-demand matching algorithm and job aggregation capabilities, it turns off idle servers while respecting both the demand requirements and the logical and physical dependencies. The risks related to energy-oriented attacks were pointed out for the first time, and the potential impact of network-based DoS attacks was evaluated from the energy consumption perspective.

Finally, a holistic vision of the energy-oriented Internet is provided, in which energy-efficient architectures are powered by a smart-grid power distribution system employing renewable energy sources and are controlled by an intelligent energy-aware control plane able to operate the Internet so as to minimize its ecological footprint. The research leading to this thesis delineates an energy-oriented paradigm for a sustainable high-performance Internet infrastructure that optimizes the Internet's ecological footprint without disrupting performance, towards sustainable societal growth and prosperity.
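To give a flavour of what an energy-aware routing decision looks like once the extended LSAs have flooded energy figures through the network, here is a minimal sketch: the per-link power draw and emission factors, and the cost rule (power scaled by emissions), are invented for the example and are not the thesis's RWA formulations.

```python
# Energy-aware shortest path: Dijkstra where each hop costs its power draw
# weighted by the emission factor of the energy source powering it.
# Figures are illustrative assumptions, not real equipment data.
import heapq

def greenest_path(links, energy, src, dst):
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt in links.get(node, []):
            watts, co2_per_wh = energy[(node, nxt)]
            heapq.heappush(pq, (cost + watts * co2_per_wh, nxt, path + [nxt]))
    return float("inf"), []

links = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
energy = {("A", "B"): (120, 0.1), ("B", "D"): (120, 0.1),   # renewable-powered
          ("A", "C"): (100, 0.5), ("C", "D"): (100, 0.5)}   # fossil-powered
print(greenest_path(links, energy, "A", "D"))
# -> (24.0, ['A', 'B', 'D']): the slightly hungrier but cleaner route wins
```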
23

Automatic synthesis and optimization of chip multiprocessors

Nikitin, Nikita 05 April 2013
Microprocessor technology has experienced enormous growth during the last decades. The rapid downscaling of CMOS technology has led to higher operating frequencies and performance densities, facing the fundamental issue of power dissipation. Chip Multiprocessors (CMPs) have become the latest paradigm to improve the power-performance efficiency of computing systems by exploiting the parallelism inherent in applications. Industrial and prototype implementations have already demonstrated the benefits achieved by CMPs with hundreds of cores.

CMP architects are challenged to take many complex design decisions, only a few of which are:
- What should be the ratio between the core and cache areas on a chip?
- Which core architectures should be selected?
- How many cache levels should the memory subsystem have?
- Which interconnect topologies provide efficient on-chip communication?

These and many other aspects create a complex multidimensional space for architectural exploration. Design automation tools become essential to make the architectural exploration feasible under hard time-to-market constraints. The exploration methods have to be efficient and scalable to handle future-generation on-chip architectures with hundreds or thousands of cores. Furthermore, once a CMP has been fabricated, the need for efficient deployment of the many-core processor arises. Intelligent techniques for task mapping and scheduling onto CMPs are necessary to guarantee full use of the benefits brought by many-core technology. These techniques have to consider the peculiarities of modern architectures, such as the availability of enhanced power-saving techniques and the presence of complex memory hierarchies.

This thesis has several objectives. The first is to elaborate methods for efficient analytical modeling and architectural design space exploration of CMPs. Efficiency is achieved by using analytical models instead of simulation and by replacing exhaustive exploration with an intelligent search strategy; additionally, these methods incorporate high-level models for physical planning. The related contributions are described in Chapters 3, 4 and 5 of the document. The second objective is to propose a scalable algorithm for mapping tasks onto general-purpose CMPs with power management techniques, for the efficient deployment of many-core systems; this contribution is explained in Chapter 6. Finally, the third objective is to address on-chip interconnect design and exploration by developing a model for simultaneous topology customization and deadlock-free routing in Networks-on-Chip. The developed methodology can be applied to various classes of on-chip systems, ranging from general-purpose chip multiprocessors to application-specific solutions; Chapter 7 describes the proposed model.

The presented methods have been thoroughly tested experimentally and the results are described in this dissertation. At the end of the document, several possible directions for future research are proposed.
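The following toy sketch illustrates the flavour of analytical-model-driven design space exploration: a closed-form performance estimate replaces simulation, and candidate (core count, cache size) splits of a fixed area budget are compared. Every formula and constant below is a made-up assumption for illustration; the toy space is small enough to enumerate exhaustively, whereas the thesis replaces exhaustive search with an intelligent strategy.

```python
# Toy analytical DSE: pick the core/cache area split that maximizes a
# crude throughput model. All numbers and the model itself are invented.
def throughput(cores, cache_mb, freq_ghz=2.0):
    # More cores help sublinearly; more cache per core cuts memory stalls.
    ipc_per_core = 1.0 / (1.0 + 0.5 / (cache_mb / cores))
    return cores ** 0.8 * ipc_per_core * freq_ghz

AREA_BUDGET = 64              # mm^2, hypothetical chip budget
CORE_AREA, CACHE_AREA = 4, 2  # mm^2 per core / per MB, hypothetical

best = max(
    ((c, (AREA_BUDGET - c * CORE_AREA) // CACHE_AREA)
     for c in range(1, AREA_BUDGET // CORE_AREA)
     if (AREA_BUDGET - c * CORE_AREA) // CACHE_AREA > 0),
    key=lambda cfg: throughput(*cfg),
)
print("best (cores, cache MB):", best)
```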
24

The hiring problem and its algorithmic applications

Helmi Mohamed Elsadek, Ahmed Mohamed 08 April 2013
The hiring problem is a simple model for on-line decision-making under uncertainty, recently introduced in the literature. Although some related work dates back to 2000, the name and the first extensive studies appeared in 2007 and 2008. The problem was first introduced explicitly by Broder et al. in 2008 as a natural extension of the well-known secretary problem. Soon afterwards, Archibald and Martínez (2009) introduced a discrete (combinatorial) model of the hiring problem, in which the candidates seen so far can be ranked from best to worst without the need to know their absolute quality scores. This thesis presents an extensive study of the hiring problem under the formulation given by Archibald and Martínez, explores the connections with other on-line selection processes in the literature, and develops an interesting application of our results to the field of data streaming algorithms.

In the hiring problem we are interested in the design and analysis of hiring strategies. We study in detail two hiring strategies, namely hiring above the median and hiring above the m-th best. Hiring above the median hires the first interviewed candidate; thereafter, a candidate is hired if and only if his relative rank is better than the median rank of the already hired staff, and is discarded otherwise. Hiring above the m-th best hires the first m candidates in the sequence; thereafter, a candidate is hired if and only if his relative rank is better than that of the m-th best among all hired candidates, and is discarded otherwise. For both strategies, we obtain exact and asymptotic distributional results for various quantities of interest (which we call hiring parameters). Our fundamental parameter is the number of hired candidates; together with other parameters like the waiting time, the index of the last hired candidate and the distance between the last two hirings, it gives a clear picture of the hiring rate and the dynamics of the hiring process for the particular strategy under study. Another group of parameters, like the score of the last hired candidate, the score of the best discarded candidate and the number of replacements, gives an indicator of the quality of the hired staff. For hiring above the median, we study further quantities like the number of hired candidates conditioned on the first one, and the probability that a candidate with score q gets hired. We also study the 1/2-percentile selection rule introduced by Krieger et al. in 2007 and the seating plan (1/2,1) of the Chinese restaurant process (CRP) introduced by Pitman, both of which are very similar to hiring above the median. The connections between hiring above the m-th best and the notion of m-records, and also the seating plan (0,m) of the CRP, are investigated here. We report preliminary results for the number of hired candidates under a generalization of hiring above the median, called hiring above the alpha-quantile (of the hired staff). The explicit results for the number of hired candidates enable us to design an estimator, called RECORDINALITY, for the number of distinct elements in a large sequence of data which may contain repetitions; this problem is known in the literature as the cardinality estimation problem. We show that another hiring parameter, the score of the best discarded candidate, can also be used to design a new cardinality estimator, which we call DISCARDINALITY. Most of the results presented here have been published or submitted for publication.

The thesis leaves some open questions, as well as many promising ideas for future work. One interesting question is how to compare two different strategies; this requires a suitable definition of optimality, which is still missing in the context of the hiring problem. We are also interested in investigating other variants of the problem, like probabilistic hiring strategies, in which the hiring criterion is not deterministic, unlike all the strategies studied here.
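The "hiring above the median" strategy is simple enough to simulate directly. The sketch below follows the definition given above, with i.i.d. uniform scores standing in for the random-permutation model; since the scores are continuous, comparing scores is equivalent to comparing relative ranks. Both choices are assumptions made for the example.

```python
# Direct simulation of "hiring above the median": hire the first candidate,
# then hire anyone whose score beats the median score of the hired staff.
import random
import statistics

def hire_above_median(scores):
    hired = [scores[0]]
    for s in scores[1:]:
        if s > statistics.median(hired):
            hired.append(s)
    return hired

random.seed(42)
n = 10_000
hired = hire_above_median([random.random() for _ in range(n)])
print(f"hired {len(hired)} of {n} candidates")  # grows sublinearly in n
```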
25

Mapping, planning and exploration with Pose SLAM

Valencia Carreño, Rafael 19 April 2013
This thesis reports research on mapping, path planning, and autonomous exploration. These are classical problems in robotics, typically studied independently, and here we link them by framing them within a common SLAM approach, adopting Pose SLAM as the basic state estimation machinery. The main contribution of this thesis is an approach that allows a mobile robot to plan a path using the map it builds with Pose SLAM and to select the appropriate actions to autonomously construct this map. Pose SLAM is the variant of SLAM where only the robot trajectory is estimated and where landmarks are used only to produce relative constraints between robot poses. In Pose SLAM, observations come in the form of relative-motion measurements between robot poses. With regard to extending the original Pose SLAM formulation, this thesis studies the computation of such measurements when they are obtained with stereo cameras and develops the appropriate noise propagation models for that case. Furthermore, the initial formulation of Pose SLAM assumes poses in SE(2); in this thesis we extend the formulation to SE(3), parameterizing rotations with either Euler angles or quaternions. We also introduce a loop closure test that exploits the information from the filter using an independent measure of information content between poses. In the application domain, we present a technique to process the 3D volumetric maps obtained with this SLAM methodology, but with laser range scanning as the sensor modality, to derive traversability maps.

Aside from these extensions to Pose SLAM, the core contribution of the thesis is an approach for path planning that exploits the modeled uncertainties in Pose SLAM to search for the path in the pose graph with the lowest accumulated robot pose uncertainty, i.e., the path that allows the robot to navigate to a given goal with the least probability of becoming lost. An added advantage of the proposed approach is that, since Pose SLAM is agnostic with respect to the sensor modalities used, it can be applied in different environments and with different robots; moreover, since the original pose graph may come from a previous mapping session, the paths stored in the map already satisfy constraints not easily modeled in the robot controller, such as the existence of restricted regions or the right of way along paths. The proposed path planning methodology has been extensively tested both in simulation and with a real outdoor robot. It is adequate for scenarios where a robot is initially guided during map construction but autonomous during execution.

For other scenarios in which more autonomy is required, the robot should be able to explore the environment without any supervision. The second core contribution of this thesis is an autonomous exploration method that complements the aforementioned path planning strategy. The method selects the appropriate actions to drive the robot so as to maximize coverage and at the same time minimize localization and map uncertainties. An occupancy grid is maintained for the sole purpose of guaranteeing coverage. A significant advantage of the method is that, since the grid is only computed to hypothesize entropy reduction of candidate map posteriors, it can be kept at a very coarse resolution: it is used to maintain neither the robot localization estimate nor the structure of the environment. The technique evaluates two types of actions: exploratory actions and place revisiting actions. Action decisions are made based on entropy reduction estimates. By maintaining a Pose SLAM estimate at run time, the technique makes it possible to replan trajectories online should a significant change in the Pose SLAM estimate be detected. The proposed exploration strategy was tested on a common publicly available dataset, comparing favorably against frontier-based exploration.
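A minimal sketch of the planning idea: search the pose graph for the route to the goal that accumulates the least uncertainty. Collapsing each edge's contribution to a single additive scalar weight is a simplification of the thesis's uncertainty propagation, made here so that plain Dijkstra applies.

```python
# Dijkstra over a pose graph where edge weights stand in for the
# uncertainty a traversal would add; a scalar proxy, not the real filter.
import heapq

def least_uncertainty_path(graph, start, goal):
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Toy pose graph: two routes to pose D; the longer one is better localized.
graph = {"A": [("B", 0.9), ("C", 0.2)], "B": [("D", 0.1)],
         "C": [("E", 0.2)], "E": [("D", 0.2)]}
print(least_uncertainty_path(graph, "A", "D"))
# -> (0.6, ['A', 'C', 'E', 'D']): the well-localized detour beats the shortcut
```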
26

On the complexity of resolution-based proof systems

Oliva Valls, Sergi 02 May 2013
Propositional Proof Complexity is the area of Computational Complexity that studies the length of proofs in propositional logic. One of its main questions is to determine which particular propositional formulas have short proofs in a given propositional proof system. In this thesis we present several results related to this question, all on proof systems that are extensions of the well-known resolution proof system.

The first result of this thesis is that TQBF, the problem of determining whether a fully-quantified propositional CNF-formula is true, is PSPACE-complete even when restricted to instances of bounded tree-width (a parameter of structures that measures their similarity to a tree). Instances of bounded tree-width of many NP-complete problems are tractable, e.g. SAT, the boolean satisfiability problem; we show that this does not scale up to TQBF. We also consider Q-resolution, a quantifier-aware version of resolution. On the negative side, our first result implies that, unless NP = PSPACE, the class of fully-quantified CNF-formulas of bounded tree-width does not have short proofs in any proof system (and in particular in Q-resolution). On the positive side, we show that instances with bounded respectful tree-width, a more restrictive condition, do have short proofs in Q-resolution, and we give a natural family of formulas with this property that has real-world applications.

The second result concerns interpretability. Informally, we say that a first-order formula can be interpreted in another if the first one can be expressed using the vocabulary of the second, plus some extra features. We show that first-order formulas whose propositional translations have short R(const)-proofs (a generalized version of resolution with DNF-formulas of constant-size terms) are closed under a weaker form of interpretability, that with no extra features, called definability; our main result is the analogous closure result for interpretability. We also give examples of interpretations and present a systematic technique to transform some Sigma_1-definitions into quantifier-free interpretations.

The third and final result is about a relativized weak pigeonhole principle. This says that if at least 2n out of n^2 pigeons decide to fly into n holes, then some hole must be doubly occupied. We prove that the CNF encoding of this principle does not have polynomial-size DNF-refutations, i.e. refutations in the generalized version of resolution with unbounded DNF-formulas. For this proof we discuss the existence of unbalanced low-degree bipartite expanders satisfying a certain robustness condition.
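To make the last principle concrete, the refuted formula can be written schematically as below, with hole variables p_{i,j} ("pigeon i sits in hole j") and relevance variables r_i ("pigeon i decides to fly"). This is a standard-style encoding given as an illustrative assumption, not necessarily the exact CNF studied in the thesis (in particular, the cardinality constraint needs its own clausal encoding).

```latex
% Relativized weak pigeonhole principle, schematically:
% pigeons i in [n^2], holes j in [n].
\[
\underbrace{\bigwedge_{i=1}^{n^2}\Big(r_i \rightarrow \bigvee_{j=1}^{n} p_{i,j}\Big)}_{\text{every flying pigeon picks a hole}}
\;\wedge\;
\underbrace{\bigwedge_{j=1}^{n}\;\bigwedge_{1 \le i < i' \le n^2} \big(\lnot p_{i,j} \vee \lnot p_{i',j}\big)}_{\text{no hole holds two pigeons}}
\;\wedge\;
\underbrace{\big|\{\, i : r_i \,\}\big| \ge 2n}_{\text{at least } 2n \text{ pigeons fly}}
\]
```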
27

Visible, near infrared and thermal hand-based image biometric recognition

Font Aragonès, Xavier 30 May 2013
Biometric recognition refers to the automatic identification of a person based on an anatomical characteristic or modality (e.g., fingerprint, palmprint, face) or a behavioural characteristic (e.g., signature). It is a fundamental issue in any process concerned with security, shared resources or network transactions, among many others. It arises as the widely known recognition problem, a mandatory step before permission is granted: it protects key resources by allowing them to be used only by users who have been granted access to them. Biometric systems can operate in verification mode, where the question to be solved is "Am I who I claim I am?", or in identification mode, where the question is "Who am I?". The scientific community has increased its efforts to improve the performance of biometric systems. Depending on the application, many solutions work with several modalities or combine different classification methods. Since additional modalities usually mean user inconvenience, many of these approaches will never reach the market; for example, working with iris, face and fingerprints requires some user effort to help acquisition. This thesis addresses hand-based biometric systems in a thorough way. The main contributions are in the direction of a new multi-spectral hand-image database and methods for performance improvement:

A) The first multi-spectral hand-image database covering both hand faces, palmar and dorsal. Biometric databases are a precious commodity for research, especially when they offer something new, in this case visible (VIS), near-infrared (NIR) and thermographic (TIR) images at the same time. With 100 users and 10 samples per user, this database constitutes a good starting point to test algorithms and the suitability of the hand for recognition.

B) To deal correctly with raw hand data, some image preprocessing steps are necessary. Three different segmentation pipelines are deployed to handle VIS, NIR and TIR images specifically. Some of the tough issues to address are overexposed images, ring fingers and cuffs, cold fingers and image noise. Once an image is segmented, two approaches, called holistic and geometric, define the focus for extracting the feature vector; these feature vectors can be used alone or combined in some way. Many questions arise, e.g.: Which approach is better for recognition? Can the fingers alone achieve better performance than the whole hand? Is thermographic hand information suitable for recognition, given its thermoregulation properties? A complete set of data ready for analysis, coming from both the holistic and the geometric approaches, has been designed and saved for testing, and an innovative geometric approach related to curvature is demonstrated.

C) Finally, the Biometric Dispersion Matcher (BDM) is used to explore how it behaves under different fusion schemes as well as with different classification methods. This research also contrasts what happens when using methods close to the BDM, such as Linear Discriminant Analysis (LDA). At this point, some interesting questions are answered, e.g., whether taking advantage of the finger segmentation (as five different modalities) can outperform what the whole-hand data can teach us.
The BDM is contrasted with other recognition methods such as Linear Discriminant Analysis (LDA), as well as KNN and logistic regression, and further scenarios involving multiple information sources are analysed, applying normalization techniques and/or combination (fusion) strategies to improve the results. The results obtained leave no room for confusion and are certainly promising, in the sense that they bring to light the importance of combining complementary information to obtain higher performance.
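As an illustration of the score-fusion strategies mentioned above, here is a textbook-style sketch of min-max normalization followed by a weighted sum over the three imaging modalities. The scores, weights and the particular rules are assumptions for the example, not the combinations evaluated in the thesis.

```python
# Score-level fusion: normalize each modality's match scores to [0, 1],
# then combine them with a weighted sum. Illustrative values throughout.
def minmax(scores):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(modality_scores, weights):
    normalized = [minmax(s) for s in modality_scores]
    return [sum(w * col[i] for w, col in zip(weights, normalized))
            for i in range(len(normalized[0]))]

# Match scores of 4 candidate identities from three imaging modalities;
# note the different native scales, which is why normalization is needed.
vis = [0.80, 0.55, 0.20, 0.95]
nir = [30.0, 42.0, 5.0, 38.0]
tir = [0.40, 0.70, 0.10, 0.60]
print(fuse([vis, nir, tir], weights=[0.5, 0.3, 0.2]))
```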
28

Estimació dels paràmetres de models ARMA (P,Q) mitjançant algorismes de filtratge òptim

Muñoz Gràcia, Maria Pilar 01 January 1988
The objective of this work was to synthesize an optimal non-linear filter for the estimation of the parameters of ARMA models. The interest of such a filter lies in the fact that it makes it possible to evaluate exactly the minimum number of observations needed to achieve an estimation with a given precision. Throughout the work, several options for synthesizing filters for these problems are analysed and presented, and tests are carried out with each of them. Especially noteworthy are the recurrence expressions between quadratic forms established in the work, which allow a particularly simple filter to be formulated. The examples treated show that the number of observations needed depends essentially on the relative position of the poles and zeros of the model, and that the precision that can be obtained is usually far below what suboptimal estimation methods assume.
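As context for why optimal *non-linear* filtering enters the picture, recall the ARMA(p,q) recurrence; once the unknown coefficients are appended to the state vector, the joint state-and-parameter filtering problem becomes non-linear even though the model is linear in each separately. The notation below is standard and assumed, not taken from the thesis.

```latex
% ARMA(p,q) model driven by white noise e_t:
\[
x_t \;=\; \sum_{k=1}^{p} \phi_k\, x_{t-k} \;+\; e_t \;+\; \sum_{k=1}^{q} \theta_k\, e_{t-k},
\qquad e_t \sim \mathcal{N}(0,\sigma^2).
\]
% Augmenting the state with the unknown parameters,
% z_t = (\phi_1,\dots,\phi_p,\theta_1,\dots,\theta_q) with z_t = z_{t-1},
% makes the transition bilinear in (state, parameters),
% hence the need for a non-linear optimal filter.
```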
29

Model Predictive Control of Complex Systems including Fault Tolerance Capabilities: Application to Sewer Networks

Ocampo-Martínez, Carlos 27 April 2007
Real-time control (RTC) of sewer networks plays a fundamental role in the management of hydrological systems, both in the urban water cycle and in the natural water cycle. An adequate design of control systems for sewer networks can prevent the negative environmental impact caused by Combined Sewer Overflows (CSO), as well as prevent flooding within city limits when extreme weather conditions occur. However, sewer networks are large-scale systems with many variables, complex dynamics and strongly nonlinear behaviour, and any control strategy applied should be capable of handling these challenging requirements. Within the field of RTC of sewer networks for global network control, the model predictive control (MPC) strategy stands out due to its ability to handle large-scale, nonlinear and multivariable systems. Furthermore, this strategy allows performance optimization, taking into account several control objectives simultaneously. This thesis is devoted to the design of MPC controllers for sewer networks, as well as the complementary modelling methodologies. Furthermore, scenarios where actuator faults occur are specially considered, and strategies are proposed to maintain performance, or at least minimize its degradation, in the presence of faults.

In the first part of this thesis, the basic concepts are introduced: sewer networks, MPC and fault-tolerant control. In addition, the modelling methodologies used to describe such systems are presented. Finally, the case study of this thesis is described: the sewer network of the city of Barcelona (Spain).

The second part of this thesis is centred on the design of MPC controllers for the proposed case study. Two types of models are considered: (i) a linear model, whose corresponding MPC strategy is known for advantages such as the convexity of the optimization problem and existing proofs of stability, and (ii) a hybrid model, which allows the inclusion of state-dependent hybrid dynamics such as weirs. In the latter case, a new hybrid modelling methodology is introduced and hybrid model predictive control (HMPC) strategies based on these models are designed. Furthermore, strategies to relax the optimization problem are introduced to reduce the computation time required for the HMPC control law.

Finally, the third part of this thesis is devoted to studying the fault-tolerance capabilities of MPC controllers. Actuator faults in retention and redirection gates are considered. Additionally, hybrid modelling techniques are presented for faults which, in the linear case, cannot be treated without losing the convexity of the related optimization problem. Two fault-tolerant HMPC strategies are compared: the active strategy, which uses the information from a diagnosis system to maintain control performance, and the passive strategy, which relies only on the intrinsic robustness of the MPC control law. As an extension to the study of fault tolerance, the admissibility of faulty actuator configurations is analysed with regard to the degradation of the control objectives. The method, which is based on constraint satisfaction, allows the admissibility evaluation of actuator fault configurations and, when a configuration is not admissible, avoids solving an optimization problem with its related high computational cost.

Keywords: model predictive control, sewer networks, hybrid systems, MLD, fault-tolerant control, constraint satisfaction.
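To make the receding-horizon idea concrete, here is a deliberately tiny sketch for a single reservoir: at each step an optimizer picks the release that minimizes predicted overflow plus control effort over a short horizon, applies only the first move, and re-plans. The model, bounds, weights and the brute-force "solver" are illustrative assumptions; the thesis formulates the real problem over full network models with proper LP/QP or mixed-integer solvers.

```python
# Receding-horizon sketch for one reservoir: volume v, controlled release u,
# known inflow forecast d. A grid search stands in for a real optimizer.
def mpc_step(v, inflows, horizon=5, u_max=3.0, v_max=10.0, w_flood=100.0):
    best_u, best_cost = 0.0, float("inf")
    for u in [i * u_max / 10 for i in range(11)]:   # candidate release rates
        vol, cost = v, 0.0
        for d in inflows[:horizon]:
            vol = max(0.0, vol + d - u)
            overflow = max(0.0, vol - v_max)        # excess spills (CSO)
            vol = min(vol, v_max)
            cost += w_flood * overflow + u          # penalize spills and effort
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

v, rain = 4.0, [2.5, 3.0, 3.5, 1.0, 0.5, 0.0, 0.0]  # a storm, then calm
for t in range(len(rain)):
    u = mpc_step(v, rain[t:])                       # plan, apply first move
    v = min(max(0.0, v + rain[t] - u), 10.0)
    print(f"t={t} release={u:.1f} volume={v:.2f}")
```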
30

Speculative Vectorization for Superscalar Processors

Pajuelo González, Manuel A. (Manuel Alejandro) 24 November 2005
Traditional vector architectures have been shown to be very effective at executing regular codes in which the compiler can detect data-level parallelism, i.e., the repetition of the same computation over different elements of the same code-level data structure. A skilled programmer can easily create efficient vector code from regular applications. Unfortunately, this vectorization can be difficult if applications are not regular or if the programmer does not have exact knowledge of the underlying architecture. The compiler has only partial knowledge of the program (i.e., limited knowledge of the values of the variables). Because of this, it generates code that is safe for any possible scenario according to its knowledge, and thus it may lose significant opportunities to exploit SIMD parallelism. In addition, there is the problem of legacy codes that have been compiled for former versions of an ISA with no SIMD extensions, and which are therefore unable to exploit the SIMD extensions incorporated into newer ISA versions.

This dissertation describes a mechanism that detects and exploits DLP at runtime by speculatively creating vector instructions that prefetch and precompute data for future instances of their scalar counterparts. This process is called Speculative Dynamic Vectorization. A more in-depth study of this technique reveals a very positive characteristic: the mechanism can easily be tailored to alleviate the main drawbacks of current superscalar processors, particularly branch mispredictions and the memory gap. This dissertation describes how to rearrange the basic Speculative Dynamic Vectorization mechanism to alleviate the branch misprediction penalty by reusing control-flow-independent instructions. The memory gap problem is addressed with a set of mechanisms that exploit the stall cycles due to L2 misses in order to virtually enlarge the instruction window. Finally, further refinements of the basic Speculative Dynamic Vectorization mechanism are presented to improve its performance at a reasonable cost.
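A toy illustration of the kind of runtime detection speculative dynamic vectorization relies on: once a scalar load exhibits a stable address stride, its next few instances can be speculatively prefetched or precomputed. The three-access detection rule and the lookahead depth are assumptions for the example, not the mechanism's actual hardware implementation.

```python
# Stride detection over a scalar load's address trace: if the last accesses
# are equally spaced, predict the next few addresses to act on speculatively.
def detect_stride(addresses, lookahead=4):
    if len(addresses) < 3:
        return []
    stride = addresses[-1] - addresses[-2]
    if addresses[-2] - addresses[-3] != stride or stride == 0:
        return []                      # no stable stride yet -> stay scalar
    last = addresses[-1]
    return [last + stride * k for k in range(1, lookahead + 1)]

trace = [0x1000, 0x1008, 0x1010, 0x1018]       # an 8-byte strided scalar load
print([hex(a) for a in detect_stride(trace)])  # speculative future instances
```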
