  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Adaptive memory hierarchies for next generation tiled microarchitectures

Herrero Abellanas, Enric 05 July 2011
Processor performance and memory performance have improved at different rates during the last decades, limiting overall processor performance and creating the well-known "memory gap". Bridging this performance difference is an important research field, and new solutions must be proposed in order to build better processors in the future. Several solutions exist, such as caches, which reduce the impact of longer memory accesses and form the system memory hierarchy. However, most existing memory hierarchy organizations were designed for single processors or traditional multiprocessors. Nowadays, the increasing number of available transistors has enabled the emergence of chip multiprocessors (CMPs), which have different constraints and require new ad-hoc memory systems able to manage memory resources efficiently. Therefore, in this thesis we have focused on improving the performance and energy efficiency of the memory hierarchy of chip multiprocessors, ranging from caches to DRAM memories.

In the first part of this thesis we have studied traditional cache organizations such as shared or private caches, and we have seen that they behave well only for some applications and that an adaptive system would be desirable. State-of-the-art techniques such as Cooperative Caching (CC) combine the benefits of both worlds; this technique, however, requires a centralized coherence structure and has a high energy consumption. Therefore we propose Distributed Cooperative Caching (DCC), a mechanism that provides coherence to chip multiprocessors and applies the concept of cooperative caching in a distributed way. Through the use of distributed directories we obtain a more scalable solution that, in addition, has a more flexible and energy-efficient tag allocation method.

We also show that applications make different uses of the cache and that an efficient allocation can take advantage of unused resources. We propose Elastic Cooperative Caching (ElasticCC), an adaptive cache organization able to redistribute cache resources dynamically depending on application requirements. One of the most important contributions of this technique is that adaptivity is fully managed by hardware and that all repartitioning mechanisms are based on distributed structures, allowing better scalability. ElasticCC is not only able to repartition cache sizes to match application requirements, but is also able to adapt dynamically to the different execution phases of each thread. Our experimental evaluation has also shown that the cache partitioning provided by ElasticCC is so efficient that it almost matches the off-chip miss rate of a configuration with twice the cache space.

Finally, we focus on the behavior of DRAM memories and memory controllers in chip multiprocessors. Although traditional memory schedulers work well for uniprocessors, we show that new access patterns call for a redesign of some parts of DRAM memories. Several organizations exist for multiprocessor DRAM schedulers; however, all of them must trade off memory throughput against fairness. We propose Thread Row Buffers (TRBs), an extended storage area in DRAM memories able to store a data row for each thread. This mechanism enables fair memory access scheduling without hurting memory throughput.

Overall, in this thesis we present new organizations for the memory hierarchy of chip multiprocessors which focus on the scalability of the proposed structures and on adaptivity to application behavior. Results show that the presented techniques provide better performance and energy efficiency than existing state-of-the-art solutions.
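The distributed-directory idea behind an organization like DCC can be illustrated with a small sketch. Everything here (the modulo tile interleaving, the 64-byte block size, the class and function names) is an assumption made for illustration, not the thesis's actual design; the point is only that each block's coherence entry lives on a deterministically chosen home tile, so no centralized coherence structure is needed:

```python
# Hypothetical sketch of a distributed directory: each cache-block
# address maps to a "home tile" that holds its sharer set, so lookups
# and updates are spread across tiles instead of a central structure.

BLOCK_BITS = 6  # assumed 64-byte cache blocks

def home_tile(addr: int, num_tiles: int) -> int:
    """Map a block address to the tile holding its directory entry."""
    block = addr >> BLOCK_BITS
    return block % num_tiles  # simple interleaving; real designs may hash

class DistributedDirectory:
    def __init__(self, num_tiles: int):
        self.num_tiles = num_tiles
        # one map per tile: block number -> set of sharer tiles
        self.entries = [dict() for _ in range(num_tiles)]

    def add_sharer(self, addr: int, tile: int) -> None:
        home = home_tile(addr, self.num_tiles)
        self.entries[home].setdefault(addr >> BLOCK_BITS, set()).add(tile)

    def sharers(self, addr: int) -> set:
        home = home_tile(addr, self.num_tiles)
        return self.entries[home].get(addr >> BLOCK_BITS, set())
```

Because the home tile is a pure function of the address, any tile can locate the responsible directory slice without consulting a central arbiter, which is the scalability property the distributed approach relies on.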
2

Programming, debugging, profiling and optimizing transactional memory programs

Hasanov Zyulkyarov, Ferard 19 July 2011
Transactional memory (TM) is a new optimistic synchronization technique which has the potential of making shared-memory parallel programming easier than locks without giving up performance. This thesis explores four aspects of transactional memory research.

First, it studies how programming with TM compares to locks. In the course of this work, it develops the first real transactional application, AtomicQuake. AtomicQuake is adapted from the parallel version of the Quake game server by replacing all lock-based synchronization with atomic blocks. Findings suggest that programming with TM is indeed easier than with locks. However, the performance of current software TM systems falls behind efficiently implemented lock-based versions of the same program. The same findings also report that the proposed language-level extensions are not sufficient for developing robust production-level software, and that existing development tools such as compilers, debuggers, and profilers lack support for developing transactional applications.

Second, this thesis introduces a new set of debugging principles and abstractions. These enable debugging of synchronization errors which manifest at the coarse atomic-block level, of wrong code inside atomic blocks, and of performance errors related to the implementation of the atomic block. The new debugging principles distinguish between debugging at the level of language constructs such as atomic blocks and debugging atomic blocks based on how they are implemented, whether by TM or by lock inference. These ideas are demonstrated by implementing a debugger extension for WinDbg and the ahead-of-time C#-to-x86 Bartok-STM compiler.

Third, this thesis investigates the types of performance bottlenecks in TM applications and introduces new profiling techniques to find and understand these bottlenecks. The new profiling techniques provide in-depth and comprehensive information about the wasted work caused by aborting transactions. The individual profiling abstractions can be grouped into three categories: (i) techniques to identify multiple conflicts from a single program run, (ii) techniques to describe the data structures involved in conflicts by using a symbolic path through the heap rather than a machine address, and (iii) visualization techniques to summarize which transactions conflict most. These ideas were demonstrated by building a lightweight profiling framework for Bartok-STM and an offline tool which processes and displays the profiling data.

Fourth, this thesis explores and introduces new TM-specific optimizations which target the wasted work due to aborting transactions. Using the results obtained with the profiling tool, it analyzes and optimizes several applications from the STAMP benchmark suite. The profiling techniques effectively revealed TM-specific bottlenecks such as false conflicts and contended accesses to data structures, and the discovered bottlenecks were subsequently eliminated with the new optimization techniques. Optimization highlights include transaction checkpoints, which reduced the wasted work in Intruder by 40%; decomposing objects to eliminate false conflicts in Bayes; early release in Labyrinth, which decreased wasted work from 98% to 1%; and using less contended data structures, such as a chained hashtable, in Intruder and Genome to obtain a higher degree of parallelism.
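The optimistic execute-and-retry behaviour of atomic blocks, and the "wasted work" that aborted attempts represent, can be sketched in a few lines. This toy, single-threaded model is not Bartok-STM (the conflict below is simulated by hand); it only illustrates the quantity that abort-oriented profiling measures:

```python
# Toy optimistic transaction runner: work happens on a private snapshot
# and is published only if no other commit intervened. Every failed
# attempt is counted as "wasted work", the metric a TM profiler reports.

class ToySTM:
    def __init__(self):
        self.version = 0   # bumped on every successful commit
        self.data = {}
        self.aborts = 0    # profiling counter: aborted (wasted) attempts

    def atomic(self, fn):
        """Run fn(snapshot) optimistically; retry on conflict."""
        while True:
            start = self.version
            snapshot = dict(self.data)     # private working copy
            fn(snapshot)
            if self.version == start:      # no conflicting commit: publish
                self.data = snapshot
                self.version += 1
                return
            self.aborts += 1               # conflict: attempt was wasted work
```

A profiler built on this idea attributes each increment of `aborts` back to the conflicting transaction and data, which is what makes bottlenecks such as false conflicts visible.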
3

Management of Cloud systems applied to eHealth

Vilaplana Mayoral, Jordi 10 September 2015
This thesis explores techniques, models and algorithms for the efficient management of Cloud systems, and how to apply them to the healthcare sector in order to improve current treatments. It presents two Cloud-based eHealth applications to telemonitor and control smoke-quitting and hypertensive patients. Different Cloud-based models were obtained and used to develop a Cloud-based infrastructure where these applications are deployed. The results show that these applications improve current treatments and can be scaled as computing requirements grow. Multiple Cloud architectures and models were analyzed and then implemented using different techniques and scenarios. The Smoking Patient Control (S-PC) tool was deployed and tested in a real environment, showing a 28.4% increase in long-term abstinence. The Hypertension Patient Control (H-PC) tool was successfully designed and implemented, and its computing boundaries were measured.
4

Computación distribuida en entornos peer-to-peer con calidad de servicio (Distributed computing in peer-to-peer environments with quality of service)

Castellà Martínez, Damià 08 July 2011
No description available.
5

Ontology Matching based On Class Context: to solve interoperability problem at Semantic Web

Lera Castro, Isaac 17 May 2012
When we look at the amount of resources devoted to converting formats into other formats, that is to say, to making information systems useful, we realise that our communication model is inefficient. The transformation of information, like the transformation of energy, remains inefficient because of the efficiency of the converters. In this work, we propose a new way to "convert" information: a mapping algorithm for semantic information based on the context of the information, in order to redefine the framework where this paradigm merges with multiple techniques. Our main goal is to offer a new view in which we can make further progress and, ultimately, streamline and minimize the communication chain in integration processes.
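A very reduced sketch of context-based class matching follows. The context sets, the Jaccard score and the threshold are all illustrative assumptions rather than the algorithm the thesis proposes; the sketch only shows the core idea that two classes become a mapping candidate when their surrounding context overlaps:

```python
# Sketch: score candidate mappings between two ontologies by comparing
# each class's context (labels of neighbouring classes and properties).

def jaccard(a: set, b: set) -> float:
    """Set-overlap score in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def match_classes(ctx_a: dict, ctx_b: dict, threshold: float = 0.5):
    """Return (class_a, class_b, score) triples whose context overlap
    meets the threshold. ctx_* map class name -> set of context labels."""
    pairs = []
    for ca, sa in ctx_a.items():
        for cb, sb in ctx_b.items():
            s = jaccard(sa, sb)
            if s >= threshold:
                pairs.append((ca, cb, s))
    return sorted(pairs, key=lambda p: -p[2])  # best matches first
```

The appeal of context-based scoring is that it can align classes whose names share nothing lexically ("Person" vs. "Human") as long as their neighbourhoods agree.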
6

Visual determination, tracking and execution of 2D grasps using a behavior-inspired approach

Recatalá Ballester, Gabriel 21 November 2003
This thesis focuses on the definition of a task for the determination, tracking and execution of a grasp on an unknown object. In particular, we consider the case in which the object is ideally planar and the grasp has to be executed with a two-fingered, parallel-jaw gripper, using vision as the source of sensor data. For the specification of this task, an architecture is defined that is based on three basic components (virtual sensors, filters, and actuators), which can be connected to define a control loop. Each step in this task is analyzed separately, considering several options in some cases.

Some of the main contributions of this thesis include: (1) the use of a modular approach to the specification of a control task that provides a basic framework for supporting the concept of behavior; (2) the analysis of several strategies for obtaining a compact representation of the contour of an object; (3) the development of a method for the evaluation and search of a grasp on a planar object for a two-fingered gripper; (4) the specification of different representations of a grasp and the analysis of their use for tracking the grasp between different views of an object; (5) the specification of algorithms for tracking a grasp along the views of an object obtained from a sequence of single images and a sequence of stereo images; (6) the definition of parametrized models of the target position of the grasp points and of the feasibility of this target grasp, together with an off-line procedure for computing some of the reference values required by this model; and (7) the definition and analysis of a visual servoing control scheme to guide the gripper of a robot arm towards an unknown object, using the grasp points computed for that object as control features.
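The evaluation-and-search step for a planar, two-fingered grasp (contribution 3 above) can be sketched as follows. The scoring rule used here, preferring contact points whose outward normals are anti-parallel, is a standard force-closure heuristic adopted as an assumption; it is not claimed to be the exact criterion developed in the thesis:

```python
# Sketch: exhaustively score pairs of contour points as candidate
# parallel-jaw grasps, preferring opposed contact normals.

def grasp_score(n1, n2):
    """Score a pair of unit contact normals: 1.0 when exactly opposed."""
    dot = n1[0] * n2[0] + n1[1] * n2[1]
    return (1.0 - dot) / 2.0

def best_grasp(points, normals):
    """Return the best-scoring index pair over all contour points.

    `points` is carried alongside `normals` because a fuller evaluator
    would also check finger separation against the gripper opening."""
    best_score, best_pair = -1.0, None
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            s = grasp_score(normals[i], normals[j])
            if s > best_score:
                best_score, best_pair = s, (i, j)
    return best_pair, best_score
```

On a compact contour representation (contribution 2), this quadratic search stays cheap, which is one reason compact contours matter for grasp determination.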
7

The UJI online robot: a distributed architecture for pattern recognition, autonomous grasping and augmented reality

Marín Prades, Raúl 24 May 2002
The thesis has been developed at the Intelligent Robotics Laboratory of the University Jaume I (Spain). The objectives are focused on the laboratory's fields of interest: Telerobotics, Human-Robot Interaction, Manipulation, Visual Servoing, and Service Robotics in general.

Basically, the work has consisted of designing and implementing a complete vision-based robotic system to control an educational robot via the web, using voice commands like "Grasp the object one" or "Grasp the cube". Our original objectives were extended to include the possibility of programming the robot using high-level voice commands as well as very quick and significant mouse interactions ("adjustable interaction levels"). Besides this, the user interface has been designed to allow the operator to "predict" the robot movements before sending the programmed commands to the real robot ("predictive system"). This kind of interface has the particularity of saving network bandwidth and can even be used as a complete off-line task-specification programming interface. Using a predictive virtual environment and giving more intelligence to the robot provides a higher level of interaction, which avoids the "cognitive fatigue" associated with many teleoperated systems.

The most important novel contributions included in this work are the following:

1. Automatic Object Recognition: The system is able to recognize the objects in the robot scenario by using a camera as input. This feature allows the user to interact with the robot using high-level commands like "Grasp allen".

2. Incremental Learning: Because the object recognition procedure requires some training before operating efficiently, the UJI Online Robot introduces an incremental learning capability: the robot is always learning from user interaction, so the object recognition module performs better as time goes by.

3. Autonomous Grasping: Once an object has been recognized in a scene, the next question is: how can we grasp it? The autonomous grasping module calculates the set of possible grasping points that can be used to manipulate an object according to stability requirements.

4. Non-Immersive Virtual Reality: In order to mitigate Internet latency and time-delay effects, the system offers a user interface based on non-immersive virtual reality. Taking the camera data as input, a 3D virtual-reality scenario is constructed, which allows tasks to be specified and then confirmed to the real robot in one step.

5. Augmented Reality: The 3D virtual scenario is complemented with computer-generated information that helps enormously to improve human performance (e.g. projections of the gripper over the scene, superposition of data in order to avoid robot occlusions, etc.). In some situations the user has more information when controlling the robot from the web-based user interface than when viewing the robot scenario directly.

6. Task Specification: The system permits specifying complete "Pick & Place" actions, which can be saved into a text file. This robot programming can be accomplished in both off-line and on-line modes.

7. Speech Recognition/Synthesis: To our knowledge this is the first online robot that allows the user to give high-level commands by simply using a microphone. Moreover, the speech synthesizer is integrated into the predictive display, in such a way that the robot responds to the user and asks him/her for confirmation before sending the command to the real scenario.

As explained in Chapter I, the novel contributions have been partially published in several scientific forums (journals, books, etc.). The most remarkable are, for example, the acceptance of two papers at the IEEE International Conference on Robotics and Automation 2002, and the publication of an extended article in the Special Issue on web telerobotics of the International Journal on Robotics and Automation (November 2002).

We have proved the worth of the system by means of an application in the education and training domain. Almost one hundred undergraduate students have used the web-based interface to program "Pick and Place" operations, and the results are really encouraging (refer to Chapter VII for more details). Although we refer to the project as "The UJI Online Robot", in the education and training domain the term "The UJI Telerobotic Training System" has been used instead.

Further work is planned to focus on applying remote visual servoing techniques in order to improve the current system performance. This would avoid having to spend long nights calibrating the robot and the cameras, as well as extending the system's capabilities to work in less structured environments.
8

High performance computing on biological sequence alignment

Orobitg Cortada, Miquel 17 April 2013
Multiple Sequence Alignment (MSA) is a powerful tool for important biological applications. MSAs are computationally difficult to calculate, and most formulations of the problem lead to NP-hard optimization problems. To perform large-scale alignments, with thousands of sequences, new challenges need to be resolved to adapt MSA algorithms to the High-Performance Computing era. In this thesis we propose three different approaches to solve some limitations of the main MSA methods. The first proposal consists of a new guide-tree construction algorithm that improves the degree of parallelism in order to resolve the bottleneck of the progressive alignment stage. The second proposal consists of optimizing the consistency library, improving the execution time and the scalability of MSA so that the method can treat more sequences. Finally, we propose Multiple Trees Alignment (MTA), an MSA method that aligns multiple guide trees in parallel, evaluates the alignments obtained, and selects the best one as the result. The experimental results demonstrate that MTA considerably improves the quality of the alignments.
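Why guide-tree shape governs the parallelism of progressive alignment can be seen with a back-of-the-envelope sketch. This is an illustration of the bottleneck under the simplifying assumption of perfectly chained versus perfectly balanced trees, not the construction algorithm proposed in the thesis:

```python
# Sketch: count the sequential alignment levels forced by two extreme
# guide-tree shapes over n sequences. A chained (caterpillar) tree
# serializes all n-1 pairwise alignments; a balanced tree lets each
# level's subalignments run concurrently, leaving only O(log n) levels.

def chained_depth(n: int) -> int:
    """Sequential steps in a fully chained guide tree."""
    return n - 1

def balanced_depth(n: int) -> int:
    """Sequential levels in a perfectly balanced guide tree."""
    d = 0
    while n > 1:
        n = (n + 1) // 2  # each level halves the number of subalignments
        d += 1
    return d
```

For a thousand sequences the gap is roughly 999 sequential steps versus 10 levels, which is the headroom a parallelism-aware guide-tree construction tries to expose.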
9

An adaptive admission control and load balancing algorithm for a QoS-aware Web system

Gilly de la Sierra-Llamazares, Katja 16 November 2009
The main objective of this thesis is the design of an adaptive algorithm for admission control and content-aware load balancing for Web traffic. To set the context of this work, several reviews are included to introduce the reader to the background concepts of Web load balancing, admission control, and the Internet traffic characteristics that may affect the performance of a Web site. The admission control and load balancing algorithm described in this thesis manages the distribution of traffic to a Web cluster based on QoS requirements. The goal of the proposed scheduling algorithm is to avoid situations in which the system provides lower performance than desired due to server congestion. This is achieved through forecasting calculations. Obviously, the increased computational cost of the algorithm results in some overhead, which is why we design an adaptive time-slot scheduling that sets the execution times of the algorithm depending on the burstiness of the traffic arriving at the system. The proposed predictive scheduling algorithm therefore includes adaptive overhead control.

Once the scheduling of the algorithm is defined, we design the admission control module based on throughput predictions. The results obtained by several throughput predictors are compared, and one of them is selected for inclusion in our algorithm. The utilisation level that the Web servers will have in the near future is also forecasted and reserved for each service depending on the Service Level Agreement (SLA). Our load balancing strategy is based on a classical policy; hence, a comparison of several classical load balancing policies is also included in order to determine which of them best fits our algorithm. A simulation model has been designed to obtain the results presented in this thesis.
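A minimal sketch of the prediction-based admission-control idea follows. The predictor choice (simple exponential smoothing) and the SLA threshold are assumptions made for illustration; the thesis compares several throughput predictors and selects one, which this sketch does not reproduce:

```python
# Sketch: forecast near-future server utilisation with exponential
# smoothing and admit a request only if the forecast plus the request's
# estimated cost stays within the SLA utilisation limit.

class AdmissionController:
    def __init__(self, sla_limit: float, alpha: float = 0.5):
        self.sla_limit = sla_limit   # max utilisation allowed by the SLA
        self.alpha = alpha           # smoothing factor in (0, 1]
        self.predicted = 0.0

    def observe(self, utilisation: float) -> None:
        """Fold a new utilisation sample into the forecast."""
        self.predicted = (self.alpha * utilisation
                          + (1 - self.alpha) * self.predicted)

    def admit(self, request_cost: float) -> bool:
        """Admit only if the forecast leaves room under the SLA."""
        return self.predicted + request_cost <= self.sla_limit
```

Running the predictor only at adaptively spaced time slots, rather than per request, is how the thesis's design keeps the forecasting overhead itself under control.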
10

Multiple cue integration for robust tracking in dynamic environments: application to video relighting

Moreno Noguer, Francesc 01 September 2005
Motion analysis and object tracking have been among the principal foci of attention within the computer vision community over the past two decades. The interest in this research area lies in its wide range of applicability, extending from autonomous vehicle and robot navigation tasks to entertainment and virtual reality applications. Even though impressive results have been obtained on specific problems, object tracking is still an open problem, since available methods tend to be sensitive to several artifacts and non-stationary environment conditions, such as unpredictable target movements, gradual or abrupt changes of illumination, proximity of similar objects, or cluttered backgrounds. Integrating multiple cues has been shown to enhance the robustness of tracking algorithms against such disturbances. In recent years, owing to the increasing power of computers, there has been significant interest in building complex tracking systems that simultaneously consider multiple cues. However, most of these algorithms are based on heuristics and ad hoc rules formulated for specific applications, making it impossible to extrapolate them to new environment conditions. In this dissertation we propose a general probabilistic framework that integrates as many object features as necessary, permitting them to interact mutually in order to obtain a precise estimate of the object's state, and thus a precise estimate of the target position. This framework is then used to design a tracking algorithm, which is validated on several video sequences involving abrupt position and illumination changes, target camouflage, and non-rigid deformations. 
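The idea of fusing several cues into one probabilistic state estimate can be illustrated with a minimal sketch. This is not the dissertation's actual framework (which lets cues interact); it assumes, for simplicity, that the cues are conditionally independent, so the joint likelihood of a tracking hypothesis factorizes into a product of per-cue Gaussian likelihoods. All names and values here are hypothetical:

```python
import math

def gaussian_likelihood(observed, predicted, sigma):
    """Likelihood of an observed cue value given a hypothesis' prediction."""
    d = observed - predicted
    return math.exp(-0.5 * (d / sigma) ** 2)

def fuse_cues(hypotheses, observations, sigmas):
    """Weight each candidate target state by the product of its per-cue
    likelihoods (independence assumption), then normalize to sum to one."""
    weights = []
    for hyp in hypotheses:
        w = 1.0
        for cue, obs in observations.items():
            w *= gaussian_likelihood(obs, hyp[cue], sigmas[cue])
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

# Two candidate target states, each predicting a color score and a 1-D position.
hypotheses = [
    {"color": 0.9, "position": 10.0},
    {"color": 0.2, "position": 14.0},
]
observations = {"color": 0.85, "position": 10.5}
sigmas = {"color": 0.1, "position": 2.0}

weights = fuse_cues(hypotheses, observations, sigmas)
# The first hypothesis matches both cues and receives nearly all the weight.
```

A state estimate whose weight is high under every cue dominates, which is what makes multi-cue fusion robust: a distractor that matches one cue (e.g., color) but not another (e.g., position) is suppressed.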
Among the features used to represent the target, it is worth highlighting the robust parameterization of the target's color in an object-dependent colorspace, which allows the object to be distinguished from the background more clearly than with other colorspaces commonly used in the literature. In the last part of the dissertation, we design an approach for relighting static and moving scenes with unknown geometry. The relighting is performed through an image-based methodology, where the rendering under new lighting conditions is achieved by linear combinations of a set of pre-acquired reference images of the scene illuminated by known light patterns. Since the placement and brightness of the light sources composing such light patterns can be controlled, it is natural to ask: what is the optimal way to illuminate the scene so as to reduce the number of reference images that are needed? We show that the best way to light the scene (i.e., the way that minimizes the number of reference images) is not to use a sequence of single, compact light sources, as is most commonly done, but rather a sequence of lighting patterns given by an object-dependent lighting basis. It is important to note that when relighting video sequences, consecutive images need to be aligned with respect to a common coordinate frame. However, since each frame is generated by a different light pattern illuminating the scene, abrupt illumination changes are produced between consecutive reference images. Under these circumstances, the tracking framework designed in this dissertation plays a central role. Finally, we present several relighting results on real video sequences of moving objects, moving faces, and scenes containing both. In each case, although a single video clip was captured, we are able to relight it again and again, controlling the lighting direction, extent, and color.
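The linear-combination step at the heart of image-based relighting can be sketched as follows. Because light transport is linear, a new lighting condition expressed as a weighted mix of the basis patterns yields an image that is the same weighted mix of the reference images. This is a toy illustration with hypothetical data (images reduced to flat lists of pixel intensities), not the dissertation's pipeline, which additionally chooses an object-dependent lighting basis and aligns frames via tracking:

```python
def relight(reference_images, weights):
    """Render the scene under a new light as a per-pixel weighted sum
    of the pre-acquired reference images (linearity of light transport)."""
    n_pixels = len(reference_images[0])
    out = [0.0] * n_pixels
    for img, w in zip(reference_images, weights):
        for i, pixel in enumerate(img):
            out[i] += w * pixel
    return out

# Three tiny "reference images" (4 pixels each), one per basis light pattern.
refs = [
    [10.0, 20.0, 30.0, 40.0],  # captured under pattern 1
    [ 5.0,  5.0,  5.0,  5.0],  # captured under pattern 2
    [ 0.0, 10.0,  0.0, 10.0],  # captured under pattern 3
]

# New lighting: half of pattern 1 plus all of pattern 3.
image = relight(refs, [0.5, 0.0, 1.0])
# image == [5.0, 20.0, 15.0, 30.0]
```

The question the dissertation addresses is then which basis patterns to capture: a well-chosen, object-dependent basis lets a small set of reference images span the lighting conditions of interest.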
