51 |
Impact of task allocation challenges in Global Software Development. Konada, Aditya, January 2023.
Context and Background: Global Software Development (GSD) involves teams located in different geographical locations, time zones, and cultural contexts working together to accomplish a shared goal or complete a software project. Projects carried out by teams in different locations have been found to be riskier and more challenging than those with co-located teams. Knowledge of the potential challenges of task allocation in GSD, and of strategies for addressing them, is therefore important for managing a GSD project successfully. Objective: This thesis aims to identify and tabulate the task allocation challenges in GSD, evaluate the challenges that influence the task allocation process, and synthesize mitigation strategies for the challenges identified. Research Method: A systematic literature review of empirical studies on GSD was conducted, covering publications from 1999 to 2022 and focusing specifically on challenges associated with task allocation in GSD projects. A survey was then conducted to validate the identified challenges and gather suggested mitigation strategies. Results: The review identified a total of 20 challenges related to task allocation in GSD, and mitigation strategies for the 20 identified challenges were gathered through the survey. The challenges and mitigation strategies are discussed in the thesis.
|
52 |
Benchmarking algorithms and methods for task assignment of autonomous vehicles at Volvo Autonomous Solutions. Berglund, Jonas; Gärling, Ida, January 2022.
For unmanned vehicles, autonomy means that the vehicle's route can be planned and executed according to pre-defined rules without human intervention. Autonomous vehicles (AVs) have become a common type of vehicle for various kinds of transport, for example autonomous forklifts in a warehouse environment. Volvo Autonomous Solutions (VAS) works with autonomous vehicles in different areas and initiated this project to better understand how different methods can be used for planning autonomous vehicles. Several problems can be examined to increase the efficiency of AVs. One such problem is the allocation problem, also called Multi-Robot Task Allocation, which aims to determine which vehicle should execute which task so that a global goal is achieved cooperatively. The AVs used by VAS handle Planning Missions (PMs); a PM is, for example, moving goods from a loading point to an unloading point. The problem examined in this study is therefore how to assign PMs to vehicles in the most efficient way. The thesis also includes a collection of publications in the area. The problem is solved using three methods: Mixed Integer Linear Programming (MILP), a Genetic Algorithm originally proposed for task assignment in a warehouse environment (GA Warehouse), and a Genetic Algorithm initially proposed for train scheduling (GA Train). With the MILP method, the problem is formulated mathematically and the method guarantees an optimal solution; its major drawback is the large computational time required to retrieve a solution. The GA Warehouse method has a fairly simple allocation process but a more complicated path planning part and is, in its entirety, not as flexible as the other methods. The GA Train method has a lower computational time and can take many different aspects into account. All three methods generate similar solutions for the limited set of simple scenarios in this study, but an optimal solution can only be guaranteed by the MILP method. Regardless of which method is used, there is always a trade-off: a guaranteed optimal solution at the expense of high computational time, or a result that can be generated quickly but without an optimality guarantee. Which method to use depends on the context, the resources available, and the requirements placed on the solution. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
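As a rough illustration of the MILP formulation mentioned above, the sketch below builds a tiny vehicle-to-mission assignment model with the PuLP library; the cost matrix, the vehicle and mission indices, and the single-vehicle-per-mission constraint are assumptions invented for the example and are not taken from the thesis.

```python
import pulp

# Hypothetical cost matrix: costs[v][m] is the cost for vehicle v to execute
# planning mission m (values invented for the example).
costs = [[4, 7, 3],
         [6, 2, 5]]
vehicles = range(len(costs))
missions = range(len(costs[0]))

prob = pulp.LpProblem("pm_assignment", pulp.LpMinimize)
x = {(v, m): pulp.LpVariable(f"x_{v}_{m}", cat=pulp.LpBinary)
     for v in vehicles for m in missions}

# Objective: minimise the total assignment cost.
prob += pulp.lpSum(costs[v][m] * x[v, m] for v in vehicles for m in missions)

# Each mission is executed by exactly one vehicle.
for m in missions:
    prob += pulp.lpSum(x[v, m] for v in vehicles) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({m: v for (v, m) in x if x[v, m].value() > 0.5})
```

Real instances would add time windows, routing and other constraints, which is typically what drives up the MILP solving times noted in the abstract.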
|
53 |
Strategic Stochastic Coordination and Learning In Regular Network Games. Wei, Yi, 19 May 2023.
Coordination is a desirable feature in many multi-agent systems, such as robotic, social and economic networks, allowing the execution of tasks that would be impossible for individual agents. This thesis addresses two problems in stochastic coordination where each agent makes decisions strategically, taking into account the decisions of its neighbors over a regular network.
In the first problem, we study coordination in a team of strategic agents choosing to undertake one of multiple tasks. We adopt a stochastic framework where the agents decide between two distinct tasks whose difficulty is randomly distributed and partially observed. We show that a Nash equilibrium with a simple and intuitive linear structure exists for diffuse prior distributions on the task difficulties. Additionally, we show that the best response of any agent to an affine strategy profile can be nonlinear when the prior distribution is not diffuse. We then state an algorithm that allows us to efficiently compute a data-driven Nash equilibrium within the class of affine policies.
In the second problem, we assume that the payoff structure of the coordination game corresponds to a single task allocation scenario whose difficulty is perfectly observed. Since there are multiple Nash equilibria in this game, the agents play it repeatedly using a distributed stochastic algorithm known as log-linear learning.
First, we show that this networked coordination game is a potential game. Moreover, we establish that for regular networks, the convergence to a Nash equilibrium depends on the ratio between the task-difficulty parameter and the connectivity degree according to a threshold rule. We investigate via simulations the interplay between rationality and the degree of connectivity of the network. Our results show counter-intuitive behaviors, such as the existence of regimes in which a network with larger connectivity requires less rational agents to converge to the Nash equilibrium with high probability. We also examine the characteristics of both regular and non-regular graphical coordination games using this particular bi-matrix game model. / Master of Science / This thesis focuses on addressing two problems in stochastic coordination among strategic agents in multi-agent systems, such as robotic, social, and economic networks. The first problem studies the coordination among agents when they need to choose between multiple tasks whose difficulties are randomly distributed and partially observed. The thesis shows the existence of a Nash equilibrium with a linear structure for certain prior distributions, and presents an algorithm to efficiently compute a data-driven Nash equilibrium within a specific class of policies. The second problem assumes a single task allocation scenario, whose difficulty is perfectly observed, and investigates the use of a distributed stochastic algorithm known as log-linear learning to converge to a Nash equilibrium. The thesis shows that the convergence to a Nash equilibrium depends on the task-difficulty parameter and the connectivity degree of the network, and explores the influence of the rationality of the agents and the connectivity of the network on the learning process. Overall, the thesis provides insights into the challenges and opportunities in achieving coordination among strategic agents in multi-agent systems.
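For intuition about the log-linear learning dynamics mentioned above, here is a minimal sketch on a ring (2-regular) network with a simple binary coordination payoff; the payoff, the task-difficulty cost c, the rationality parameter beta and the network are illustrative assumptions, not the game analysed in the thesis.

```python
import math
import random

def log_linear_learning(n=20, beta=2.0, c=0.5, steps=5000, seed=0):
    # Agents sit on a ring; each plays action 0 or 1 and earns 1 per matching
    # neighbor, minus an assumed task-difficulty cost c when playing action 1.
    rng = random.Random(seed)
    actions = [rng.randint(0, 1) for _ in range(n)]

    def neighbors(i):
        return [(i - 1) % n, (i + 1) % n]

    def utility(i, a):
        matches = sum(1 for j in neighbors(i) if actions[j] == a)
        return matches - (c if a == 1 else 0.0)

    for _ in range(steps):
        i = rng.randrange(n)                      # one agent revises at a time
        weights = [math.exp(beta * utility(i, a)) for a in (0, 1)]
        actions[i] = 0 if rng.random() < weights[0] / sum(weights) else 1
    return actions

print(log_linear_learning())
```

As beta grows the update approaches a best response, which is the sense in which the rationality parameter interacts with the network's connectivity in the convergence results above.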
|
54 |
Task-Driven Integrity Assessment and Control for Vehicular Hybrid Localization Systems. Drawil, Nabil, 17 January 2013.
Throughout the last decade, vehicle localization has been attracting significant attention in a wide range of applications, including Navigation Systems, Road Tolling, Smart Parking, and Collision Avoidance. To deliver on their requirements, these applications need specific localization accuracy. However, current localization techniques lack the required accuracy, especially for mission critical applications. Although various approaches for improving localization accuracy have been reported in the literature, there is still a need for more efficient and more effective measures that can ascribe some level of accuracy to the localization process. These measures will enable localization systems to manage the localization process and resources so as to achieve the highest accuracy possible, and to mitigate the impact of inadequate accuracy on the target application.
In this thesis, a framework for fusing different localization techniques is introduced in order to estimate the location of a vehicle along with a location integrity assessment that captures the impact of the measurement conditions on the localization quality. Knowledge about estimate integrity allows the system to plan the use of its localization resources so as to match the target accuracy of the application. The framework provides the tools for modeling the impact of the operating conditions on estimate accuracy and integrity; as such, it enables more robust system performance in three steps.
First, localization system parameters are utilized to contrive a feature space that constitutes probable accuracy classes. Due to the strong overlap among accuracy classes in the feature space, a hierarchical classification strategy is developed to address the class ambiguity problem via the class unfolding approach (HCCU). The HCCU strategy is shown to be superior to other hierarchical configurations. Furthermore, a Context Based Accuracy Classification (CBAC) algorithm is introduced to enhance the performance of the classification process. In this algorithm, knowledge about the surrounding environment is utilized to optimize classification performance as a function of the observation conditions.
Second, a task-driven integrity (TDI) model is developed to make the application modules aware of the trust level of the localization output. Typically, this trust level is a function of the measurement conditions; therefore, the TDI model monitors specific parameter(s) of the localization technique and, accordingly, infers the impact of changes in the environmental conditions on the quality of the localization process. A generalized TDI solution is also introduced to handle the cases where sufficient information about the sensing parameters is unavailable.
Finally, the outputs of the employed localization techniques (i.e., location estimates, accuracy, and integrity level assessment) need to be fused. However, these techniques are hybrid and their pieces of information conflict in many situations. Therefore, a novel evidence structure model called the Spatial Evidence Structure Model (SESM) is developed and used to construct a frame of discernment comprising discretized spatial data. SESM-based fusion paradigms are capable of performing the fusion using the information provided by the techniques employed. Both the location estimate accuracy and the aggregated integrity resulting from the fusion demonstrate superiority over the individual localization techniques employed. Furthermore, a context-aware task-driven resource allocation mechanism is developed to manage the fusion process. The main objective of this mechanism is to optimize the usage of system resources and achieve task-driven performance.
Extensive experimental work is conducted on real-life and simulated data to validate models developed in this thesis. It is evident from the experimental results that task-driven integrity assessment and control is applicable and effective on hybrid localization systems.
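To make the idea of weighting localization outputs by their assessed quality concrete, the toy sketch below fuses two position estimates by inverse-variance weighting; this is a generic baseline for intuition only, not the SESM evidence-fusion model or the task-driven integrity framework developed in the thesis, and the numbers are invented.

```python
def fuse(estimates):
    # estimates: list of ((x, y), variance_m2) pairs from different techniques;
    # each estimate is weighted by the inverse of its variance.
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(estimates, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(estimates, weights)) / total
    return (x, y), 1.0 / total      # fused position and its variance

gps = ((12.0, 34.0), 25.0)      # hypothetical GPS fix, large variance (m^2)
ranging = ((11.2, 33.5), 4.0)   # hypothetical cooperative ranging estimate
print(fuse([gps, ranging]))
```

A low-integrity estimate would simply be given a larger variance (or dropped), which is the intuition behind letting integrity assessments steer the fusion.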
|
56 |
Safety-aware autonomous robot navigation, mapping and control by optimization techniques. Lei, Tingjun, 08 December 2023.
The realm of autonomous robotics has seen impressive advancements in recent years, with robots taking on essential roles in various sectors, including disaster response, environmental monitoring, agriculture, and healthcare. As these highly intelligent machines continue to integrate into our daily lives, the pressing imperative is to elevate and refine their performance, enabling them to adeptly manage complex tasks with remarkable efficiency, adaptability, and keen decision-making abilities, all while prioritizing safety-aware navigation, mapping, and control systems. Ensuring the safety-awareness of these robotic systems is of paramount importance in their development and deployment. In this research, bio-inspired neural networks, nature-inspired intelligence, deep learning, heuristic algorithms and optimization techniques are developed for safety-aware autonomous robot navigation, mapping and control. A bio-inspired neural network (BNN) local navigator coupled with dynamic moving windows (DMW) is developed to enhance obstacle avoidance and refine safe trajectories. A hybrid model is proposed to optimize the global path of a mobile robot so that it maintains a safe distance from obstacles, using a graph-based search algorithm combined with an improved seagull optimization algorithm (iSOA). A Bat-Pigeon algorithm (BPA) is proposed for adjustable-speed navigation of autonomous vehicles based on object detection for safety-aware vehicle path planning, automatically adjusting the speed under different road conditions. To perform effective collision avoidance in multi-robot task allocation, a spatial dislocation scheme is developed by introducing an additional dimension that places UAVs at different altitudes, while UAVs at the same altitude avoid collision using a proposed velocity-profile paradigm. A multi-layer robot navigation system is developed to explore row-based environments. A directed coverage path planning (DCPP) method fused with an informative planning protocol (IPP) is proposed to efficiently and safely search the entire workspace. A human-autonomy teaming strategy is proposed to facilitate cooperation between autonomous robots and human expertise for safe navigation to desired areas. Simulations, comparison studies and ongoing experiments applying the optimization algorithms to autonomous robot systems demonstrate the effectiveness, efficiency and robustness of the proposed methodologies.
|
57 |
Investigación de nuevas metodologías para la planificación de sistemas de tiempo real multinúcleo mediante técnicas no convencionales / Research into new methodologies for scheduling multicore real-time systems using non-conventional techniques. Aceituno Peinado, José María, 28 March 2024.
Thesis by compendium of publications. / [EN] Real-time systems are characterised by the demand for temporal constraints that guarantee acceptable operation and feasibility of a system. Especially, in hard real-time systems these temporal constraints must be respected. These systems are typically applied in areas such as aviation, railway safety, satellites and process control, among others. Therefore, a missed deadline in a hard real-time system can lead to a catastrophic failure.
The scheduling of real-time systems is an area where various methodologies, heuristics and algorithms are studied and applied in an attempt to allocate the CPU resources without missing any deadline.
The use of multicore computing systems is an increasingly recurrent option in hard real-time systems. This is due, among other reasons, to its high computational performance thanks to the ability to run multiple processes in parallel.
On the other hand, multicore systems present a new problem: the contention that occurs due to the sharing of hardware resources. The source of this contention is the interference that sometimes happens between tasks allocated to different cores that try to access the same shared resource simultaneously, typically shared memory. This added interference can lead to missed deadlines, and therefore the scheduling would not be feasible.
This paper proposes new non-conventional scheduling methodologies and strategies to provide solutions to the interference problem in multicore systems. These methodologies and strategies include scheduling algorithms, task allocation algorithms, temporal models and schedulability analysis.
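As a point of reference for the schedulability analyses discussed here, the sketch below runs classic fixed-point response-time analysis for fixed-priority tasks with a crude additive interference bound per job; the task set and the bound B are textbook-style assumptions for illustration, not the temporal model or analysis proposed in the thesis.

```python
import math

def response_times(tasks):
    # tasks: list of (C, T, B) in decreasing priority order, with WCET C,
    # period/deadline T and an assumed per-job interference bound B.
    results = []
    for i, (C, T, B) in enumerate(tasks):
        R, prev = C + B, 0.0
        while R != prev and R <= T:
            prev = R
            R = C + B + sum(math.ceil(prev / Tj) * (Cj + Bj)
                            for Cj, Tj, Bj in tasks[:i])
        results.append((R, R <= T))   # response time and schedulability flag
    return results

print(response_times([(1.0, 5.0, 0.2), (2.0, 10.0, 0.4), (3.0, 20.0, 0.6)]))
```

Setting every B to zero recovers the interference-free analysis, which makes explicit how shared-resource contention can turn a feasible task set into an infeasible one.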
The results of this work have been published in several journal articles in the field. These articles present the new proposals that address the challenges of task scheduling. Most of the articles follow a similar structure: the context is introduced, the existing problem is identified, a proposal to solve or improve the scheduling results is presented, experiments are then carried out to evaluate the proposed methodology in practical terms, the results obtained are analysed, and finally conclusions about the proposal are drawn.
The results of the non-conventional methodologies proposed in the articles that comprise this thesis show an improvement in the performance of the scheduling compared to classical algorithms in the area. In particular, the improvement is produced in terms of reducing the interference and a higher schedulability rate. / This thesis was carried out within the framework of two national research projects. One of them is PRECON-I4, which seeks predictable and dependable computer systems for Industry 4.0. The other is PRESECREL, which seeks models and platforms for predictable, safe and dependable industrial computer systems. Both PRECON-I4 and PRESECREL are coordinated projects funded by the Ministry of Science, Innovation and Universities and ERDF funds (AEI/FEDER, UE). The Universidad Politécnica de Valencia, the Universidad de Cantabria and the Universidad Politécnica de Madrid participate in both projects; IKERLAN S. COOP I.P. also participates in PRESECREL. In addition, part of the results of this thesis have served to validate the allocation of temporal resources in critical systems within the framework of the METROPOLIS project (PLEC2021-007609). / Aceituno Peinado, JM. (2024). Investigación de nuevas metodologías para la planificación de sistemas de tiempo real multinúcleo mediante técnicas no convencionales [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/203212 / Compendium
|
58 |
Designing and combining mid-air interaction techniques in large display environments / Conception et combinaisons de techniques d'interaction mid-air dans les environnements à grands écrans. Nancel, Mathieu, 05 December 2012.
Large display environments (LDEs) are interactive physical workspaces featuring one or more static large displays as well as rich interaction capabilities, and are meant to visualize and manipulate very large datasets. Research about mid-air interactions in such environments has emerged over the past decade, and a number of interaction techniques are now available for most elementary tasks such as pointing, navigating and command selection. However these techniques are often designed and evaluated separately on specific platforms and for specific use-cases or operationalizations, which makes it hard to choose, compare and combine them. In this dissertation I propose a framework and a set of guidelines for analyzing and combining the input and output channels available in LDEs. I analyze the characteristics of LDEs in terms of (1) visual output and how it affects usability and collaboration and (2) input channels and how to combine them in rich sets of mid-air interaction techniques.
These analyses lead to four design requirements intended to ensure that a set of interaction techniques can be used (i) at a distance, (ii) together with other interaction techniques and (iii) when collaborating with other users. In accordance with these requirements, I designed and evaluated a set of mid-air interaction techniques for panning and zooming, for invoking commands while pointing and for performing difficult pointing tasks with limited input requirements. For the latter I also developed two methods, one for calibrating high-precision techniques with two levels of precision and one for tuning velocity-based transfer functions. Finally, I introduce two higher-level design considerations for combining interaction techniques in input-constrained environments. Designers should take into account (1) the trade-off between minimizing limb usage and performing actions in parallel that affects overall performance, and (2) the decision and adaptation costs incurred by changing the resolution function of a pointing technique during a pointing task.
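To illustrate what a velocity-based transfer function looks like, the sketch below maps input velocity to cursor gain with a logistic curve between a low and a high gain level; the gain levels, midpoint and slope are arbitrary assumptions for illustration, not the functions calibrated in the dissertation.

```python
import math

def gain(velocity_m_s, g_low=1.0, g_high=8.0, v_mid=0.2, slope=25.0):
    # Logistic interpolation: slow movements get g_low (precision),
    # fast movements get g_high (reach across a wall-sized display).
    return g_low + (g_high - g_low) / (1.0 + math.exp(-slope * (velocity_m_s - v_mid)))

def displacement(velocity_m_s, dt_s=0.01):
    # On-screen displacement produced by one 10 ms input sample.
    return gain(velocity_m_s) * velocity_m_s * dt_s

for v in (0.05, 0.2, 0.5):
    print(f"v={v:.2f} m/s -> gain={gain(v):.2f}, dx={displacement(v) * 1000:.2f} mm")
```

Tuning such a function amounts to choosing the two gain levels and the transition region, which is the kind of calibration problem the abstract refers to.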
|
59 |
Nuevas metodologías para la asignación de tareas y formación de coaliciones en sistemas multi-robot / New methodologies for task allocation and coalition formation in multi-robot systems. Guerrero Sastre, José, 31 March 2011.
This work analyses the suitability of two of the main task allocation methods in environments with temporal constraints. It shows that both types of mechanisms have shortcomings when dealing with tasks with deadlines, especially when the robots must form coalitions. One of the aspects to which this thesis devotes most attention is the prediction of execution time, which depends, among other factors, on the physical interference between robots. This phenomenon has not been taken into account in current auction-based allocation mechanisms.
Thus, this thesis presents the first auction mechanism for coalition formation that takes interference between robots into account. To this end, an execution-time prediction model and a new paradigm called the double auction have been developed. In addition, new mechanisms based on swarm
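For context on auction-based allocation, the sketch below implements a minimal sequential single-item auction in which each task goes to the robot with the lowest travel-cost bid; the coordinates are invented, and the sketch deliberately ignores execution-time prediction, physical interference and the double-auction paradigm introduced in the thesis.

```python
import math

def auction_allocate(robots, tasks):
    # robots: {name: (x, y)}, tasks: {name: (x, y)}; tasks auctioned in order.
    assignment = {}
    positions = dict(robots)
    for task, t_pos in tasks.items():
        bids = {r: math.dist(positions[r], t_pos) for r in robots}  # bid = travel cost
        winner = min(bids, key=bids.get)
        assignment[task] = winner
        positions[winner] = t_pos          # the winner ends up at the task location
    return assignment

print(auction_allocate({"r1": (0, 0), "r2": (5, 5)},
                       {"t1": (1, 1), "t2": (6, 5), "t3": (0, 2)}))
```

Because the bids are pure travel estimates, two winners working in the same area would get in each other's way and run late, which is exactly the effect an execution-time prediction model that accounts for interference is meant to capture.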
|
60 |
Utilisation d'une hiérarchie de compétences pour l'optimisation de sélection de tâches en crowdsourcing / Using hierarchical skills for optimized task selection in crowdsourcing. Mavridis, Panagiotis, 17 November 2017.
A large number of commercial and academic participative applications rely on a crowd to acquire, disambiguate and clean data. These participative applications are widely known as crowdsourcing platforms where amateur enthusiasts are involved in real scientific or commercial projects. Requesters are outsourcing tasks by posting them on online commercial crowdsourcing platforms such as Amazon MTurk or Crowdflower. There, online participants select and perform these tasks, called microtasks, accepting a micropayment in return. These platforms face challenges such as reassuring the quality of the acquired answers, assisting participants to find relevant and interesting tasks, leveraging expert skills among the crowd, meeting tasks' deadlines and satisfying participants that will happily perform more tasks.
However, related work mainly focuses on modeling skills as keywords to improve quality. In this work we formalize skills with a hierarchical structure, a taxonomy, which inherently provides a natural way to substitute tasks that require similar skills and takes advantage of the whole crowd workforce. With extensive synthetic and real datasets, we show that there is a significant improvement in quality when a hierarchical structure of skills is considered instead of pure keywords. We then extend our work to study the impact of a participant's choice given a list of tasks. While our previous solution focused on improving an overall one-to-one matching of tasks and participants, we examine how participants can choose from a ranked list of tasks. Selecting from an enormous list of tasks can be challenging and time-consuming, and has been shown to affect the quality of answers on crowdsourcing platforms. Existing related work on crowdsourcing does not use either a taxonomy or the ranking methods that exist in similar domains to assist participants. We propose a new model that takes advantage of the diversity of the participant's skills and proposes a smart list of tasks, taking their deadlines into account as well. To the best of our knowledge, we are the first to combine task deadlines into an urgency metric with task proposition for knowledge-intensive crowdsourcing. Our extensive synthetic and real experiments show that we can meet deadlines, obtain high-quality answers, and keep participants interested while giving them a choice of well-selected tasks.
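As a toy illustration of ranking tasks by combining a taxonomy-based skill match with deadline urgency, the sketch below scores and sorts two tasks for one participant; the small taxonomy, the match and urgency formulas and the weight alpha are assumptions made for the example, not the model defined in the thesis.

```python
from datetime import datetime, timedelta

# Tiny hypothetical skill taxonomy: child -> parent (None marks the root).
TAXONOMY = {"image labeling": "vision", "vision": "skills",
            "translation": "language", "language": "skills", "skills": None}

def ancestors(skill):
    path = []
    while skill is not None:
        path.append(skill)
        skill = TAXONOMY.get(skill)
    return path

def skill_match(participant_skill, task_skill):
    # Share of the task skill's ancestry covered by the participant's skill path.
    p, t = set(ancestors(participant_skill)), ancestors(task_skill)
    return sum(1 for s in t if s in p) / len(t)

def rank_tasks(participant_skill, tasks, now, alpha=0.5):
    def score(task):
        _, skill, deadline = task
        hours_left = max((deadline - now).total_seconds() / 3600.0, 1e-6)
        urgency = min(1.0 / hours_left, 1.0)
        return alpha * skill_match(participant_skill, skill) + (1 - alpha) * urgency
    return sorted(tasks, key=score, reverse=True)

now = datetime(2017, 11, 1, 12, 0)
tasks = [("label cats", "image labeling", now + timedelta(hours=2)),
         ("translate FAQ", "translation", now + timedelta(hours=48))]
print(rank_tasks("vision", tasks, now))
```

A participant skilled in "vision" is matched most strongly to taxonomy descendants of that skill, while a looming deadline can still pull an otherwise weaker match up the list.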
|