51

The development of a Hardware-in-the-Loop test setup for event-based vision near space objects.

van den Boogaard, Rik January 2023 (has links)
The purpose of this thesis work was to develop a Hardware-in-the-Loop imaging setup that enables experimenting with an event-based and a frame-based camera under simulated space conditions. The generated data sets were used to compare visual navigation algorithms, specifically an event-based and a frame-based feature detection and tracking algorithm. The comparative analysis of these algorithms was used to gain insight into the feasibility of event-based vision near space objects. Event-based cameras differ from frame-based cameras in that they produce an asynchronous, independent stream of events caused by brightness changes at each pixel instead of capturing images at a fixed rate. The setup design is based on a theoretical framework incorporating optical calculations. These calculations indicated that the asteroid model needed to be scaled down by a factor of 3192 to fit inside the camera's depth of view, resulting in a scaled Bennu asteroid with a size of 16.44 centimeters. Three experiments were conducted with the cameras under test to generate data sets. Applying a feature detection and tracking algorithm to both cameras' data sets revealed that the frame-based camera algorithm outperforms the event-based camera algorithm in the absolute number of tracked features, computation time, and robustness across various scenarios. However, when considering the percentage of tracked features relative to the total detected features, the event-based algorithm tracks a significantly higher percentage of features for at least one key frame than the frame-based algorithm. The comparative analysis of the experiments performed under space-simulated conditions during this project showed that the feasibility of an event-based camera relying solely on events is low compared to that of the frame-based camera.
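
A quick consistency check on the quoted scaling (an inference from the numbers above, not a statement from the thesis):

    \[
    d_{\mathrm{model}} = \frac{d_{\mathrm{asteroid}}}{3192}
    \quad\Longrightarrow\quad
    d_{\mathrm{asteroid}} \approx 0.1644\,\mathrm{m} \times 3192 \approx 525\,\mathrm{m},
    \]

which is consistent with Bennu's roughly 500 m diameter.
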
52

Knowledge Transfer for Person Detection in Event-Based Vision

Suihko, Gabriel January 2024 (has links)
This thesis investigates the application of knowledge transfer techniques to process event-based data for person detection in area surveillance. A teacher-student model setup is employed, where both models are pretrained on conventional visual data. The teacher model processes visual images to generate target labels for the student model trained on event-based data, forming the baseline model. Building on this, the project incorporates feature-based knowledge transfer, specifically transferring features from the Feature Pyramid Network (FPN) component of the Faster R-CNN ResNet-50 FPN network. Results indicate that response-based knowledge transfer can effectively fine-tune models for event-based data. However, feature-based knowledge transfer yields mixed results, requiring more refined techniques for consistent improvement. The study identifies limitations, including the need for a more diverse dataset, improved preprocessing methods, labeling techniques, and refined feature-based knowledge transfer methods. This research bridges the gap between conventional object detection methods and event-based data, enhancing the applicability of event cameras in surveillance applications.
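
A minimal sketch of the response-based (teacher-student) setup described above, assuming torchvision's Faster R-CNN ResNet-50 FPN for both networks and a hypothetical time-aligned pairing of event frames with visual frames (an illustration, not the thesis code):

    import torch
    import torchvision

    # Teacher: pretrained on conventional visual data; frozen, inference only.
    teacher = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    teacher.eval()

    def pseudo_labels(visual_frames, score_thresh=0.7):
        """Run the teacher on visual frames and keep confident detections
        as target labels for the student trained on event data."""
        with torch.no_grad():
            outputs = teacher(visual_frames)
        targets = []
        for out in outputs:
            keep = out["scores"] > score_thresh
            targets.append({"boxes": out["boxes"][keep],
                            "labels": out["labels"][keep]})
        return targets

    # Student: same architecture, fine-tuned on event-based representations.
    student = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    student.train()
    optimizer = torch.optim.SGD(student.parameters(), lr=1e-3, momentum=0.9)

    def train_step(event_frames, visual_frames):
        # Response-based transfer: the teacher's detections are the labels.
        targets = pseudo_labels(visual_frames)
        losses = student(event_frames, targets)  # dict of detection losses
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Feature-based transfer would additionally add a distillation loss between the teacher's and student's FPN feature maps, which the abstract reports gave mixed results.
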
53

Toward cooperative management of large-scale virtualized infrastructures: the case of scheduling

Quesnel, Flavien 20 February 2013 (has links)
The increasing need for computing power has been satisfied by federating more and more computers (called nodes) to build so-called distributed infrastructures. Over the past few years, system virtualization has been introduced in these infrastructures (the software is decoupled from the hardware by packaging it in virtual machines), which has led to the development of software managers in charge of operating these virtualized infrastructures. Most of these managers are highly centralized (management tasks are performed by a restricted set of dedicated nodes). This restricts their scalability, in other words their ability to react quickly when managing the increasingly common large-scale infrastructures. During this Ph.D., we studied how to mitigate these concerns; one solution is to decentralize the processing of management tasks, when appropriate. Our work focused in particular on the dynamic scheduling of virtual machines, resulting in the DVMS (Distributed Virtual Machine Scheduler) proposal. We implemented a prototype that was validated by means of simulations (especially with the SimGrid tool) and experiments on the Grid'5000 test bed. We observed that DVMS was very reactive in scheduling tens of thousands of virtual machines distributed over thousands of nodes. We then took an interest in the perspectives for improving and extending DVMS. The final goal is to build a fully decentralized manager, a goal that should be reached through the Discovery initiative, which will build on this work.
54

Event-based architecture for web-based virtual collaborative environments: application to manipulation and visualisation of 3D objects

Desprat, Caroline 01 December 2017 (has links)
Web technology evolutions during the last decades have fostered the development of collaborative virtual environments for 3D design at large scale. Although collaborative environments gather geographically distant users in a common shared space with a shared objective, the hardware resources of their clients (computation, storage, graphics, ...) are often underused because of the challenge that exploiting them represents. It is indeed a matter of offering an easy-to-use, efficient, and transparent collaborative system to the user, supporting both the computational needs and the 3D-design business logic in heterogeneous web environments. To scale well, numerous systems use a network architecture called "hybrid", combining client-server and peer-to-peer. Optimistic replication is well adapted to distributed applications such as 3D collaborative environments: users are numerous and dynamic, and the 3D data involved are large in both number and size. This document presents a model for 3D web-based collaborative editing systems. The model integrates 3DEvent, a client-based architecture that brings the 3D business logic closer to the user in the form of events. Indeed, traceability and history awareness are required during 3D design, especially when several experts are involved in the process; this aspect is intrinsic to the event-sourcing design pattern. The architecture is completed by a peer-to-peer middleware responsible for the synchronisation and consistency of the system. To implement it, we propose to use the recent web standard API WebRTC, close to the cloud development services known to developers. To evaluate the model, two user studies were conducted on several groups of users, concerning its responsiveness and its acceptance by users in the frame of cooperative assembly tasks on 3D models.
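
A minimal illustration of the event-sourcing pattern the abstract invokes (generic sketch, not the 3DEvent code): the authoritative state of a shared 3D scene is the append-only log of edit events, and any replica can rebuild or audit the state by replaying it.

    from dataclasses import dataclass, field

    @dataclass
    class Event:
        kind: str        # e.g. "add", "move"
        object_id: str
        payload: dict

    @dataclass
    class Scene:
        objects: dict = field(default_factory=dict)

        def apply(self, event):
            if event.kind == "add":
                self.objects[event.object_id] = dict(event.payload)
            elif event.kind == "move":
                self.objects[event.object_id]["position"] = event.payload["position"]

    def replay(log):
        """Rebuild the current scene from the full event history."""
        scene = Scene()
        for event in log:
            scene.apply(event)
        return scene

    # The log is the source of truth; replicas exchange events, not state.
    log = [Event("add", "cube1", {"position": (0, 0, 0)}),
           Event("move", "cube1", {"position": (1, 0, 0)})]
    print(replay(log).objects)  # {'cube1': {'position': (1, 0, 0)}}
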
55

RECONSTRUCTION OF HIGH-SPEED EVENT-BASED VIDEO USING PLUG AND PLAY

Trevor D. Moore (5930756) 16 January 2019 (has links)
Event-based cameras, also known as neuromorphic cameras or dynamic vision sensors, are an imaging modality that attempts to mimic the human eye by asynchronously measuring contrast over time. If the contrast changes sufficiently, a 1-bit event is output, indicating whether the contrast has gone up or down. This stream of events is sparse, and its asynchronous nature allows the pixels to have a high dynamic range and high temporal resolution. However, these events do not encode the intensity of the scene, resulting in an inverse problem: estimating intensity images from the event stream. Hybrid event-based cameras, such as the DAVIS camera, provide a reference intensity image that can be leveraged when estimating the intensity at each pixel during an event. Inverse problems are normally solved by formulating a forward model and a prior model and minimizing the associated cost; here, however, the Plug and Play (P&P) algorithm is used. P&P replaces the prior-model subproblem with a denoiser, making the algorithm modular and easier to implement. We propose an idealized forward model that assumes the contrast steps measured by the DAVIS camera are uniform in size, to simplify the problem. We show that the algorithm can swiftly reconstruct the scene intensity at a user-specified frame rate, depending on the chosen denoiser's computational complexity and the selected frame rate.
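
A generic Plug-and-Play ADMM sketch of the kind the abstract describes, with a toy identity forward model and a Gaussian-blur denoiser standing in for a stronger prior; the thesis's idealized DAVIS event model would take the place of A (an illustration, not the thesis implementation):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def pnp_admm(y, A, At, denoise, rho=1.0, iters=50):
        """Plug-and-Play ADMM for y ~ A(x): the prior subproblem of
        classical ADMM is replaced by a call to an off-the-shelf denoiser."""
        x = At(y)
        v = x.copy()
        u = np.zeros_like(x)
        for _ in range(iters):
            # Data-fidelity step: a few gradient steps on
            # ||A(x) - y||^2 / 2 + rho * ||x - (v - u)||^2 / 2
            for _ in range(5):
                grad = At(A(x) - y) + rho * (x - (v - u))
                x = x - 0.1 * grad
            v = denoise(x + u)   # prior step: just call the denoiser
            u = u + x - v        # dual update
        return x

    # Toy usage: identity forward model, Gaussian-blur denoiser.
    A = At = lambda z: z
    denoise = lambda z: gaussian_filter(z, sigma=1.0)
    recon = pnp_admm(np.random.rand(64, 64), A, At, denoise)

Swapping in a heavier denoiser trades reconstruction quality against runtime, which is the frame-rate/complexity trade-off the abstract mentions.
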
56

Using Event-Based and Rule-Based Paradigms to Develop Context-Aware Reactive Applications.

Le, Truong Giang 30 September 2013 (has links) (PDF)
Context-aware pervasive computing has attracted significant research interest from both academia and industry worldwide. It covers a broad range of applications that support many manufacturing and daily-life activities. For instance, industrial robots detect changes in the factory's working environment and adapt their operations to the requirements. Automotive control systems may observe other vehicles, detect obstacles, and monitor the fuel level or the air quality in order to warn the driver in case of emergency. Another example is power-aware embedded systems, which need to operate based on the currently available power or energy, since power consumption is an important issue. Such systems can also be considered smart applications. In practice, successful implementation and deployment of context-aware systems depend on the mechanism for recognizing and reacting to changes in the environment. In other words, we need a well-defined and efficient adaptation approach so that a system's behavior can be dynamically customized at runtime. Moreover, concurrency should be exploited to improve the performance and responsiveness of the systems. All these requirements, along with the need for safety, dependability, and reliability, pose a big challenge for developers. In this thesis, we propose a novel programming language called INI, which supports both the event-based and the rule-based programming paradigm and is suitable for building concurrent, context-aware reactive applications. In our language, both events and rules can be defined explicitly, stand-alone or in combination. Events in INI run in parallel (synchronously or asynchronously) in order to handle multiple tasks concurrently, and they may trigger the actions defined in rules. Besides, events can interact with the execution environment to adjust their behavior if necessary and respond to unpredictable changes. We apply INI in both an academic and an industrial case study, namely an object-tracking program running on the humanoid robot Nao and an M2M gateway. This demonstrates the soundness of our approach as well as INI's capabilities for constructing context-aware systems. Additionally, since context-aware programs are widely applicable and more complex than regular ones, they place a higher demand on quality assurance. Therefore, we formalize several aspects of INI, including its type system and operational semantics. Furthermore, we develop a tool called INICheck, which can convert a significant subset of INI to Promela, the input modeling language of the model checker SPIN. Hence, SPIN can be applied to verify properties or constraints that INI programs must satisfy. Our tool gives programmers assurance about their code and its behavior.
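
A toy illustration of combining the two paradigms in plain Python (INI's actual syntax is defined in the thesis and is not reproduced here): events fire asynchronously, and rules whose conditions match react to them.

    import queue
    import threading

    rules = []

    def rule(predicate):
        """Register an action to run whenever an event satisfies the predicate."""
        def register(action):
            rules.append((predicate, action))
            return action
        return register

    @rule(lambda e: e["type"] == "obstacle" and e["distance"] < 1.0)
    def warn_driver(event):
        print(f"warning: obstacle at {event['distance']} m")

    events = queue.Queue()

    def dispatcher():
        # Events are consumed concurrently with their producers.
        while (event := events.get()) is not None:
            for predicate, action in rules:
                if predicate(event):
                    action(event)

    threading.Thread(target=dispatcher).start()
    events.put({"type": "obstacle", "distance": 0.5})  # triggers warn_driver
    events.put(None)  # shut the dispatcher down
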
57

Tuning of PI controllers for send-on-delta sampling

Hensel, Burkhard 08 December 2017 (has links) (PDF)
Energy efficiency is very important in both research and everyday life. When different electronic devices work together to solve a control task, they have to communicate with each other. An everyday example is radio-based room temperature control, where a battery-powered temperature sensor and a heating actuator (a drive on the heating valve) cooperate over a wireless link. This communication often needs more energy than the operation of the actual (electronic) functionality of the components. Event-based sampling is more energy-efficient than the periodic (equidistant) sampling commonly used in control loops, because it reduces the message rate. Send-on-delta sampling is the most widely known kind of event-based sampling: the value of the controlled variable (e.g., the room temperature) is not transmitted at constant time intervals but only when it has changed by a specified amount. The most widely used controller type in practice, with a share of over 90 %, is the PID controller, although most so-called "PID controllers" do not use the D (derivative) part for several reasons and can therefore be called "PI controllers". This work systematically analyses how the parameters of a PI controller should be tuned so that, besides achieving a high control quality, the benefits of send-on-delta sampling regarding network-load reduction and energy efficiency are exploited as well as possible. The "weighting" of these partially contradictory criteria is adjustable per application.
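
A minimal sketch of send-on-delta sampling feeding a PI controller (toy plant and made-up tuning values, not the tuning rules derived in the thesis):

    class SendOnDeltaSensor:
        """Transmit the measurement only when it has moved by at least delta."""
        def __init__(self, delta):
            self.delta = delta
            self.last_sent = None

        def sample(self, value):
            if self.last_sent is None or abs(value - self.last_sent) >= self.delta:
                self.last_sent = value
                return value   # message sent
            return None        # no transmission: saves network load and energy

    class PIController:
        def __init__(self, kp, ki):
            self.kp, self.ki = kp, ki
            self.integral = 0.0

        def update(self, error, dt):
            self.integral += error * dt
            return self.kp * error + self.ki * self.integral

    # Toy closed loop: first-order room model, setpoint 21 °C, delta = 0.1 K.
    # For simplicity the PI update uses the fixed step dt rather than the
    # elapsed time between messages.
    sensor, pid = SendOnDeltaSensor(delta=0.1), PIController(kp=2.0, ki=0.1)
    temp, u, dt, sent = 15.0, 0.0, 1.0, 0
    for _ in range(600):
        temp += dt * (0.01 * u - 0.005 * (temp - 15.0))  # heating vs. losses
        m = sensor.sample(temp)
        if m is not None:          # the controller only acts on new messages
            u = max(0.0, pid.update(21.0 - m, dt))
            sent += 1
    print(f"final temperature {temp:.2f} °C after {sent} transmissions")

The transmission count stays well below the 600 loop iterations because updates are only sent when the temperature crosses the delta threshold; choosing kp, ki, and delta together is exactly the trade-off the thesis studies.
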
58

Models and algorithms to study the common evolutionary history of hosts and symbionts

Urbini, Laura 23 October 2017 (has links)
In this Ph.D. work, we proposed models and algorithms to study the common evolutionary history of hosts and symbionts. The first goal was to analyse the robustness of phylogenetic tree reconciliation methods, which are a common way of performing such a study. Reconciliation maps one tree, most often the symbiont's, onto the other using a so-called event-based model; the events generally considered are cospeciation, duplication, host switch, and loss. The host and symbiont phylogenies are usually taken as given and error-free. The objective here was to understand the strengths and weaknesses of the parsimonious model used in such mappings, and how the final results may be influenced when small errors are present in, or are introduced into, the input datasets. Such errors may correspond either to a wrong choice of present-day symbiont-host associations, in the case where multiple ones exist, or to a wrong rooting of the symbiont tree. Our results show that the choice of leaf associations and of root placement may have a strong impact on the variability of the reconciliation output. We also noticed that the host-switch event plays an important role, in particular for the rooting problem. The second goal of this Ph.D. was to introduce some events that are little or not formally considered in the literature. One of them is the spread, which corresponds to the invasion of different hosts by a same symbiont. In this case, as when spreads are not considered, the optimal reconciliations obtained depend on the costs chosen for the events, so the need to develop statistical methods to assign the most appropriate costs remains of actuality. Two types of spread are introduced: vertical and horizontal. The first corresponds to what could also be called a freeze, in the sense that the evolution of the symbiont "freezes" while the symbiont continues to be associated with a host and with the new species that descend from this host. The second combines an invasion, where the symbiont remains with the initial host but at the same time becomes associated with ("invades") another host incomparable with the first, and a double freeze, as the evolution of the symbiont "freezes" relative to the evolution of both the host to which it was initially associated and the one it "invaded". Our results show that introducing these events makes the model more realistic, and that it is now possible to directly use datasets in which a symbiont is associated with more than one host at the same time, which was not feasible before.
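
A toy illustration of the cost-based parsimony criterion underlying such reconciliations (hypothetical event counts and costs; real reconciliation tools search the space of tree mappings rather than scoring two candidates by hand):

    # Conventional setup: cospeciation is cheap, the other events are penalized.
    COSTS = {"cospeciation": 0, "duplication": 2, "host_switch": 3, "loss": 1}

    def parsimony_score(event_counts):
        """Cost of a candidate reconciliation = sum of its weighted event counts."""
        return sum(COSTS[e] * n for e, n in event_counts.items())

    # Two candidate mappings of the same symbiont tree onto the host tree:
    candidate_a = {"cospeciation": 5, "duplication": 1, "host_switch": 0, "loss": 3}
    candidate_b = {"cospeciation": 4, "duplication": 0, "host_switch": 2, "loss": 0}
    best = min((candidate_a, candidate_b), key=parsimony_score)  # a scores 5, b scores 6

As the abstract notes, which reconciliation comes out optimal depends entirely on the chosen costs, hence the call for statistical methods to assign them.
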
59

Investigating the role of personality on prospective memory performance in young adults using a multi-trait multi-method approach

Talbot, Karley-Dale 31 August 2020 (has links)
Prospective memory (PM) refers to a person's ability to remember to do something in the future. It is a complex behaviour that is essential for the daily functioning of young and old alike. Despite its importance in everyday life, few studies have examined the role of personality in PM performance in young adults using a multi-trait multi-method approach. The current study investigated the differential roles of the Big Five personality traits in event- and time-based PM performance using multiple measurement methods. In addition, the study aimed to add to the PM and personality literature by addressing several methodological limitations identified by Uttl and colleagues (2013). Results demonstrated few strong relationships among performance indicators of the PM subtypes (event- and time-based), though performance on the lab-based event-based PM task was stronger than on the lab-based time-based PM task even after controlling for ongoing task performance. Participants also performed better on lab-based than on naturalistic PM tasks. Naturalistic and self-report PM measures were significantly related to each other, but not to lab-based PM. Regarding personality, the relationship between specific traits and PM performance differed depending on the PM subtype and/or measurement method; conscientiousness, memory-aid strategy use, and substance use were found to best predict self-reported PM errors in daily life. The study demonstrated that each PM measurement method taps into different aspects of behavioural and cognitive functioning. Without all three measurement methods, and without considering the individuality of the client, researchers and clinicians may overlook important factors contributing to poorer performance in individuals with true PM difficulties.
60

Generative Models of Link Formation and Community Detection in Continuous-Time Dynamic Networks

Arastuie, Makan January 2020 (has links)
No description available.
