221

Use of simulators for side-channel analysis: Leakage detection and analysis of cryptographic systems in early stages of development

Veshchikov, Nikita 23 August 2017 (has links) (PDF)
Cryptography is the foundation of modern IT security; it provides algorithms and protocols that can be used for secure communications. Cryptographic algorithms ensure properties such as confidentiality and data integrity. Confidentiality can be ensured using encryption algorithms, which require a secret piece of information called a key. These algorithms are implemented in cryptographic devices. There exist many types of attacks against such cryptosystems; the main goal of these attacks is the extraction of the secret key. Side-channel attacks are among the strongest types of attacks against cryptosystems. Side-channel attacks focus on the attacked device: they measure its physical properties in order to extract the secret key. Thus, these attacks target weaknesses in an implementation of an algorithm rather than the abstract algorithm itself. Power analysis is a type of side-channel attack that can be used to extract a secret key from a cryptosystem through the analysis of its power consumption while the target device executes an encryption algorithm. We can say that the secret information leaks from the device through its power consumption. One of the biggest challenges in the domain of side-channel analysis is the evaluation of a device from the perspective of side-channel attacks, in other words the detection of information leakage. A device can be subject to several sources of information leakage, and it is relatively easy to find just one side-channel attack that works (by exploiting just one source of leakage); however, it is very difficult to find all sources of information leakage, or to show that there is no information leakage in a given implementation of an encryption algorithm. Evaluators use various statistical tests during the analysis of a cryptographic device to check that it does not leak the secret key. However, in order to perform such tests, the evaluation lab needs the device in order to acquire measurements and analyse them. 
Unfortunately, the development process of cryptographic systems is rather long and has to go through several stages. Thus, an information leakage that can lead to a side-channel attack may be discovered by an evaluation lab only at the very last stage, using the final product. In such a case, the whole process has to be restarted in order to fix the issue, which can lead to significant time and budget overheads. The rationale is that developers of cryptographic systems would like to be able to detect issues related to side-channel analysis during the development of the system, preferably in the early stages of its development. However, this is far from being a trivial task, because the end product is not yet available, and side-channel attacks by nature exploit the properties of the final version of the cryptographic device that is actually available to the end user. The goal of this work is to show how simulators can be used for the detection of issues related to side-channel analysis during the development of cryptosystems. This work lists the advantages of simulators compared to physical experiments and suggests a classification of simulators for side-channel analysis. It surveys existing simulators created for side-channel analysis; more specifically, we show that there is a lack of available simulation tools and that, therefore, simulators are rarely used in the domain. We present three new open-source simulators called Silk, Ascold and Savrasca. These simulators work at different levels of abstraction and can be used by developers to perform side-channel analysis of the device during different stages of development of a cryptosystem. We show how Silk can be used during the preliminary analysis and development of cryptographic algorithms using simulations based on high-level source code. 
We used it to compare S-boxes as well as to compare shuffling countermeasures against side-channel analysis. Then, we present the tool called Ascold, which can be used to find side-channel leakage in implementations with a masking countermeasure by analysing the assembly code of the encryption. Finally, we demonstrate how our simulator called Savrasca can be used to find side-channel leakage using simulations based on compiled executable binaries. We use Savrasca to analyse a masked implementation from a well-known contest on side-channel analysis (the 4th edition of the DPA Contest); as a result, we demonstrate that the analysed implementation contains a previously undiscovered information leakage. Throughout this work we also compared the results of our simulated experiments with real experiments on microcontroller implementations, and showed that issues found using our simulators are also present in the final product. Overall, this work emphasises that simulators are very useful for the detection of side-channel leakages in the early stages of development of cryptographic systems. / Option Informatique du Doctorat en Sciences / info:eu-repo/semantics/nonPublished
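The statistical leakage-detection step this abstract describes (evaluators checking that a device "does not leak the secret key") is commonly realised with a fixed-vs-random Welch's t-test (the TVLA methodology). A minimal sketch on simulated Hamming-weight power traces follows; the key byte, plaintexts, noise level, and trace counts are invented for illustration and are not taken from the thesis or its simulators:

```python
import numpy as np

def hamming_weight(x):
    return bin(x).count("1")

rng = np.random.default_rng(0)
KEY = 0x3C  # hypothetical secret key byte

def simulate_trace(plaintext, noise=1.0):
    # Simulated leaky power sample: proportional to the Hamming weight
    # of an intermediate value, plus Gaussian measurement noise.
    return hamming_weight(plaintext ^ KEY) + rng.normal(0.0, noise)

# Fixed-vs-random leakage detection (TVLA style): traces recorded for one
# fixed plaintext vs. traces recorded for uniformly random plaintexts.
n = 2000
fixed = np.array([simulate_trace(0xC3) for _ in range(n)])
random_ = np.array([simulate_trace(int(rng.integers(256))) for _ in range(n)])

def welch_t(a, b):
    # Welch's t-statistic for two samples with unequal variances.
    return (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

t = welch_t(fixed, random_)
# |t| > 4.5 is the conventional threshold for "leakage detected".
print(f"t = {t:.1f}, leakage detected: {abs(t) > 4.5}")
```

Because the simulated samples depend directly on the key-dependent intermediate value, the t-statistic is far above the 4.5 threshold; on a properly masked implementation it would stay below it.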
222

Contribution à l’étude de l’expression aromatique fruitée des vins rouges : Importance du niveau pré-sensoriel dans les interactions perceptives. / Study of red wines fruity aromatic expression : Importance of pre-sensorial level in perceptive interactions.

Cameleyre, Margaux 20 December 2017 (has links)
L’expression aromatique fruitée des vins rouges a été le sujet de nombreuses études qui démontrent qu’au moins une composante de cette expression est le reflet d’interactions perceptives impliquant des esters. La plupart des travaux concernant les interactions perceptives jusqu’à ce jour se sont avérés descriptifs, très peu ayant cherché à déterminer leurs origines. Dans ce but, un outil analytique a été développé afin d’apprécier les changements de volatilité d’esters représentatifs de l’arôme fruité des vins rouges. Ainsi, les coefficients de partage de 9 esters ont pu être déterminés aussi bien dans une solution hydroalcoolique que dans un vin rouge désaromatisé. L’application de cet outil analytique aux interactions perceptives préalablement mises en évidence a permis d'observer des changements de volatilité des esters lors de leur mise en mélange avec d'autres composés volatils en solution. Ces changements de volatilité, synonymes de potentiels effets pré-sensoriels, vont dans le même sens que ceux observés lors de l’analyse sensorielle. L’utilisation d’un verre de dégustation possédant deux compartiments a permis de mettre en lumière le fait que certaines modifications sensorielles pouvaient être expliquées, pour partie au moins, par des effets pré-sensoriels. L'impact olfactif de 5 alcools supérieurs ainsi que de 15 composés issus du bois de chêne a pu être démontré grâce à de nombreuses reconstitutions aromatiques, et leur rôle de masquage de l’arôme fruité des vins rouges a pu être souligné. Le calcul des coefficients de partage des esters a permis de montrer que des changements de volatilité ont lieu au sein de la solution. Ces modifications peuvent être corrélées aux résultats obtenus lors de l’analyse sensorielle. 
Ainsi, il est possible d’expliquer, en partie, les effets de masquage de l’arôme fruité observés grâce aux seuils de détection et aux profils sensoriels, du fait de la diminution de la présence d’esters dans l’espace de tête venant stimuler le dégustateur. Globalement, nos travaux ont permis de mettre en évidence que la mise en mélange en solution de composés volatils pouvait se traduire par la modification de la volatilité des constituants du mélange et que certaines de ces interactions pré-sensorielles pouvaient conditionner l'expression aromatique fruitée due aux esters. / A number of studies have highlighted the perceptual role of esters in the fruity aromatic expression of red wines, demonstrating that it is at least partially due to perceptive interactions. Indeed, many synergistic and masking effects have been brought to light in the past. However, the origin of these interactions remains unknown, although some authors have suggested several levels at which they can take place. To this end, an analytical tool was developed to study the possible occurrence of modifications of ester volatility. The application of this tool allowed the partition coefficients of 9 esters to be determined in dilute alcohol solution and in dearomatized red wine. Building on perceptive interactions previously demonstrated by various authors, the application of this analytical tool highlighted modifications of ester volatility when compounds were mixed together in solution. These modifications support the observations made with sensory analysis, indicating the existence of pre-sensorial effects. The use of a new tool, a tasting glass with two compartments, revealed that these volatility changes may lead to true sensorial modifications. A masking effect on fruity aroma due to 5 higher alcohols as well as 15 oak-wood-derived compounds was highlighted using various aromatic reconstitutions. Calculation of ester partition coefficients showed volatility modifications from the matrix to the gas phase. These data may be correlated with the sensory analysis results. Thus, it is possible to explain, at least partially, the fruity aroma masking effects highlighted through detection thresholds and sensory profiles by the decreased presence of esters in the headspace, and hence a decline in the taster's olfactory stimulation. To conclude, our work showed that mixing volatile compounds in solution may modify the volatility of the molecules, and furthermore highlighted that these pre-sensorial interactions may impact the fruity aromatic expression related to esters.
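The pre-sensorial mechanism this abstract describes — a drop in an ester's gas/matrix partition coefficient lowering its headspace concentration and hence its perceptibility — can be sketched numerically. All figures below (concentrations, thresholds, coefficients) are invented for illustration and are not the thesis's measurements:

```python
def headspace_conc(c_matrix, k_partition):
    """Equilibrium gas-phase concentration: C_gas = K * C_matrix."""
    return k_partition * c_matrix

def odor_activity_value(c_gas, detection_threshold):
    """OAV > 1 suggests the compound can be perceived."""
    return c_gas / detection_threshold

c_matrix = 500.0  # ester concentration in the wine matrix, ug/L (hypothetical)
threshold = 20.0  # detection threshold in the headspace, ug/L (hypothetical)
k_alone = 0.10    # partition coefficient of the ester on its own
k_mixed = 0.04    # lower K when mixed with higher alcohols (pre-sensorial effect)

for label, k in [("ester alone", k_alone), ("with higher alcohols", k_mixed)]:
    c_gas = headspace_conc(c_matrix, k)
    oav = odor_activity_value(c_gas, threshold)
    print(f"{label}: headspace = {c_gas:.0f} ug/L, OAV = {oav:.1f}")
```

With these made-up numbers, mixing pushes the odor activity value from 2.5 down to 1.0, i.e. to the edge of perceptibility — the same direction as the masking effects reported in the sensory profiles.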
223

The New Orleans Voodooscape. Ethnography of Contemporary Voodoo Traditions of New Orleans, Louisiana

Dorsman, Roos 23 September 2021 (has links) (PDF)
In New Orleans, Louisiana, voodoo is omnipresent. There is voodoo in a more religious sense, which is generally more secretive, and there is a highly visible side to voodoo, shown in the many references to voodoo in a commercial or political sense throughout the city. Based on ethnographic fieldwork, this dissertation demonstrates that the criteria that define the boundaries of what is voodoo are debated by practitioners, and that the authenticity of certain events or practices is often internally contested. To include all these debates, the broader concept of ‘voodooscape’ is introduced in this dissertation. The concept of voodooscape is a useful tool for the analysis of voodoo in New Orleans, because it encompasses these debates and the large domain where negotiations on voodoo take place. This dissertation contains ethnographic descriptions of these negotiations, with a focus on the ways in which the ‘voodooscape’ embodies memories of the history of slavery and the ways of coping with these memories. The voodooscape simultaneously mobilizes these memories and ways of coping with them. In a similar way, the voodooscape mobilizes the memories of more recent events, of which Hurricane Katrina and the current violence that gave rise to the Black Lives Matter movement are the most important in New Orleans. The theoretical contribution of this work lies in the introduction of the concept of voodooscape, which allows a nuanced analysis and understanding of voodoo, through which several socially relevant dimensions are displayed and connected, namely: race, politics, music, art, heritage, tourism and commerce. / À la Nouvelle-Orléans, en Louisiane, le Vaudou est omniprésent. On y trouve le Vaudou dans sa signification religieuse, qui est généralement plutôt secrète, et son visage plus visible, qui s'illustre à travers la ville dans les nombreuses références aux incarnations commerciales ou politiques du Vaudou. 
À l'appui d'une enquête ethnographique, cette thèse démontre que les critères qui définissent les frontières de ce qui relève du Vaudou sont débattus par ses différents praticiens, de même qu'ils débattent fréquemment entre eux de l'authenticité de certaines pratiques ou événements. Pour rendre compte de tous ces débats, on a introduit le concept plus large de « Vaudousphère » [voodooscape]. Le concept de Vaudousphère est utile à l'analyse du Vaudou à la Nouvelle-Orléans en ce qu'il incorpore ces débats et les nombreux espaces où prennent place ces négociations sur le Vaudou. Cette thèse inclut des descriptions ethnographiques de ces négociations, en se focalisant sur la manière dont la « Vaudousphère » incarne la mémoire collective de l'histoire de l'esclavage et les stratégies d’accommodation avec cette mémoire. De même, la Vaudousphère mobilise les souvenirs d'événements plus récents, dont les plus importants à la Nouvelle-Orléans sont l'ouragan Katrina et la violence contemporaine qui a conduit à l'émergence du mouvement « Black Lives Matter ». L'apport théorique de ce travail repose sur l'introduction du concept de Vaudousphère qui permet de conduire une analyse nuancée et compréhensive du Vaudou, et à travers lequel plusieurs dimensions sociales pertinentes sont mises en évidence et en connexion, telles que : la race, la politique, la musique, l'art, l'héritage, le tourisme et le commerce. / Doctorat en Sciences politiques et sociales / info:eu-repo/semantics/nonPublished
224

The Human B Cell Response to a Multi-Antigen Complex (Bexsero)

Yalley, Prince 04 July 2019 (has links)
Multi-Antigen-Komplexe wurden in der Vakzinologie als effizientes Modell genutzt, um eine breite Impfstoffabdeckung gegen mehrere Stämme desselben Pathogens zu erzielen. Hier werden die Ergebnisse zur menschlichen B-Zell-Reaktion auf einen Multi-Antigen-Komplex (Bexsero) in drei Impfstoffen (Vax1, Vax2 und Vax3) dargestellt. Bexsero ist ein Impfstoff, der aus vier Antigenen (fHbp-GNA2091, NHBA-GNA1030, NadA und OMV (NZ98-254)) für Neisseria meningitidis (Nm) B besteht. Bei allen drei Impfstoffen konnten außerordentlich diverse Immunglobuline (Ig) beobachtet werden, die als Reaktion auf Bexsero mit einzigartigen Ig-Genselektionsmustern erzeugt wurden. Die Daten zeigen auch Igs, die eine Reihe von Spezifitäten aufweisen (Bexsero-spezifisch-reaktive Igs (nur Vax3) oder polyreaktive Igs (Vax2, Vax3 und Vax4)) und Affinitäten (hochbindende, mäßig bindende, schwach bindende und nicht reaktive Igs). Es wurde keine eindeutige Korrelation zwischen spezifischen Ig-Genmerkmalen und Ig-Reaktivitätseigenschaften beobachtet, obwohl Igs von allen Impfstoffen kollektiv unterschiedliche Affinitäten innerhalb/zwischen Cluster-Igs und zwischen Nicht-Clustern von Bexsero aufweisen, was potenzielle Vorteile für einen breiten Schutz mit sich bringt. Ig-Gen-Merkmale und Antigen-Reaktivitätseigenschaften von Igs, die gegen NHBA (22 Igs), fHbp (2 Igs) und NadA (2 Igs) erzeugt wurden, sind ebenfalls gezeigt. Diese Ig zeigten schwache Bindungsaffinitäten, wenn sie an endogen exprimierten Antigenen auf Nm mc58 getestet wurden, möglicherweise aufgrund eines ungeordneten N-Terminus von NHBA. Es wurde eine Anreicherung von hochmutierten polyreaktiven Ig beobachtet. Es werden unterschiedliche Immunoselektivitätsgrade für die verschiedenen Antigene beobachtet, was auf eine Antigenimmunodominanz sowie auf Hinweise auf eine Epitopmaskierung hindeutet. 
Mit einem kontrollierbaren System von 4 Antigenen eröffnen die Daten die Möglichkeit, die menschliche B-Zell-Reaktion auf Multi-Antigen-Komplexe zu verstehen, und zeigen, dass ein umfassendes Verständnis über die feinen zellulären und humoralen Einzelheiten der Immunantworten des Impfstoffs während klinischer Studien erforderlich ist. / Multi-antigen complexes have been exploited in vaccinology as an efficient model to achieve broad vaccine coverage against multiple strains of the same pathogen. Here, the findings on the human B cell response to a multi-antigen complex (Bexsero) in three vaccinees (Vax1, Vax2 and Vax3) are shown. Bexsero is a vaccine comprising four antigens (fHbp-GNA2091, NHBA-GNA1030, NadA and OMV (NZ98-254)) for Neisseria meningitidis (Nm) B. Immensely diverse immunoglobulins (Igs) (in isotype distribution, IgVH and IgJH gene usage, CDR3 length distribution and clonal selection) generated in response to Bexsero, with unique Ig gene selection patterns in all three vaccinees, were observed. The data also show Igs that exhibit a range of specificities {Bexsero-specific-reactive Igs (Vax3 only) or polyreactive Igs (Vax2, Vax3 and Vax4)} and affinities (highly binding, moderately binding, weakly binding and unreactive Igs). No unique correlation between specific Ig gene features and Ig reactivity properties was observed, although Igs from all vaccinees collectively exhibit varied affinities within/between cluster Igs, and amongst non-clusters, to Bexsero, with potential advantages for broad protection. Ig gene features and antigen-reactivity properties of Igs generated against NHBA (22 Igs), fHbp (2 Igs) and NadA (2 Igs) are also shown. These Igs exhibited weak binding affinities when tested on endogenously expressed antigens on Nm mc58, potentially due to the disordered N-terminus of NHBA. An enrichment of highly mutated polyreactive Igs was observed. Varying degrees of immunoselectivity to the different antigens are observed, suggesting antigen immunodominance as well as epitope masking. With a controllable system of 4 antigens, the data open a potential window to understanding the human B cell response to multi-antigen complexes and evince the need for an expansive understanding of the fine cellular and humoral details of vaccine immune responses during clinical trials.
225

Analyzing Action Masking in the MiniHack Reinforcement Learning Environment

Cannon, Ian 20 December 2022 (has links)
No description available.
226

Représentations géométriques de détails fins pour la simulation d’éclairage

Tamisier, Elsa 10 1900 (has links)
Cotutelle avec l'Université de Poitiers, France / Lors du processus de création d’une image de synthèse photoréaliste, l’objectif principal recherché est de reproduire le transport de la lumière dans un environnement virtuel, en prenant en compte aussi précisément que possible les caractéristiques des objets de la scène 3D. Dans la perception de notre environnement, les détails très fins ont une grande importance sur l’apparence des objets, tels que des rayures sur un morceau de métal, des particules dans du vernis, ou encore les fibres d'un tissu. Il est primordial de pouvoir les reproduire à tout niveau d'échelle. Créer ces détails grâce à des informations géométriques, par exemple un maillage, mène à une trop forte complexité en termes de construction, de stockage, de manipulation et de temps de rendu. Il est donc nécessaire d’utiliser des modèles mathématiques qui permettent d’approcher au mieux les comportements lumineux induits par ces détails. Le travail de cette thèse s'inscrit dans cette problématique de gestion des détails fins par la théorie des microfacettes. En particulier, nous nous sommes intéressés à la notion de masquage-ombrage permettant de calculer la proportion de surface qui est à la fois visible de l’observateur et éclairée. Pour cela, nous étudions le modèle théorique proposé par Smith et par Ashikhmin et al. dans lequel la représentation mathématique est basée sur des contraintes liées à la position des facettes, leur orientation, leur aire et les corrélations entre ces caractéristiques. Nous avons éprouvé le modèle sur plus de 400 maillages 3D reconstruits à partir de surfaces réelles qui ne respectent pas nécessairement les contraintes imposées du modèle. Quelques maillages sont également générés à partir de distributions des orientations de microfacettes de Beckmann et GGX largement utilisées dans les moteurs de simulation académiques et industriels. 
Pour chacun des maillages, une fonction de masquage de référence est mesurée grâce à un algorithme de tracé de rayons. Nous pouvons ainsi comparer le masquage réel d'une microsurface prenant en compte la donnée dans son entièreté, à son masquage théorique calculé seulement par la distribution de ses micronormales. Cette étude met en évidence un lien entre l'erreur du masquage théorique et certaines caractéristiques de la microsurface, telles que sa rugosité, son anisotropie, ou le non-respect des contraintes du modèle. Nous proposons une méthode pour développer un modèle prédictif de l'erreur, calculable à partir de ces caractéristiques et sans avoir recours au lourd processus de tracé de rayons. L'analyse montre également le lien entre l'erreur au niveau du terme de masquage et sa répercussion dans le rendu final d'une image de synthèse. La possibilité de prédire l'erreur grâce à un processus rapide permet d'estimer la complexité de l'usage d'une microgéométrie dans un rendu photoréaliste. Nous complétons nos travaux en proposant un facteur correctif au masquage théorique pour les surfaces isotropes, là encore calculable directement à partir des caractéristiques du maillage. Nous montrons le gain de précision que cette correction apporte, tant au niveau du masquage lui-même qu'au niveau des rendus d'images de synthèse. La thèse est conclue avec une discussion présentant les limites actuelles de notre étude et ses perspectives futures. / During the creation process of a photorealistic image, the main goal is to reproduce light transport in a virtual environment by considering as accurately as possible the characteristics of the surfaces of the 3D scene. In the real world, very fine details may have a tremendous impact on visual appearance. For instance, scratches on metal, particles within varnish, or the fibers of a fabric will visually alter surface appearance. It is therefore crucial to be able to simulate such effects at every level of detail. 
However, creating such microgeometry for a given 3D mesh is a complex task that results in very high memory requirements and computation time. Mathematical models must therefore be used to approximate as precisely as possible the light effects produced by these details. This thesis considers fine details through the microfacet theory, and in particular the masking-shadowing factor, which corresponds to the proportion of the microsurface that is both visible and illuminated. We study the commonly used theoretical model of Smith and of Ashikhmin et al., where the mathematical representation is derived from constraints on microfacet positions, orientations, areas, and the correlations between those characteristics. The model has been confronted with more than 400 3D meshes, built from real-world measured surfaces that do not necessarily fulfill the theory's constraints. Some of them have also been generated from the widely used Beckmann and GGX distributions. For each mesh, the ground-truth masking effect is measured using ray tracing and compared with the theoretical masking computed only from the distribution of micronormals. Our study highlights a connection between the error of the theoretical masking term and certain microsurface characteristics, such as roughness, anisotropy, or non-compliance with the required constraints. We provide a method for deriving a predictive model of this error; the mesh characteristics are sufficient to compute this model without requiring heavy ray-tracing computation. Our analysis shows how the masking error impacts the rendering process. We also derive a model capable of predicting rendering errors from surface characteristics. With the opportunity to predict the error with a fast computation from a 3D mesh, one can estimate the complexity of using a given microgeometry for a photorealistic rendering. Our study concludes with the formulation of a correction function added to the theoretical masking term for isotropic surfaces. 
This correction is computed directly from the 3D mesh characteristics without any ray tracing involved. We show gains in the accuracy of the model when corrected with our formula, both for the masking effect itself and its impact on the exactness of the renderings. This thesis is concluded with a discussion about the current limitations of our study and some future perspectives.
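The theoretical masking term this thesis evaluates has, in the Smith model, a closed form as a function of the viewing angle and the roughness of the microfacet distribution. Below is a sketch of the standard (uncorrected) Smith G1 term for the GGX distribution mentioned in the abstract — not the thesis's corrective factor, and with illustrative angle/roughness values:

```python
import math

def smith_g1_ggx(theta_o, alpha):
    """Smith masking term G1 for the GGX normal distribution.

    theta_o: angle between the outgoing direction and the macroscopic
             normal (radians);
    alpha:   GGX roughness parameter.
    Returns the fraction of the microsurface visible from that
    direction (1.0 = no masking)."""
    if math.isclose(theta_o, 0.0):
        return 1.0  # looking straight down: nothing is masked
    tan2 = math.tan(theta_o) ** 2
    return 2.0 / (1.0 + math.sqrt(1.0 + alpha * alpha * tan2))

# Masking increases (G1 drops) at grazing angles and with roughness.
for alpha in (0.1, 0.5):
    g_normal = smith_g1_ggx(math.radians(10), alpha)
    g_grazing = smith_g1_ggx(math.radians(80), alpha)
    print(f"alpha={alpha}: G1(10deg)={g_normal:.3f}, G1(80deg)={g_grazing:.3f}")
```

A ray-traced reference masking function, as measured in the thesis, would be compared against this analytic value per viewing direction; the discrepancy is the "masking error" the predictive model targets.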
227

Visual attention in primates and for machines - neuronal mechanisms

Beuth, Frederik 09 December 2020 (has links)
Visual attention is an important cognitive concept in the daily life of humans, but it is still not fully understood. As a consequence, it is also rarely utilized in computer vision systems. However, understanding visual attention is challenging, as it has many, seemingly different aspects at both the neuronal and the behavioral level. Thus, it is very hard to give a uniform explanation of visual attention that can account for all aspects. To tackle this problem, this thesis has the goal of identifying a common set of neuronal mechanisms which underlie both the neuronal and the behavioral aspects. The mechanisms are simulated by neuro-computational models, resulting in a single modeling approach that explains a wide range of phenomena at once. In this thesis, the chosen aspects are multiple neurophysiological effects, real-world object localization, and a visual masking paradigm (object substitution masking, OSM). In each of the considered fields, the work also advances the current state of the art to better understand this aspect of attention itself. The three chosen aspects highlight that the approach can account for crucial neurophysiological, functional, and behavioral properties; thus the mechanisms might constitute the general neuronal substrate of visual attention in the cortex. As an outlook, our work provides computer vision with a deeper understanding and a concrete prototype of attention, to incorporate this crucial aspect of human perception in future systems. Contents: 1. General introduction; 2. The state-of-the-art in modeling visual attention; 3. Microcircuit model of attention; 4. Object localization with a model of visual attention; 5. Object substitution masking; 6. General conclusion. / Visuelle Aufmerksamkeit ist ein wichtiges kognitives Konzept für das tägliche Leben des Menschen. Es ist aber immer noch nicht komplett verstanden, so dass es ein langjähriges Ziel der Neurowissenschaften ist, das Phänomen grundlegend zu durchdringen. 
Gleichzeitig wird es aufgrund des mangelnden Verständnisses nur selten in maschinellen Sehsystemen in der Informatik eingesetzt. Das Verständnis von visueller Aufmerksamkeit ist jedoch eine komplexe Herausforderung, da Aufmerksamkeit äußerst vielfältige und scheinbar unterschiedliche Aspekte besitzt. Sie verändert multipel sowohl die neuronalen Feuerraten als auch das menschliche Verhalten. Daher ist es sehr schwierig, eine einheitliche Erklärung von visueller Aufmerksamkeit zu finden, welche für alle Aspekte gleichermaßen gilt. Um dieses Problem anzugehen, hat diese Arbeit das Ziel, einen gemeinsamen Satz neuronaler Mechanismen zu identifizieren, welche sowohl den neuronalen als auch den verhaltenstechnischen Aspekten zugrunde liegen. Die Mechanismen werden in neuro-computationalen Modellen simuliert, wodurch ein einzelnes Modellierungsframework entsteht, welches zum ersten Mal viele und verschiedenste Phänomene von visueller Aufmerksamkeit auf einmal erklären kann. Als Aspekte wurden in dieser Dissertation multiple neurophysiologische Effekte, Realwelt Objektlokalisation und ein visuelles Maskierungsparadigma (OSM) gewählt. In jedem dieser betrachteten Felder wird gleichzeitig der State-of-the-Art verbessert, um auch diesen Teilbereich von Aufmerksamkeit selbst besser zu verstehen. Die drei gewählten Gebiete zeigen, dass der Ansatz grundlegende neurophysiologische, funktionale und verhaltensbezogene Eigenschaften von visueller Aufmerksamkeit erklären kann. Da die gefundenen Mechanismen somit ausreichend sind, das Phänomen so umfassend zu erklären, könnten die Mechanismen vielleicht sogar das essentielle neuronale Substrat von visueller Aufmerksamkeit im Cortex darstellen. Für die Informatik stellt die Arbeit damit ein tiefergehendes Verständnis von visueller Aufmerksamkeit dar. Darüber hinaus liefert das Framework mit seinen neuronalen Mechanismen sogar eine Referenzimplementierung um Aufmerksamkeit in zukünftige Systeme integrieren zu können. 
Aufmerksamkeit könnte laut der vorliegenden Forschung sehr nützlich für diese sein, da sie im Gehirn eine aufgabenspezifische Optimierung des visuellen Systems bereitstellt. Dieser Aspekt menschlicher Wahrnehmung fehlt meist in den aktuellen, starken Computervisionssystemen, so dass eine Integration in aktuelle Systeme deren Leistung sprunghaft erhöhen und eine neue Klasse definieren dürfte. Contents: 1. General introduction; 2. The state-of-the-art in modeling visual attention; 3. Microcircuit model of attention; 4. Object localization with a model of visual attention; 5. Object substitution masking; 6. General conclusion.
228

Use of Coherent Point Drift in computer vision applications

Saravi, Sara January 2013 (has links)
This thesis presents the novel use of Coherent Point Drift (CPD) in improving the robustness of a number of computer vision applications. The CPD approach includes two methods for registering two images - rigid and non-rigid point set approaches - based on the transformation model used. The key characteristic of a rigid transformation is that the distance between points is preserved, which means it can be used in the presence of translation, rotation, and scaling. Non-rigid transformations - or affine transforms - provide the opportunity of registering under non-uniform scaling and skew. The idea is to move one point set coherently to align with the second point set. The CPD method finds both the non-rigid transformation and the correspondence between the two point sets at the same time, without requiring an a priori declaration of the transformation model used. The first part of this thesis is focused on speaker identification in video conferencing. A real-time, audio-coupled but video-based approach is presented, which focuses more on the video analysis side than on the audio analysis that is known to be prone to errors. CPD is effectively utilised for lip movement detection, and a temporal face detection approach is used to minimise false positives if the face detection algorithm fails to perform. The second part of the thesis is focused on multi-exposure and multi-focus image fusion with compensation for camera shake. Scale Invariant Feature Transform (SIFT) keypoints are first detected in the images being fused. Subsequently this point set is reduced to remove outliers, using RANSAC (RANdom SAmple Consensus), and finally the point sets are registered using CPD with non-rigid transformations. The registered images are then fused with a Contourlet-based image fusion algorithm that makes use of a novel alpha blending and filtering technique to minimise artefacts. 
The thesis evaluates the performance of the algorithm in comparison to a number of state-of-the-art approaches, including the key commercial products available on the market at present, showing significantly improved subjective quality in the fused images. The final part of the thesis presents a novel approach to Vehicle Make & Model Recognition (VMMR) in CCTV video footage. CPD is used to effectively remove the skew of detected vehicles, as CCTV cameras are not specifically configured for the VMMR task and may capture vehicles at different approach angles. A LESH (Local Energy Shape Histogram) feature based approach is used for vehicle make and model recognition, with the novelty that temporal processing is used to improve reliability. A number of further algorithms are used to maximise the reliability of the final outcome. Experimental results are provided to prove that the proposed system demonstrates an accuracy in excess of 95% when tested on real CCTV footage with no prior camera calibration.
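Full CPD treats one point set as Gaussian mixture centroids fitted to the other and estimates correspondences by EM; once correspondences are (softly) fixed, its rigid variant reduces to a closed-form least-squares rotation estimate (the Kabsch/Procrustes update). A self-contained sketch of that single ingredient on a toy 2D problem — illustrative only, not the thesis's implementation:

```python
import numpy as np

def kabsch_align(X, Y):
    """Least-squares rigid alignment of point set Y onto X, with known
    one-to-one correspondences (row i of Y matches row i of X).

    Full CPD solves the harder problem where correspondences are unknown,
    estimating them probabilistically via EM; this shows only the rigid
    transform estimation step given (soft or known) matches."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    H = (Y - my).T @ (X - mx)                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    s = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    D = np.diag([1.0] * (X.shape[1] - 1) + [s])
    R = Vt.T @ D @ U.T                            # optimal rotation
    t = mx - R @ my                               # optimal translation
    return R, t

# Toy example: rotate and translate a point set, then recover the motion.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
angle = np.radians(30)
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
Y = X @ R_true.T + np.array([2.0, -1.0])          # moved copy of X
R, t = kabsch_align(X, Y)
print(np.allclose(R @ Y.T + t[:, None], X.T, atol=1e-8))
```

In CPD proper, each EM iteration recomputes soft correspondence weights from the current alignment and then re-solves a weighted version of this update, which is what lets it register without a prior declaration of matches.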
229

Untersuchungen zur Variabilität der Ausbildung hypodermaler Wasserspeichergewebe unter Berücksichtigung variegater Periklinalchimären / Investigations into the variability of the formation of hypodermal water-storage tissues, with consideration of variegated periclinal chimeras

Faßmann, Natalie 09 June 2008 (has links)
The thesis is divided into three parts. The structure "hypodermal water-storage tissue" is considered from anatomical, ecomorphological, and evolutionary perspectives. The presence of a colourless hypodermis complicates the determination of the constitution of the L2 layer during pattern analysis of variegated periclinal chimeras. Variegated periclinal chimeras that form a hypodermis were therefore examined for ways of determining the L2. Various origins of masking patterns are presented, along with the previously undescribed ring cells; both can indicate the genotype of the L2-derived layer. Ring cells are the cells that border the substomatal intercellular space in the region of the guard cells; they form a ring around the guard cells that is visible in a section parallel to the leaf surface. Hypodermal water-storage tissues are found mainly in tropical species. This xeromorphic structure occurs both in epiphytic bromeliads and in the hygromorphic shade plants of the tropical rainforest. The two selection factors drought and light intensity are discussed as possible influences on hypodermis formation, and examples are presented suggesting that light also has a modifying influence on the differentiation of hypodermal cells. The structure "hypodermal water-storage tissue" is equally widespread among monocots and dicots. It is therefore assumed to be an analogous structure that arose several times independently, at different times and in different species. Within a group of closely related species, however, it could be classified as homologous with the aid of the criteria of homology.
230

L'encadrement juridique de la gestion électronique des données médicales. / Legal framework for the electronic management of medical data

Etien-Gnoan, N'Da Brigitte 18 December 2014 (has links)
The electronic management of medical data encompasses both the simple automated processing of personal data and the sharing and exchange of health-related data. Its legal framework is provided both by the rules common to the automated processing of all personal data and by those specific to the processing of medical data. This management, even though it is a source of savings, creates privacy-protection problems that the French government has attempted to address by creating one of the best legal frameworks in the world in this field. However, major projects such as the personal health record are still waiting to be realised, and health law finds itself outpaced and driven along by technological progress. The development of e-health is transforming the traditionally one-to-one relationship between caregiver and patient. The extension of patients' rights, shared liability, the growing number of actors involved, and shared medical confidentiality constitute new challenges that must now be reckoned with. Another crucial question is posed by the lack of harmonisation of legislation, which increases the risks involved in the cross-border sharing of medical data.
