11

On the Relevance of Temporal Information in Multimedia Forensics Applications in the Age of A.I.

Montibeller, Andrea 24 January 2024 (has links)
The proliferation of multimedia data, including digital images and videos, has led to an increase in their misuse, such as the unauthorized sharing of sensitive content, the spread of fake news, and the dissemination of misleading propaganda. To address these issues, the research field of multimedia forensics has developed tools to distinguish genuine multimedia from fakes and to identify the sources of shared sensitive content. However, the accuracy and reliability of multimedia forensics tools are threatened by recent technological advancements in multimedia processing software and camera devices. For example, source attribution involves attributing an image or video to a specific camera device, which is crucial for addressing privacy violations, cases of revenge porn, and instances of child pornography. These tools exploit forensic traces unique to each camera's manufacturing process, such as Photo Response Non-Uniformity (PRNU). Nevertheless, image and video processing transformations can disrupt the consistency of PRNU, necessitating the development of new methods for its recovery. Conversely, to distinguish genuine multimedia from fakes, AI-based image and video forgery localization methods have emerged; however, they constantly face challenges from newer, more sophisticated AI forgery techniques and are hindered by factors such as AI-aided post-processing and, in the case of videos, lower resolutions and stronger compression. This doctoral study investigates the relevance of exploiting temporal information in two settings: the estimation of the parameters needed to reverse complex spatial transformations for source attribution, and video forgery localization in low-resolution, post-processed, H.264-compressed inpainted videos. Two novel methods are presented that model, as time series, the sets of parameters involved in reversing complex in-camera and out-camera spatial transformations applied to images and videos, improving source attribution accuracy and computational efficiency. Regarding video inpainting localization, a novel dataset of videos inpainted and post-processed with Temporal Consistency Networks is introduced, and we present a solution that improves video inpainting localization by taking into account spatial and temporal inconsistencies at the dense optical flow level. The research presented in this dissertation has resulted in several publications that contribute to the field of multimedia forensics, addressing challenges related to source attribution and video forgery localization.
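For readers unfamiliar with PRNU-based source attribution, the sketch below shows the classical fingerprint-estimation and correlation test in simplified form. It is background to this abstract, not the thesis's method: the Gaussian denoiser and the plain normalized correlation are stand-ins for the wavelet denoising and Peak-to-Correlation-Energy statistic commonly used in practice.

```python
# Minimal sketch of classical PRNU source attribution (background only,
# not the thesis's method). Assumes grayscale float images of equal size.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """Residual W = I - F(I), where F is a (here: Gaussian) denoising filter."""
    return img - gaussian_filter(img, sigma=1.0)

def estimate_fingerprint(images):
    """Maximum-likelihood-style estimate: K ~ sum(W_i * I_i) / sum(I_i^2)."""
    num = sum(noise_residual(i) * i for i in images)
    den = sum(i * i for i in images) + 1e-8
    return num / den

def correlation(img, fingerprint):
    """Normalized correlation between the test residual and I * K."""
    w = noise_residual(img).ravel()
    s = (img * fingerprint).ravel()
    w, s = w - w.mean(), s - s.mean()
    return float(w @ s / (np.linalg.norm(w) * np.linalg.norm(s) + 1e-8))

# Attribution: accept the camera hypothesis if the correlation exceeds a
# threshold calibrated on images taken with other devices.
```

Geometric transformations (cropping, scaling, stabilization) misalign the residual against the fingerprint, which is why such a test fails unless the transformation parameters are first estimated and reversed, the step the thesis accelerates by modeling those parameters as time series.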
12

From timed component-based systems to time-triggered implementations: a correct-by-design approach

Guesmi, Hela 27 October 2017 (has links)
In hard real-time embedded systems, design and specification methods and their associated tools must allow the development of temporally deterministic systems in order to ensure their safety. To achieve this goal, we are specifically interested in methodologies based on the Time-Triggered (TT) paradigm. This paradigm preserves a number of properties by construction, in particular end-to-end real-time constraints.
However, ensuring the correctness and safety of such systems remains a challenging task. Existing development tools do not guarantee by construction that the specification is respected; a posteriori verification of the application is therefore generally a must. With the increasing complexity of embedded applications, their a posteriori validation becomes, at best, a major factor in development costs and, at worst, simply impossible. It is therefore necessary to define a method that allows the development of correct-by-construction systems while simplifying the specification process. High-level component-based design frameworks that allow the design and verification of hard real-time systems are very good candidates for structuring the specification process as well as verifying the high-level model.
The goal of this thesis is to couple a high-level component-based design approach based on the BIP (Behaviour-Interaction-Priority) framework with a safety-oriented real-time execution platform implementing the TT approach (the PharOS real-time operating system). To this end, we propose an automatic transformation process from BIP models into applications for the target platform (i.e., PharOS). The process consists of a two-step semantics-preserving transformation. The first step transforms a BIP model, coupled with a user-defined task mapping, into a restricted one that lends itself well to an implementation based on TT communication primitives. The second step transforms the resulting model into the TT implementation provided by the PharOS RTOS.
We provide a tool flow that automates most of the steps of the proposed approach and illustrate its use on two industrial case studies: a flight simulator application and a medium-voltage protection relay application. In both applications, we compare the functionalities of the original, intermediate, and final models in order to confirm the correctness of the transformation. For the first application, we study the impact of the task mapping on the generated implementation. For the second application, we study the impact of the transformation on some performance aspects, compared with a manually written version.
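To make the time-triggered idea concrete, here is a minimal sketch (purely illustrative, unrelated to PharOS internals) of dispatching tasks from a precomputed static table; fixing all release instants offline is what makes the temporal behavior deterministic and reproducible.

```python
# Minimal sketch of time-triggered dispatching: tasks fire at statically
# planned instants from a dispatch table. Task names and timings are
# illustrative assumptions, not taken from PharOS.
import time

def sense():   print("sense")
def compute(): print("compute")
def actuate(): print("actuate")

# Static dispatch table: (release offset in ms within the cycle, task).
SCHEDULE = [(0, sense), (10, compute), (20, actuate)]
CYCLE_MS = 30

def tt_dispatcher(cycles):
    start = time.monotonic()
    for n in range(cycles):
        for offset_ms, task in SCHEDULE:
            release = start + (n * CYCLE_MS + offset_ms) / 1000.0
            # Sleep until the statically planned release instant.
            time.sleep(max(0.0, release - time.monotonic()))
            task()

tt_dispatcher(cycles=3)
```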
13

Centralized and Distributed Implementations of Correct-by-Construction Component-Based Systems by Source-to-Source Transformations in BIP

Jaber, Mohamad 28 October 2010 (has links) (PDF)
The thesis studies theory and methods for automatically generating centralized and distributed implementations from a high-level model of an application software in BIP. BIP (Behavior, Interaction, Priority) is a component framework with formal operational semantics. Coordination between components is achieved by using multiparty interactions and dynamic priorities for scheduling interactions. A key idea is to use a set of correct source-to-source transformations that preserve the functional properties of a given application software. By applying these transformations, we can generate a full range of implementations, from centralized to fully distributed.
Centralized implementation: the implementation method transforms the interactions of an application software described in BIP and generates a functionally equivalent program. The method is based on the successive application of three types of source-to-source transformations: flattening of components, flattening of connectors, and composition of atomic components. We show that the system of transformations is confluent and terminating. By exhaustive application of the transformations, any BIP component can be transformed into an equivalent monolithic component, from which efficient standalone C++ code can be generated.
Distributed implementation: the implementation method transforms an application software described in BIP, for a given partition of its interactions, into a Send/Receive BIP model. Send/Receive BIP models consist of components coordinated by asynchronous message passing (Send/Receive primitives). The method leads to 3-layer architectures. The bottom layer includes the components of the application software, where atomic strong synchronization is implemented by sequences of Send/Receive primitives. The second layer includes a set of interaction protocols; each protocol handles the interactions of one class of the given partition. The third layer implements a conflict resolution protocol used to resolve conflicts between conflicting interactions of the second layer. Depending on the given partition, the execution of the obtained Send/Receive BIP model ranges from centralized (all interactions in the same class) to fully distributed (each class has a single interaction). From Send/Receive BIP models and a given mapping of their components onto a platform providing Send/Receive primitives, an implementation is automatically generated: for each class of the partition we generate C++ code implementing the global behavior of its components. The transformations have been fully implemented and integrated into the BIP tool-set. Experimental results on non-trivial examples and case studies show the novelty and efficiency of our approach.
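As an illustration of the coordination model, the following minimal sketch (a toy under simplifying assumptions, not the BIP tool-set) shows components exposing ports and a centralized engine firing one enabled multiparty interaction per step, in priority order:

```python
# Toy sketch of BIP-style coordination: an interaction is a set of
# (component, port) pairs that must all be enabled to fire together.
class Component:
    def __init__(self, name, transitions, state):
        self.name, self.transitions, self.state = name, transitions, state
    def enabled(self, port):
        # Is this port enabled in the current state?
        return (self.state, port) in self.transitions
    def fire(self, port):
        # Take the transition labeled by this port.
        self.state = self.transitions[(self.state, port)]

producer = Component("prod", {("idle", "put"): "idle"}, "idle")
consumer = Component("cons", {("wait", "get"): "wait"}, "wait")

# Lower index = higher priority in this toy engine.
interactions = [
    [(producer, "put"), (consumer, "get")],   # rendezvous transfer
]

def engine_step():
    for inter in interactions:                # scan in priority order
        if all(c.enabled(p) for c, p in inter):
            for c, p in inter:
                c.fire(p)
            return [f"{c.name}.{p}" for c, p in inter]
    return None

print(engine_step())   # ['prod.put', 'cons.get']
```

The distributed implementation described above replaces exactly this centralized rendezvous with sequences of Send/Receive messages plus a conflict-resolution layer.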
14

High-level programming languages translator

Tonchev, Ognyan, Salih, Mohammed January 2008 (has links)
This paper discusses a high-level language translator. If we divide translators of programming languages into two types, those working for two specific languages and universal translators that can translate between different programming languages, the solution presented in this work can be classified as both: specific-language oriented and universal. For the purposes of the research it was limited to translating only from Java to C++, but it can easily be extended to translate between any other high-level languages. To simplify the process of translation, the project uses an intermediate step: all programs in the input language are first compiled to an abstract XML language and then to the desired output language. That way it is not necessary to translate directly from one programming language to another, which is a tricky and difficult task and would make the solution hard to maintain and extend. Hence the translator can also be used to translate from any high-level language to XML, which gives another advantage to our solution: an XML representation of a computer program is valuable information by itself. We describe the design and implementation of the solution, demonstrate how it works, and give information on how it can be extended to work for any other programming language.
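To illustrate the two-step scheme, here is a minimal sketch of translating a toy Java declaration into an XML intermediate form and then into C++; the element and attribute names are invented for illustration and are not the schema used in the paper.

```python
# Sketch of source -> abstract XML AST -> target, for a single toy
# declaration form. Real translators parse the full grammar; the XML
# vocabulary here ("decl", "type", "name", "value") is hypothetical.
import xml.etree.ElementTree as ET

def java_decl_to_xml(line):
    # e.g. "int x = 5;"  ->  <decl type="int" name="x" value="5" />
    typ, name, _eq, value = line.rstrip(";").split()
    return ET.Element("decl", {"type": typ, "name": name, "value": value})

def xml_to_cpp(node):
    # Emit C++ from the language-neutral XML node.
    if node.tag == "decl":
        return f'{node.get("type")} {node.get("name")} = {node.get("value")};'
    raise ValueError(f"unsupported node: {node.tag}")

xml_ast = java_decl_to_xml("int x = 5;")
print(ET.tostring(xml_ast).decode())   # <decl type="int" name="x" value="5" />
print(xml_to_cpp(xml_ast))             # int x = 5;
```

Adding a new output language only requires a new `xml_to_<language>` back-end over the same XML vocabulary, which is the extensibility argument the paper makes.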
15

A source-to-source compiler for the PRAM language Fork to the REPLICA many-core architecture

Zhou, Cheng January 2012 (has links)
This thesis describes the implementation of a source-to-source compiler that translates the Fork language to the REPLICA baseline language. Fork is a high-level programming language designed for the PRAM (Parallel Random Access Machine) model. The baseline language is a low-level parallel programming language for the REPLICA architecture, which implements the PRAM computing model. To support the Fork language on REPLICA, a compiler that translates Fork to baseline was built, in compatibility with the Fork implementation for SB-PRAM. Moreover, the libraries that support Fork's features were built in the baseline language. The evaluation verifies that the features of the Fork language are supported by the implementation; it also shows the scalability of our implementation and that the overhead introduced by the Fork-to-baseline translation is small.
16

SkePU 2: Language Embedding and Compiler Support for Flexible and Type-Safe Skeleton Programming

Ernstsson, August January 2016 (has links)
This thesis presents SkePU 2, the next generation of the SkePU C++ framework for programming of heterogeneous parallel systems using the skeleton programming concept. SkePU 2 is presented after a thorough study of the state of parallel programming models, frameworks and tools, including other skeleton programming systems. The advancements in SkePU 2 include a modern C++11 foundation, a native syntax for skeleton parameterization with user functions, and an entirely new source-to-source translator based on Clang compiler front-end libraries. SkePU 2 extends the functionality of SkePU 1 by embracing metaprogramming techniques and C++11 features, such as variadic templates and lambda expressions. The results are improved programmability and performance in many situations, as shown in both a usability survey and performance evaluations on high-performance computing hardware. SkePU’s skeleton programming model is also extended with a new construct, Call, unique in the sense that it does not impose any predefined skeleton structure and can encapsulate arbitrary user-defined multi-backend computations. We conclude that SkePU 2 is a promising new direction for the SkePU project, and a solid basis for future work, for example in performance optimization.
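To convey the skeleton-programming concept for readers new to it, the sketch below shows a Map skeleton parameterized by a user function, with the execution strategy hidden behind the skeleton. SkePU 2 itself is a C++11 framework targeting heterogeneous backends; this Python rendition and its backend names are purely illustrative.

```python
# Sketch of the skeleton idea: the user supplies only the per-element
# function; the skeleton owns the parallelization strategy. Backend
# names are illustrative assumptions, not SkePU's API.
from multiprocessing.dummy import Pool   # thread-backed Pool

def Map(user_function, backend="sequential"):
    """Return a callable applying user_function elementwise to a container."""
    def run(data):
        if backend == "threads":
            with Pool() as pool:
                return pool.map(user_function, data)
        return [user_function(x) for x in data]   # sequential fallback
    return run

square = Map(lambda x: x * x, backend="threads")
print(square([1, 2, 3, 4]))   # [1, 4, 9, 16]
```

In SkePU 2 the equivalent parameterization happens at compile time: the source-to-source translator built on Clang turns the user function into per-backend variants, so the dispatch above is resolved statically rather than at run time.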
17

Control and monitoring of a static power source via Ethernet

Rücker, Jan January 2009 (has links)
This project deals with the design and implementation of the management and monitoring of a static power source via Ethernet. It describes the individual objects and installations located in the static power source, and presents the design and implementation of a system for access control of individual users.
18

Optimizing Threads of Computation in Constraint Logic Programs

Pippin, William E., Jr. 29 January 2003 (has links)
No description available.
19

Towards putting abstract interpretation of Prolog into practice : design, implementation and evaluation of a tool to verify and optimise Prolog programs

Gobert, François 11 December 2007 (has links)
Logic programming is appealing since it allows the programmer to concentrate on the meaning of the problem to be solved. Unfortunately, for efficiency reasons, the declarative and operational natures of Prolog do not coincide. Prolog uses an incomplete depth-first search rule, unifications and negations may be unsound, and there are extralogical features like the cut or dynamic predicates. Methodologies have been proposed to construct operationally correct and efficient Prolog code. Researchers have designed methods to automate the verification of operational properties on which optimisation of logic programs can be based. A few tools have been implemented, but there is a lack of a unified framework.
The goal and topic of this thesis is the design, implementation and evaluation of an abstract interpretation framework of Prolog to integrate state-of-the-art techniques. The analyser is based on an original proposal that defines the notion of abstract sequence, which allows one to verify many desirable operational properties of a logic procedure. The properties include types, modes, sharing of terms, termination, and linear relations between the size of input/output terms and the number of solutions to a call. A single global analysis is performed, and abstract sequences are derived at each program point.
In this thesis, we implement and evaluate the original framework, and, more importantly, we overcome its limitations to make it accurate and usable in practice: the improved framework accepts any Prolog code with modules, new abstract domains and operations are added, and the language of specifications is more expressive. We also design and implement an optimiser that generates specialised code. The optimiser uses the abstract information to safely apply source-to-source transformations. Code transformations include clause and literal reordering, introduction of cuts, and removal of redundant literals. The optimiser follows a precise strategy to choose the most rewarding transformations in the best order.
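As a taste of one ingredient named above, the sketch below shows a toy abstract domain for modes, where each variable is abstracted to 'ground', 'var', or 'any', and the abstract states reached by different clauses are merged by a pointwise least upper bound. The actual analyser manipulates far richer abstract sequences (types, sharing, term sizes, numbers of solutions); this only illustrates the join operation.

```python
# Toy mode-analysis join: the lattice has 'ground' and 'var' below 'any'.
def join(m1, m2):
    """Least upper bound of two modes in the three-point lattice."""
    return m1 if m1 == m2 else "any"

def join_states(s1, s2):
    """Pointwise join of two abstract states (variable -> mode)."""
    return {v: join(s1[v], s2[v]) for v in s1}

clause1 = {"X": "ground", "Y": "var"}     # abstract state after clause 1
clause2 = {"X": "ground", "Y": "ground"}  # abstract state after clause 2
print(join_states(clause1, clause2))      # {'X': 'ground', 'Y': 'any'}
```

Knowing, say, that X is always ground at a program point is exactly the kind of fact that licenses optimisations such as literal reordering or cut introduction without changing the program's solutions.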
20

Program transformations and memory architecture optimizations for high-level synthesis of hardware accelerators

Plesco, Alexandru 27 September 2010 (has links) (PDF)
A wide variety of products on the market, notably in telecommunications and multimedia, offer increasingly advanced functionalities, which induce an increase in design complexity. To meet performance and power-consumption budgets, these functionalities can be accelerated by dedicated hardware accelerators. To meet time-to-market constraints and development costs, traditional hardware design methods are no longer sufficient, and the use of high-level synthesis (HLS) tools is an attractive alternative. These tools are now more mature and can generate hardware accelerators with an optimized internal structure, thanks to efficient scheduling techniques, resource sharing, and state-machine generation. However, interfacing them with the outside world, i.e., integrating the automatically generated hardware accelerators into a complete design, with communications optimized to reach the best throughput, remains a very difficult task, reserved for expert designers. The leitmotiv of this thesis was to study and develop source-to-source strategies to improve the design of these interfaces, viewing the HLS tool as a back-end for more advanced front-end transformations.
In the first part of the thesis, as a case study, we designed by hand, in VHDL, smart interface logic for an accelerator computing the product of two matrices, generated by the MMAlpha synthesis tool. Using data-dependence information, we implemented double-buffering and block computation/transfer (tiling) techniques, for local scratchpad SRAM memories, to improve data reuse. This significantly increased system performance, but also required a substantial development effort. We then showed, on several multimedia applications, with another HLS tool, Spark, that the same benefit could be obtained with a semi-automatic preliminary step of source-to-source transformations (here from C to C). For this, we used the front-end of an advanced compiler, based on the Open64 compiler and the WRaP-IT polyhedral transformation tool. Significant improvements were shown, in particular for the synthesis of color-space conversion (extracted from a MediaBench II benchmark), whose data were transmitted through a cache memory. This study demonstrated the importance of loop transformations as a preprocessing step for HLS tools, but also the difficulty of using them, depending on the characteristics of the HLS tool, to express external communications.
In the second part of the thesis, using Altera's C2H HLS tool, which can synthesize hardware accelerators communicating with an external DDR-SDRAM memory, we showed that it was possible to automatically restructure the application code, generate adequate communication processes written entirely in C, and compile them with C2H, so that the resulting application is highly optimized, with maximal use of the memory bandwidth. These transformations and optimizations, which combine techniques such as double buffering, array contraction, tiling, and software pipelining, among others, were integrated into an automatic source-to-source transformation tool, called Chuba, based on the polyhedral model representation. Our study shows that it is possible to use some HLS tools as back-end optimizers for optimizations performed at the front-end level, as is the case in standard compilation, where high-level transformations are developed upstream of the assembly-level optimizers. We believe this is the way forward for HLS tools to become viable.
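To make the double-buffering restructuring concrete, the sketch below rewrites a tiled loop so that the next tile is fetched into one buffer while the current tile is processed from the other. It is a simplified illustration under stated assumptions: the "fetch" is a plain array copy standing in for a DMA/DDR transfer, and there is no real concurrency.

```python
# Sketch of double buffering over a tiled loop. On the target platforms the
# prefetch would be an asynchronous DMA transfer overlapping the compute.
import numpy as np

def process(tile):
    return float(tile.sum())            # stand-in for the accelerated kernel

def double_buffered(data, tile_size):
    n_tiles = len(data) // tile_size
    buffers = [np.empty(tile_size), np.empty(tile_size)]
    results = []
    buffers[0][:] = data[:tile_size]                  # prologue: fetch tile 0
    for k in range(n_tiles):
        if k + 1 < n_tiles:                           # prefetch the next tile
            nxt = data[(k + 1) * tile_size:(k + 2) * tile_size]
            buffers[(k + 1) % 2][:] = nxt
        results.append(process(buffers[k % 2]))       # compute current tile
    return results

print(double_buffered(np.arange(8.0), tile_size=2))   # [1.0, 5.0, 9.0, 13.0]
```

Array contraction and software pipelining, also cited above, play the same role at finer granularity: they shrink the buffers to the live window and interleave fetch, compute, and write-back stages across iterations.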
