About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Contributions à la fiabilisation du transport de la vidéo / Contributions to the improvement of the reliability in the video-transport context

Bouabdallah, Amine, 03 December 2010
Video applications are increasingly successful in modern communication networks. Their use in ever more difficult contexts, such as lossy packet networks (the Internet) and broadcasting to mobile receivers over wireless satellite channels, calls for new solutions that are more efficient and better adapted. The work in this thesis is an attempt to answer these needs. The proposed solutions fall into two groups: new mechanisms developed for ordinary transmission conditions, and improvements and optimizations of existing mechanisms developed for extreme conditions. The Bernoulli channel served as the working environment for the new mechanisms. For video streaming applications, we targeted unequal error protection and developed a dependency-aware unequal protection scheme (DA-UEP). This mechanism sits close to the video source and adapts the level of protection to the importance of the data; its originality lies in the way it integrates the interdependencies of video data into the unequal-protection generator. In a follow-up study, we combined the upper-layer unequal protection produced by DA-UEP with physical-layer unequal protection produced by hierarchical modulation. Optimizing this combined system yields significant gains and validates this line of research. For interactive video communication, we evaluated the performance of Tetrys, an on-the-fly coding mechanism that integrates acknowledgments; it achieves results comparable to those obtained with unequal protection in the streaming case, which highlights the potential of this mechanism. For mobile satellite channels, we focused on video broadcasting to mobile receivers and evaluated mechanisms such as forward error correction (FEC) codes, interleaving at the physical and link layers, and erasure codes at intermediate layers. The evaluation uses a realistic channel model and takes practical constraints into account, such as zapping time and receiver speed. We reveal the relations between receiver speed, spatial spreading of data and reception quality, and identify the combinations of mechanisms that give the best results in terms of reliability and zapping time in this particular context.
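The dependency-aware protection described above can be pictured with a small sketch: frames that other frames depend on receive a larger share of a fixed repair budget. The frame list, dependency model and budget below are illustrative assumptions, not the DA-UEP code itself.

```python
# Hedged sketch: dependency-aware unequal protection (illustrative only).
# Frames that more other frames depend on receive a larger share of the
# repair-symbol budget; the GOP and budget are made-up examples.

def dependency_counts(frames):
    """Count, for each frame, how many other frames reference it."""
    counts = {f["id"]: 0 for f in frames}
    for f in frames:
        for ref in f["refs"]:          # frames this frame predicts from
            counts[ref] += 1
    return counts

def allocate_repair_symbols(frames, budget):
    """Split `budget` repair symbols proportionally to 1 + dependency count."""
    counts = dependency_counts(frames)
    weights = {fid: 1 + c for fid, c in counts.items()}
    total = sum(weights.values())
    return {fid: round(budget * w / total) for fid, w in weights.items()}

# A toy GOP: one I-frame, two P-frames, one B-frame.
gop = [
    {"id": "I0", "refs": []},
    {"id": "P1", "refs": ["I0"]},
    {"id": "P2", "refs": ["P1"]},
    {"id": "B3", "refs": ["P1", "P2"]},
]

print(allocate_repair_symbols(gop, budget=20))
# -> {'I0': 5, 'P1': 8, 'P2': 5, 'B3': 2}: frames others depend on get more protection
```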
2

A Model for Managing Data Integrity

Mallur, Vikram, 22 September 2011
Consistent, accurate and timely data are essential to the functioning of a modern organization. Managing the integrity of an organization’s data assets in a systematic manner is a challenging task in the face of continuous update, transformation and processing to support business operations. Classic approaches to constraint-based integrity focus on logical consistency within a database and reject any transaction that violates consistency, but leave unresolved how to fix or manage violations. More ad hoc approaches focus on the accuracy of the data and attempt to clean data assets after the fact, using queries to flag records with potential violations and using manual efforts to repair. Neither approach satisfactorily addresses the problem from an organizational point of view. In this thesis, we provide a conceptual model of constraint-based integrity management (CBIM) that flexibly combines both approaches in a systematic manner to provide improved integrity management. We perform a gap analysis that examines the criteria that are desirable for efficient management of data integrity. Our approach involves creating a Data Integrity Zone and an On Deck Zone in the database for separating the clean data from data that violates integrity constraints. We provide tool support for specifying constraints in a tabular form and generating triggers that flag violations of dependencies. We validate this by performing case studies on two systems used to manage healthcare data: PAL-IS and iMED-Learn. Our case studies show that using views to implement the zones does not cause any significant increase in the running time of a process.
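The Data Integrity Zone / On Deck Zone idea can be sketched with two views over a single table, one exposing rows that satisfy a constraint and one exposing the violators. The table, the constraint and the zone names below are illustrative assumptions, not the thesis's actual tool or schemas.

```python
# Hedged sketch of the two-zone idea: rows that satisfy an integrity
# constraint are exposed through one view, violating rows through another.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient_visit (
    visit_id       INTEGER PRIMARY KEY,
    patient_id     INTEGER,
    visit_date     TEXT,
    discharge_date TEXT
);

-- Constraint: a discharge date, when present, must not precede the visit date.
CREATE VIEW data_integrity_zone AS
    SELECT * FROM patient_visit
    WHERE discharge_date IS NULL OR discharge_date >= visit_date;

CREATE VIEW on_deck_zone AS
    SELECT * FROM patient_visit
    WHERE discharge_date IS NOT NULL AND discharge_date < visit_date;

INSERT INTO patient_visit VALUES (1, 100, '2011-03-01', '2011-03-05');
INSERT INTO patient_visit VALUES (2, 101, '2011-03-10', '2011-03-02');  -- violates
""")

print(conn.execute("SELECT visit_id FROM data_integrity_zone").fetchall())  # [(1,)]
print(conn.execute("SELECT visit_id FROM on_deck_zone").fetchall())         # [(2,)]
```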
3

Metamorphic malware identification through Annotated Data Dependency Graphs' datasets indexing

Aguilera, Luis Miguel Rojas, 23 March 2018
Code mutation and metamorphism have been successfully employed to create and proliferate new malware instances from existing malicious code. With such techniques it is possible to modify a code's structure without altering its original functions, so new samples can be produced that lack the structural and behavioral patterns stored in the knowledge bases of malware identification systems, which hinders their detection. Previous research on metamorphic malware detection falls into two categories: identification through code signature matching, and detection based on classification models. Matching code signatures yields lower false positive rates than classification models, since such structures are resilient to the effects of metamorphism and allow better discrimination among instances; however, the time complexity of the matching algorithms prevents the technique from being applied in real detection systems. Detection based on classification models has lower algorithmic complexity, but the models' generalization capacity is affected by the variety of patterns that metamorphic techniques can produce. To overcome these limitations, this work presents methods for metamorphic malware identification based on matching annotated data dependency graphs extracted from known malware and from suspicious instances at analysis time. To cope with the complexity of the comparison algorithms and make the methods usable in real detection systems, the graph databases are indexed using machine learning algorithms, producing multiclass classification models that discriminate among malware families based on structural features of the graphs. Experimental results with a prototype of the proposed methods, over a database of 40,785 graphs extracted from 4,530 malware instances, show detection times below 150 seconds for all instances and higher average accuracy than 56 commercial malware detection systems evaluated for comparison.
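The indexing step, mapping a data dependency graph to structural features and training a multiclass model over them, can be sketched as follows. The feature set and the random forest classifier are illustrative assumptions; the thesis's actual features and models may differ.

```python
# Hedged sketch: indexing data-dependency graphs with a multiclass classifier.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def graph_features(g: nx.DiGraph) -> list:
    """A few cheap structural features of an (annotated) data dependency graph."""
    degrees = [d for _, d in g.degree()]
    return [
        g.number_of_nodes(),
        g.number_of_edges(),
        nx.density(g),
        max(degrees) if degrees else 0,
        sum(degrees) / len(degrees) if degrees else 0.0,
    ]

def train_family_index(graphs, families):
    """Fit a multiclass model mapping graph features to a malware family."""
    X = np.array([graph_features(g) for g in graphs])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, families)
    return clf

# Toy usage: two tiny random graphs standing in for two families.
g1 = nx.gnp_random_graph(30, 0.10, seed=1, directed=True)
g2 = nx.gnp_random_graph(30, 0.30, seed=2, directed=True)
index = train_family_index([g1, g2], ["family_a", "family_b"])
print(index.predict(np.array([graph_features(g1)])))   # ['family_a']
```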
4

Impact of data dependencies for real-time high performance computing.

Hossain, M. Alamgir, Kabir, U., Tokhi, M.O., January 2002
This paper presents an investigation into the impact of data dependencies in real-time high performance sequential and parallel processing. An adaptive active vibration control algorithm is considered to demonstrate the impact of data dependencies in real-time computing. The algorithm is analysed in detail to explore the inherent data dependencies. To minimize the impact of data dependencies, an investigation into reducing memory access in sequential computing is provided. The impact of data dependencies with various interconnections is also explored and demonstrated in real-time parallel processing through a set of experiments.
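As a hedged illustration of the kind of dependency the paper analyses, the sketch below shows a loop-carried dependency in an adaptive (LMS-style) filter: each outer iteration needs the weights produced by the previous one, so only the work inside an iteration is free to be parallelized. The filter and signal sizes are made-up examples, not the paper's controller.

```python
# Hedged sketch: a loop-carried data dependency in an adaptive filter.
import numpy as np

def lms_filter(x, d, taps=8, mu=0.01):
    """Adaptive FIR filter: each outer iteration depends on the weights from
    the previous one (a true data dependency), so the outer loop cannot be
    parallelized; only the inner multiply-accumulates are independent work."""
    w = np.zeros(taps)
    y = np.zeros(len(x))
    for n in range(taps, len(x)):           # sequential: w at step n needs w at step n-1
        window = x[n - taps:n][::-1]
        y[n] = w @ window                    # independent per-tap products
        e = d[n] - y[n]
        w = w + mu * e * window              # carried into the next iteration
    return y, w

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
d = np.convolve(x, [0.5, -0.3, 0.2], mode="same")  # signal the filter should track
y, w = lms_filter(x, d)
print(w)
```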
5

Automatic Parallelization of Simulation Code from Equation Based Simulation Languages

Aronsson, Peter, January 2002
Modern state-of-the-art equation based object oriented modeling languages such as Modelica have enabled easy modeling of large and complex physical systems. When such complex models are to be simulated, simulation tools typically perform a number of optimizations on the underlying set of equations in the modeled system, with the goal of gaining better simulation performance by decreasing the equation system size and complexity. The tools then typically generate efficient code to obtain fast execution of the simulations. However, with the increasing complexity of modeled systems, the number of equations and variables is increasing. Therefore, to be able to simulate these large complex systems in an efficient way, parallel computing can be exploited. This thesis presents the work of building an automatic parallelization tool that produces an efficient parallel version of the simulation code by building a data dependency graph (task graph) from the simulation code and applying efficient scheduling and clustering algorithms on the task graph. Various scheduling and clustering algorithms, adapted for the requirements of this type of simulation code, have been implemented and evaluated. The scheduling and clustering algorithms presented and evaluated can also be used for functional dataflow languages in general, since the algorithms work on a task graph with dataflow edges between nodes. Results are given in the form of speedup measurements and task graph statistics produced by the tool. The conclusion drawn is that some of the algorithms investigated and adapted in this work give reasonable measured speedups for some specific Modelica models, e.g. a model of a thermofluid pipe gave a speedup of about 2.5 on 8 processors in a PC cluster. However, future work lies in finding a good algorithm that works well in general. / Report code: LiU-Tek-Lic-2002:06.
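A minimal sketch of the scheduling step, list scheduling of a task graph onto a fixed number of processors using a bottom-level priority, is given below. The tiny graph, the costs and the priority rule are illustrative assumptions; the thesis implements and compares several richer scheduling and clustering algorithms.

```python
# Hedged sketch: list scheduling of a task graph onto a fixed set of processors.

# task -> (execution cost, list of predecessor tasks)
graph = {
    "a": (2, []),
    "b": (3, ["a"]),
    "c": (4, ["a"]),
    "d": (2, ["b", "c"]),
}

def list_schedule(graph, n_procs=2):
    # Priority: longest path from a task to an exit node ("bottom level").
    levels = {}
    def bottom_level(t):
        if t not in levels:
            succs = [s for s, (_, preds) in graph.items() if t in preds]
            levels[t] = graph[t][0] + max((bottom_level(s) for s in succs), default=0)
        return levels[t]

    proc_free = [0.0] * n_procs
    schedule = {}
    # With positive costs, decreasing bottom level is a valid topological order,
    # so every task's predecessors are already scheduled when it is picked.
    for task in sorted(graph, key=bottom_level, reverse=True):
        cost, preds = graph[task]
        est = max((schedule[p][2] for p in preds), default=0.0)   # data ready time
        proc = min(range(n_procs), key=lambda p: max(proc_free[p], est))
        start = max(proc_free[proc], est)
        schedule[task] = (proc, start, start + cost)
        proc_free[proc] = start + cost
    return schedule

for task, (proc, start, end) in list_schedule(graph).items():
    print(f"{task}: processor {proc}, {start}-{end}")
# Makespan 8 on two processors versus 11 when run sequentially.
```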
6

Discovering data quality rules in a master data management context / Fouille de règles de qualité de données dans un contexte de gestion de données de référence

Diallo, Thierno Mahamoudou, 17 July 2013
Dirty data continues to be an important issue for companies. The Data Warehousing Institute [Eckerson, 2002], [Rockwell, 2012] estimated that poor data costs US businesses $611 billion annually and that erroneously priced data in retail databases costs US customers $2.5 billion each year. Data quality is therefore becoming more and more critical, and the database community pays particular attention to the subject: a variety of integrity constraints, such as Conditional Functional Dependencies (CFDs), have been studied for data cleaning. Repair techniques based on these constraints are precise at catching inconsistencies but are limited in how exactly to correct the data. Master data brings a new alternative for data cleaning thanks to its high quality. With the growing importance of Master Data Management (MDM), a new class of data quality rule known as Editing Rules (ERs) tells how to fix errors, pointing out which attributes are wrong and what values they should take; the intuition is to correct dirty data using high-quality data from the master. However, finding data quality rules is an expensive process that involves intensive manual effort, and it remains unrealistic to rely on human designers alone. In this thesis we develop pattern mining techniques for discovering ERs from existing source relations with respect to master relations. In this setting, we propose a new semantics of ERs that takes advantage of both source and master data. Thanks to this satisfaction-based semantics, the discovery problem for ERs turns out to be strongly related to the discovery of both CFDs and one-to-one correspondences between source and master attributes. We first attack the problem of discovering CFDs, concentrating on the class of constant CFDs, which are very expressive for detecting inconsistencies, and we extend well-known concepts introduced for traditional functional dependencies to solve this discovery problem. Secondly, we propose a method based on inclusion dependencies to extract one-to-one correspondences from source to master attributes before automatically building the ERs. Finally, we propose some heuristics for applying ERs to clean data. We have implemented and evaluated our techniques on both real-life and synthetic databases; the experiments show the feasibility, scalability and robustness of our proposal.
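The two rule classes can be sketched on a toy relation: a constant CFD flags an inconsistent tuple, and an editing rule then repairs the wrong attribute from the master data. The relation, the CFD and the editing rule below are illustrative examples, not rules discovered by the thesis's algorithms.

```python
# Hedged sketch: a constant CFD check and an editing-rule repair against master data.

# Constant CFD (toy): if country_code = '33' and area_code = '01',
# then the city attribute must be 'Paris'.
def violates_cfd(row):
    return (row["country_code"] == "33" and row["area_code"] == "01"
            and row["city"] != "Paris")

# Editing rule (toy): when a source row matches a master row on the phone
# number, the source 'city' attribute is wrong if it differs and should take
# the master value (the master is assumed to be of high quality).
def apply_editing_rule(source_row, master):
    for m in master:
        if m["phone"] == source_row["phone"]:
            if source_row["city"] != m["city"]:
                return {**source_row, "city": m["city"]}
            break
    return source_row

master = [{"phone": "0144276000", "city": "Paris"}]
source = [
    {"phone": "0144276000", "country_code": "33", "area_code": "01", "city": "Lyon"},
]

for row in source:
    if violates_cfd(row):
        print("repaired:", apply_editing_rule(row, master))  # city corrected from the master
```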
